# Asynchronous Bulk Export API

## Introduction
The Bulk Export API is designed to export large datasets.
It follows an asynchronous pattern: you first initiate an export job, then poll for its status, and finally download the resulting file.
## Export Workflow

### Step 1: Initiate an Export Job
- Endpoint: `POST /api/v1/idm/{firm-id}/[objects]/export`
- Request Body Parameters:
  - `cursor` (string): the `nextCursor` from a previous export job
- Example Request:

  ```json
  { "cursor": "..." }
  ```

- Response (`200 Success`):

  ```json
  { "requestId": "export-8f7g6h5e-4d3c-2b1a-9098-f7e6d5c4b3a2" }
  ```
### Step 2: Check Job Status
- Endpoint: `GET /api/v1/request/{request-id}`
- Possible Statuses:
  - `Pending`: the request has been received but execution has not yet started
  - `InProgress`: the request is executing
  - `Completed`: the request has completed
  - `Failed`: the request has failed
- Example `InProgress` response:

  ```json
  {
    "requestId": "export-8f7g6h5e-4d3c-2b1a-9098-f7e6d5c4b3a2",
    "status": "InProgress",
    "progress": { "processed": 123 }
  }
  ```

- Example `Completed` response:

  ```json
  {
    "requestId": "export-8f7g6h5e-4d3c-2b1a-9098-f7e6d5c4b3a2",
    "status": "Completed",
    "result": { "total": 100000, "fileUrl": "...", "nextCursor": "..." }
  }
  ```
### Step 3: Download the Exported File
Use the `fileUrl` from the completed job status response to download the file.
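Assuming `fileUrl` is a plain HTTPS link that needs no extra auth header (whether it is pre-signed, and for how long it stays valid, is not specified here), the download can be streamed to disk:

```python
def download_export(file_url: str, dest_path: str) -> None:
    """Stream the exported file to disk without loading it into memory."""
    with requests.get(file_url, stream=True) as resp:
        resp.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                f.write(chunk)
```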
## Handle Large Exports with Cursors
If your export contains more objects than the job size limit, the job response will include a `nextCursor`. To get the next batch of data, initiate a new export job, passing this cursor value in the request body. Repeat until the `nextCursor` field is `null`, as in the sketch below.
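Putting the three steps together with the helpers sketched above, a cursor loop might look like this; the local file-naming scheme and the `users` object type in the usage line are illustrative only:

```python
def export_all(firm_id: str, object_type: str) -> list[str]:
    """Chain export jobs via nextCursor until the dataset is exhausted;
    return the local paths of the downloaded batch files."""
    paths: list[str] = []
    cursor: str | None = None
    batch = 0
    while True:
        request_id = start_export(firm_id, object_type, cursor)
        result = wait_for_export(request_id)
        path = f"{object_type}-batch-{batch}.export"
        download_export(result["fileUrl"], path)
        paths.append(path)
        cursor = result.get("nextCursor")
        if not cursor:  # a null/absent nextCursor means all objects were exported
            return paths
        batch += 1

# Illustrative usage:
# export_all("my-firm-id", "users")
```

Because each job runs to completion before the next one starts, this loop also stays within the one-active-job-per-user concurrency limit described below.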
## Limits
| Operation | Limit | Description |
|---|---|---|
| Export Job Size | 100,000 objects per job | A single export job will process a maximum of 100,000 objects. Use cursors to retrieve more. |
| Export Concurrency | 1 active job per user | A user can only have one export job in a `Pending` or `InProgress` state at a time. |