# Importing items in bulk

Learn how to create and update items in bulk via the platform API using `ingest_items`, `backfill_items`, and `fetch_job_status`.

> **Note:** Only available in API versions 2026-07 and later.
Using the platform API, you can import large quantities of items into a monday.com board through an asynchronous, three-step workflow:

1. Start an import job (`ingest_items` or `backfill_items`) and receive a pre-signed `upload_url` and `job_id`.
2. Upload the CSV file to the `upload_url`.
3. Poll the job status with `fetch_job_status`.
The API exposes two ways to start an import. `ingest_items` is the default for almost every use case. `backfill_items` is intended for a one-time, account-admin-only initial load, for example seeding a board before it goes into day-to-day use.
## Choose how to import

| | `ingest_items` (recommended) | `backfill_items` |
|---|---|---|
| Use for | Integrations, recurring imports, creating or updating items as part of normal board activity | A single large initial setup import before users or agents work in the board day-to-day |
| Maximum rows per job | 10,000 | 20,000 |
| Who can call it | Any user with board edit access via the API (`boards:write`) | Account admin with `boards:write` |
| Update or skip existing items | Yes, using `on_match` | Not supported; always creates new items |
| Hourly item create/update budget | Counts toward the 19,000 items per account per hour budget | Does not consume this budget |
| Side effects on other monday features | None; automations and the Activity Log behave normally | Automations are not triggered, and item creation is not logged in the Activity Log. |
## Supported column types
Both endpoints support the following column types:
- Date
- Dropdown
- Link
- Long Text
- Number
- People
- Phone
- Status
- Text
- Timeline
Rows that include unsupported column types may fail validation. See Column type validation for the per-type CSV format.
## Limitations
| Category | backfill_items | ingest_items |
|---|---|---|
| Maximum rows per file | 20,000 | 10,000 |
| Maximum file size | 150 MB | 150 MB |
| Upload URL expiration | 10 minutes | 10 minutes |
| Report URL expiration | 10 minutes | 10 minutes |
| Mutation calls per account per hour | 100 (shared with `ingest_items`) | 100 (shared with `backfill_items`) |
| Item create/update budget per hour | Not consumed | 19,000 items per account |
### Notes

- File size is enforced after upload by the import service (not by the upload URL itself). Oversized files are rejected with `failure_reason: FILE_TOO_LARGE`.
- `backfill_items` and `ingest_items` share the same hourly limit on starting jobs.
- Cancellation of a running job is not supported.
## Getting started

### Prerequisites

- API authentication token
- CSV file with valid headers and values (see CSV format)
- Requests must include the `API-Version: 2026-07` header
### Step 1: Start the import job

1. Retrieve the `board_id` and `group_id` where you want to import items by querying `boards` and `groups`.
2. Call `ingest_items` unless you specifically need a one-time, admin-only initial load via `backfill_items`.
3. Save the returned `job_id` and `upload_url`. The `upload_url` is only valid for 10 minutes.
#### Ingest example (default path)

```graphql
mutation {
  ingest_items(
    board_id: "1234567890"
    group_id: "topics"
    on_match: { behaviour: UPSERT, match_column_id: "email" }
  ) {
    job_id
    upload_url
  }
}
```

#### Backfill example (one-time initial load only)
```graphql
mutation {
  backfill_items(board_id: "1234567890", group_id: "topics") {
    job_id
    upload_url
  }
}
```

### Step 2: Upload the CSV file
Upload your CSV to the `upload_url` returned in Step 1.

```http
PUT <upload_url>
Content-Type: text/csv

<CSV file content>
```

A successful upload returns HTTP 200 with an `ETag` header.

```http
HTTP/1.1 200 OK
ETag: "abc123def456"
```
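The upload step can be scripted with Python's standard library alone; a minimal sketch (the `upload_url` and file path are placeholders you get from Step 1 and your own environment):

```python
import urllib.request


def build_put_request(upload_url: str, csv_bytes: bytes) -> urllib.request.Request:
    """Build the PUT request for the pre-signed upload URL."""
    return urllib.request.Request(
        upload_url,
        data=csv_bytes,
        method="PUT",
        headers={"Content-Type": "text/csv"},
    )


def upload_csv(upload_url: str, path: str) -> str:
    """Upload the CSV and return the ETag of the stored object."""
    with open(path, "rb") as f:
        req = build_put_request(upload_url, f.read())
    with urllib.request.urlopen(req) as resp:  # raises on non-2xx responses
        return resp.headers.get("ETag", "")
```

Note that a pre-signed URL is plain HTTP: no API token header is needed (or wanted) on this request, only the exact PUT with the CSV body.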
### Step 3: Monitor the import status

Poll `fetch_job_status` every ~10 seconds until the job reaches a terminal state (`COMPLETED`, `FAILED`, or `REJECTED`).
```graphql
query {
  fetch_job_status(job_id: "7c9e6679-7425-40de-944b-e07fc1f90ae7") {
    ... on ItemsJobStatus {
      status
      counts {
        submitted
        invalid
        skipped
        created
        updated
        failed
      }
      progress_percentage
      failure_reason
      failure_message
      fully_imported
      report_created
      report_url
    }
  }
}
```

When `report_created` is `true`, download `report_url` promptly. The report URL expires after 10 minutes.
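The polling loop can be sketched as below. `fetch_status` is a caller-supplied callable (an assumption, not part of the API) that runs the `fetch_job_status` query and returns the `ItemsJobStatus` object as a dict:

```python
import time

# Terminal states per the docs: the job will not change after reaching one.
TERMINAL_STATES = {"COMPLETED", "FAILED", "REJECTED"}


def is_terminal(status: str) -> bool:
    return status in TERMINAL_STATES


def poll_job(job_id: str, fetch_status, interval_s: float = 10.0,
             timeout_s: float = 1800.0) -> dict:
    """Poll until the job reaches a terminal state or the timeout expires.

    fetch_status: callable mapping a job_id to the ItemsJobStatus dict,
    e.g. a thin wrapper around your GraphQL client.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        job = fetch_status(job_id)
        if is_terminal(job["status"]):
            return job
        if time.monotonic() > deadline:
            raise TimeoutError(f"job {job_id} did not finish in {timeout_s}s")
        time.sleep(interval_s)  # ~10 s between polls, per the guidance above
```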
## Reference

### CSV format

- UTF-8 encoding is required.
- Use a comma (`,`) as the column separator.
- Wrap values that contain commas, double quotes, or newlines in double quotes. Escape an embedded double quote by doubling it (`""`).
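These quoting rules match what Python's `csv` module produces by default, so generating the file programmatically avoids hand-rolled escaping; a sketch:

```python
import csv
import io


def rows_to_csv(rows) -> str:
    """Serialize rows with the quoting rules above: fields containing commas,
    double quotes, or newlines are wrapped in double quotes, and embedded
    double quotes are doubled."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerows(rows)
    return buf.getvalue()
```

When writing to disk, open the file with `encoding="utf-8"` to satisfy the UTF-8 requirement.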
### Headers

- The first header value must be `name`.
- The remaining header values must be exact board `column_id` values (case-sensitive).
- Query `columns` to retrieve the correct column IDs for the target board.
### Rows

- The first value in each row is the item name.
- The remaining values map to the column IDs in the header.
- Empty values are allowed. Whitespace-only values are treated as empty.
- In UPSERT mode (`ingest_items` only), an empty cell on a matched row is ignored; the existing column value is preserved.
- In UPSERT mode, a cell containing exactly the string `<NULL>` clears the column value on the matched item.
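The three UPSERT cell states (empty = keep, `<NULL>` = clear, anything else = set) can be encoded with a small helper when building rows; a sketch under the semantics described above, where the `KEEP` and `CLEAR` sentinel names are illustrative, not part of the API:

```python
KEEP = object()   # sentinel: leave the existing column value untouched
CLEAR = object()  # sentinel: clear the column value on the matched item


def cell_value(value) -> str:
    """Map an update intent to the CSV cell the UPSERT import expects."""
    if value is KEEP:
        return ""        # empty cell: existing value is preserved
    if value is CLEAR:
        return "<NULL>"  # literal <NULL>: clears the column
    return str(value)    # anything else: sets the column to this value
```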
### Example

```csv
name,text,status,date,email,numeric,dropdown
Task 1,Description text,Working on it,2025-12-31,[email protected],100,"Option A, Option B"
Task 2,Another task,Done,2026-01-15,[email protected],200,Option C
Task 3,Third task,Stuck,2026-02-28,[email protected],300,
Task 4,,,,,,
```

### Column type validation
| Column type | CSV value format | Validation rules |
|---|---|---|
| Date | Valid date | ISO format (YYYY-MM-DD). Other formats may be normalized. |
| Dropdown | Comma-separated labels | Each label must already exist on the column and be active. All labels in a comma-separated list must be valid; otherwise the row fails. |
| Email | Valid email or `Name <email>` | Must be a valid email format. |
| Link | Valid URL or `[Display Text](URL)` | URL must start with `http://` or `https://`. |
| Long Text | Text string | Empty values allowed. |
| Number | Numeric string | Integer or decimal. |
| People | Comma-separated identifiers, each one of: email ([email protected]), user:<id> or user:<name>, team:<id> or team:<name>, or a bare name (Jeremy) resolved as user or team. | Each identifier must resolve to exactly one active user or team in the account. Fails on no match, multiple matches, or resolution errors. Duplicate identifiers are deduped. Respects the column's max_people_allowed setting. |
| Phone | Phone number string | Various formats accepted; optional country hints. |
| Status | Label text (exact match) | Label must already exist on the column and be active. Case-sensitive. |
| Text | Text string | Empty values allowed. |
| Timeline | YYYY-MM-DD/YYYY-MM-DD for a range, or single YYYY-MM-DD (sets from = to = date) | Both dates must be valid ISO dates. Other separators (dash with spaces, pipe, JSON) are not accepted. |
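The Timeline format is the strictest in the table, so it is worth validating client-side before upload; a minimal sketch of that check (not the service's actual validator):

```python
from datetime import date


def parse_timeline(value: str) -> tuple:
    """Validate a Timeline cell: 'YYYY-MM-DD/YYYY-MM-DD' for a range, or a
    single 'YYYY-MM-DD' (from = to). Returns (from, to) as ISO strings and
    raises ValueError for any other separator or a malformed date."""
    parts = value.split("/")
    if len(parts) == 1:
        start = end = parts[0]
    elif len(parts) == 2:
        start, end = parts
    else:
        raise ValueError(f"bad timeline value: {value!r}")
    for part in (start, end):
        # Enforce the exact YYYY-MM-DD shape; fromisoformat checks validity.
        if len(part) != 10:
            raise ValueError(f"expected YYYY-MM-DD, got {part!r}")
        date.fromisoformat(part)
    return (start, end)
```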
## Best practices

- Prefer `ingest_items` for integrations and any import that should behave like normal board activity.
- Use `backfill_items` only for a planned, one-time initial load.
- Poll job status about every 10 seconds, not faster.
- Validate the CSV header against the board schema before uploading.
- Handle GraphQL errors from the API separately from HTTP errors from the file upload step.
- Download the report as soon as `report_created` is `true`.
- On `RATE_LIMIT_EXCEEDED`, wait using `extensions.retryAfterMs` instead of retrying immediately.
- For ingest, spread large volumes across the hour where possible to reduce `ACCOUNT_CAPACITY_EXCEEDED` rejections.
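The header check recommended above can be sketched as follows, assuming you have already fetched the board's column IDs via the `columns` query:

```python
def validate_header(header, board_column_ids):
    """Return a list of problems with a CSV header row; empty means valid.
    Column IDs are matched case-sensitively, as the import requires."""
    problems = []
    if not header or header[0] != "name":
        problems.append("first header value must be 'name'")
    known = set(board_column_ids)
    for col in header[1:]:
        if col not in known:
            problems.append(f"unknown column_id: {col!r}")
    return problems
```

Running this before the upload turns a whole-file `INVALID_UPLOAD` rejection into an immediate, local error message.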
## Troubleshooting
| Issue | Symptoms | Resolution |
|---|---|---|
| Upload URL expired | HTTP 403 on the PUT upload (S3 returns "Request has expired") | Start a new job and upload within 10 minutes. |
| Invalid CSV format | Status REJECTED, failure_reason: INVALID_UPLOAD | Verify that the first header is name, all column IDs exist on the board, the file is UTF-8, the separator is comma, and embedded quotes are escaped (""). |
| Items not created | Status COMPLETED with high counts.invalid or counts.failed and low counts.created | Download the report from report_url and inspect per-row errors. Check column ID typos, status/dropdown label spelling and case, date format, and that referenced users or teams exist. |
| Status stuck at UPLOAD_PENDING | No progress after uploading the CSV | Confirm the upload returned HTTP 200 with an ETag header. If not, start a new job and re-upload. |
| Backfill permission denied | Status REJECTED, failure_reason: PERMISSION_DENIED, or a GraphQL error "Only admin users can perform this operation." | backfill_items requires an account admin. Use ingest_items instead. |
| File too large | Status REJECTED, failure_reason: FILE_TOO_LARGE | Split the file into chunks at or below 150 MB. |
| Hourly capacity exceeded (ingest) | Status REJECTED, failure_reason: ACCOUNT_CAPACITY_EXCEEDED | The job's valid row count would exceed the remaining 19,000 items-per-hour budget. Spread the load across the hour or wait for the budget to reset. |
| Rate limit on starting jobs | GraphQL error RATE_LIMIT_EXCEEDED (HTTP 429) on ingest_items or backfill_items | Honor extensions.retryAfterMs before retrying. The 100-call-per-hour limit is shared between ingest_items and backfill_items. |
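Honoring `extensions.retryAfterMs` can be done with a small helper; a sketch that assumes the rate-limit error carries a `code` field and `retryAfterMs` inside `extensions` (verify the exact error payload shape against your responses):

```python
def retry_after_seconds(response_json):
    """Extract a wait time in seconds from a RATE_LIMIT_EXCEEDED GraphQL
    error, or return None when no rate-limit error is present."""
    for error in response_json.get("errors", []):
        ext = error.get("extensions", {})
        if ext.get("code") == "RATE_LIMIT_EXCEEDED" and "retryAfterMs" in ext:
            return ext["retryAfterMs"] / 1000.0
    return None
```

A caller would `time.sleep()` for the returned duration before retrying the mutation, rather than retrying in a tight loop.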