

Bulk import overview

Bulk import is a file-based mechanism for loading large volumes of data into the system in a controlled, scheduled manner.
It is designed for administrative data ingestion where row-level errors are expected and should not block valid data from being written.

Bulk import operates independently from normal interactive writes and follows its own validation and execution model.

What bulk import is

Bulk import processes CSV files packaged in a single ZIP archive and applies the contained mutations as a scheduled batch operation.

Each CSV file maps to a specific table using the table’s API name.
Many-to-many relations are imported through dedicated link files.

The import executes in distinct phases:

  • upload and structural validation
  • dry run on temporary tables
  • explicit scheduling
  • execution against live data

Once execution has started, the operation is irreversible.
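The phases above can be sketched as a forward-only state machine: each phase leads only to the next, and there is no edge out of execution, which models irreversibility. This is an illustrative sketch; the phase names below are assumptions, not the product's actual identifiers.

```python
from enum import Enum, auto

class ImportPhase(Enum):
    """Illustrative phase names for a bulk import (assumed, not official)."""
    UPLOADED = auto()       # ZIP received
    VALIDATED = auto()      # structural validation passed
    DRY_RUN_DONE = auto()   # dry run on temporary tables finished
    SCHEDULED = auto()      # administrator explicitly scheduled execution
    EXECUTING = auto()      # running against live data
    COMPLETED = auto()      # terminal state

# Allowed forward transitions. Execution is irreversible, so no edge
# leads back out of EXECUTING, and COMPLETED has no outgoing edges.
TRANSITIONS = {
    ImportPhase.UPLOADED: {ImportPhase.VALIDATED},
    ImportPhase.VALIDATED: {ImportPhase.DRY_RUN_DONE},
    ImportPhase.DRY_RUN_DONE: {ImportPhase.SCHEDULED},
    ImportPhase.SCHEDULED: {ImportPhase.EXECUTING},
    ImportPhase.EXECUTING: {ImportPhase.COMPLETED},
    ImportPhase.COMPLETED: set(),
}

def can_transition(current: ImportPhase, target: ImportPhase) -> bool:
    """Return True if moving from `current` to `target` is allowed."""
    return target in TRANSITIONS[current]
```

Note that cancellation before scheduling (shown in the flowchart) simply deletes the import rather than transitioning it, which is why it is not modeled as an edge here.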

Process flow

```mermaid
flowchart TD
    A[ZIP upload] --> B{Structural validation}
    B -->|Fail| B1[Rejected - Import cannot proceed]
    B -->|Pass| C[Dry run - Temporary tables, No live data changes]

    C --> D[Row-level errors produced - Downloadable reports]
    D --> E{Administrator action}

    E -->|Do nothing| E1[Import remains idle - No live changes]
    E -->|Cancel| E2[Import deleted - No live changes]
    E -->|Schedule| F[Scheduled execution]

    F --> G[Execution on live data - System-wide write lock]
    G --> I[Execution completed]
```

Supported operations

Bulk import supports three mutation operations, declared per row:

  • INSERT
  • UPDATE
  • DELETE

Each row is evaluated independently: a failing row does not block other rows from being processed, and successful rows are written even when other rows fail.
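A minimal sketch of this per-row error isolation, assuming an `op` column that names the operation (the real column layout and error reporting format are product-specific):

```python
import csv
import io

VALID_OPS = {"INSERT", "UPDATE", "DELETE"}

def apply_rows(csv_text, apply_row):
    """Apply one mutation per CSV row.

    Row failures are collected into a report instead of aborting the
    whole file, so valid rows are still written.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    written, errors = 0, []
    for line_no, row in enumerate(reader, start=2):  # line 1 is the header
        try:
            op = row.get("op", "").upper()
            if op not in VALID_OPS:
                raise ValueError(f"unknown operation {op!r}")
            apply_row(op, row)  # hypothetical per-row write callback
            written += 1
        except ValueError as exc:
            errors.append((line_no, str(exc)))  # isolated, reportable
    return written, errors
```

The `(line_no, message)` pairs collected here stand in for the downloadable error reports produced by the dry run.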

Write rules and classifications are bypassed during bulk import.
Only structural schema constraints apply.

File types and targets

Bulk import accepts:

  • a single ZIP archive per import
  • CSV files located at the root of the archive
  • one CSV file per target table

File-to-target mapping rules:

  • regular tables are mapped using the table API name
  • many-to-many relations are handled via dedicated link files

All files are interpreted using the active schema at execution time.
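The mapping rules above can be sketched as follows. The `link__` filename prefix for many-to-many link files is an assumption made for illustration; the actual naming convention for link files is product-specific.

```python
import zipfile

def map_archive_entries(archive, link_prefix="link__"):
    """Classify root-level CSVs as table files or link files.

    `archive` is a ZIP path or file-like object. The filename stem of a
    regular CSV is taken to be the target table's API name; entries that
    are nested or not CSVs are ignored by this sketch.
    """
    tables, links, ignored = {}, {}, []
    with zipfile.ZipFile(archive) as zf:
        for name in zf.namelist():
            # Only CSV files at the root of the archive are accepted.
            if "/" in name or not name.lower().endswith(".csv"):
                ignored.append(name)
                continue
            stem = name[:-4]
            if stem.startswith(link_prefix):
                # Many-to-many relation via a dedicated link file.
                links[stem[len(link_prefix):]] = name
            else:
                tables[stem] = name  # stem == table API name
    return tables, links, ignored
```

Because files are interpreted against the active schema at execution time, a stem that matched a table at upload could fail to resolve later if the schema changes in between.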

Constraints and limits

Bulk import operates under the following constraints:

  • only one import can exist at a time per system
  • execution blocks all other write operations
  • execution is non-transactional across files and tables
  • errors are isolated per row and per file
  • imports cannot be modified after scheduling
  • executed imports cannot be rolled back

These constraints are intentional and ensure predictable behavior during large data mutations.
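The first two constraints can be sketched as a single-slot guard with a system-wide write lock. This is a simplified model under assumed names (`ImportSlot`, `active_import`), not the system's actual concurrency implementation:

```python
import threading

class ImportSlot:
    """Model of two constraints: at most one import exists at a time,
    and execution holds a system-wide write lock."""

    def __init__(self):
        self._mutex = threading.Lock()
        self.active_import = None
        self.write_lock = threading.Lock()  # interactive writes also take this

    def create(self, import_id):
        with self._mutex:
            if self.active_import is not None:
                raise RuntimeError("an import already exists")
            self.active_import = import_id

    def execute(self, run):
        # All other write operations block on write_lock for the
        # duration of `run`. There is no rollback path: once `run`
        # starts, the slot is simply freed afterwards.
        with self.write_lock:
            run()
        with self._mutex:
            self.active_import = None
```

Holding one global write lock for the whole execution trades concurrency for predictability, which matches the stated intent of these constraints.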