Data API limits

The Data API includes guardrails to ensure best practices, foster availability, and promote optimal configurations for your Astra DB Serverless databases.

In the following table, the term property refers to either a field of a document in a collection or a column in a table.

Each entry below lists the limited entity, the limit value, and notes.

Number of collections per database

5 or 10 (approx.)

Indexes determine the number of collections you can have in a database.

Serverless (Vector) databases created after June 24, 2024 can have approximately 10 collections; databases created before that date can have approximately 5 collections. For more information, see Indexes in collections.

Number of tables per database

50

A Serverless (Vector) database can have up to 50 tables.

Page size

20

For certain operations, a page can contain up to 20 documents or rows. After reaching the page limit, you can load any additional responses on the next page:

  • For clients, you must iterate over a cursor.

  • For HTTP, you must use the nextPageState ID returned by paginated Data API responses.

Some operations, such as deleteMany and vector search, don’t return a cursor or nextPageState, even if there are additional matches:

  • For vector search, the response is a single page of up to 1,000 documents, unless you set a lower limit.

  • For deleteMany on collections, HTTP requests delete up to 20 documents per request, and then return a count but not a nextPageState. Reissue the HTTP request until the response indicates that fewer than 20 documents were deleted. The Data API clients automatically issue multiple HTTP requests until all matching documents are deleted.

  • Some other operations may not trigger pagination, such as certain combinations of sort and filter operations. For more information, see Sort clauses for collections and Sort clauses for tables.
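The HTTP pagination loop described above can be sketched as follows. Here, fetch_page is a stub standing in for a POST of a paginated Data API command, not a real client call; the response shape mirrors the documented data.documents and nextPageState fields, but the stub's contents are illustrative only.

```python
# Stub simulating a paginated Data API response: 20 documents per page,
# with nextPageState set until the final page. A real implementation
# would POST the command and parse the JSON response instead.
def fetch_page(page_state=None):
    all_docs = [{"_id": i} for i in range(45)]
    start = int(page_state) if page_state else 0
    chunk = all_docs[start:start + 20]
    next_state = str(start + 20) if start + 20 < len(all_docs) else None
    return {"data": {"documents": chunk, "nextPageState": next_state}}

def fetch_all():
    # Follow nextPageState until the API stops returning one.
    docs, state = [], None
    while True:
        page = fetch_page(state)
        docs.extend(page["data"]["documents"])
        state = page["data"]["nextPageState"]
        if state is None:
            return docs
```

With 45 matching documents, the loop issues three requests (20 + 20 + 5) and stops when nextPageState is absent.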

In-memory sort limit

10,000

Operations that require fetching and sorting chunks of data can sort no more than 10,000 rows in memory. Here, rows refers to either documents in collections or rows in tables.

If your queries hit this limit, try restructuring your application to avoid running queries on excessively large chunks of data. For example, in tables, you can adjust your table’s partitionSort in the primary key for more efficient clustering.

If you frequently hit this limit in collections, consider whether your data needs to be stored in tables, which can be more performant for large datasets.
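As a sketch of the partitionSort suggestion above, a createTable command can declare a clustering column in the primary key so that rows are stored pre-sorted within each partition. The table and column names below are illustrative only, not part of any existing schema:

```json
{
  "createTable": {
    "name": "events",
    "definition": {
      "columns": {
        "user_id": "text",
        "created_at": "timestamp",
        "payload": "text"
      },
      "primaryKey": {
        "partitionBy": ["user_id"],
        "partitionSort": { "created_at": -1 }
      }
    }
  }
}
```

Queries within a single user_id partition that sort by created_at can then rely on the on-disk clustering order instead of an in-memory sort.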

Maximum property name length

100

Maximum of 100 characters in a property name.

Maximum path length

1,000

Maximum of 1,000 characters in a path. This is calculated as the total length of all segments, including any dots (.) between properties in the path.
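For example, the path root.branch.leaf counts 14 characters of segment names plus 2 dots, for a total of 16. A minimal helper illustrating the calculation (the function is ours, not part of any client):

```python
# A path's length is the sum of its segment lengths plus one dot
# between each pair of adjacent segments.
def path_length(segments):
    return len(".".join(segments))

# path_length(["root", "branch", "leaf"]) -> 16
```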

Maximum indexed string size in bytes

8,000

Maximum of 8,000 bytes (UTF-8 encoded) for string length in an indexed field in a collection. The Data API uses UTF-8 encoding regardless of the original encoding in the request.
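Because the limit is measured in UTF-8 bytes rather than characters, multi-byte characters reduce the effective string length. A minimal client-side pre-check (the helper name is ours, not part of the API):

```python
# Check a string against the 8,000-byte indexed-string limit using its
# UTF-8 encoding, which is what the Data API measures.
def indexed_string_ok(value, max_bytes=8000):
    return len(value.encode("utf-8")) <= max_bytes

# "é" is 1 character but 2 UTF-8 bytes, so 4,001 copies exceed the limit.
```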

Maximum number property length

100

Maximum of 100 characters for the length of a number type value.

Maximum elements per array

1,000

Maximum number of elements in an array. This limit applies to indexed fields in collections only.

Maximum vector dimensions

4,096

Maximum number of dimensions you can define for a vector-enabled collection.

Maximum properties per JSON object

1,000

Maximum number of properties for a JSON object, including top-level properties and nested properties. This limit applies to JSON objects stored in indexed fields in collections only.

A JSON object can have nested objects, also known as sub-documents. The maximum of 1,000 includes all indexed properties in the main document and those in each sub-document, if any. For more information, see Indexes in collections.

Maximum depth per JSON object

16

Maximum depth of nested properties (sub-documents) in a JSON object.

Maximum properties per JSON document

5,000

The maximum number of properties allowed for an entire JSON document is 5,000, including both intermediate properties and leaf properties.

For example, the following document has three properties that apply to this limit: root, root.branch, and root.branch.leaf.

{
  "root": {
    "branch": {
      "leaf": 42
    }
  }
}
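A small counter illustrating how the three properties in the example above are tallied (this helper is illustrative, not part of any client):

```python
# Count every key at every nesting level, so intermediate objects and
# leaf values both contribute to the total.
def count_properties(obj):
    total = 0
    for value in obj.values():
        total += 1
        if isinstance(value, dict):
            total += count_properties(value)
    return total

doc = {"root": {"branch": {"leaf": 42}}}
# count_properties(doc) -> 3 (root, root.branch, root.branch.leaf)
```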

Maximum document or row size in characters

4 million

A single document in a collection or row in a table can have a maximum of 4 million characters.

Maximum inserted batch size in characters

20 million

An entire batch of documents submitted through an insertMany or updateMany command can have up to 20 million characters.

Maximum number of deletions per transaction

20

Maximum number of documents that can be deleted in each deleteMany HTTP transaction against a collection.
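The reissue-until-done pattern for HTTP deleteMany described under Page size can be sketched as below. delete_page is a stub standing in for the real HTTP request; the deletedCount field mirrors the documented response shape, but the stub itself is illustrative.

```python
# Stub simulating one deleteMany HTTP transaction, which deletes at
# most 20 matching documents and reports how many it removed.
def delete_page(remaining):
    deleted = min(remaining[0], 20)
    remaining[0] -= deleted
    return {"status": {"deletedCount": deleted}}

def delete_all(matching_count):
    # Reissue the request until a response reports fewer than 20 deletions,
    # which signals that no further matches remain.
    remaining = [matching_count]
    total = 0
    while True:
        count = delete_page(remaining)["status"]["deletedCount"]
        total += count
        if count < 20:
            return total
```

With 53 matching documents, this issues three requests (20 + 20 + 13) and stops when the last response reports fewer than 20 deletions.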

Maximum number of updates per transaction

20

Maximum number of documents that can be updated in each updateMany HTTP transaction against a collection.

Maximum number of insertions per transaction

100

Maximum number of documents or rows that can be inserted in each insertMany HTTP transaction.
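To stay under this limit from an HTTP client, a large batch can be split into chunks of at most 100 documents before sending. send_insert_many below is a placeholder for the actual HTTP call, not a real function:

```python
# Split a list of documents into chunks no larger than the
# 100-document insertMany limit.
def chunked(docs, size=100):
    return [docs[i:i + size] for i in range(0, len(docs), size)]

def insert_in_batches(docs, send_insert_many):
    # Issue one insertMany request per chunk.
    for batch in chunked(docs):
        send_insert_many(batch)
```

For example, 250 documents are sent as three requests of 100, 100, and 50 documents. The Data API clients perform equivalent chunking automatically.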

Maximum size of _id values array

100

When using the $in operator to send an array of _id values, the maximum size of the array is 100. This limit applies to operations on collections only because _id is a reserved field for collections.

Maximum number of vector search results

1,000

For vector search, the response is a single page of up to 1,000 documents or rows, unless you set a lower limit.

Exceeded limit returns 200 OK with error

If your request is valid but the command exceeds a limit, the Data API responds with HTTP 200 OK and an error message.

It is also possible to receive a response containing both data and errors. Always inspect the response for error messages.

For example, if you exceed the per-transaction limit of 100 documents in an insertMany command, the Data API response contains the following message:

{
  "errors": [
    {
      "message": "Request invalid: field 'command.documents' value \"[...]\" not valid. Problem: amount of documents to insert is over the max limit (101 vs 100).",
      "errorCode": "COMMAND_FIELD_INVALID"
    }
  ]
}
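A minimal sketch of the inspect-every-response advice above: check the parsed body for an errors array regardless of the HTTP status code. The field names follow the documented response shape; the helper itself is illustrative.

```python
# A 200 OK response can still carry errors (and may carry data too),
# so always look for the "errors" array in the parsed body.
def extract_errors(response_body):
    return response_body.get("errors", [])

body = {
    "errors": [
        {
            "message": "amount of documents to insert is over the max limit (101 vs 100).",
            "errorCode": "COMMAND_FIELD_INVALID",
        }
    ]
}
# extract_errors(body) -> the one COMMAND_FIELD_INVALID error
```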
