Database limits

Astra DB Serverless is a fully managed serverless database-as-a-service (DBaaS), powered by Apache Cassandra®, an open-source NoSQL distributed database. To ensure high availability and optimum performance, Astra DB Serverless databases have guardrails on underlying Cassandra functionality.

Database terminology

Astra DB limits can apply to database structures, such as collections, tables, documents, and rows.

Depending on your background and experience, you might be familiar with different terminology for database components. For example, structured databases use terms like tables, rows, and columns, whereas semi-structured databases use collections, documents, and fields to refer to functionally similar or identical components.

In Astra DB, the terminology you encounter depends on the database types and features that you use. The following entries explain some of these terms, pairing each Serverless (Vector) term with its Serverless (Non-Vector) and CQL counterpart. Each set of terms describes similar components, but these components are not necessarily functional equivalents. For example, a single field of vector data doesn't necessarily translate directly to a single column of structured, non-vector data.

Keyspace (Serverless (Vector)); keyspace (Serverless (Non-Vector) and CQL)

A container for one or more collections or tables within a database. Namespace is a deprecated term for a keyspace in a Serverless (Vector) database.

Collection or table (Serverless (Vector)); table (Serverless (Non-Vector) and CQL)

A container for data within a database. The difference depends on the schema type:

  • Collections use dynamic schemas and store data in documents. With a dynamic schema, each document can have different fields. Collections are best for semi-structured data.

  • Tables use fixed schemas and store data in rows. With a fixed schema, all rows must have the same columns, and every column must have a value (which can be null). Tables are best for structured data.

Serverless (Vector) databases can have collections and tables. In the Astra Portal's Data Explorer, you can only create collections; you must use CQL or the Data API to create tables. Additionally, the Data Explorer uses the Collection label for both collections and tables.

Document (Serverless (Vector)); row (Serverless (Non-Vector) and CQL)

A piece of data, having one or more properties, that is stored in a collection or table in a database. Document properties are stored in fields, and row properties are stored in columns.

Field (Serverless (Vector)); column (Serverless (Non-Vector) and CQL)

A property or attribute of data, such as a vector, an identifier, customer contact information, purchase history, an account status, or metadata. Properties can be stored as various data types, like text, numbers, arrays, and booleans.
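To make these terms concrete, the following minimal sketch inserts one document into a collection with the astrapy Python client for the Data API. The token, endpoint, collection name, and field values are placeholders; an equivalent row in a table would store the same properties in columns:

from astrapy import DataAPIClient

# Placeholders: substitute your application token and API endpoint.
client = DataAPIClient("APPLICATION_TOKEN")
db = client.get_database("API_ENDPOINT")
collection = db.get_collection("customers")

# A document: a piece of data whose properties are stored in fields.
# The $vector field assumes a vector-enabled collection.
collection.insert_one(
    {
        "_id": "cust-1001",
        "name": "Alice Example",
        "account_status": "active",
        "$vector": [0.12, 0.45, 0.31],
    }
)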

Replicas

Astra DB Serverless replicates data across three availability zones within each deployed region to promote uptime and ensure data integrity.

By default, each database is deployed to a single region, and you can deploy databases to additional regions.

Astra DB follows the eventual consistency model, and you can’t independently manage, restrict, or modify replicas. When you insert data into an Astra DB database, it is eventually replicated across all of the database’s replicas, regardless of the original region where you loaded the data.

For multi-region databases, it can take time to replicate data across all regions. For more information, see Data consistency in multi-region databases.

Consistency levels

The Data API and clients always use the LOCAL_QUORUM consistency level for reads and writes.

With CQL and drivers, you can use any read consistency level, and any write consistency level except ONE, ANY, and LOCAL_ONE.

LOCAL_QUORUM is generally sufficient for balanced workloads due to Astra DB’s eventual consistency model. It has lower latency than multi-region consistency levels because it only requires acknowledgement from replicas in the target read/write region.
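For example, the DataStax Python driver lets you set the consistency level per statement. The following is a minimal sketch; the secure connect bundle path, token, keyspace, and table are placeholders:

from cassandra import ConsistencyLevel
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Placeholder connection details for an Astra DB Serverless database.
cluster = Cluster(
    cloud={"secure_connect_bundle": "/path/to/secure-connect-database.zip"},
    auth_provider=PlainTextAuthProvider("token", "APPLICATION_TOKEN"),
)
session = cluster.connect("my_keyspace")

# LOCAL_QUORUM works for both reads and writes; a write at ONE, ANY,
# or LOCAL_ONE would be rejected by the Astra DB guardrails.
statement = SimpleStatement(
    "INSERT INTO users (id, name) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
session.execute(statement, (1, "Alice"))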

For more information about consistency levels, see the CQL documentation on CONSISTENCY and Data consistency in multi-region databases.

Table limits

  • A Serverless (Vector) database can have up to 50 tables.

  • For multi-region databases, the Astra Portal's Data Explorer accesses and manipulates keyspaces, collections, tables, and data from the primary region. To manage your database from a secondary region, you must use the Data API or the CQL shell. Generally, you access a secondary region to optimize latency when the primary region is geographically distant from the caller, or when the primary region is experiencing an outage. However, because multi-region databases follow an eventually consistent model, changes to data in any region are eventually replicated to the database's other regions.

  • A single column value can't exceed 10 MB of data.

Collection limits

In the Astra Portal and the Data API, collections are containers for data within a database, similar to tables. This term has no relationship to collection data types (maps, lists, and sets) for columns in a table.

The following limits apply to collections, including read and write operations against data in collections:

  • Serverless (Vector) databases created after June 24, 2024 can have approximately 10 collections. Databases created before this date can have approximately 5 collections. The collection limit is based on the number of indexes.

  • For multi-region databases, the Astra Portal's Data Explorer accesses and manipulates keyspaces, collections, tables, and data from the primary region. To manage your database from a secondary region, you must use the Data API or the CQL shell. Generally, you access a secondary region to optimize latency when the primary region is geographically distant from the caller, or when the primary region is experiencing an outage. However, because multi-region databases follow an eventually consistent model, changes to data in any region are eventually replicated to the database's other regions.

  • Only the UnifiedCompactionStrategy compaction strategy is allowed. This strategy combines ideas from SizeTieredCompactionStrategy, LeveledCompactionStrategy, and TimeWindowCompactionStrategy, along with token range sharding. This all-inclusive compaction strategy works well for Astra DB Serverless use cases.

  • A warning is issued when reading or compacting a partition that exceeds 100 MB.

  • You cannot UPDATE or DELETE a list value by index because Astra DB Serverless does not allow list operations that perform a read-before-write. Operations that are not by index work normally; see the sketch after this list.

  • A collection can’t have more than 64 fields, as extracted from all documents inserted into the collection.

  • Data API limits apply when inserting documents into collections, either through the Astra Portal or the Data API.
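The list guardrail above can be illustrated with the DataStax Python driver. The following sketch reuses the session from the consistency-level example; the playlists table and songs column are hypothetical:

# Assumes the `session` from the earlier driver sketch and a table like:
#   CREATE TABLE my_keyspace.playlists (id int PRIMARY KEY, songs list<text>);

# Rejected on Astra DB Serverless: setting or deleting a list element by
# index requires a read-before-write.
#   UPDATE playlists SET songs[0] = 'replacement' WHERE id = 1;
#   DELETE songs[0] FROM playlists WHERE id = 1;

# Allowed: appending elements or replacing the whole list.
session.execute("UPDATE playlists SET songs = songs + ['new song'] WHERE id = 1")
session.execute("UPDATE playlists SET songs = ['a', 'b', 'c'] WHERE id = 1")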

Index limits

There is a limit to the number of indexes you can have:

  • A database created before June 24, 2024 can have up to 50 indexes (approximately 5 collections).

  • A database created after June 24, 2024 can have up to 100 indexes (approximately 10 collections).

  • A single collection in a database can have up to 10 indexes.

  • These limits don't apply to tables.

For more information, see Indexes in collections.

Rate limits

The default Astra DB Serverless rate limit is approximately 12,000 operations per second (ops/sec). Batches count as a single operation, regardless of the number of operations in a batch.

Astra DB Serverless workloads can be limited to 4,096 ops/sec for new or idle databases when traffic increases suddenly. If you expect a sudden increase in traffic for an idle database, warm up your database before the workload increases: make sure your database is active, and then send requests to your database to gradually increase the workload to the desired level over a period of time.

For production databases that require consistent capacity for traffic spikes, consider Provisioned Capacity Units for Astra DB Serverless.

If you encounter rate limiting exceptions or errors, you might need to allow some time for the database to scale up before retrying your request. Alternatively, you can use programmatic strategies to balance and retry workloads, such as request batching, generic retry policies, and retry policies with backoff strategies. How you implement these strategies depends on your chosen connection method and use case.
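If you implement retries yourself, a generic helper with exponential backoff and full jitter might look like the following minimal sketch. The exception type, attempt count, and delays are assumptions to adapt to your client and workload:

import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call operation(), retrying with exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except RuntimeError:  # substitute your client's rate-limit error type
            if attempt == max_attempts:
                raise
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # full jitter

# Hypothetical usage with a Data API client call:
# retry_with_backoff(lambda: collection.insert_one({"name": "Alice"}))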

Data API limits

The Data API includes guardrails to ensure best practices, foster availability, and promote optimal configurations for your Astra DB Serverless databases.

In the following limits, the term property refers to either a field of a document in a collection or a column in a table. For more information, see Database terminology.

Number of collections per database: approximately 5 or 10

Indexes determine the number of collections you can have in a database. Serverless (Vector) databases created after June 24, 2024 can have approximately 10 collections, and databases created before this date can have approximately 5 collections. For more information, see Indexes in collections.

Number of tables per database: 50

A Serverless (Vector) database can have up to 50 tables.

Page size: 20

For certain operations, a page can contain up to 20 documents or rows. After reaching the page limit, you can load any additional responses on the next page:

  • For clients, you must iterate over a cursor, as shown in the sketch after this list.

  • For HTTP, you must use the nextPageState ID returned by paginated Data API responses.

Some operations, such as deleteMany and vector search, don't return a cursor or nextPageState, even if there are additional matches:

  • For vector search, the response is a single page of up to 1,000 documents, unless you set a lower limit.

  • For deleteMany on collections, HTTP requests delete up to 20 documents per request, and then return a count but not a nextPageState. Reissue the HTTP request until the response indicates that fewer than 20 documents were deleted. The Data API clients automatically issue multiple HTTP requests until all matching documents are deleted.

  • Some other operations may not trigger pagination, such as certain combinations of sort and filter operations. For more information, see Sort clauses for collections and Sort clauses for tables.
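For example, with the astrapy client, iterating the cursor returned by find loads additional pages behind the scenes. This sketch reuses the collection object from the terminology example; the account_status filter is hypothetical:

# Assumes the `collection` from the earlier astrapy sketch.
# The client fetches pages of up to 20 documents automatically as you
# iterate; over HTTP you would instead pass the returned nextPageState
# ID with the next request.
for doc in collection.find({"account_status": "active"}):
    print(doc["_id"])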

In-memory sort limit: 10,000

Operations that require fetching and sorting chunks of data can support no more than 10,000 rows in memory. In this case, rows refers to either documents in collections or rows in tables.

If your queries hit this limit, try restructuring your application to avoid running queries on excessively large chunks of data. For example, in tables, you can adjust your table’s partitionSort in the primary key for more efficient clustering.

If you frequently hit this limit in collections, consider whether your data needs to be stored in tables, which can be more performant for large datasets.

Maximum property name length: 100

Maximum of 100 characters in a property name.

Maximum path length: 1,000

Maximum of 1,000 characters in a path name. This is calculated as the total for all segments, including any dots (.) between properties in a path.

Maximum indexed string size: 8,000 bytes

Maximum of 8,000 bytes (UTF-8 encoded) for string length in an indexed field in a collection. The Data API uses UTF-8 encoding regardless of the original encoding in the request.

Maximum number property length: 100

Maximum of 100 characters for the length of a number type value.

Maximum elements per array: 1,000

Maximum number of elements in an array. This limit applies to indexed fields in collections only.

Maximum vector dimensions: 4,096

Maximum number of dimensions you can define for a vector-enabled collection.
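For example, with the astrapy client, you set the dimension when you create the collection. This sketch reuses the db object from the terminology example; the collection name, dimension, and metric are hypothetical, and the method signature can vary between astrapy versions:

# Assumes the `db` object from the earlier astrapy sketch.
# The vector dimension is fixed at creation time and can't exceed 4,096.
reviews = db.create_collection(
    "reviews",
    dimension=1024,
    metric="cosine",
)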

Maximum properties per JSON object: 1,000

Maximum number of properties for a JSON object, including top-level properties and nested properties. This limit applies to JSON objects stored in indexed fields in collections only.

A JSON object can have nested objects, also known as sub-documents. The maximum of 1,000 includes all indexed properties in the main document and those in each sub-document, if any. For more information, see Indexes in collections.

Maximum depth per JSON object: 16

Maximum depth of nested properties (sub-documents) in a JSON object.

Maximum properties per JSON document: 5,000

The maximum number of properties allowed for an entire JSON document is 5,000, including intermediate properties and leaf properties.

For example, the following document has three properties that apply to this limit: root, root.branch, and root.branch.leaf.

{
  "root": {
    "branch": {
      "leaf": 42
    }
  }
}
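As a rough illustration of this counting rule, the following sketch counts intermediate and leaf properties in a nested object. It is simplified: properties inside arrays are ignored here:

def count_properties(obj) -> int:
    """Count intermediate and leaf properties in a nested JSON-like object."""
    if not isinstance(obj, dict):
        return 0
    return sum(1 + count_properties(value) for value in obj.values())

document = {"root": {"branch": {"leaf": 42}}}
print(count_properties(document))  # 3: root, root.branch, root.branch.leaf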

Maximum document or row size: 4 million characters

A single document in a collection can have a maximum of 4 million characters.

Maximum inserted batch size: 20 million characters

An entire batch of documents submitted through an insertMany or updateMany command can have up to 20 million characters.

Maximum number of deletions per transaction: 20

Maximum number of documents that can be deleted in each deleteMany HTTP transaction against a collection.

Maximum number of updates per transaction: 20

Maximum number of documents that can be updated in each updateMany HTTP transaction against a collection.

Maximum number of insertions per transaction: 100

Maximum number of documents or rows that can be inserted in each insertMany HTTP transaction.
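The Data API clients chunk large insertions into multiple HTTP requests for you. If you call the HTTP API directly, you might split the load yourself, as in the following minimal sketch. The endpoint, keyspace, collection, and token are placeholders, and the URL shape is an assumption about the Data API's JSON interface:

import requests

# Placeholders: substitute your database's API endpoint, keyspace,
# collection, and application token.
API_ENDPOINT = "https://DATABASE_ID-REGION.apps.astra.datastax.com"
URL = f"{API_ENDPOINT}/api/json/v1/my_keyspace/my_collection"
HEADERS = {"Token": "APPLICATION_TOKEN", "Content-Type": "application/json"}

documents = [{"seq": i} for i in range(250)]  # hypothetical payload

# Send at most 100 documents per insertMany transaction.
for start in range(0, len(documents), 100):
    batch = documents[start:start + 100]
    response = requests.post(
        URL, headers=HEADERS, json={"insertMany": {"documents": batch}}
    )
    body = response.json()
    if body.get("errors"):  # a 200 OK can still carry errors
        raise RuntimeError(body["errors"])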

Maximum size of _id values array: 100

When using the $in operator to send an array of _id values, the maximum size of the array is 100. This limit applies to operations on collections only because _id is a reserved field for collections.

Maximum number of vector search results: 1,000

For vector search, the response is a single page of up to 1,000 documents or rows, unless you set a lower limit.
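For example, the following astrapy sketch runs a vector search capped at five results. It reuses the collection from the terminology example, and the query vector is a placeholder:

# Assumes the `collection` from the earlier astrapy sketch.
# Vector search returns a single page of up to 1,000 results;
# `limit` caps it lower.
results = collection.find(
    {},
    sort={"$vector": [0.12, 0.45, 0.31]},
    limit=5,
)
for doc in results:
    print(doc["_id"])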

Exceeded limit returns 200 OK with error

If your request is valid but the command exceeds a limit, the Data API responds with HTTP 200 OK and an error message.

It is also possible to receive a response containing both data and errors. Always inspect the response for error messages.

For example, if you exceed the per-transaction limit of 100 documents in an insertMany command, the Data API response contains the following message:

{
  "errors": [
    {
      "message": "Request invalid: field 'command.documents' value \"[...]\" not valid. Problem: amount of documents to insert is over the max limit (101 vs 100).",
      "errorCode": "COMMAND_FIELD_INVALID"
    }
  ]
}

CQL limits

For information about guardrails and limits for CQL commands, see Cassandra Query Language (CQL) for Astra DB.

Free plans

In addition to the limits described elsewhere on this page, organizations on the Free plan are subject to further plan-specific limits. To increase or remove these limits, upgrade your plan.

Cassandra configuration properties

The cassandra.yaml file is not configurable for Astra DB Serverless databases. This file is defined as follows:

cassandra.yaml
# Read requests
in_select_cartesian_product_failure_threshold: 25
partition_keys_in_select_failure_threshold: 20
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000

# Write requests
batch_size_warn_threshold_in_kb: 64
batch_size_fail_threshold_in_kb: 640
unlogged_batch_across_partitions_warn_threshold: 10
user_timestamps_enabled: true
column_value_size_failure_threshold_in_kb: 5 * 2048L
read_before_write_list_operations_enabled: false
max_mutation_size_in_kb: 16384
write_consistency_levels_disallowed: ANY, ONE, LOCAL_ONE

# Schema
fields_per_udt_failure_threshold: 60
collection_size_warn_threshold_in_kb:  5 * 1024L
items_per_collection_warn_threshold:  20
columns_per_table_failure_threshold: 75
tables_warn_threshold: 100
tables_failure_threshold: 200

# SAI indexes failure threshold
sai_indexes_per_table_failure_threshold: 10
sai_indexes_total_failure_threshold: 100   # 50 for databases created before June 24, 2024

The limits defined in cassandra.yaml inform the limits described elsewhere on this page.
