About your Astra DB database
Welcome! Let’s cover some basics and review how you can get connected.
Your paid database starts with the following specifications:
- A single region
- A single keyspace
- Storage based on your selected plan
- Capacity for up to 200 tables
- Replication factor of three to provide optimal uptime and data integrity
To better understand your database capabilities, review the Astra DB database guardrails and limits.
Astra DB plan options
Serverless databases
DataStax Astra DB offers a serverless database: an elastic, cloud-native database that scales with your workload. You can scale your compute and storage capabilities independently, allowing you to focus more on developing your application and less on your infrastructure costs.
Astra DB offers three plans: Free, Pay As You Go, and Enterprise (annual commitment). Each plan is billed according to the Astra DB pricing, which is based on the Read and Write requests, data storage, and data transfer. For more, see DataStax Astra DB Pricing.
Limitations
Serverless databases do not support materialized views or VPC peering; they do, however, support private endpoints. For additional guardrails and limits, see Guardrails and limits.
Classic database options
This information applies only to classic databases. Classic databases can no longer be created through the Astra DB console. We recommend migrating your database to our current serverless option, which could save you money and allow you to manage your compute and storage capabilities separately.
Dev and Test with Shared Resources
Dev and Test databases offer an entry point for developers. Only select AWS and Google Cloud regions are available for the Dev and Test databases. Dev and Test databases don’t support VPC peering or multi-region databases.
Plan | Description
---|---
A5 | 3 vCPU, 12GB DRAM, 10GB total usable storage
A10 | 6 vCPU, 24GB DRAM, 20GB total usable storage
A20 | 12 vCPU, 48GB DRAM, 40GB total usable storage
A40 | 24 vCPU, 96GB DRAM, 80GB total usable storage
Production Workloads with Dedicated Resources
VPC peering and multi-region databases are available on Production Workload databases.
Plan | Description
---|---
C10 | 12 vCPU, 48GB DRAM, 500GB total usable storage
C20 | 24 vCPU, 96GB DRAM, 500GB total usable storage
C30 | 48 vCPU, 192GB DRAM, 500GB total usable storage
High-Density Production Workloads with Dedicated Resources
High-Density Production Workload databases offer greater disk capacity and performance than other service tiers. VPC peering and multi-region databases are available on High-Density Production Workload databases.
Plan | Description
---|---
D10 | 12 vCPU, 48GB DRAM, 1500GB total usable storage
D20 | 24 vCPU, 96GB DRAM, 1500GB total usable storage
D40 | 48 vCPU, 192GB DRAM, 1500GB total usable storage
Database regions
When creating a database, select a region for your database. Choose a region that is geographically close to your users to optimize performance.
If you are adding multiple regions to your database, you can use each region only once; the same region cannot be added to the same database more than once.
Serverless database regions
Google Cloud
Region | Location | Pricing Type
---|---|---
us-east1 | Moncks Corner, South Carolina, US | Standard
us-east4 | Ashburn, Virginia, US | Standard
us-west1 | The Dalles, Oregon, US | Standard
us-central1 | Council Bluffs, Iowa, US | Standard
northamerica-northeast1 | Montréal, Quebec, Canada | Standard
europe-west1 | Saint-Ghislain, Belgium | Standard
europe-west2 | London, England | Premium
asia-south1 | Mumbai, India | Standard
southamerica-east1 | Osasco, São Paulo, Brazil | Premium Plus
AWS
Region | Location | Pricing Type
---|---|---
us-east-1 | Northern Virginia, US | Standard
us-east-2 | Ohio, US | Standard
us-west-2 | Oregon, US | Standard
eu-central-1 | Frankfurt, Germany | Standard
eu-west-1 | Ireland | Standard
ap-south-1 | Mumbai, India | Standard
ap-southeast-1 | Singapore | Standard
Azure
Region | Location | Pricing Type
---|---|---
northeurope | Ireland | Standard
westeurope | West Europe | Standard
eastus | Washington, DC, US | Standard
eastus2 | Virginia, US | Standard
southcentralus | South Central, US | Standard
westus2 | Washington (state), US | Standard
canadacentral | Toronto, Ontario, Canada | Standard
australiaeast | Australia East | Standard
centralindia | Central India (Pune) | Standard
brazilsouth | Brazil | Premium Plus
Classic database regions
Google Cloud
Region | Location | Pricing Type
---|---|---
asia-east1 | Changhua County, Taiwan | Standard
asia-east2 | Hong Kong | Standard
australia-southeast1 | Sydney, Australia | Standard
europe-north1 | Hamina, Finland | Standard
europe-west1 | Saint-Ghislain, Belgium | Standard
europe-west4 | Eemshaven, Netherlands | Standard
northamerica-northeast1 | Montréal, Quebec, Canada | Standard
us-central1 | Council Bluffs, Iowa, US | Standard
us-east1 | Moncks Corner, South Carolina, US | Standard
us-east4 | Ashburn, Northern Virginia, US | Standard
us-west1 | The Dalles, Oregon, US | Standard
AWS
Region | Location | Pricing Type
---|---|---
ap-northeast-1 | Tokyo, Japan | Standard
ap-southeast-1 | Singapore | Standard
ap-southeast-2 | Sydney, Australia | Premium
ap-south-1 | Mumbai, India | Standard
ca-central-1 | Canada (Central) | Standard
eu-central-1 | Frankfurt, Germany | Standard
eu-west-1 | Ireland | Standard
eu-west-2 | London, England | Standard
us-east-1 | Northern Virginia, US | Standard
us-east-2 | Ohio, US | Standard
us-west-2 | Oregon, US | Standard
Azure
Region | Location | Pricing Type
---|---|---
canadacentral | Toronto, Ontario, Canada | Standard
australiaeast | New South Wales, Australia | Premium
australiasoutheast | Victoria, Australia | Premium
eastus | Virginia, US | Standard
northeurope | North Europe (Ireland) | Premium
westeurope | West Europe (Netherlands) | Standard
westus2 | Washington (state), US | Standard
How do you want to connect?
Options | Description
---|---
I don’t want to create or manage a schema. Just let me get started. | Use schemaless JSON documents with the Document API.
I want to start using my database now with APIs. | Use the REST API or GraphQL API to begin interacting with your database and self-manage the schema.
I have an application and want to use the DataStax drivers. | Initialize one of the DataStax drivers to manage database connections for your application.
I know CQL and want to connect quickly to use my database. | Use the integrated CQL shell or the standalone CQLSH tool to interact with your database using CQL.
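For example, the REST API option can be exercised with a few lines of Python. The database ID, region, and token below are placeholders, and the endpoint path and `X-Cassandra-Token` header follow common Astra DB REST v2 conventions; treat them as assumptions and confirm the exact URL in your database's Connect tab:

```python
import os

# Hypothetical values for illustration; substitute your own database ID,
# region, and application token from the Astra DB console.
ASTRA_DB_ID = "00000000-0000-0000-0000-000000000000"
ASTRA_DB_REGION = "us-east1"
ASTRA_DB_TOKEN = os.environ.get("ASTRA_DB_APPLICATION_TOKEN", "AstraCS-placeholder")

def rest_v2_url(keyspace: str, table: str) -> str:
    """Build a REST API v2 endpoint URL for a table (assumed path shape)."""
    return (
        f"https://{ASTRA_DB_ID}-{ASTRA_DB_REGION}.apps.astra.datastax.com"
        f"/api/rest/v2/keyspaces/{keyspace}/{table}"
    )

def auth_headers() -> dict:
    """REST, Document, and GraphQL requests authenticate with a token header."""
    return {"X-Cassandra-Token": ASTRA_DB_TOKEN, "Content-Type": "application/json"}
```

Pass these to any HTTP client (for example, `requests.get(rest_v2_url("ks", "users"), headers=auth_headers())`).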
Astra DB database guardrails and limits
DataStax Astra DB includes guardrails and sets limits to ensure good practices, foster availability, and promote optimal configurations for your databases.
Astra DB offers a $25.00 free credit per month, allowing you to create an Astra DB database for free. Create a database with just a few clicks and start developing within minutes.
Each plan includes a $25.00 free credit per month. The $25 credit is good for approximately 30 million reads, 5 million writes, and 40GB of storage per month on a serverless database.
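As a rough planning aid, you can compare projected usage against the approximate allowances quoted above. This sketch treats each dimension independently, which is a simplification: actual billing combines all three against the single $25 credit.

```python
# Approximate monthly allowances covered by the $25 credit (from the note above).
ALLOWANCES = {
    "reads": 30_000_000,
    "writes": 5_000_000,
    "storage_gb": 40,
}

def credit_headroom(usage: dict) -> dict:
    """Return each dimension's usage as a fraction of the free-credit allowance.

    A rough planning aid, not a billing calculation.
    """
    return {k: usage.get(k, 0) / limit for k, limit in ALLOWANCES.items()}

# Example: 6M reads, 1M writes, 10GB stored this month.
print(credit_headroom({"reads": 6_000_000, "writes": 1_000_000, "storage_gb": 10}))
# → {'reads': 0.2, 'writes': 0.2, 'storage_gb': 0.25}
```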
Limited access to administrative tools
Because Astra DB hides the complexities of database management to help you focus on developing applications, Astra DB is not compatible with DataStax Enterprise (DSE) administrative tools, such as nodetool and dsetool.
Use the DataStax Astra DB console to get statistics and view database and health metrics. Astra DB does not support access to the database using the Java Management Extensions (JMX) tools, like JConsole.
Simplified security without compromise
Astra DB provides a secure cloud-based database without dramatically changing the way you currently access your internal database:
- New user management flows avoid the need for superusers and global keyspace administration in CQL.
- Endpoints are secured using mutual authentication, either with mutual TLS or secure tokens issued to the client.
- TLS provides a secure transport layer you can trust, ensuring that in-flight data is protected.
- Data at rest is protected by encrypted volumes.
Additionally, Astra DB incorporates role-based access control (RBAC).
See Security guidelines for more information about how Astra DB implements security.
Replication within regions
Each Astra DB database uses replication across three availability zones within the launched region to promote uptime and ensure data integrity.
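With a replication factor of three, standard Cassandra quorum arithmetic determines how many replicas must respond to a quorum-level operation. A minimal illustrative sketch:

```python
def quorum(replication_factor: int) -> int:
    """Replicas that must acknowledge a QUORUM-level read or write:
    floor(RF / 2) + 1, per standard Cassandra quorum arithmetic."""
    return replication_factor // 2 + 1

# With Astra DB's fixed replication factor of three, LOCAL_QUORUM needs
# two of the three replicas, so one availability zone can be unavailable
# without blocking quorum operations.
assert quorum(3) == 2
```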
Serverless database limits
The following limits are set for serverless databases created using Astra DB. These limits ensure good practices, foster availability, and promote optimal configurations for your database.
Databases
There is a limit of 5 databases per organization. If you need to increase the number of databases for your organization, contact DataStax Support.
Columns
Parameter | Limit | Notes
---|---|---
Size of values in a single column | 10 MB | Hard limit.
Number of columns per table | 75 | Hard limit.
Tables
Parameter | Limit | Notes
---|---|---
Number of tables per database | 200 | A warning is issued when the database exceeds 100 tables.
Table properties | Fixed | All table properties are fixed except for Expiring data with time-to-live.
Secondary indexes and materialized views are not available for serverless databases. Our team is working to offer materialized views for serverless databases soon.
Workloads
Astra DB workloads are limited to 4096 operations per second by default. If you see a "Rate limit reached" error in your application and want your limit raised, open a support ticket.
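One way to stay under the limit and avoid "Rate limit reached" errors is a client-side throttle. The token-bucket sketch below is an illustrative assumption, not an Astra DB feature; production applications more commonly rely on driver retry and backoff policies.

```python
import time

class TokenBucket:
    """Minimal client-side throttle to stay under a target ops/sec rate,
    such as Astra DB's default 4096. Illustrative sketch only."""

    def __init__(self, rate: float = 4096.0, clock=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = rate      # burst capacity: at most one second of ops
        self.tokens = rate
        self.clock = clock        # injectable clock, handy for testing
        self.last = clock()

    def try_acquire(self, n: int = 1) -> bool:
        """Consume n tokens if available; return False to signal backoff."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

A caller would retry with a short sleep whenever `try_acquire()` returns False.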
Storage-Attached Indexing (SAI) limits
The maximum number of SAI indexes on a table is 10. There can be no more than 50 SAI indexes in a single database.
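If you manage index creation programmatically, you can validate a proposed schema against these limits before applying it. The helper below is a hypothetical example, not part of any Astra DB tooling; the limit values come from the paragraph above.

```python
def check_sai_limits(indexes_per_table: dict,
                     per_table_max: int = 10,
                     per_database_max: int = 50) -> list:
    """Return violations of the serverless SAI limits for a proposed schema.

    indexes_per_table maps table name -> number of SAI indexes on that table.
    """
    problems = []
    for table, count in indexes_per_table.items():
        if count > per_table_max:
            problems.append(f"{table}: {count} SAI indexes (max {per_table_max})")
    total = sum(indexes_per_table.values())
    if total > per_database_max:
        problems.append(f"database total {total} SAI indexes (max {per_database_max})")
    return problems
```

For classic databases, the same check applies with `per_database_max=100` (see the classic limits below).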
Automated backup and restore
Serverless databases created using Astra DB are automatically backed up once an hour. The hourly backups are stored for 20 days and include only SSTables; data that is still in the memtable or the commit log and has not yet been flushed to SSTables is not included in the backups.
For databases with multiple datacenters in various regions, backup management is available only for the original datacenter selected when you created the database. Restoring from a backup removes all other datacenters, restores the primary datacenter, and then adds the other datacenters back with a restored version of the primary datacenter's data.
If the database was terminated, all data is destroyed and is unrecoverable.
If data is accidentally deleted or corrupted, contact DataStax Support within the 20-day retention window to restore data from one of the available backups. This window ensures that the data to restore still exists as a saved backup.
When restoring data, DataStax Support allows you to restore data to the same database, replacing the current data with data from the backup. All data added to the database after the backup is no longer available in the database.
Classic database limits
The following limits are set for classic databases created using Astra DB. These limits ensure good practices, foster availability, and promote optimal configurations for your database.
Columns
Parameter | Limit | Notes
---|---|---
Size of values in a single column | 5 MB | Hard limit.
Number of columns per table | 50 | Hard limit.
Tables
Parameter | Limit | Notes
---|---|---
Number of tables per database | 200 | A warning is issued when the database exceeds 100 tables.
Table properties | Fixed | All table properties are fixed except for Expiring data with time-to-live.
Secondary index | 1 | For classic databases, the limit is per table.
Materialized view | 2 | Limit is per table. A warning is issued if the materialized view creates large partitions.
Workloads
Astra DB workloads for Classic databases do not have a rate limit.
Storage-Attached Indexing (SAI) limits
The maximum number of SAI indexes on a table is 10. There can be no more than 100 SAI indexes in a single database.
Automated backup and restore
Classic databases created using Astra DB are automatically backed up every four hours. The latest six backups are stored, providing flexibility in which point in time you can restore to, if necessary.
If the database was terminated, all data is destroyed and is unrecoverable.
If data is accidentally deleted or corrupted, contact DataStax Support within 12 hours to restore data from one of the available backups. This window ensures that the data to restore exists as a saved backup.
When restoring data, DataStax Support allows you to restore data to the same database, replacing the current data with data from the backup. All data added to the database after the backup is no longer available in the database.
Cassandra Query Language (CQL)
At this time, user-defined functions (UDFs) and user-defined aggregate functions (UDAs) are not enabled.
Parameter | Limit | Notes
---|---|---
Consistency level | Fixed | Reads: any supported consistency level is permitted. Writes: LOCAL_QUORUM and LOCAL_SERIAL.
Compaction strategy | Fixed | UnifiedCompactionStrategy is a more efficient compaction strategy that combines ideas from STCS (SizeTieredCompactionStrategy), LCS (LeveledCompactionStrategy), and TWCS (TimeWindowCompactionStrategy) along with token range sharding. This all-inclusive compaction strategy works well for all use cases.
Lists | Fixed | Read-before-write list operations are disabled.
Page size | Fixed | The proper page size is configured automatically.
Large partition | Warning | A warning is issued when reading or compacting a partition that exceeds 100 MB.
CQL commands
The following CQL commands are not supported in Astra DB:
- ALTER KEYSPACE
- ALTER SEARCH INDEX CONFIG
- ALTER SEARCH INDEX SCHEMA
- COMMIT SEARCH INDEX
- CREATE KEYSPACE
- CREATE SEARCH INDEX
- CREATE TRIGGER
- CREATE FUNCTION
- DESCRIBE FUNCTION
- DROP FUNCTION
- DROP KEYSPACE
- DROP SEARCH INDEX CONFIG
- DROP TRIGGER
- LIST PERMISSIONS
- REBUILD SEARCH INDEX
- RELOAD SEARCH INDEX
- RESTRICT
- RESTRICT ROWS
- UNRESTRICT
- UNRESTRICT ROWS
For supported CQL commands, see the Astra DB CQL quick reference.
cassandra.yaml
If you are an experienced Cassandra or DataStax Enterprise user, you are likely familiar with editing the cassandra.yaml
file.
For Astra DB, the cassandra.yaml
file cannot be configured.
The following limits are included in Astra DB:
```
// for read requests
page_size_failure_threshold_in_kb = 512
in_select_cartesian_product_failure_threshold = 25
partition_keys_in_select_failure_threshold = 20
tombstone_warn_threshold = 1000
tombstone_failure_threshold = 100000

// for write requests
batch_size_warn_threshold_in_kb = 5
batch_size_fail_threshold_in_kb = 50
unlogged_batch_across_partitions_warn_threshold = 10
user_timestamps_enabled = true
column_value_size_failure_threshold_in_kb = 5 * 1024L
read_before_write_list_operations_enabled = false
max_mutation_size_in_kb = 16384 (Classic) or 4096 (Serverless)

// for schema
fields_per_udt_failure_threshold = 30 (Classic) or 60 (Serverless)
collection_size_warn_threshold_in_kb = 5 * 1024L
items_per_collection_warn_threshold = 20
columns_per_table_failure_threshold = 50 (Classic) or 75 (Serverless)
secondary_index_per_table_failure_threshold = 1
tables_warn_threshold = 100
tables_failure_threshold = 200

// for node status
disk_usage_percentage_warn_threshold = 70
disk_usage_percentage_failure_threshold = 80
partition_size_warn_threshold_in_mb = 100

// SAI table failure threshold
sai_indexes_per_table_failure_threshold = 10
```
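A few of these thresholds can be mirrored client-side to catch oversized writes before the database rejects them. The sketch below is illustrative only (the helper name is invented for this example); the limit values are copied from the listing above.

```python
# Client-side sanity checks mirroring a few thresholds from the listing above.
# Illustrative only; Astra DB enforces these server-side regardless.
LIMITS = {
    "column_value_size_kb": 5 * 1024,   # column_value_size_failure_threshold_in_kb
    "columns_per_table": 75,            # columns_per_table_failure_threshold (Serverless)
    "batch_size_fail_kb": 50,           # batch_size_fail_threshold_in_kb
}

def oversized_columns(row: dict) -> list:
    """Return names of columns whose string/bytes values would exceed the
    column-value-size failure threshold."""
    limit_bytes = LIMITS["column_value_size_kb"] * 1024
    return [col for col, value in row.items()
            if isinstance(value, (bytes, str)) and len(value) > limit_bytes]
```

Running such checks before issuing a write turns a server-side failure into an early, descriptive client-side error.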