DataStax Astra DB Classic Documentation
About your Astra DB database

Welcome! Let’s cover some basics and review how you can get connected.

Your paid database starts with the following specifications:

  • A single region

  • A single keyspace

  • Storage based on your selected plan

  • Capacity for up to 200 tables

  • Replication factor of three to provide optimal uptime and data integrity
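As a minimal illustration of why a replication factor of three is used: quorum-based consistency levels need a majority of replicas to acknowledge each operation, so with RF 3 one replica can be unavailable without losing availability. A sketch of the arithmetic:

```python
# Quorum math for quorum-based consistency levels (e.g. LOCAL_QUORUM).
# With replication factor (RF) 3, two replicas must acknowledge each
# read or write, so one replica may be down without losing quorum.
def quorum(rf: int) -> int:
    """Number of replicas that must respond for a quorum."""
    return rf // 2 + 1

rf = 3
print(quorum(rf))       # replicas required to acknowledge
print(rf - quorum(rf))  # replicas that may be unavailable
```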

To better understand your database capabilities, review the Astra DB database guardrails and limits.

Astra DB plan options

Classic database options

This information applies to only classic databases.

Classic databases can no longer be created through Astra Portal. We recommend migrating your database to our current serverless option, which could save you money and allow you to manage your compute and storage capabilities separately.

Production Workloads with Dedicated Resources

VPC peering and multi-region databases are available on Production Workload databases.

Plan   Description
C10    12 vCPU, 48GB DRAM, 500GB total usable storage
C20    24 vCPU, 96GB DRAM, 500GB total usable storage
C40    48 vCPU, 192GB DRAM, 500GB total usable storage
C40i   48 vCPU, 192GB DRAM, 500GB total usable storage, High IOPS

High-Density Production Workloads with Dedicated Resources

High-Density Production Workload databases offer greater disk capacity and performance than other service tiers. VPC peering and multi-region databases are available on High-Density Production Workload databases.

Plan   Description
D10    12 vCPU, 48GB DRAM, 1500GB total usable storage
D20    24 vCPU, 96GB DRAM, 1500GB total usable storage
D40    48 vCPU, 192GB DRAM, 1500GB total usable storage

Astra DSE Edition Workloads

Databases with the advanced functionality of DSE Search and DSE Graph workloads are available on Production Workload databases.

Plan   Description
E60    48 vCPU, 366GB DRAM, 2TB total usable storage
E120   96 vCPU, 732GB DRAM, 2TB total usable storage

Database regions

When creating a database, select a region for your database. Choose a region that is geographically close to your users to optimize performance.

If you add multiple regions to your database, each region can be used only once; you cannot add the same region to the same database more than once.
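The uniqueness rule above can be sketched as a simple validation step. The helper below is hypothetical, not part of any Astra API:

```python
# Hypothetical validation: a region may be added to a database only once.
def can_add_region(existing_regions: list, new_region: str) -> bool:
    """Return True if new_region is not already attached to the database."""
    return new_region not in existing_regions

db_regions = ["us-east-1"]
print(can_add_region(db_regions, "eu-west-1"))  # a new region is allowed
print(can_add_region(db_regions, "us-east-1"))  # a duplicate is rejected
```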

Classic database regions

AWS

Region           Location               Pricing
us-east-1        Northern Virginia, US  Standard
us-west-2        Oregon, US             Standard
us-east-2        Ohio, US               Standard
ca-central-1     Canada (Central)       Standard
eu-central-1     Frankfurt, Germany     Standard
eu-west-1        Ireland                Standard
eu-west-2        London, England        Standard
ap-southeast-1   Singapore              Standard
ap-south-1       Mumbai, India          Standard
ap-southeast-2   Sydney, Australia      Premium
ap-northeast-1   Tokyo, Japan           Standard

Google Cloud

Region                    Location                           Pricing
us-east1                  Moncks Corner, South Carolina, US  Standard
us-east4                  Ashburn, Northern Virginia, US     Standard
us-central1               Council Bluffs, Iowa, US           Standard
us-west1                  The Dalles, Oregon, US             Standard
northamerica-northeast1   Montréal, Québec, Canada           Standard
asia-east1                Changhua County, Taiwan            Standard
asia-east2                Hong Kong                          Standard
australia-southeast1      Sydney, Australia                  Premium
europe-north1             Hamina, Finland                    Standard
europe-west1              Saint-Ghislain, Belgium            Standard
europe-west4              Eemshaven, Netherlands             Standard

Azure

Region               Location                    Pricing
eastus               Virginia, US                Standard
westus2              Washington (state), US      Standard
westeurope           West Europe (Netherlands)   Standard
northeurope          North Europe (Ireland)      Premium
australiaeast        New South Wales, Australia  Premium
australiasoutheast   Victoria, Australia         Premium

How do you want to connect?

  • I don’t want to create or manage a schema; I just want to get started: Use schemaless JSON documents with the Document API.

  • I want to start using my database now with APIs: Use the REST API or GraphQL API to begin interacting with your database and manage the schema yourself.

  • I have an application and want to use the DataStax drivers: Initialize one of the DataStax drivers to manage database connections for your application.

  • I know CQL and want to connect quickly: Use the integrated CQL shell or the standalone CQLSH tool to interact with your database using CQL.
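For a concrete feel for the API options, the Stargate REST API v2 addresses tables through a URL derived from the database ID and region, authenticated with an application token in the `X-Cassandra-Token` header. In the sketch below, the database ID, region, keyspace, table name, and token are all placeholders:

```python
# Build a Stargate REST API v2 request target for an Astra DB table.
# All values passed in below are placeholders, not real credentials.
def rest_v2_url(db_id: str, region: str, keyspace: str, table: str) -> str:
    """Return the REST API v2 endpoint for rows in a given table."""
    base = f"https://{db_id}-{region}.apps.astra.datastax.com"
    return f"{base}/api/rest/v2/keyspaces/{keyspace}/{table}"

url = rest_v2_url("00000000-0000-0000-0000-000000000000",
                  "us-east-1", "my_keyspace", "my_table")
# The application token (from the Astra Portal) goes in this header.
headers = {"X-Cassandra-Token": "AstraCS:placeholder"}
print(url)
```

An HTTP client of your choice can then GET or POST to that URL with those headers.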

Astra DB database guardrails and limits

DataStax Astra DB includes guardrails and sets limits to ensure good practices, foster availability, and promote optimal configurations for your databases.

Astra DB offers a $25.00 free credit per month, allowing you to create an Astra DB database for free. Create a database with just a few clicks and start developing within minutes.

Limited access to administrative tools

Because Astra DB hides the complexities of database management to help you focus on developing applications, Astra DB is not compatible with DataStax Enterprise (DSE) administrative tools, such as nodetool and dsetool.

Use the DataStax Astra Portal to view statistics and database health metrics. Astra DB does not support access to the database using Java Management Extensions (JMX) tools, such as JConsole.

Simplified security without compromise

Astra DB provides a secure cloud-based database without dramatically changing the way you currently access your internal database:

  • New user management flows avoid the need for superusers and global keyspace administration in CQL.

  • Endpoints are secured using mutual authentication, either with mutual-TLS or secure tokens issued to the client.

  • TLS provides a secure transport layer you can trust, ensuring that in-flight data is protected.

  • Data at rest is protected by encrypted volumes.

Additionally, Astra DB incorporates role-based access control (RBAC).

See Security guidelines for more information about how Astra DB implements security.

Replication within regions

Each Astra DB database uses replication across three availability zones within the launched region to promote uptime and ensure data integrity.

Classic database limits

The following limits are set for classic databases created using Astra DB. These limits ensure good practices, foster availability, and promote optimal configurations for your database.

Columns

Parameter                           Limit   Notes
Size of values in a single column   5 MB    Hard limit.
Number of columns per table         50      Hard limit.

Tables

Parameter                       Limit   Notes
Number of tables per database   200     A warning is issued when the database exceeds 100 tables.
Table properties                Fixed   All table properties are fixed except for expiring data with time-to-live.
Secondary index                 1       For classic databases, the limit is per table.
Materialized view               2       Limit is per table. A warning is issued if the materialized view creates large partitions.

Workloads

Astra DB workloads for Classic databases do not have a rate limit.

Storage-Attached Indexing (SAI) limits

The maximum number of SAI indexes on a table is 10. There can be no more than 100 SAI indexes in a single database.
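These limits can be checked mechanically before creating a new index. The helper below is a hypothetical sketch, not part of any DataStax tooling:

```python
# Hypothetical check against the SAI guardrails stated above:
# at most 10 SAI indexes per table and at most 100 per database.
SAI_PER_TABLE_LIMIT = 10
SAI_PER_DATABASE_LIMIT = 100

def sai_within_limits(indexes_per_table: dict) -> bool:
    """indexes_per_table maps table name -> number of SAI indexes."""
    per_table_ok = all(n <= SAI_PER_TABLE_LIMIT
                       for n in indexes_per_table.values())
    total_ok = sum(indexes_per_table.values()) <= SAI_PER_DATABASE_LIMIT
    return per_table_ok and total_ok

print(sai_within_limits({"users": 4, "orders": 10}))  # within both limits
print(sai_within_limits({"users": 11}))               # exceeds per-table limit
```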

Automated backup and restore

Classic databases created using Astra DB are automatically backed up every four hours. The latest six backups are retained, giving you a choice of restore points if needed.

If the database was terminated, all data is destroyed and is unrecoverable.

If data is accidentally deleted or corrupted, contact DataStax Support within 12 hours to restore data from one of the available backups. This window ensures that the data to restore exists as a saved backup.

When restoring data, DataStax Support allows you to restore data to the same database, replacing the current data with data from the backup. All data added to the database after the backup is no longer available in the database.
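As back-of-the-envelope arithmetic for the schedule above: with a backup every four hours and the latest six retained, the oldest available restore point is roughly 20 hours old (up to 24 hours just before the next backup replaces it). A sketch:

```python
from datetime import timedelta

# Backups run every four hours and the latest six are retained, so the
# oldest retained backup was taken (6 - 1) intervals before the newest.
interval = timedelta(hours=4)
retained = 6
oldest_restore_point = interval * (retained - 1)
print(oldest_restore_point)  # age of the oldest restore point
```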

Cassandra Query Language (CQL)

At this time, user-defined functions (UDFs) and user-defined aggregate functions (UDAs) are not enabled.

  • Consistency level (fixed): Reads permit any supported consistency level. Single-region writes support LOCAL_QUORUM and LOCAL_SERIAL.

  • Compaction strategy (fixed): UnifiedCompactionStrategy, a more efficient compaction strategy that combines ideas from STCS (SizeTieredCompactionStrategy), LCS (LeveledCompactionStrategy), and TWCS (TimeWindowCompactionStrategy) along with token range sharding. This all-inclusive compaction strategy works well for all use cases.

  • Lists (fixed): You cannot UPDATE or DELETE a list value by index because Astra DB does not allow list operations that perform a read-before-write. INSERT operations work the same way in Astra DB as in Apache Cassandra® and DataStax Enterprise (DSE), and UPDATE and DELETE operations that are not by index also work the same in Astra DB, Cassandra, and DSE.

  • Page size (fixed): The proper page size is configured automatically.

  • Large partition (warning): A warning is issued when reading or compacting a partition that exceeds 100 MB.

CQL commands

The following CQL commands are not supported in Astra DB:

  • ALTER KEYSPACE

  • ALTER SEARCH INDEX CONFIG

  • ALTER SEARCH INDEX SCHEMA

  • COMMIT SEARCH INDEX

  • CREATE KEYSPACE

  • CREATE SEARCH INDEX

  • CREATE TRIGGER

  • CREATE FUNCTION

  • DESCRIBE FUNCTION

  • DROP FUNCTION

  • DROP KEYSPACE

  • DROP SEARCH INDEX CONFIG

  • DROP TRIGGER

  • LIST PERMISSIONS

  • REBUILD SEARCH INDEX

  • RELOAD SEARCH INDEX

  • RESTRICT

  • RESTRICT ROWS

  • UNRESTRICT

  • UNRESTRICT ROWS

For supported CQL commands, see the Astra DB CQL quick reference.

cassandra.yaml

If you are an experienced Cassandra or DataStax Enterprise user, you are likely familiar with editing the cassandra.yaml file. For Astra DB, the cassandra.yaml file cannot be configured.

The following limits are included in Astra DB:

// for read requests
page_size_failure_threshold_in_kb = 512
in_select_cartesian_product_failure_threshold = 25
partition_keys_in_select_failure_threshold = 20
tombstone_warn_threshold = 1000
tombstone_failure_threshold = 100000

// for write requests
batch_size_warn_threshold_in_kb = 5
batch_size_fail_threshold_in_kb = 50
unlogged_batch_across_partitions_warn_threshold = 10
user_timestamps_enabled = true
column_value_size_failure_threshold_in_kb = 5 * 1024L
read_before_write_list_operations_enabled = false
max_mutation_size_in_kb = 16384

// for schema
fields_per_udt_failure_threshold = 30 (Classic) or 60 (Serverless)
collection_size_warn_threshold_in_kb = 5 * 1024L
items_per_collection_warn_threshold = 20
columns_per_table_failure_threshold = 50 (Classic) or 75 (Serverless)
secondary_index_per_table_failure_threshold = 1
tables_warn_threshold = 100
tables_failure_threshold = 200

// for node status
disk_usage_percentage_warn_threshold = 70
disk_usage_percentage_failure_threshold = 80
partition_size_warn_threshold_in_mb = 100

// SAI table failure threshold
sai_indexes_per_table_failure_threshold = 10
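To illustrate how the warn and fail thresholds interact, the sketch below classifies a logged batch by size using the write-request thresholds above. The helper itself is hypothetical, not Astra code:

```python
# Classify a batch by size against the write-request guardrails above:
# warn above batch_size_warn_threshold_in_kb (5 KB), reject above
# batch_size_fail_threshold_in_kb (50 KB).
BATCH_WARN_KB = 5
BATCH_FAIL_KB = 50

def batch_status(size_kb: float) -> str:
    """Return 'ok', 'warning', or 'rejected' for a batch of size_kb."""
    if size_kb > BATCH_FAIL_KB:
        return "rejected"
    if size_kb > BATCH_WARN_KB:
        return "warning"
    return "ok"

print(batch_status(2))    # ok
print(batch_status(20))   # warning
print(batch_status(200))  # rejected
```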
