
Using multiple regions

You can replicate data to multiple regions for high availability, enabling active-active application failover models. Multiple regions also keep application data available close to where it is used, with the added value of cost savings.

Having multiple regions may increase your billing. For more, see Pricing and billing.

Video introduction

See this short video introduction to the Astra DB multi-region implementation.

Eventual consistency model and multi-region updates

DataStax Astra DB follows the eventual consistency model. Depending on the selected consistency level, data written to one region might not be immediately accessible to other regions in the same database.

If you use the EACH_QUORUM consistency level for normal updates, or the SERIAL consistency level for lightweight transactions (LWTs), then data is immediately accessible in all regions once the operation completes successfully. These consistency levels apply only to write requests.
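For illustration, here is a minimal sketch that sets these levels per statement with the DataStax Python driver. The bundle path, token, keyspace, table, and values are placeholders, not details from this guide:

  from cassandra import ConsistencyLevel
  from cassandra.auth import PlainTextAuthProvider
  from cassandra.cluster import Cluster
  from cassandra.query import SimpleStatement

  # Placeholder bundle path and application token; see Get Secure Connect Bundle.
  cluster = Cluster(
      cloud={"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"},
      auth_provider=PlainTextAuthProvider("token", "AstraCS:..."),
  )
  session = cluster.connect("my_keyspace")

  # EACH_QUORUM waits for a quorum of replicas in every region, so the write
  # is accessible in all regions as soon as this call returns successfully.
  write = SimpleStatement(
      "UPDATE users SET email = %s WHERE id = %s",
      consistency_level=ConsistencyLevel.EACH_QUORUM,
  )
  session.execute(write, ("alice@example.com", 42))

  # For LWTs, the Paxos phase honors the serial consistency level; SERIAL
  # coordinates the transaction across all regions rather than a single one.
  lwt = SimpleStatement(
      "INSERT INTO users (id, email) VALUES (%s, %s) IF NOT EXISTS",
      serial_consistency_level=ConsistencyLevel.SERIAL,
  )
  session.execute(lwt, (43, "bob@example.com"))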

For all other consistency levels, data might not be immediately accessible. Full replication normally completes within a few minutes. However, it can take longer, possibly spanning one or more days, depending on factors such as the workload volume, the number of regions, data repair operations, and network resources.

For more, see the FAQs in this topic.

Data sovereignty

Astra DB serverless replicates all data in a database to all of that database's regions. By contrast, multiple keyspaces in Apache Cassandra® and DataStax Enterprise (DSE) allow a database to replicate some tables to only a subset of regions. To achieve the same behavior with Astra DB, create a separate database that adheres to the necessary region restrictions. Your client application then needs a separate connection for the additional database and must send each query to the appropriate connection, depending on the table being queried.
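As a rough sketch of that routing pattern with the DataStax Python driver, where the bundle paths, tokens, and table names are hypothetical:

  from cassandra.auth import PlainTextAuthProvider
  from cassandra.cluster import Cluster

  def connect(bundle_path, token):
      # Each Astra DB database has its own secure connect bundle and session.
      cluster = Cluster(
          cloud={"secure_connect_bundle": bundle_path},
          auth_provider=PlainTextAuthProvider("token", token),
      )
      return cluster.connect()

  # One database replicated worldwide, one restricted to EU regions (placeholders).
  global_session = connect("/path/to/secure-connect-global.zip", "AstraCS:...")
  eu_session = connect("/path/to/secure-connect-eu.zip", "AstraCS:...")

  # Route each query to the session that owns the table being queried.
  SESSIONS_BY_TABLE = {
      "catalog.products": global_session,
      "eu_data.profiles": eu_session,
  }

  def execute(table, query, params=None):
      return SESSIONS_BY_TABLE[table].execute(query, params)

  rows = execute(
      "eu_data.profiles",
      "SELECT * FROM eu_data.profiles WHERE id = %s",
      (42,),
  )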

Classic databases

When adding multiple regions to your database, you can use each region only once; you cannot add the same region to the same database more than one time.

Adding a region to your classic database

  1. Open a browser, navigate to DataStax Astra, and log in.

  2. On the Dashboard page, select the database name to access the Overview page for your selected database.

  3. Select Add Region.

  4. Select the region you want to add from the Add Region menu.

  5. Select Add.

  6. Confirm you want to add the region by selecting Confirm.

You’ll see a screen confirming the new datacenter region is being added. Once you add a region, a maintenance period starts that might take up to 30 minutes. Your database will have limited availability during this window. Some actions, such as viewing database connection details, might be unavailable until maintenance is finished.
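If you manage databases with the DevOps API, you can wait out this maintenance window by polling the database status. A minimal sketch, assuming the requests library and a placeholder database ID and token; see the DevOps REST API v2 reference for the authoritative endpoint details:

  import time

  import requests

  ASTRA_TOKEN = "AstraCS:..."        # placeholder application token
  DATABASE_ID = "your-database-id"   # placeholder database ID

  def wait_until_active(poll_seconds=30):
      # GET /v2/databases/{databaseID} returns the database record,
      # including its current status (for example, MAINTENANCE or ACTIVE).
      url = f"https://api.astra.datastax.com/v2/databases/{DATABASE_ID}"
      headers = {"Authorization": f"Bearer {ASTRA_TOKEN}"}
      while True:
          status = requests.get(url, headers=headers).json()["status"]
          print(f"Database status: {status}")
          if status == "ACTIVE":
              return
          time.sleep(poll_seconds)

  wait_until_active()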

Removing a region from your database

  1. Open a browser, navigate to DataStax Astra, and log in.

  2. On the Dashboard page, select the database name to access the Overview page for your selected database.

  3. Select Remove Region from the overflow menu for the region you want to remove.

  4. Enter your datacenter ID, which is provided in the prompt.

  5. Select Remove region.

Removing a region is not reversible. Proceed with caution.

You’ll see a screen confirming the datacenter location will be removed. Once you remove a location, a maintenance period starts that might take up to 30 minutes. Your database will have limited availability during this window. Some actions, such as viewing database connection details, might be unavailable until maintenance is finished.
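Regions can also be removed programmatically through the DevOps API's datacenter operations (see Managing multiple regions). A rough sketch, assuming that API's datacenter termination endpoint and placeholder IDs and token; as in the console, termination is not reversible:

  import requests

  ASTRA_TOKEN = "AstraCS:..."             # placeholder application token
  DATABASE_ID = "your-database-id"        # placeholder database ID
  DATACENTER_ID = "your-datacenter-id"    # the ID shown in the removal prompt

  # Ask the DevOps API to terminate (remove) the region's datacenter.
  resp = requests.post(
      f"https://api.astra.datastax.com/v2/databases/{DATABASE_ID}"
      f"/datacenters/{DATACENTER_ID}/terminate",
      headers={"Authorization": f"Bearer {ASTRA_TOKEN}"},
  )
  resp.raise_for_status()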

What’s next?

See additional database management topics:

  • Manage multiple keyspaces
  • Terminate your database
