Introduction to Zero Downtime Migration

Enterprises today depend on the ability to reliably migrate mission-critical client applications and data to cloud environments with zero downtime.

At DataStax, we’ve developed a set of thoroughly tested self-service tools, automation scripts, examples, and documented procedures that walk you through well-defined migration phases.

We call this product suite DataStax Zero Downtime Migration (ZDM).

ZDM provides a simple and reliable way for you to migrate applications from any CQL-based cluster (Apache Cassandra®, DataStax Enterprise (DSE), Astra DB, or any other CQL-based database) to any other CQL-based cluster, with no interruption of service to your client applications or data.

  • You can move your application to Astra DB, DSE, or Cassandra with no downtime and with minimal configuration changes.

  • Your clusters are kept in sync at all times by dual-write logic.

  • You can roll back at any point, for complete peace of mind.

This suite of tools allows for zero downtime migration only if your database meets the minimum requirements. If your database does not meet these requirements, you can complete the migration from Origin to Target, but downtime might be necessary to finish the migration.

The Zero Downtime Migration process requires you to be able to perform rolling restarts of your client applications during the migration.

This is standard practice for client applications that are deployed over multiple instances and is a widely used approach to roll out releases and configuration changes.

Supported releases

Overall, you can use ZDM Proxy to migrate:

  • From: Any Cassandra 2.1.6 or higher release, or from any DSE 4.7.1 or higher release

  • To: Any equivalent or higher release of Cassandra, or to any equivalent or higher release of DSE, or to Astra DB
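
If you’re not sure which release your existing cluster is running, you can check it with a CQL query before planning the migration. Here is a minimal sketch using the Python driver; the contact point is a placeholder for one of your own nodes.

```python
from cassandra.cluster import Cluster

# Placeholder contact point; substitute one of your Origin nodes.
cluster = Cluster(["origin-node-1"])
session = cluster.connect()

# system.local reports the node's Cassandra release. On DSE nodes,
# system.local also exposes a dse_version column.
row = session.execute("SELECT release_version FROM system.local").one()
print("Cassandra release:", row.release_version)

cluster.shutdown()
```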

Migration scenarios

There are many reasons why you may decide to migrate your data and client applications from one cluster to another, for example:

  • Moving to a different type of CQL database, for example an on-demand, cloud-based offering such as Astra DB.

  • Upgrading a cluster to a newer version, or newer infrastructure, in as little as one step while leaving your existing cluster untouched throughout the process.

  • Moving one or more client applications out of a shared cluster and onto a dedicated one, in order to manage and configure each cluster independently.

  • Consolidating client applications, which may currently be running on separate clusters, onto a shared one in order to reduce overall database footprint and maintenance overhead.

Here are just a few examples of migration scenarios that are supported when moving from one type of CQL-based database to another:

  • From an existing self-managed Cassandra or DSE cluster to cloud-native Astra DB. For example:

    • Cassandra 2.1.6+, 3.11.x, 4.0.x, or 4.1.x to Astra DB

    • DSE 4.7.1+, 4.8.x, 5.1.x, or 6.8.x to Astra DB

  • From an existing Cassandra or DSE cluster to another Cassandra or DSE cluster. For example:

    • Cassandra 2.1.6+ or 3.11.x to Cassandra 4.0.x or 4.1.x

    • DSE 4.7.1+, 4.8.x, or 5.1.x to DSE 6.8.x

    • Cassandra 2.1.6+, 3.11.x, 4.0.x, or 4.1.x to DSE 6.8.x

    • DSE 4.7.1+ or 4.8.x to Cassandra 4.0.x or 4.1.x

  • From Astra DB Classic to Astra DB Serverless

  • From any CQL-based database type/version to the equivalent CQL-based database type/version.

Migration phases

First, a couple of key terms used throughout the Zero Downtime Migration documentation and software components:

  • Origin: This cluster is your existing Cassandra-based environment, whether it’s open-source Apache Cassandra, DSE, or Astra DB Classic.

  • Target: This cluster is the new environment to which you want to migrate client applications and data.

For additional terms, see the glossary.

Your migration project proceeds through a sequence of phases, which matches the structure of this ZDM documentation.

Migration phases from start to finish

Before we walk through illustrations of each phase, let’s look at a pre-migration, high-level view. At this point, your client applications are performing read/write operations with an existing CQL-compatible database: Apache Cassandra, DSE, or Astra DB.

Diagram shows existing CQL-compatible environment before migration starts.

Before your migration begins, you’ll need to satisfy prerequisites, prepare your environment, and set up the recommended infrastructure.

Phase 1: Deploy ZDM Proxy and connect client applications

Let’s look at Phase 1 of the migration. We’ll deploy the ZDM Proxy instances and connect client applications to the proxies. This step activates the dual-write logic. Writes will be "bifurcated" (sent to both Origin and Target), while reads will be executed on Origin only.

Phase 1 diagram shows deployed ZDM Proxy instances
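
For most client applications, the change needed in this phase is limited to pointing the driver at the ZDM Proxy instances instead of Origin, followed by a rolling restart. Below is a minimal sketch using the Python driver; the proxy hostnames, listen port, and credentials are placeholders for your own deployment’s values.

```python
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Before Phase 1 the application pointed at Origin nodes; after the rolling
# restart it points at the ZDM Proxy instances instead. All values below are
# placeholders for your own deployment.
zdm_proxy_contact_points = ["zdm-proxy-1", "zdm-proxy-2", "zdm-proxy-3"]

cluster = Cluster(
    contact_points=zdm_proxy_contact_points,
    port=9042,  # the listen port configured on your ZDM Proxy instances
    auth_provider=PlainTextAuthProvider("app_user", "app_password"),
)
session = cluster.connect("my_keyspace")
# From here on, every write is bifurcated to Origin and Target by the proxy,
# and every read is served from Origin, transparently to this application.
```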

Phase 2: Migrate data

In this phase, you migrate existing data using Cassandra Data Migrator and/or DSBulk Migrator, and validate that the migrated data is correct, all while continuing to perform dual writes.

Phase 2 diagram shows using tools to migrate data from Origin to Target.
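
As a rough sanity check after the data migration, you can compare the two clusters directly. The sketch below (Python driver, placeholder hostnames, keyspace, and table) compares row counts; a full-table COUNT(*) is only practical for small tables, so treat it as an illustrative spot check rather than a validation strategy for large datasets, where the migration tools’ own validation options are more appropriate.

```python
from cassandra.cluster import Cluster

# Placeholder contact points, keyspace, and table; substitute your own.
origin = Cluster(["origin-node-1"]).connect("my_keyspace")
target = Cluster(["target-node-1"]).connect("my_keyspace")

# COUNT(*) scans the whole table, so use it only on small tables.
origin_count = origin.execute("SELECT count(*) FROM my_table").one()[0]
target_count = target.execute("SELECT count(*) FROM my_table").one()[0]

print(f"origin={origin_count} target={target_count} "
      f"match={origin_count == target_count}")
```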

Phase 3: Async dual reads

In this phase, you can optionally enable asynchronous dual reads. The idea is to test performance and verify that Target can handle your application’s live request load before cutting over from Origin to Target.

Phase 3 diagram shows optional step enabling async dual reads to test performance of Target.
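
The asynchronous dual-read behavior is handled entirely inside the ZDM Proxy, so no application change is needed. Purely to illustrate the pattern, the sketch below expresses the equivalent logic in client terms with the Python driver: the read that matters is still answered by the primary cluster, while the same read is fired at the secondary asynchronously and its result discarded.

```python
from cassandra.cluster import Cluster

# Illustration only: the ZDM Proxy performs this internally when async
# dual reads are enabled. Hostnames and schema are placeholders.
origin = Cluster(["origin-node-1"]).connect("my_keyspace")   # primary at this stage
target = Cluster(["target-node-1"]).connect("my_keyspace")   # secondary

QUERY = "SELECT * FROM my_table WHERE id = %s"

def read(key):
    # Fire the same read at the secondary; its rows are discarded and any
    # failure is only logged, so it never affects the caller.
    future = target.execute_async(QUERY, (key,))
    future.add_callbacks(
        lambda rows: None,
        lambda exc: print(f"secondary read failed: {exc}"),
    )
    # The caller's result still comes from the primary cluster.
    return origin.execute(QUERY, (key,))
```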

Phase 4: Route reads to Target

In this phase, the read routing on the ZDM Proxy is switched to Target so that all reads are executed on it, while writes are still sent to both clusters. In other words, Target becomes the primary cluster.

Phase 4 diagram shows read routing on ZDM Proxy was switched to Target.

Phase 5: Connect directly to Target

In this phase, you’ll move your client applications off the ZDM Proxy and connect the apps directly to Target. Once that happens, the migration is complete.

Phase 5 diagram shows apps no longer using proxy and instead connected directly to Target.
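
If Target is Astra DB, connecting directly typically means switching the driver from contact points to the database’s Secure Connect Bundle and an application token. A minimal Python-driver sketch with placeholder paths and credentials is shown below; a self-managed Cassandra or DSE Target would instead use the Target cluster’s own contact points.

```python
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: point these at your own Secure Connect Bundle and token.
cloud_config = {"secure_connect_bundle": "/path/to/secure-connect-database.zip"}
auth_provider = PlainTextAuthProvider("token", "AstraCS:your-application-token")

cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect("my_keyspace")

# The ZDM Proxy and Origin are no longer in the request path;
# all reads and writes now go straight to Target.
```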

A fun way to learn: Zero Downtime Migration Interactive Lab

We’ve built a hands-on companion to this ZDM documentation: the Zero Downtime Migration Interactive Lab, available here:

https://www.datastax.com/dev/zdm

  • All you need is a browser and a GitHub account.

  • There’s nothing to install for the lab, which opens in a pre-configured GitPod environment.

  • You’ll learn about a full migration without leaving your browser!

The lab runs in all major browsers except Safari. For more details, see the lab’s start page.

We encourage you to explore this free, hands-on interactive lab from DataStax Academy. It’s an excellent, detailed view of the migration process. The lab describes and demonstrates all the steps and automation required to prepare for, and complete, a migration from any Cassandra, DSE, or Astra DB database to any other Cassandra, DSE, or Astra DB database.

The interactive lab spans the pre-migration prerequisites and each of the five key migration phases illustrated above.
