Introduction to Zero Downtime Migration

Zero Downtime Migration (ZDM) provides a simple and reliable way to migrate applications from one CQL-based cluster to another with little or no downtime and minimal interruption of service to your client applications.

You can use ZDM Proxy to migrate from a cluster running Apache Cassandra® version 2.1.6 and later or DataStax Enterprise (DSE) version 4.7.1 and later.

You can migrate to Astra DB or to a cluster running the same or a later version of Cassandra or DSE.

ZDM keeps your clusters in sync at all times through its dual-write logic, and you can roll back at any point.

  • True zero downtime migration is only possible if your database meets the minimum requirements. If your database doesn’t meet these requirements, you can still complete the migration, but downtime might be necessary to finish the migration.

  • The Zero Downtime Migration process requires you to be able to perform rolling restarts of your client applications during the migration. This is standard practice for client applications that are deployed over multiple instances, and it is a widely used approach to roll out releases and configuration changes.

Migration scenarios

There are many reasons why you may decide to migrate your data and client applications from one cluster to another, for example:

  • Moving to a different type of CQL database, for example an on-demand, cloud-based offering such as Astra DB.

  • Upgrading a cluster to a newer version, or newer infrastructure, in as little as one step while leaving your existing cluster untouched throughout the process.

  • Moving one or more client applications out of a shared cluster and onto a dedicated one, in order to manage and configure each cluster independently.

  • Consolidating client applications, which may be currently running on separate clusters, onto a shared one in order to reduce overall database footprint and maintenance overhead.

Here are just a few examples of migration scenarios that are supported when moving from one type of CQL-based database to another:

  • From an existing self-managed Apache Cassandra® or DSE cluster to cloud-native Astra DB. For example:

    • Apache Cassandra 2.1.6+, 3.11.x, 4.0.x, or 4.1.x to Astra DB.

    • DSE 4.7.1+, 4.8.x, 5.1.x, 6.7.x, or 6.8.x to Astra DB.

  • From an existing Cassandra or DSE cluster to another Cassandra or DSE cluster. For example:

    • Cassandra 2.1.6+ or 3.11.x to Cassandra 4.0.x or 4.1.x.

    • DSE 4.7.1+, 4.8.x, 5.1.x, or 6.7.x to DSE 6.8.x.

    • Cassandra 2.1.6+, 3.11.x, 4.0.x, or 4.1.x to DSE 6.8.x.

    • DSE 4.7.1+ or 4.8.x to Cassandra 4.0.x or 4.1.x.

  • From Astra DB Classic to Astra DB Serverless.

  • From any CQL-based database type/version to the equivalent CQL-based database type/version.

Migration phases

A migration project includes preparation for the migration and five migration phases.

The following sections describe the major events in each phase and how your client applications perform read and write operations on your origin and target clusters during each phase.

The origin is your existing Cassandra-based environment, which can be Apache Cassandra, DSE, or Astra DB. The target is the new Cassandra-based environment to which you want to migrate your data and client applications.

Pre-migration client application operations

Here’s a high-level look at the environment before the migration. At this point, your client applications perform read/write operations directly against an existing CQL-compatible database such as Apache Cassandra, DSE, or Astra DB.


For the migration to succeed, the origin and target clusters must have matching schemas.

A CQL statement that your client application sends to ZDM Proxy must be able to succeed on both the origin and target clusters.

This means that any keyspace that your client application uses must exist on both the origin and target clusters with the same name. The table names, column names, and data types must also match. For more information, see Schema/keyspace compatibility.
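As a quick sanity check before you begin, you can compare the schema metadata that both clusters report. The following is a minimal sketch using the Python cassandra-driver; the contact points and keyspace name are placeholders, and it assumes Cassandra 3.0 or later on both sides, where schema metadata lives in the system_schema keyspace (an Astra DB cluster would instead be connected through the driver’s secure connect bundle, as shown in Phase 5).

```python
# Minimal schema spot check: compare table and column metadata between
# the origin and target clusters. Contact points and keyspace name are
# placeholders; assumes Cassandra 3.0+ (system_schema) on both sides.
from cassandra.cluster import Cluster

KEYSPACE = "my_keyspace"  # placeholder: a keyspace your application uses

def schema_snapshot(session, keyspace):
    """Return {table: {column: cql_type}} for one cluster."""
    rows = session.execute(
        "SELECT table_name, column_name, type "
        "FROM system_schema.columns WHERE keyspace_name = %s",
        (keyspace,),
    )
    snapshot = {}
    for row in rows:
        snapshot.setdefault(row.table_name, {})[row.column_name] = row.type
    return snapshot

origin_session = Cluster(["origin-node-1"]).connect()
target_session = Cluster(["target-node-1"]).connect()

if schema_snapshot(origin_session, KEYSPACE) == schema_snapshot(target_session, KEYSPACE):
    print("Keyspace schemas match.")
else:
    print("Schema mismatch: reconcile before starting the migration.")
```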

Phase 1: Deploy ZDM Proxy and connect client applications

In this first phase, deploy the ZDM Proxy instances and connect client applications to the proxies. This phase activates the dual-write logic. Writes are bifurcated (sent to both the origin and target), while reads are executed on the origin only.

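From the client application’s point of view, connecting to ZDM Proxy is usually just a contact-point change, because the proxy speaks the CQL protocol. Here is a minimal sketch with the Python cassandra-driver, in which the proxy hostnames, port, credentials, keyspace, and table are all placeholders for your own deployment:

```python
# Phase 1: point the client at the ZDM Proxy instances instead of the
# origin cluster. All names below are placeholders.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cluster = Cluster(
    contact_points=["zdm-proxy-0", "zdm-proxy-1", "zdm-proxy-2"],
    port=9042,  # the port your proxies listen on
    auth_provider=PlainTextAuthProvider("app_user", "app_password"),
)
session = cluster.connect("my_keyspace")

# Writes issued here are bifurcated by the proxy to both clusters;
# reads are served from the origin.
session.execute(
    "INSERT INTO users (id, name) VALUES (%s, %s)", (123, "alice")
)
```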

Phase 2: Migrate data

In this phase, migrate existing data using Cassandra Data Migrator or DSBulk Loader. Validate that the migrated data is correct, while continuing to perform dual writes.

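Cassandra Data Migrator provides its own thorough validation mode. Purely as an illustrative supplement, a rough spot check could compare row counts through the driver. Note that COUNT(*) scans the whole table, so a sketch like this (with placeholder contact points and table name) is only reasonable for small tables:

```python
# Rough post-migration spot check: compare row counts on origin and
# target for a small table. COUNT(*) is a full scan, so prefer
# Cassandra Data Migrator's validation mode for real verification.
from cassandra.cluster import Cluster

TABLE = "my_keyspace.users"  # placeholder table name

def count_rows(contact_point):
    session = Cluster([contact_point]).connect()
    return session.execute(f"SELECT COUNT(*) FROM {TABLE}").one().count

origin_count = count_rows("origin-node-1")
target_count = count_rows("target-node-1")
status = "OK" if origin_count == target_count else "MISMATCH"
print(f"origin={origin_count} target={target_count} {status}")
```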

Phase 3: Enable asynchronous dual reads

In this phase, you can optionally enable asynchronous dual reads. The idea is to test performance and verify that the target cluster can handle your application’s live request load before cutting over from the origin to the target permanently.

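The ZDM Proxy implements asynchronous dual reads internally; the toy sketch below merely illustrates the idea. The origin’s result is returned to the caller as usual, while the same read is fired at the target in the background so that only its latency and error rate are observed:

```python
# Conceptual sketch of asynchronous dual reads (the ZDM Proxy does this
# internally; this toy version is for illustration only).
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def dual_read(origin_session, target_session, query, params):
    def shadow_read():
        # The target's result is discarded; only timing and errors matter.
        start = time.monotonic()
        try:
            target_session.execute(query, params)
            print(f"target read ok in {time.monotonic() - start:.3f}s")
        except Exception as exc:
            print(f"target read failed: {exc}")

    executor.submit(shadow_read)
    # The caller only ever sees the origin's response.
    return origin_session.execute(query, params)
```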

Phase 4: Route reads to the target cluster

In this phase, read routing on the ZDM Proxy is switched to the target cluster so that all reads are executed on the target. Writes are still sent to both clusters.

At this point, the target becomes the primary cluster.

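In ZDM Proxy terms, this is a proxy configuration change rolled out across the proxy instances, not an application change. Continuing the conceptual sketch from Phase 3, the switch amounts to serving synchronous reads from the target instead of the origin:

```python
# Conceptual continuation of the Phase 3 sketch: after Phase 4, the
# target serves the synchronous reads. The real mechanism is a ZDM
# Proxy configuration change, not application code.
PRIMARY = "target"  # was "origin" before Phase 4

def routed_read(origin_session, target_session, query, params):
    primary = target_session if PRIMARY == "target" else origin_session
    return primary.execute(query, params)
```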

Phase 5: Connect directly to the target cluster

In this phase, move your client applications off the ZDM Proxy and connect them directly to the target cluster.

Once this happens, the migration is complete, and you now exclusively use the target cluster.

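For example, if the target is Astra DB, the final change is to reconnect the client through the driver’s secure connect bundle rather than the proxy contact points. In this minimal sketch, the bundle path, application token, and keyspace are placeholders:

```python
# Phase 5: connect the client directly to the target cluster, shown
# here for an Astra DB target. Bundle path and token are placeholders.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cluster = Cluster(
    cloud={"secure_connect_bundle": "/path/to/secure-connect-db.zip"},
    auth_provider=PlainTextAuthProvider("token", "AstraCS:..."),
)
session = cluster.connect("my_keyspace")

# All traffic now goes straight to the target; the ZDM Proxy instances
# and the origin cluster can be decommissioned.
print(session.execute("SELECT release_version FROM system.local").one())
```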

Zero Downtime Migration interactive lab

As a companion to the ZDM documentation, you can use the Zero Downtime Migration interactive lab to try the entire migration process in a demo environment.

The lab requires only a GitHub account and a supported browser; all major browsers are supported except Safari.

You don’t need to install anything because the lab uses a pre-configured Gitpod environment.

This lab provides an interactive, detailed walkthrough of the migration process, including pre-migration preparation and each of the five migration phases. The lab describes and demonstrates all steps and automation required to prepare for and complete a migration from any supported origin database to any supported target database.
