Feasibility checks

Before starting your migration, review the following considerations to ensure that your client application workload and origin cluster are suitable for this Zero Downtime Migration process.

True zero downtime migration is only possible if your database meets the minimum requirements described on this page. If your database doesn’t meet these requirements, you can still complete the migration, but downtime might be necessary to finish the migration.

Cassandra Native Protocol version and cluster version support

ZDM Proxy supports protocol versions v3, v4, DSE_V1, and DSE_V2.

ZDM Proxy doesn't support protocol version v5. If a client requests v5, the proxy handles protocol negotiation so that the client application properly downgrades to v4. This means that any client application using a recent driver that supports protocol version `v5` can be migrated using ZDM Proxy, as long as the application doesn't use v5-specific functionality.
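If you prefer not to rely on negotiation, you can pin the protocol version in the driver yourself. The following is a minimal sketch with the DataStax Java driver 4.x; the contact point address and datacenter name are placeholders for your own values.

```java
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class PinnedProtocolSession {
  public static CqlSession connect() {
    // Pin the native protocol to v4 so the driver never attempts v5 against
    // ZDM Proxy. The contact point and datacenter name are placeholders.
    DriverConfigLoader loader = DriverConfigLoader.programmatic()
        .withString(DefaultDriverOption.PROTOCOL_VERSION, "V4")
        .build();
    return CqlSession.builder()
        .addContactPoint(new InetSocketAddress("zdm-proxy-host", 9042))
        .withLocalDatacenter("dc1")
        .withConfigLoader(loader)
        .build();
  }
}
```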

Thrift is not supported by ZDM Proxy

If you are using a very old driver or cluster version that only supports Thrift, you need to change your client application to use CQL and potentially upgrade your cluster before starting the migration process.

Supported cluster versions and migration paths

You can use ZDM Proxy to support migrations from Apache Cassandra®, DataStax Enterprise (DSE), Hyper-Converged Database (HCD), Astra DB, and other Cassandra-based databases to any other Cassandra-based database of the equivalent type or version:

Compatible origin clusters

Migrate from one of the following:

  • DataStax Enterprise (DSE) version 4.7.1 and later

  • Apache Cassandra® version 2.1.6 and later

  • Other Cassandra-based databases that are based on a compatible Cassandra version, such as Astra DB Classic, ScyllaDB, and Yugabyte.

Compatible target clusters

Migrate to a cluster of any of the types listed above, such as Apache Cassandra, DSE, HCD, or Astra DB, as long as it is based on a compatible Cassandra version.

Before you begin the migration process, test directly connecting your client application to your target cluster, without ZDM Proxy. This ensures that you know the connection will work when you disconnect ZDM Proxy at the end of the migration.
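For example, if your target is Astra DB, a connectivity test could look like the following sketch with the DataStax Java driver 4.x; the secure connect bundle path and credentials are placeholders.

```java
import java.nio.file.Paths;

import com.datastax.oss.driver.api.core.CqlSession;

public class TargetConnectionCheck {
  public static void main(String[] args) {
    // Connect straight to the target (an Astra DB database in this sketch)
    // without going through ZDM Proxy. The bundle path and credentials are
    // placeholders.
    try (CqlSession session = CqlSession.builder()
        .withCloudSecureConnectBundle(Paths.get("/path/to/secure-connect-db.zip"))
        .withAuthCredentials("clientId", "clientSecret")
        .build()) {
      // A trivial query confirms connectivity and authentication.
      System.out.println(session.execute("SELECT cluster_name FROM system.local").one());
    }
  }
}
```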

Schema/keyspace compatibility

ZDM Proxy does not modify or transform CQL statements, except for the optional feature that replaces now() function calls with timestamp literals. For more information about this feature, see Server-side non-deterministic functions in the primary key.

A CQL statement that your client application sends to ZDM Proxy must be able to succeed on both clusters. This means that any keyspace that your client application uses must exist on both the origin and target clusters with the same name (although they can have different replication strategies and durable writes settings). Table names must also match.

The schema doesn’t have to be an exact match as long as the CQL statements can be executed successfully on both clusters. For example, if a table has 10 columns but your client application only uses 5 of them, you can create the table on the target with just those 5 columns.

You can also change the primary key in some cases. For example, if your compound primary key is PRIMARY KEY (A, B), and your CQL statements always provide values for both the A and B columns, then you can declare PRIMARY KEY (B, A) when creating the schema on the target, because your CQL statements will still run successfully.
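As an illustration, the following hypothetical DDL shows an origin table with 10 columns alongside a target table that keeps only the 5 columns the application uses and reorders the key components:

```cql
-- Origin table: 10 columns, PRIMARY KEY (a, b).
CREATE TABLE app.orders (
  a uuid,
  b timeuuid,
  c text, d text, e int, f int, g boolean, h decimal, i timestamp, j blob,
  PRIMARY KEY (a, b)
);

-- Target table: only the 5 columns the application actually uses, with the
-- key components reordered. This is safe only if every statement the
-- application sends supplies values for both a and b.
CREATE TABLE app.orders (
  a uuid,
  b timeuuid,
  c text, e int, i timestamp,
  PRIMARY KEY (b, a)
);
```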

Considerations for Astra DB migrations

Astra DB implements guardrails and sets limits to ensure good practices, foster availability, and promote optimal configurations for your databases. Check the list of guardrails and limits to make sure that your application workload can be successful within these limits.

If you need to change the application or data model to ensure that your workload can run successfully in Astra DB, make these changes before you start the migration process.

DataStax also highly recommends running tests and benchmarks against a direct connection to Astra DB before the migration so that you don't encounter unexpected issues during the migration process.

Read-only applications

Read-only applications require special handling only if you are using ZDM Proxy versions older than 2.1.0.

If you have an existing ZDM Proxy deployment, you can check your ZDM Proxy version.

For upgrade instructions, see Upgrade the proxy version.

Versions older than 2.1.0

If a client application only sends SELECT statements over a database connection, you may find that ZDM Proxy periodically terminates these read-only connections, which can result in request errors if the driver is not configured to retry requests under these conditions.

This happens because Astra DB terminates idle connections after some inactivity period (usually around 10 minutes). If Astra DB is your target, and a client connection is only sending read requests to ZDM Proxy, then the Astra DB connection that is paired to that client connection will remain idle and will be eventually terminated.

A potential workaround is to not connect these read-only client applications to ZDM Proxy. If you do this, you must ensure that these client applications switch their reads to the target at some point after all the data has been migrated and all validation and reconciliation has completed.

Another workaround is to implement a mechanism in your client application that periodically creates a new session to avoid the Astra DB inactivity timeout. Alternatively, the application can periodically send a harmless write request so that the Astra DB connection never goes idle.
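A minimal sketch of the second approach with the DataStax Java driver 4.x follows. The single-row keepalive table and the 5-minute interval are assumptions; the interval only needs to be shorter than the roughly 10-minute Astra DB inactivity timeout.

```java
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class ConnectionKeepAlive {
  // Sends a tiny write every 5 minutes. Because ZDM Proxy forwards writes to
  // both clusters, this keeps the Astra DB side of the paired connection
  // active. The keepalive table (app.zdm_keepalive) and the interval are
  // assumptions for this sketch.
  public static ScheduledExecutorService start(CqlSession session) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(
        () -> session.execute(SimpleStatement.newInstance(
            "INSERT INTO app.zdm_keepalive (id, updated_at) VALUES (0, ?)",
            Instant.now())),
        5, 5, TimeUnit.MINUTES);
    return scheduler;
  }
}
```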

Version 2.1.0 and newer

This issue is solved in ZDM Proxy version 2.1.0, which introduces periodic heartbeats to keep idle cluster connections alive. DataStax strongly recommends using version 2.1.0 or newer to benefit from this improvement, especially if you have a read-only workload.

Lightweight Transactions and other non-idempotent operations

Examples of non-idempotent operations in CQL are:

  • Lightweight Transactions (LWTs)

  • Counter updates

  • Collection updates with += and -= operators

  • Non-deterministic functions like now() and uuid()

For more information on how to handle non-deterministic functions, see Server-side non-deterministic functions in the primary key.
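To make these concrete, the following hypothetical CQL statements are all non-idempotent; replaying any one of them, or evaluating it against two clusters whose data differs, can produce different results:

```cql
-- Lightweight transaction: the outcome depends on the existing data,
-- which can differ between the two clusters during the migration.
INSERT INTO app.users (id, email) VALUES (?, ?) IF NOT EXISTS;

-- Counter update: replaying it on a retry double-counts.
UPDATE app.page_views SET views = views + 1 WHERE page = ?;

-- List append (equivalent to tags += ['retry']): replaying it can
-- append the element twice.
UPDATE app.events SET tags = tags + ['retry'] WHERE id = ?;

-- Server-side non-deterministic functions: each cluster computes
-- its own values.
INSERT INTO app.audit (id, at) VALUES (uuid(), toTimestamp(now()));
```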

Given that there are two separate clusters involved, the state of each cluster may be different. For conditional writes, this may create a divergent state for a time.

If non-idempotent operations are used, DataStax recommends adding a reconciliation phase to your migration before and after Phase 4, where you switch reads to the target.

For details about using the Cassandra Data Migrator, see Migrate and validate data.

Some application workloads can tolerate inconsistent data in certain cases, especially for counter values. In those cases, you may not need any special handling for non-idempotent operations.

Lightweight transactions and the applied flag

ZDM Proxy handles LWTs as write operations. The proxy sends the LWT to the origin and target clusters concurrently, and then waits for a response from both. ZDM Proxy returns a success status to the client if both the origin and target send successful acknowledgements, and a failure status if either one does not.

What sets LWTs apart from regular writes is that they are conditional. In other words, an LWT can appear to have been successful (its execution worked as expected), but the change is applied only if the LWT's condition was met, and whether the condition was met depends on the state of the data on the cluster. In a migration, the clusters are not in sync until all existing data has been imported into the target. Up to that point, an LWT's condition can be evaluated differently on each side, leading to a different outcome even though the LWT was technically successful on both sides.

The response that a cluster sends after executing an LWT includes a flag called applied. This flag tells the client whether the LWT update was actually applied; the result depends on the condition, which in turn depends on the state of the data. When ZDM Proxy receives responses from both the origin and target, each response has its own applied flag.

However, ZDM Proxy can only return a single response to the client. Recall that the client has no knowledge that there are two clusters behind the proxy. Therefore, ZDM Proxy returns the applied flag from the cluster that is currently used as primary. If your client has logic that depends on the applied flag, be aware that during the migration, you will only have visibility of the flag coming from the primary cluster; that is, the cluster to which synchronous reads are routed.

To reiterate, ZDM Proxy only returns the applied value from the primary cluster, which is the cluster from which read results are returned to the client application. By default, this is the origin cluster. When you set the target cluster as your primary cluster, the applied value returned to the client application comes from the target cluster.
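With the DataStax Java driver 4.x, this flag is exposed through ResultSet.wasApplied(). A minimal sketch follows; the table and column names are hypothetical.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class LwtAppliedCheck {
  // During the migration, wasApplied() reflects only the primary cluster's
  // evaluation of the LWT condition; the secondary cluster may have
  // evaluated it differently. Table and column names are hypothetical.
  static boolean insertIfAbsent(CqlSession session, int id, String email) {
    ResultSet rs = session.execute(SimpleStatement.newInstance(
        "INSERT INTO app.users (id, email) VALUES (?, ?) IF NOT EXISTS", id, email));
    return rs.wasApplied();
  }
}
```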

Advanced workloads (DSE)

Graph

ZDM Proxy handles all DSE Graph requests as write requests, even if the traversals are read-only. There is no special handling for these requests, so you need to review the traversals that your client application sends and determine whether they are idempotent. If the traversals are non-idempotent, the reconciliation step is needed.

Keep in mind that the recommended tools for data migration and reconciliation are CQL-based, so they can be used for migrations where the origin cluster is a database that uses the new DSE Graph engine released with DSE 6.8, but not for the old Graph engine that older DSE versions relied on. For more information about non-idempotent operations, see Lightweight Transactions and other non-idempotent operations.

Search

Read-only DSE Search workloads can be moved directly from the origin to the target without involving ZDM Proxy. If your client application uses Search and also issues writes, or if you need the read routing capabilities of ZDM Proxy, you can connect your Search workloads to it as long as you use DataStax-compatible drivers to submit these queries. With this approach, the queries are regular CQL SELECT statements, so ZDM Proxy handles them as regular read requests.
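For example, a Search query expressed as plain CQL might look like the following; the keyspace, table, and field names are hypothetical.

```cql
-- A DSE Search query expressed as plain CQL; ZDM Proxy treats it as a
-- regular read request. Keyspace, table, and field names are hypothetical.
SELECT id, title
FROM catalog.products
WHERE solr_query = '{"q": "title:chair*"}';
```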

If you use the HTTP API, you can either modify your applications to use the CQL API instead or, if acceptable, move those applications directly from the origin to the target when the migration is complete.

Client compression

The binary protocol used by Cassandra, DSE, HCD, and Astra DB supports optional compression of transport-level requests and responses that reduces network traffic at the cost of CPU overhead.

ZDM Proxy doesn’t support protocol compression.

This compression type is disabled by default in DataStax-compatible drivers. If it is enabled in your client application, you must disable it before starting the migration process.
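One way to guarantee that compression is off is to set it explicitly. The following sketch uses the DataStax Java driver 4.x programmatic configuration; the contact point and datacenter name are placeholders.

```java
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class NoCompressionSession {
  public static CqlSession connect() {
    // Explicitly set protocol compression to "none" (the driver's default)
    // so the session is safe to point at ZDM Proxy. Host and datacenter
    // names are placeholders.
    DriverConfigLoader loader = DriverConfigLoader.programmatic()
        .withString(DefaultDriverOption.PROTOCOL_COMPRESSION, "none")
        .build();
    return CqlSession.builder()
        .addContactPoint(new InetSocketAddress("zdm-proxy-host", 9042))
        .withLocalDatacenter("dc1")
        .withConfigLoader(loader)
        .build();
  }
}
```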

This isn’t related to storage compression, which you can configure on specific tables with the compression table property. Storage/table compression doesn’t affect the client application or ZDM Proxy in any way.

Authenticator and Authorizer configuration

ZDM Proxy supports the following cluster authenticator configurations:

  • No authenticator

  • PasswordAuthenticator

  • DseAuthenticator with internal or ldap scheme

ZDM Proxy does not support DseAuthenticator with kerberos scheme.

While the authenticator has to be supported, the authorizer does not affect client applications or ZDM Proxy, so you can use any authorizer configuration on both of your clusters.

The authentication configuration can differ between the origin and target clusters because ZDM Proxy treats them independently.

Server-side non-deterministic functions in the primary key

Statements with functions like now() and uuid() result in data inconsistency between the origin and target clusters because each cluster computes the values independently.

If these functions are used for columns that are not part of the primary key, you may find it acceptable to have different values in the two clusters depending on your application business logic. However, if these columns are part of the primary key, the data migration phase will not be successful as there will be data inconsistencies between the two clusters and they will never be in sync.

ZDM Proxy does not currently support replacing the uuid() function.

ZDM Proxy can compute timestamps and replace now() function calls with those timestamps in CQL statements at the proxy level, ensuring that these parameters have the same value when the statements are sent to both clusters. However, this feature is disabled by default because it might degrade performance, so test it thoroughly before using it in production. This feature currently supports only the now() function. To enable it, set the configuration variable replace_cql_function to true. For more information, see Change a mutable configuration variable.

If the performance is not acceptable when this feature is enabled, or the feature doesn't cover a particular function that your client application uses, then you have to change your client application so that the value is computed locally, at the client application level, before the statement is sent to the database. Most drivers have utility methods that help you compute these values locally. For more information, see your driver's documentation.
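For example, with the DataStax Java driver 4.x you can generate a time-based UUID and a timestamp on the client and bind them as parameters; the table and column names in this sketch are hypothetical.

```java
import java.time.Instant;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import com.datastax.oss.driver.api.core.uuid.Uuids;

public class ClientSideValues {
  // Compute the UUID and the timestamp on the client so that both clusters
  // receive identical values. Table and column names are hypothetical.
  static void insertReading(CqlSession session, double value) {
    SimpleStatement stmt = SimpleStatement.newInstance(
        "INSERT INTO app.readings (id, recorded_at, value) VALUES (?, ?, ?)",
        Uuids.timeBased(),  // client-side replacement for now()
        Instant.now(),      // client-side replacement for toTimestamp(now())
        value);
    session.execute(stmt);
  }
}
```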

Driver retry policy and query idempotence

The ZDM process requires you to perform rolling restarts of your client applications during the migration. This is standard practice for client applications that are deployed over multiple instances, and it is a widely used approach to roll out releases and configuration changes.

As part of the normal migration process, the ZDM Proxy instances have to be restarted between phases to apply configuration changes. From the client application's point of view, this behavior is similar to a DSE or Cassandra cluster going through a rolling restart in a non-migration scenario.

If your application already tolerates rolling restarts of your current cluster, you should see no issues during a rolling restart of ZDM Proxy instances.

To ensure that your client application retries requests when a database connection is closed, check the section of your driver's documentation related to retry policies.

Most DataStax-compatible drivers require a statement to be marked as idempotent in order to retry it in case of a connection error (such as the termination of a database connection). This means that, by default, these drivers treat statements as non-idempotent, and the drivers don’t automatically retry them in the event of a connection error.
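A minimal sketch of marking a write as idempotent with the DataStax Java driver 4.x follows; the table and column names are hypothetical.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class IdempotentWrites {
  // Mark a statement as idempotent so the driver's retry policy is allowed
  // to retry it when a connection closes mid-request, for example during a
  // rolling restart of ZDM Proxy instances. Table and column names are
  // hypothetical.
  static void saveUser(CqlSession session, int id, String email) {
    SimpleStatement stmt = SimpleStatement
        .newInstance("INSERT INTO app.users (id, email) VALUES (?, ?)", id, email)
        .setIdempotent(true);
    session.execute(stmt);
  }
}
```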

For more information, see your driver's documentation on retry policies and query idempotence.
