Migrate and validate data

In Phase 2 of Zero Downtime Migration (ZDM), you migrate data from the origin to the target, and then validate the migrated data.

Diagram: Phase 2 of the ZDM process, where data is migrated and validated.

To move and validate data, you can use a dedicated data migration tool, such as Astra DB Sideloader, Cassandra Data Migrator, or DSBulk Migrator, or you can create your own custom data migration script.

Astra DB Sideloader

Astra DB Sideloader is a service running in Astra DB that imports data from snapshots of your existing Apache Cassandra®-based cluster. This tool is exclusively for migrations that move data to Astra DB.

You can use Astra DB Sideloader alone or with ZDM Proxy.

For more information, see Use Astra DB Sideloader with ZDM Proxy.

Cassandra Data Migrator

You can use Cassandra Data Migrator (CDM) for data migration and validation between Cassandra-based databases. It offers extensive functionality and configuration options to support large and complex migrations as well as post-migration data validation.

You can use CDM by itself, with ZDM Proxy, or for data validation after using another data migration tool.

For more information, see Use Cassandra Data Migrator with ZDM Proxy.

DSBulk Migrator

DSBulk Migrator extends DSBulk Loader with migration-specific commands: migrate-live, generate-script, and generate-ddl.

DSBulk Migrator is best suited to smaller migrations, or to migrations that don't require extensive data validation beyond post-migration row counts.

You can use DSBulk Migrator alone or with ZDM Proxy.

For more information, see Use DSBulk Migrator with ZDM Proxy.

Other data migration processes

Depending on your source and target databases, there might be other ZDM-compatible data migration tools available, or you can write your own custom data migration process with a tool such as Apache Spark™.
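As a rough illustration of what a custom migration pass involves, the following sketch copies rows from the origin to the target with the DataStax Python driver (cassandra-driver). The contact points, keyspace, and table names are placeholders, not values from this guide.

```python
# Minimal sketch of a custom migration pass, assuming the DataStax Python
# driver (cassandra-driver) and placeholder cluster, keyspace, and table names.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

KEYSPACE = "my_keyspace"   # placeholder
TABLE = "my_table"         # placeholder

origin = Cluster(["origin-node-1"]).connect(KEYSPACE)   # placeholder contact point
target = Cluster(["target-node-1"]).connect(KEYSPACE)   # placeholder contact point

# Page through every row on the origin and write it unchanged to the target.
# Keeping column names and types identical preserves the data model, which is
# what lets ZDM Proxy send the same statements to both clusters.
rows = origin.execute(SimpleStatement(f"SELECT * FROM {TABLE}", fetch_size=1000))
insert = None
for row in rows:
    if insert is None:
        columns = row._fields
        markers = ", ".join("?" for _ in columns)
        insert = target.prepare(
            f"INSERT INTO {TABLE} ({', '.join(columns)}) VALUES ({markers})"
        )
    target.execute(insert, tuple(row))

origin.cluster.shutdown()
target.cluster.shutdown()
```

A production script would also need to carry over writetimes and TTLs, handle retries and failures, and parallelize the scan, which is what the dedicated tools described above already do for you.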

To use a data migration tool with ZDM Proxy, it must meet the following requirements:

  • Built-in data validation functionality, or compatibility with another data validation tool such as CDM. This is crucial to a successful migration; a minimal validation sketch follows this list.

  • Preservation of the data model, including column names and data types, so that ZDM Proxy can send the same read and write statements to both databases successfully.

    Migrations that perform significant data transformations might not be compatible with ZDM Proxy. The impact of data transformations depends on your specific data model, database platforms, and the scale of your migration.
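To make the first requirement concrete, the sketch below compares per-table row counts between the origin and target using the DataStax Python driver. The keyspace, table, and contact points are placeholders; note that unrestricted COUNT(*) queries can time out on large tables, which is one reason a purpose-built validator such as CDM is preferable at scale.

```python
# Minimal post-migration validation sketch: compare row counts for one table.
# Keyspace, table, and contact points are placeholders.
from cassandra.cluster import Cluster

KEYSPACE = "my_keyspace"   # placeholder
TABLE = "my_table"         # placeholder

origin = Cluster(["origin-node-1"]).connect(KEYSPACE)   # placeholder contact point
target = Cluster(["target-node-1"]).connect(KEYSPACE)   # placeholder contact point

origin_count = origin.execute(f"SELECT COUNT(*) FROM {TABLE}").one()[0]
target_count = target.execute(f"SELECT COUNT(*) FROM {TABLE}").one()[0]

if origin_count == target_count:
    print(f"{TABLE}: row counts match ({origin_count} rows)")
else:
    print(f"{TABLE}: mismatch, origin={origin_count}, target={target_count}")

origin.cluster.shutdown()
target.cluster.shutdown()
```

Matching row counts are a useful smoke test, but they don't prove that individual cell values match; dedicated tools such as CDM perform row-by-row comparison for that.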
