Migrate to Astra DB Serverless

The migration process includes exporting your data from its original location, uploading or importing it into your new Astra DB databases, and then updating your applications to connect to those databases.

Migrate your data

DataStax offers several options to help migrate your data to Astra DB.

Migrate from DSE, HCD, or Apache Cassandra®

The following tools are designed to migrate Cassandra table data into a Cassandra-compatible cluster, such as Astra DB:

  • Cassandra Data Migrator (CDM): Migrate and validate tables between origin Cassandra clusters and target Astra DB databases, with available logging and reconciliation support.

    You can use CDM alone or in conjunction with Zero Downtime Migration (ZDM).

  • DataStax Bulk Migrator (DSBulk Migrator): An extension of DataStax Bulk Loader (DSBulk) that you can use to read data from a table in your origin database, and then write that data to a table in your target Astra DB database.

    You can use DSBulk Migrator alone or in conjunction with ZDM.

  • DataStax Bulk Loader (DSBulk): An open-source command-line tool that you can use to extract and load CSV and JSON files containing Cassandra table data. You can use DSBulk to bring data from Cassandra, DataStax Enterprise (DSE), or Hyper-Converged Database (HCD) into Astra DB, as well as to move data between collections and tables in Astra DB databases.

For more information about all of these options, see the DataStax data migration documentation.

Migrate from non-Cassandra sources

Because Astra DB is based on Apache Cassandra, it expects data to be in a format that is compatible with Cassandra table schemas.

When migrating from a schemaless source, you can use the Data API to insert documents into Astra DB collections. However, the Data API cannot transform your data if it is incompatible with Data API limits or functionality. For example, if fields exceed the maximum character limit or contain invalid values, the Data API throws an error. You must modify the incompatible data, and then reattempt the insert operation.
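For example, the following sketch inserts exported documents into a collection, assuming the astrapy Python client. The endpoint, token, and collection name are placeholders for your own values, and the collection is assumed to already exist.

```python
# Minimal sketch: insert documents exported from a schemaless source
# into an Astra DB collection with the astrapy client. The endpoint,
# token, and collection name below are placeholders.
from astrapy import DataAPIClient

client = DataAPIClient("AstraCS:YOUR_APPLICATION_TOKEN")
database = client.get_database("https://DATABASE_ID-REGION.apps.astra.datastax.com")

# Assumes the target collection was created beforehand.
collection = database.get_collection("migrated_docs")

documents = [
    {"_id": "user-1", "name": "Alice", "tags": ["beta", "early-adopter"]},
    {"_id": "user-2", "name": "Bob", "preferences": {"theme": "dark"}},
]

result = collection.insert_many(documents)
print(result.inserted_ids)
```

If any document violates a Data API limit, the insert fails with an error; correct the offending documents in your export and retry the operation.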

You can also use techniques like super shredding to flatten, normalize, and map schemaless or semi-structured JSON/CSV data into a Cassandra-compatible fixed schema, and then load the data into Astra DB with DSBulk Loader or other tools. However, super shredding can be complex and cumbersome, depending on the structure (or lack thereof) of the source data. For more information, see Building Data Services with Apache Cassandra.
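As a simplified illustration of the shredding approach, the following sketch flattens nested JSON documents into rows that match an assumed fixed schema, and then writes them to a CSV file that a loader such as DSBulk could ingest. The source fields and target columns are illustrative assumptions, not a prescribed mapping.

```python
# Simplified "shredding" sketch: flatten nested JSON documents into
# rows for a fixed target schema, then write them to CSV for bulk
# loading. The schema orders(order_id, customer, item_sku, item_qty)
# is a hypothetical example.
import csv

TARGET_COLUMNS = ["order_id", "customer", "item_sku", "item_qty"]

source_documents = [
    {"id": "o-100", "customer": {"name": "Alice"},
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]},
    {"id": "o-101", "customer": {"name": "Bob"},
     "items": [{"sku": "C3", "qty": 5}]},
]

def shred(document):
    """Yield one flat row per nested item, keyed by the parent document."""
    for item in document.get("items", []):
        yield {
            "order_id": document["id"],
            "customer": document.get("customer", {}).get("name"),
            "item_sku": item.get("sku"),
            "item_qty": item.get("qty"),
        }

with open("orders_flat.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=TARGET_COLUMNS)
    writer.writeheader()
    for doc in source_documents:
        writer.writerows(shred(doc))
```

In practice, the mapping logic grows with the irregularity of the source data, which is why shredding can become complex for loosely structured inputs.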

Migrate your code

After migrating your data to Astra DB, your applications can connect exclusively to your Astra DB databases. For more information about connecting to Astra DB and migrating your applications, see Connect to a database and Migrate to the Data API.
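As one common pattern, the following sketch connects to an Astra DB database over CQL using the Python Cassandra driver and a secure connect bundle. The bundle path, keyspace, and application token are placeholders for your own values.

```python
# Minimal sketch: connect an application to Astra DB over CQL with the
# Python Cassandra driver. The bundle path, keyspace, and token below
# are placeholders.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cloud_config = {"secure_connect_bundle": "/path/to/secure-connect-database.zip"}
auth_provider = PlainTextAuthProvider("token", "AstraCS:YOUR_APPLICATION_TOKEN")

cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect("my_keyspace")

# Simple connectivity check.
row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)

cluster.shutdown()
```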
