Create the target environment for your migration

You must create and prepare a new cluster to be the target for your migration.

This section describes in detail the steps to prepare an Astra DB Serverless database as the target, and also outlines how to create and prepare a different cluster, such as Cassandra 4.0.x or DSE 6.8.x.

Using an Astra DB database as the target

If you intend to use Astra DB as the target for the migration, you will need to:

  • Create an Astra DB Serverless database.

  • Retrieve its Secure Connect Bundle (SCB) and upload it to the application instances.

  • Create Astra DB access credentials for your database.

  • Create the client application schema.

Create an Astra DB Serverless database

Log in to the Astra Portal and create an Astra DB Serverless database. You can start with a Free plan, but consider upgrading during your migration project to an Astra Pay As You Go or Enterprise plan to take advantage of additional functionality, such as exporting metrics to external third-party applications, Bring Your Own Keys, and other features.

The Pay As You Go and Enterprise plans have many benefits over the Free plan, such as the ability to lift rate limiting and to avoid hibernation timeouts.

Assign your preferred values for the serverless database:

  • Name.

  • Keyspace: this is a handle that establishes the database’s context in subsequent DDL and DML statements.

  • Cloud provider: You can choose your preferred cloud provider among AWS, GCP and Azure (only GCP is available to Free Tier accounts).

  • Region: choose your geographically preferred region - you can subsequently add more regions.

When the Astra DB database reaches Active status, create an application token in the Astra Portal with the Read/Write User role. The client application, the ZDM Proxy, and the ZDM Proxy Automation use this token to connect to the database.

Save the generated token and credentials (Client ID, Client Secret, and Token) in a clearly named, secure file.
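As a sketch of one way to store these values safely on a Linux instance, the following creates an owner-only environment file. The file name and variable names here are hypothetical, not part of any Astra tooling; substitute your own conventions.

```shell
# Hypothetical file and variable names; replace the placeholder values
# with the credentials generated in the Astra Portal.
umask 177  # new files readable/writable by owner only
cat > astra-credentials-target.env <<'EOF'
ASTRA_CLIENT_ID=<your Client ID>
ASTRA_CLIENT_SECRET=<your Client Secret>
ASTRA_TOKEN=<your application token>
EOF
chmod 600 astra-credentials-target.env
```

Restricting permissions matters because, like the Secure Connect Bundle, these credentials grant access to your database.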

Get the Secure Connect Bundle and upload to client instances

Download your Astra DB database’s Secure Connect Bundle (SCB). The SCB is a zip file that contains TLS encryption certificates and other metadata required to connect to your database.

The SCB contains sensitive information that establishes a connection to your database, including key pairs and certificates. Treat it as you would any other sensitive value, such as a password or token.

Your client application uses the SCB to connect directly to Astra DB near the end of the migration, and Cassandra Data Migrator or DSBulk Migrator use the SCB to migrate and validate data in Astra DB.

Use scp to copy the SCB to your client application instance:

scp -i <your_ssh_key> secure-connect-<target_cluster_name>.zip <linux_user>@<public_ip_of_client_application_instance>:

Create the client application schema on your Astra DB database

To complete the preparation work, create the client application schema in your new Astra DB database.

In the Astra Portal, create each corresponding keyspace and table. The keyspace names, table names, column names, data types, and primary keys must be identical to the schema on the origin cluster.

Note the following limitations and exceptions for tables in Astra DB:

  • In Astra DB, you must create keyspaces in the Astra Portal or with the DevOps API because CQL for Astra DB doesn’t support CREATE KEYSPACE. For instructions, see Manage keyspaces.

  • You can use typical CQL statements to create tables in Astra DB. However, the only optional table properties that Astra DB supports are default_time_to_live and comment. As a best practice, omit unsupported table properties, such as compaction strategy and gc_grace_seconds, when creating tables in Astra DB. For more information, see CQL for Astra DB: Unsupported values are ignored.
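As an illustration of the point above, here is a hedged sketch of a CREATE TABLE statement adapted for Astra DB. The keyspace, table, and column names are hypothetical; the key point is that only default_time_to_live and comment are kept, while origin-cluster properties such as compaction and gc_grace_seconds are omitted.

```cql
-- Hypothetical schema; only the table properties Astra DB supports are kept.
CREATE TABLE my_keyspace.user_events (
    user_id    uuid,
    event_time timestamp,
    event_type text,
    PRIMARY KEY ((user_id), event_time)
) WITH default_time_to_live = 86400
  AND comment = 'Migrated from the origin cluster';
```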

  • Astra DB doesn’t support Materialized Views (MVs) and certain types of indexes. You must replace these with supported indexes. For more information, see CQL for Astra DB.
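For example, if the origin cluster uses a materialized view to query a table by a non-key column, one possible replacement in Astra DB is a Storage-Attached Index (SAI) on the base table. The names below are hypothetical, and whether an index is an adequate substitute depends on your query patterns.

```cql
-- Hypothetical replacement for an origin-cluster MV keyed by email:
-- an SAI index on the base table's email column.
CREATE CUSTOM INDEX users_email_idx ON my_keyspace.users (email)
USING 'StorageAttachedIndex';
```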

To help you prepare the schema from the DDL in your origin cluster, consider using the generate-ddl functionality in the DSBulk Migrator. However, this tool doesn’t automatically convert MVs or indexes.

CQL statements, such as those used to reproduce the schema on the target database, can be executed in Astra DB using the built-in CQL shell or the standalone CQL shell. For more information, see CQL for Astra DB: CQL shell.

Using a generic CQL cluster as the target

To use a generic Cassandra or DSE cluster, you will have to:

  • Provision the infrastructure for your new cluster.

  • Create the cluster with the desired version of Cassandra or DSE.

  • Configure the cluster according to your requirements.

  • Create the client application schema.

Create and configure the cluster

ZDM can be used to migrate to any type of CQL cluster, running in any cloud or on premises.

Here are the steps that you’ll need to follow:

  • Determine the correct topology and specifications for your new cluster, then provision infrastructure that meets these requirements. This can be in your cloud provider of choice, in your own private cloud or on bare metal machines.

  • Create your cluster using your chosen version of Cassandra or DSE. Refer to the documentation specific to the version that you are installing for detailed information, and pay particular attention to configuration that must be done at installation time.

  • Configure your new cluster as desired: for example, you may decide to enable internal authentication or configure TLS encryption. You should also consider testing your new cluster to ensure it meets your performance requirements and tune it as necessary.

    Your new cluster can be configured as you wish, independently of how the origin was configured. ZDM Proxy allows you to specify a separate set of configuration to connect to each cluster.

  • If you enabled authentication, create a user with the required permissions to be used for your client application.
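If internal authentication is enabled, the application user can be created with standard CQL role statements. This is a minimal sketch with hypothetical role and keyspace names; adjust the password and the granted permissions to your application's actual needs.

```cql
-- Hypothetical role and keyspace; grant only the permissions your
-- client application requires.
CREATE ROLE IF NOT EXISTS app_user WITH PASSWORD = 'change-me' AND LOGIN = true;
GRANT SELECT ON KEYSPACE my_keyspace TO app_user;
GRANT MODIFY ON KEYSPACE my_keyspace TO app_user;
```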

Create the client application schema on the cluster

At this point, all that remains is to create the schema for your client application on the new cluster.

Make sure that all keyspaces and tables being migrated are identical to the corresponding ones on the origin cluster, including keyspace, table, and column names.

  • To copy the schema, you can run CQL describe on the origin cluster to get the schema that is being migrated, and then run the output on your new cluster. Bear in mind that, if you are migrating from an old version, you may need to adapt some CQL clauses that are no longer supported in newer versions (e.g. COMPACT STORAGE). Please refer to the documentation of the relevant versions for more information.
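The copy-the-schema step above can be sketched with cqlsh, assuming it is available against both clusters. Host names and the keyspace name here are placeholders, and step 2 is a manual review, not something the commands do for you.

```shell
# Hypothetical hosts and keyspace; requires cqlsh connectivity to both clusters.
# 1. Export the schema of the keyspace being migrated from the origin cluster.
cqlsh origin-host -e "DESCRIBE KEYSPACE my_keyspace" > my_keyspace_schema.cql

# 2. Review the file and adapt or remove clauses that the target version
#    no longer supports (for example, WITH COMPACT STORAGE).

# 3. Apply the edited schema on the new cluster.
cqlsh target-host -f my_keyspace_schema.cql
```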

