Deploy ZDM Proxy

After you set up the jumphost and ZDM Proxy Automation, use the Ansible playbooks to deploy your ZDM Proxy instances. Then, deploy the monitoring stack to complete your ZDM Proxy deployment.

Aside from preparing the infrastructure, you don’t need to install any ZDM Proxy dependencies on the ZDM Proxy machines. The playbook automatically installs all required software packages.

Access the Control Host and locate the configuration files

  1. Connect to the Ansible Control Host Docker container. You can do this from the jumphost machine by running the following command:

    docker exec -it zdm-ansible-container bash
    Result
    ubuntu@52772568517c:~$
  2. List (ls) the contents of the Ansible Control Host Docker container, and then find the zdm-proxy-automation directory.

  3. Change (cd) to the zdm-proxy-automation/ansible directory.
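
    For example, assuming the repository is in its default location of /home/ubuntu/zdm-proxy-automation, steps 2 and 3 look like this:

    ls
    cd zdm-proxy-automation/ansible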

  4. List the contents of the ansible directory, and then find the following YAML configuration files:

    • zdm_proxy_container_config.yml: Internal configuration for the proxy container itself.

    • zdm_proxy_cluster_config.yml: Configuration properties to connect ZDM Proxy to the origin and target clusters. This is always required.

    • zdm_proxy_core_config.yml: Important configuration properties that are commonly used and changed during the migration.

    • zdm_proxy_advanced_config.yml: Advanced configuration properties that aren’t always required. Leave these at their default values unless you have a specific use case that requires changing them.

    • zdm_proxy_custom_tls_config.yml: Optional TLS encryption properties.

The following sections explain how to configure each of these files before deploying your ZDM Proxy instances.

Configure ZDM Proxy

After you locate the configuration files on the Control Host, you must prepare the configuration properties for your ZDM Proxy deployment.

Container configuration

  1. Edit the zdm_proxy_container_config.yml file.

  2. Set your ZDM Proxy version.
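
    For example, a hypothetical sketch of the version setting. The variable name and image tag shown here are placeholders, so follow the comments in your copy of the file for the exact names:

    zdm_proxy_image: "datastax/zdm-proxy:x.y.z"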

  3. Choose how to inject the configuration parameters.

    In all versions of ZDM Proxy, you can use environment variables to set the ZDM Proxy configuration.

    In versions 2.3.0 and later, you can instead inject the configuration as a YAML file generated by the automation scripts.

  4. Save and close the zdm_proxy_container_config.yml file.

Cluster and core configuration

For the cluster and core configuration, you need to provide connection credentials and details for both the origin and target clusters.

ZDM Proxy Automation version 2.1.0 or earlier

Starting in version 2.2.0 of ZDM Proxy Automation, all origin and target cluster configuration variables are stored in zdm_proxy_cluster_config.yml. In earlier versions, these variables are in the zdm_proxy_core_config.yml file.

This change is backward compatible. If you previously populated the variables in zdm_proxy_core_config.yml, these variables are honored and take precedence over any variables in zdm_proxy_cluster_config.yml, if both files are present. However, consider updating your configuration to use the new file and take advantage of new features in later releases.

  1. Get connection credentials for your origin and target clusters.

    For self-managed clusters with authentication enabled, you need a valid username and password for the cluster.

    If authentication isn’t enabled, no credentials are required.

    For Astra DB databases, generate an application token with a role that can read and write to your database, such as the Database Administrator role, and then store the token securely.

    At minimum, store the core token, which is prefixed by AstraCS:.

    For legacy authentication to earlier Astra DB databases with an older token generated prior to the unified AstraCS token, you can use the clientId and secret instead of the core token.

  2. Edit the zdm_proxy_cluster_config.yml file. The vi and nano text editors are available in the container.

  3. In the ORIGIN CONFIGURATION and TARGET CONFIGURATION sections, uncomment and configure all variables that are required for ZDM Proxy to connect to the origin and target clusters.

    You must provide connection details for both ORIGIN CONFIGURATION and TARGET CONFIGURATION. ZDM Proxy must connect to both clusters, so it cannot operate if either set is missing.

    The variables are the same in both sections, but they are set separately for each cluster. The origin cluster variables are prefixed with origin, and the target cluster variables are prefixed with target. For example, origin_username and target_username.

    The expected values depend on the type of cluster (self-managed or Astra DB). For example, if your target cluster is an Astra DB database, provide the Astra DB connection details in the TARGET CONFIGURATION section.

    The following configuration is required to connect to a self-managed cluster:

    • *_username and *_password: For a self-managed cluster with authentication enabled, provide a valid username and password to access the cluster. If authentication isn’t enabled, leave both variables unset.

    • *_contact_points: Provide a comma-separated list of IP addresses for the cluster’s seed nodes.

    • *_port: Provide the port on which the cluster listens for client connections. The default is 9042.

    • *_astra_secure_connect_bundle_path, *_astra_db_id, and *_astra_token: All of these must be unset.

    The following configuration is required to connect to an Astra DB database:

    • *_username and *_password: Set *_username to the literal string token, and set *_password to your Astra DB application token (the value prefixed by AstraCS:).

      For legacy authentication to earlier Astra DB databases with an older token generated prior to the unified token approach, set *_username to the token’s clientId, and set *_password to the token’s secret.

    • *_contact_points: Must be unset.

    • *_port: Must be unset.

    • *_astra_secure_connect_bundle_path, *_astra_db_id, and *_astra_token: Provide either *_astra_secure_connect_bundle_path only, or both *_astra_db_id and *_astra_token.

      • If you want ZDM Proxy Automation to automatically download your database’s Secure Connect Bundle (SCB), use *_astra_db_id and *_astra_token. Set *_astra_db_id to your database’s ID, and set *_astra_token to your application token (the value prefixed by AstraCS:).

      • If you want to provide your database’s SCB manually, use *_astra_secure_connect_bundle_path, and upload the SCB to the jumphost yourself:

        1. Download your database’s SCB.

        2. Upload it to the jumphost.

        3. Open a new shell on the jumphost, and then run docker cp /path/to/scb.zip zdm-ansible-container:/home/ubuntu to copy the SCB to the container.

        4. Set *_astra_secure_connect_bundle_path to the path of the SCB inside the Ansible Control Host container, such as /home/ubuntu/scb.zip.

      The unused option must be unset. For example, if you use target_astra_db_id and target_astra_token, then target_astra_secure_connect_bundle_path must be unset.
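
      For example, a sketch of the SCB-path alternative for the target cluster, where the filename is a placeholder for the bundle you copied into the container:

        target_astra_secure_connect_bundle_path: "/home/ubuntu/scb_target.zip"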

    Example: Cluster configuration

    The following example shows the cluster configuration for a migration from a self-managed origin cluster to an Astra DB target:

    zdm_proxy_cluster_config.yml
    ##############################
    #### ORIGIN CONFIGURATION ####
    ##############################
    
    ## Origin credentials
    origin_username: "my_user"
    origin_password: "my_password"
    
    ## Set the following two parameters only if the origin is a self-managed, non-Astra cluster
    origin_contact_points: "191.100.20.135,191.100.21.43,191.100.22.18"
    origin_port: 9042
    
    ##############################
    #### TARGET CONFIGURATION ####
    ##############################
    
    ## Target credentials (partially redacted)
    target_username: "dqhg...NndY"
    target_password: "Yc+U_2.gu,9woy0w...9JpAZGt+CCn5"
    
    ## Set the following two parameters only if the target is an Astra DB database
    ## and you want the automation to download the Secure Connect Bundle for you
    target_astra_db_id: "d425vx9e-f2...c871k"
    target_astra_token: "AstraCS:dUTGnRs...jeiKoIqyw:01...29dfb7"
  4. Save and close the zdm_proxy_cluster_config.yml file.

  5. Edit the zdm_proxy_core_config.yml file.

    This file contains global variables that are referenced and modified throughout the migration. Take time to familiarize yourself with these values, but don’t change any of them yet (a sketch of the defaults follows this list):

    • primary_cluster: The cluster that serves as the primary source of truth for read requests during the migration. For the majority of the migration, leave this set to the default value of ORIGIN.

      At the end of the migration, when you’re preparing to switch over to the target cluster permanently, you can change it to TARGET after migrating all data from the origin cluster.

    • read_mode: Controls the asynchronous dual reads feature. Until you reach Phase 3, leave this set to the default value of PRIMARY_ONLY.

    • log_level: You might need to modify the log level when troubleshooting issues. Unless you are investigating an issue, leave this set to the default value of INFO.
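
    For reference, a minimal sketch of these variables at their default values, which is how they should remain at the start of the migration:

    primary_cluster: ORIGIN
    read_mode: PRIMARY_ONLY
    log_level: INFO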

The origin credentials, target credentials, and the primary_cluster variable are mutable variables that you can change after deploying ZDM Proxy. All other cluster connection configuration variables are immutable; the only way to change these values is by completely recreating the ZDM Proxy deployment. For more information, see Manage your ZDM Proxy instances.

Advanced configuration

Typically, the advanced configuration variables don’t need to be changed. Modify the variables in zdm_proxy_advanced_config.yml only if you have a specific use case that requires changing them.

The following advanced configuration variables are immutable after deployment. If you need to change them, do so before deploying ZDM Proxy:

Multi-datacenter clusters

For multi-datacenter origin clusters, specify the name of the datacenter that ZDM Proxy should consider local. To do this, set the origin_local_datacenter property to the local datacenter name. Similarly, for multi-datacenter target clusters, set the target_local_datacenter property to the local datacenter name. These two variables are stored in zdm_proxy_advanced_config.yml.
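
For example, a sketch of these two variables, where the datacenter names are placeholders for your own topology:

    origin_local_datacenter: "DC1"
    target_local_datacenter: "DC2"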

This configuration isn’t necessary for multi-region Astra DB databases, which specify the local datacenter through each region’s specific Secure Connect Bundle (SCB). For information about downloading a region-specific SCB, see Download and use a Secure Connect Bundle (SCB) with Astra DB Serverless.

Ports

Each ZDM Proxy instance listens on port 9042 by default, like a regular Cassandra cluster. To override this, set zdm_proxy_listen_port to a different value. For example, if the origin nodes listen on a port other than 9042, you can configure ZDM Proxy to listen on that same port so that you don’t have to change the port in your client application configuration.

ZDM Proxy exposes metrics on port 14001 by default. Prometheus scrapes the application-level proxy metrics from this port. To change the port, set metrics_port to a different value.
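
For example, a sketch of these two variables set to their default values:

    zdm_proxy_listen_port: 9042
    metrics_port: 14001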

All other advanced configuration variables in zdm_proxy_advanced_config.yml are mutable. You can seamlessly change them after deploying ZDM Proxy with a rolling restart. In contrast, changing immutable variables requires you to recreate the entire deployment, which results in downtime for your ZDM Proxy deployment. For more information, see Manage your ZDM Proxy instances.

Enable TLS encryption

Transport Layer Security (TLS) encryption is optional and disabled by default.

ZDM Proxy supports TLS encryption between ZDM Proxy and either or both clusters, and between ZDM Proxy and your client application.

To enable TLS encryption, you must provide the necessary files and configure TLS settings in the zdm_proxy_custom_tls_config.yml file.

Proxy-to-cluster TLS

Use these steps to enable TLS encryption between ZDM Proxy and one or both clusters, if required.

Each cluster has its own TLS configuration. One-way TLS and Mutual TLS (mTLS) are both supported, and they can be enabled as needed for each cluster’s requirements.

In this case, ZDM Proxy acts as the TLS client and the cluster acts as the TLS server.

This is required for self-managed clusters only. For Astra DB, ZDM Proxy uses mTLS automatically with the SCB.

  1. Find the required files for each cluster where you want to enable TLS encryption. All files must be in plain-text, non-binary format.

    • One-way TLS: Find the server CA.

    • Mutual TLS: Find the server CA, the client certificate, and the client key.

    If your client application and origin cluster already use TLS encryption, the required files already exist: the TLS client files are in your client application’s configuration, and the TLS server files are in the origin cluster’s configuration.

  2. For each self-managed cluster (origin or target) where you want to enable TLS-encrypted ZDM Proxy connections, do the following:

    1. If your TLS files are in a JKS keystore, extract them as plain text. ZDM Proxy cannot read a JKS keystore; you must provide the raw files.

      1. Get the files contained in your JKS keystore and their aliases:

        keytool -list -keystore PATH/TO/KEYSTORE.JKS

        Replace PATH/TO/KEYSTORE.JKS with the path to your JKS keystore.

      2. Extract each file from your JKS keystore:

        keytool -exportcert -keystore PATH/TO/KEYSTORE.JKS -alias FILE_ALIAS -file PATH/TO/DESTINATION/FILE -rfc

        Replace the following:

        • PATH/TO/KEYSTORE.JKS: The path to your JKS keystore

        • FILE_ALIAS: The alias of the file you want to extract

        • PATH/TO/DESTINATION/FILE: The path where you want to save the extracted file

      The -rfc option extracts the files in non-binary PEM format. For more information, see the keytool syntax documentation.
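
      For example, a hypothetical extraction of a server CA stored under the alias rootca, writing it out in PEM format:

        keytool -exportcert -keystore /path/to/keystore.jks -alias rootca -file server-ca.pem -rfc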

    2. Upload the required TLS files to the jumphost:

      • One-way TLS: Upload the server CA.

      • mTLS: Upload the server CA, the client certificate, and the client key.

    3. From a shell on the jumphost, copy each file to the origin_tls_files or target_tls_files directory in the Ansible Control Host container:

      • Copy origin files to the origin_tls_files directory, replacing TLS_FILE with the path to each required file:

        docker cp TLS_FILE zdm-ansible-container:/home/ubuntu/origin_tls_files
      • Copy target files to the target_tls_files directory, replacing TLS_FILE with the path to each required file:

        docker cp TLS_FILE zdm-ansible-container:/home/ubuntu/target_tls_files
    4. To enable TLS encryption for both clusters, complete the previous steps for each cluster, copying each cluster’s files to its corresponding directory.

  3. Open a shell to the Ansible Control Host container if you don’t already have one:

    docker exec -it zdm-ansible-container bash
  4. From this shell, edit the zdm_proxy_custom_tls_config.yml file at zdm-proxy-automation/ansible/vars/zdm_proxy_custom_tls_config.yml.

  5. Uncomment and populate the TLS configuration variables for the clusters where you want to enable TLS encryption. For example, if you want to enable TLS encryption for both clusters, configure both sets of variables: origin_tls_* and target_tls_*.

    In the proxy-to-cluster configuration, the word server in the variable names refers to the cluster, which acts as the TLS server, and the word client refers to ZDM Proxy, which acts as the TLS client.

    Origin cluster TLS encryption variables:

    • origin_tls_user_dir_path: Use the default value of /home/ubuntu/origin_tls_files.

    • origin_tls_server_ca_filename: Required. Provide the filename (without the path) of the server CA.

    • origin_tls_client_cert_filename: Required for mTLS only. Provide the filename (without the path) of the client cert. Must be unset for one-way TLS.

    • origin_tls_client_key_filename: Required for mTLS only. Provide the filename (without the path) of the client key. Must be unset for one-way TLS.

    Target cluster TLS encryption variables:

    • target_tls_user_dir_path: Use the default value of /home/ubuntu/target_tls_files.

    • target_tls_server_ca_filename: Required. Provide the filename (without the path) of the server CA.

    • target_tls_client_cert_filename: Required for mTLS only. Provide the filename (without the path) of the client cert. Must be unset for one-way TLS.

    • target_tls_client_key_filename: Required for mTLS only. Provide the filename (without the path) of the client key. Must be unset for one-way TLS.
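
    For example, a sketch of the origin variables for mTLS, where the filenames are placeholders for the files that you copied into the container:

    origin_tls_user_dir_path: /home/ubuntu/origin_tls_files
    origin_tls_server_ca_filename: server-ca.pem
    origin_tls_client_cert_filename: client-cert.pem
    origin_tls_client_key_filename: client-key.pem

    For one-way TLS, set only the first two variables, and leave the client certificate and key filenames unset.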

Client application-to-proxy TLS

Use these steps to enable TLS encryption between your client application and ZDM Proxy, if required.

In this case, your client application is the TLS client, and ZDM Proxy is the TLS server.

One-way TLS and Mutual TLS (mTLS) are both supported.

  1. Get the server CA, server certificate, and server key files. All files must be in plain-text, non-binary format.

    If your client application and origin cluster already use TLS encryption, the required files already exist: the TLS client files are in your client application’s configuration, and the TLS server files are in the origin cluster’s configuration.

  2. If your TLS files are in a JKS keystore, extract them as plain text.

    ZDM Proxy cannot accept a JKS keystore. You must provide the raw files.

    1. Get the files contained in your JKS keystore and their aliases:

      keytool -list -keystore PATH/TO/KEYSTORE.JKS

      Replace PATH/TO/KEYSTORE.JKS with the path to your JKS keystore.

    2. Extract each file from your JKS keystore:

      keytool -exportcert -keystore PATH/TO/KEYSTORE.JKS -alias FILE_ALIAS -file PATH/TO/DESTINATION/FILE -rfc

      Replace the following:

      • PATH/TO/KEYSTORE.JKS: The path to your JKS keystore

      • FILE_ALIAS: The alias of the file you want to extract

      • PATH/TO/DESTINATION/FILE: The path where you want to save the extracted file

    The -rfc option extracts the files in non-binary PEM format. For more information, see the keytool syntax documentation.

  3. Upload the files to the jumphost.

  4. From a shell on the jumphost, copy each file to the zdm_proxy_tls_files directory in the Ansible Control Host container:

    docker cp TLS_FILE zdm-ansible-container:/home/ubuntu/zdm_proxy_tls_files

    Replace TLS_FILE with the path to each of your TLS files.

  5. Open a shell to the Ansible Control Host container if you don’t already have one:

    docker exec -it zdm-ansible-container bash
  6. From this shell, edit the zdm_proxy_custom_tls_config.yml file at zdm-proxy-automation/ansible/vars/zdm_proxy_custom_tls_config.yml.

  7. Uncomment and populate the following TLS configuration variables, which are prefixed with zdm_proxy_*. The word server in the variable names refers to ZDM Proxy, which acts as the TLS server in this configuration.

    • zdm_proxy_tls_user_dir_path_name: Use the default value of /home/ubuntu/zdm_proxy_tls_files.

    • zdm_proxy_tls_server_ca_filename: Required. Provide the filename (without the path) of the server CA that the proxy must use.

    • zdm_proxy_tls_server_cert_filename: Required. Provide the filename (without the path) of the server certificate that the proxy must use.

    • zdm_proxy_tls_server_key_filename: Required. Provide the filename (without the path) of the server key that the proxy must use.

    • zdm_proxy_tls_require_client_auth: Set to false (default) for one-way TLS between the application and proxy. Set to true to enable mTLS between the application and the proxy.
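
    For example, a sketch for mTLS between the application and the proxy, where the filenames are placeholders for the files that you copied into the container:

    zdm_proxy_tls_user_dir_path_name: /home/ubuntu/zdm_proxy_tls_files
    zdm_proxy_tls_server_ca_filename: server-ca.pem
    zdm_proxy_tls_server_cert_filename: server-cert.pem
    zdm_proxy_tls_server_key_filename: server-key.pem
    zdm_proxy_tls_require_client_auth: true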

When you deploy the ZDM Proxy instances with ZDM Proxy Automation, the deployment playbook automatically distributes the TLS files and applies the TLS configuration to all ZDM Proxy instances. If you want to enable TLS after the initial deployment, you must rerun the deployment playbook to redeploy the instances with the new TLS configuration.

Run the deployment playbook

After modifying all necessary configuration variables, you are ready to deploy your ZDM Proxy instances.

  1. From your shell connected to the Control Host, make sure you are in the ansible directory at /home/ubuntu/zdm-proxy-automation/ansible.

  2. Run the deployment playbook:

    ansible-playbook deploy_zdm_proxy.yml -i zdm_ansible_inventory
  3. Wait while ZDM Proxy containers are deployed to each of your ZDM Proxy machines.

    The playbook creates one ZDM Proxy instance for each proxy host listed in the inventory file. While the playbook runs, activity is printed to the shell along with any errors. If the entire operation is successful, the final output is a confirmation message.

  4. Confirm that the ZDM Proxy instances are running by checking the Docker logs.

    Alternatively, after you deploy the monitoring stack, you can call the liveness and readiness endpoints to confirm that each ZDM Proxy instance is running.
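
    For example, assuming the default metrics port of 14001 and the health endpoint paths exposed by recent ZDM Proxy versions (verify these against your deployment):

      curl http://ZDM_PROXY_IP_ADDRESS:14001/health/liveness
      curl http://ZDM_PROXY_IP_ADDRESS:14001/health/readiness

    Replace ZDM_PROXY_IP_ADDRESS with the address of a ZDM Proxy instance.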

    To check the Docker logs, do the following:

    1. SSH into one of the servers where a deployed ZDM Proxy instance is running. You can do this from within the Ansible Control Host Docker container, or directly from the jumphost machine:

      ssh USER@ZDM_PROXY_IP_ADDRESS
    2. Use the docker logs command to view the logs for the ZDM Proxy instance:

      docker logs zdm-proxy-container
      Result
      time="2023-01-13T22:21:42Z" level=info msg="Initialized origin control connection. Cluster Name: OriginCluster, Hosts: map[3025c4ad-7d6a-4398-b56e-87d33509581d:Host{addr: 191.100.20.61,
      port: 9042, host_id: 3025c4ad7d6a4398b56e87d33509581d} 7a6293f7-5cc6-4b37-9952-88a4b15d59f8:Host{addr: 191.100.20.85, port: 9042, host_id: 7a6293f75cc64b37995288a4b15d59f8} 997856cd-0406-45d1-8127-4598508487ed:Host{addr: 191.100.20.93, port: 9042, host_id: 997856cd040645d181274598508487ed}], Assigned Hosts: [Host{addr: 191.100.20.61, port: 9042, host_id: 3025c4ad7d6a4398b56e87d33509581d}]."
      
      time="2023-01-13T22:21:42Z" level=info msg="Initialized target control connection. Cluster Name: cndb, Hosts: map[69732713-3945-4cfe-a5ee-0a84c7377eaa:Host{addr: 10.0.79.213,
      port: 9042, host_id: 6973271339454cfea5ee0a84c7377eaa} 6ec35bc3-4ff4-4740-a16c-03496b74f822:Host{addr: 10.0.86.211, port: 9042, host_id: 6ec35bc34ff44740a16c03496b74f822} 93ded666-501a-4f2c-b77c-179c02a89b5e:Host{addr: 10.0.52.85, port: 9042, host_id: 93ded666501a4f2cb77c179c02a89b5e}], Assigned Hosts: [Host{addr: 10.0.52.85, port: 9042, host_id: 93ded666501a4f2cb77c179c02a89b5e}]."
      time="2023-01-13T22:21:42Z" level=info msg="Proxy connected and ready to accept queries on 172.18.10.111:9042"
      time="2023-01-13T22:21:42Z" level=info msg="Proxy started. Waiting for SIGINT/SIGTERM to shutdown."
    3. In the logs, look for messages containing Proxy connected and Proxy started:

      time="2023-01-13T22:21:42Z" level=info msg="Proxy connected and ready to accept queries on 172.18.10.111:9042"
      time="2023-01-13T22:21:42Z" level=info msg="Proxy started. Waiting for SIGINT/SIGTERM to shutdown."
  5. Optional: Check the status of the running ZDM Proxy container:

    docker ps
    Result
    CONTAINER ID  IMAGE                     COMMAND  CREATED      STATUS     PORTS   NAMES
    02470bbc1338  datastax/zdm-proxy:2.1.x  "/main"  2 hours ago  Up 2 hours         zdm-proxy-container

Troubleshoot deployment issues

If the ZDM Proxy instances fail to start due to errors in the configuration, edit the configuration files and then rerun the deployment playbook.

For specific troubleshooting scenarios, see Troubleshoot Zero Downtime Migration.

Next steps

To continue Phase 1 of the migration, deploy the ZDM Proxy monitoring stack.
