Deploy the ZDM Proxy and monitoring
This topic explains how to use the Ansible automation playbooks that you set up in the prior topic to deploy the ZDM Proxy and its monitoring stack.
Once completed, you will have a working and fully monitored ZDM Proxy deployment.
Prerequisites
You must have completed the Ansible setup as described in the prior topic.
No other prerequisites or dependencies are needed. The playbooks will automatically install all the required software packages as part of their operation.
Access the Ansible Control Host Docker container
You can connect to the Ansible Control Host Docker container by opening a shell on it:
docker exec -it zdm-ansible-container bash
You’re now connected to the container, at a prompt such as this:
ubuntu@52772568517c:~$
Run ls to see the resources in the Ansible Control Host Docker container. The most important resource is the zdm-proxy-automation directory.
Now, cd into zdm-proxy-automation/ansible and run ls again. Example:
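The exact contents depend on your ZDM Proxy Automation version, but the listing should include, among other files, the playbooks and inventory file used later in this guide, along the lines of:
ubuntu@52772568517c:~/zdm-proxy-automation/ansible$ ls
deploy_zdm_proxy.yml  deploy_zdm_monitoring.yml  zdm_ansible_inventory  vars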

Configure the ZDM Proxy
The ZDM Proxy configuration is composed of five files:
- zdm_proxy_container_config.yml: Contains the internal configuration of the proxy container itself.
- zdm_proxy_cluster_config.yml: Contains the configuration that allows the proxy to connect to the origin and target clusters. This is always required.
- zdm_proxy_core_config.yml: Contains important configuration that is commonly used and changed during the migration.
- zdm_proxy_advanced_config.yml: Contains advanced configuration that is required in some scenarios, but often left at the default values.
- zdm_proxy_custom_tls_config.yml: Configures TLS encryption, if needed.
Container configuration
The first step of the proxy container configuration is to open the zdm_proxy_container_config.yml file.
In this file, you configure the desired ZDM Proxy version and choose how configuration parameters are injected into the proxy container.
All versions of ZDM Proxy support providing configuration parameters through environment variables.
Starting with ZDM Proxy 2.3.0, you can also inject the configuration through a YAML file generated by the automation scripts.
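As an illustration only, pinning the proxy image and version might look like the following; the variable name zdm_proxy_image is an assumption for this sketch, so check the comments in your copy of zdm_proxy_container_config.yml for the exact names that your automation version supports:
## Hypothetical sketch: verify the actual variable name in zdm_proxy_container_config.yml.
## Pins the ZDM Proxy container image (and therefore the proxy version) to deploy.
zdm_proxy_image: "datastax/zdm-proxy:2.3.x"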
Cluster and core configuration
The next step is to edit the zdm_proxy_cluster_config.yml file in the Docker container. You will need to provide values such as your Cassandra/DSE username, password, and other connection credentials.
- Get connection credentials for your origin and target clusters.
  - Self-managed clusters with authentication enabled: You need a valid username and password for the cluster.
  - Astra DB databases: Generate an application token with a role that can read and write to your database, such as the Database Administrator role, and then store the token values (clientId, secret, and token) securely.
- In the container shell, cd to ~/zdm-proxy-automation/ansible/vars and edit zdm_proxy_cluster_config.yml. The vi and nano text editors are available in the container.
  If you are on ZDM Proxy Automation version 2.1.0 or earlier, note the following: starting in version 2.2.0 of the ZDM Proxy Automation, all origin and target cluster configuration variables are stored in zdm_proxy_cluster_config.yml. In earlier versions, these variables are in the zdm_proxy_core_config.yml file. This change is backward compatible: if you previously populated the variables in zdm_proxy_core_config.yml, those variables are honored and take precedence over any variables in zdm_proxy_cluster_config.yml if both files are present. However, consider updating your configuration to use the new file to take advantage of new features in later releases.
- In the ORIGIN CONFIGURATION and TARGET CONFIGURATION sections, uncomment and configure all variables that are required for ZDM Proxy to connect to the origin and target clusters.
  The variables for the origin cluster are prefixed with origin, and the variables for the target cluster are prefixed with target. You must provide connection details in both sections; otherwise, ZDM Proxy won't be able to connect to both clusters.
  - *_username and *_password:
    - For a self-managed cluster with authentication enabled, provide valid username and password values to access the cluster.
    - For a self-managed cluster without authentication, leave both values unset.
    - For an Astra DB database, use the values generated with your application token. Either set username to the clientId and password to the secret, or set username to the literal string token and set password to the token value, which is prefixed by AstraCS:.
  - *_contact_points:
    - For a self-managed cluster, provide a comma-separated list of IP addresses for the cluster's seed nodes.
    - For an Astra DB database, leave this unset.
  - *_port:
    - For a self-managed cluster, provide the port on which the cluster listens for client connections. The default is 9042.
    - For an Astra DB database, leave this unset.
  - *_astra_secure_connect_bundle_path, *_astra_db_id, and *_astra_token:
    - For a self-managed cluster, leave all of these unset.
    - For an Astra DB database, provide either *_astra_secure_connect_bundle_path or both *_astra_db_id and *_astra_token.
      - If you want ZDM Proxy Automation to automatically download your database's Secure Connect Bundle (SCB), use *_astra_db_id and *_astra_token. Set *_astra_db_id to your database's ID, and set *_astra_token to your application token, which is prefixed by AstraCS:.
      - If you want to manually upload your database's SCB to the jumphost, use *_astra_secure_connect_bundle_path. To manually upload the SCB:
        - Upload the SCB to the jumphost.
        - Open a new shell on the jumphost, and then run docker cp /path/to/scb.zip zdm-ansible-container:/home/ubuntu to copy the SCB into the container.
        - Set *_astra_secure_connect_bundle_path to the path of the SCB inside the container (for example, /home/ubuntu/scb.zip).
  - Make sure that you leave the unused credential unset. For example, if you use target_astra_db_id and target_astra_token, leave target_astra_secure_connect_bundle_path unset.

Example: Cluster configuration variables

The following example zdm_proxy_cluster_config.yml file shows the configuration for a migration from a self-managed origin cluster to an Astra DB target:

##############################
#### ORIGIN CONFIGURATION ####
##############################

## Origin credentials
origin_username: "my_user"
origin_password: "my_password"

## Set the following two parameters only if the origin is a self-managed, non-Astra cluster
origin_contact_points: "191.100.20.135,191.100.21.43,191.100.22.18"
origin_port: 9042

##############################
#### TARGET CONFIGURATION ####
##############################

## Target credentials (partially redacted)
target_username: "dqhg...NndY"
target_password: "Yc+U_2.gu,9woy0w...9JpAZGt+CCn5"

## Set the following two parameters only if the target is an Astra DB database
## and you want the automation to download the Secure Connect Bundle for you
target_astra_db_id: "d425vx9e-f2...c871k"
target_astra_token: "AstraCS:dUTGnRs...jeiKoIqyw:01...29dfb7"
- Save and close the file.
- Open the zdm_proxy_core_config.yml file in the same directory. This file contains some global variables that are used in subsequent steps during the migration. Familiarize yourself with these values, but don't change any of them yet (a minimal sketch of these defaults follows this list):
  - primary_cluster: The cluster that serves as the primary source of truth for read requests during the migration. For the majority of the migration, leave this set to the default value of ORIGIN. At the end of the migration, when you're preparing to switch over to the target cluster permanently, you can change it to TARGET after migrating all data from the origin cluster.
  - read_mode: Leave this set to the default value of PRIMARY_ONLY. For more information, see Phase 3: Enable asynchronous dual reads.
  - log_level: Leave this set to the default value of INFO.
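As a minimal sketch, the defaults described above correspond to entries along these lines in zdm_proxy_core_config.yml (exact formatting may differ in your version of the file):
primary_cluster: ORIGIN
read_mode: PRIMARY_ONLY
log_level: INFO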
Enable TLS encryption (optional)
If you want to enable TLS encryption between the client application and the ZDM Proxy, or between the ZDM Proxy and one (or both) self-managed clusters, you will need to specify some additional configuration. For instructions, see Configure Transport Layer Security (TLS).
Advanced configuration (optional)
There are additional configuration variables in vars/zdm_proxy_advanced_config.yml that you might want to change at deployment time in specific cases.
All advanced configuration variables not listed here are considered mutable and can be changed later if needed. Changes can be applied to existing deployments in a rolling fashion using the relevant Ansible playbook, as explained later in Change a mutable configuration variable.
Multi-datacenter clusters
For multi-datacenter origin clusters, you will need to specify the name of the datacenter that the ZDM Proxy should consider local. To do this, set the property origin_local_datacenter to the datacenter name.
Likewise, for multi-datacenter target clusters, set target_local_datacenter appropriately.
These two variables are stored in vars/zdm_proxy_advanced_config.yml.
Note that this is not relevant for multi-region Astra DB databases, where locality is handled through region-specific Secure Connect Bundles.
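For example, a sketch of these two settings in vars/zdm_proxy_advanced_config.yml, where the datacenter name dc1 is an illustrative assumption:
## Name of the datacenter that each proxy should treat as local (illustrative value)
origin_local_datacenter: "dc1"
target_local_datacenter: "dc1"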
Ports
Each ZDM Proxy instance listens on port 9042 by default, like a regular Cassandra cluster. You can override this by setting zdm_proxy_listen_port to a different value. This is useful if the origin nodes listen on a port other than 9042 and you want the ZDM Proxy to listen on that same port, so that you don't have to change the port in your client application configuration.
The ZDM Proxy exposes metrics on port 14001 by default. This port is used by Prometheus to scrape the application-level proxy metrics. You can change it by setting metrics_port to a different value if desired.
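As a sketch, overriding both ports might look like the following; the values 9043 and 14002 are arbitrary examples, and the variables are assumed to live alongside the other advanced settings in vars/zdm_proxy_advanced_config.yml:
## Port on which each ZDM Proxy instance accepts client (CQL) connections
zdm_proxy_listen_port: 9043
## Port on which each ZDM Proxy instance exposes metrics for Prometheus
metrics_port: 14002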
Use Ansible to deploy the ZDM Proxy
Now you can run the deployment playbook with the configuration that you defined above.
From the shell connected to the container, ensure that you are in /home/ubuntu/zdm-proxy-automation/ansible and run:
ansible-playbook deploy_zdm_proxy.yml -i zdm_ansible_inventory
That’s it! A ZDM Proxy container has been created on each proxy host.
Indications of success on the origin and target clusters
The playbook creates one ZDM Proxy instance for each proxy host listed in the inventory file. It reports the operations that it performs as it runs and, at the end, prints either any errors or a success confirmation.
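For example, a successful run ends with an Ansible play recap in which no host reports failed or unreachable tasks; the host addresses and task counts below are illustrative:
PLAY RECAP *********************************************************************
172.18.10.111              : ok=24   changed=12   unreachable=0    failed=0    skipped=3
172.18.10.112              : ok=24   changed=12   unreachable=0    failed=0    skipped=3
172.18.10.113              : ok=24   changed=12   unreachable=0    failed=0    skipped=3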
Confirm that the ZDM proxies are up and running by using one of the following options:
- Call the liveness and readiness HTTP endpoints for ZDM Proxy instances.
- Check ZDM Proxy instances via docker logs.
Call the liveness and readiness HTTP endpoints
The ZDM Proxy metrics endpoint provides /health/liveness and /health/readiness HTTP endpoints, which you can call to determine the state of ZDM Proxy instances.
In most cases, the readiness check alone is enough to determine a proxy's state.
The format:
http://ZDM_PROXY_PRIVATE_IP:METRICS_PORT/health/liveness
http://ZDM_PROXY_PRIVATE_IP:METRICS_PORT/health/readiness
Readiness expanded GET format:
curl -G "http://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ metrics_port }}/health/readiness"
The default port for metrics collection is 14001. You can override this port by deploying the ZDM Proxy with metrics_port set to a non-default value. For more information, see Ports.
Readiness example:
curl -G "http://172.18.10.40:14001/health/readiness"
Result
{
"OriginStatus":{
"Addr":"<origin_node_addr>",
"CurrentFailureCount":0,
"FailureCountThreshold":1,
"Status":"UP"
},
"TargetStatus":{
"Addr":"<target_node_addr>",
"CurrentFailureCount":0,
"FailureCountThreshold":1,
"Status":"UP"
},
"Status":"UP"
}
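The liveness endpoint can be queried in the same way. For example, assuming the same proxy instance and the default metrics port:
curl -G "http://172.18.10.40:14001/health/liveness"
Unlike readiness, the liveness check only reports whether the proxy process itself is up, without the per-cluster status details shown above.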
Check ZDM Proxy instances via docker logs
After running the playbook, you can ssh into one of the servers where a deployed ZDM Proxy instance is running. You can do so from within the Ansible container, or directly from the jumphost machine:
ssh <linux user>@<zdm proxy ip address>
Then, use the docker logs command to view the logs of this ZDM Proxy instance.
ubuntu@ip-172-18-10-111:~$ docker logs zdm-proxy-container
...
time="2023-01-13T22:21:42Z" level=info msg="Initialized origin control connection. Cluster Name: OriginCluster, Hosts: map[3025c4ad-7d6a-4398-b56e-87d33509581d:Host{addr: 191.100.20.61,
port: 9042, host_id: 3025c4ad7d6a4398b56e87d33509581d} 7a6293f7-5cc6-4b37-9952-88a4b15d59f8:Host{addr: 191.100.20.85, port: 9042, host_id: 7a6293f75cc64b37995288a4b15d59f8} 997856cd-0406-45d1-8127-4598508487ed:Host{addr: 191.100.20.93, port: 9042, host_id: 997856cd040645d181274598508487ed}], Assigned Hosts: [Host{addr: 191.100.20.61, port: 9042, host_id: 3025c4ad7d6a4398b56e87d33509581d}]."
time="2023-01-13T22:21:42Z" level=info msg="Initialized target control connection. Cluster Name: cndb, Hosts: map[69732713-3945-4cfe-a5ee-0a84c7377eaa:Host{addr: 10.0.79.213,
port: 9042, host_id: 6973271339454cfea5ee0a84c7377eaa} 6ec35bc3-4ff4-4740-a16c-03496b74f822:Host{addr: 10.0.86.211, port: 9042, host_id: 6ec35bc34ff44740a16c03496b74f822} 93ded666-501a-4f2c-b77c-179c02a89b5e:Host{addr: 10.0.52.85, port: 9042, host_id: 93ded666501a4f2cb77c179c02a89b5e}], Assigned Hosts: [Host{addr: 10.0.52.85, port: 9042, host_id: 93ded666501a4f2cb77c179c02a89b5e}]."
time="2023-01-13T22:21:42Z" level=info msg="Proxy connected and ready to accept queries on 172.18.10.111:9042"
time="2023-01-13T22:21:42Z" level=info msg="Proxy started. Waiting for SIGINT/SIGTERM to shutdown."
In the logs, the important information to notice is:
time="2023-01-13T22:21:42Z" level=info msg="Proxy connected and ready to accept queries on 172.18.10.111:9042"
time="2023-01-13T22:21:42Z" level=info msg="Proxy started. Waiting for SIGINT/SIGTERM to shutdown."
You can also check the status of the running ZDM Proxy container with docker ps. Here's an example with ZDM Proxy 2.1.x:
ubuntu@ip-172-18-10-111:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02470bbc1338 datastax/zdm-proxy:2.1.x "/main" 2 hours ago Up 2 hours zdm-proxy-container
If the ZDM Proxy instances fail to start up due to mistakes in the configuration, you can simply rectify the incorrect configuration values and run the deployment playbook again.
With the exception of the origin credentials, target credentials, and the mutable variables described above, the proxy configuration is applied when the instances are deployed. If you wish to change any of the cluster connection configuration variables (other than credentials) after deployment, you must run the deploy_zdm_proxy.yml playbook again. Be aware that running the deploy_zdm_proxy.yml playbook recreates the ZDM Proxy instances from scratch, so the ZDM Proxy deployment is briefly unavailable while it runs.
Setting up the monitoring stack
The ZDM Proxy Automation enables you to easily set up a self-contained monitoring stack that is preconfigured to collect metrics from your ZDM Proxy instances and display them in ready-to-use Grafana dashboards.
The monitoring stack is deployed entirely on Docker. It includes the following components, all deployed as Docker containers:
- Prometheus node exporter, which runs on each ZDM Proxy host and makes OS- and host-level metrics available to Prometheus.
- Prometheus server, to collect metrics from the ZDM Proxy process, its Golang runtime, and the Prometheus node exporter.
- Grafana, to visualize all these metrics in three preconfigured dashboards (see Leverage metrics provided by ZDM Proxy).
After running the playbook described here, you will have a fully configured monitoring stack connected to your ZDM Proxy deployment.
There are no additional prerequisites or dependencies for this playbook to execute. If it is not already present, Docker will automatically be installed by the playbook on your chosen monitoring server.
Connect to the Ansible Control Host
Make sure you are connected to the Ansible Control Host Docker container. As before, you can do so from the jumphost machine by running:
docker exec -it zdm-ansible-container bash
You will see a prompt like:
ubuntu@52772568517c:~$
Configure the Grafana credentials
Edit the file zdm_monitoring_config.yml, stored at zdm-proxy-automation/ansible/vars (a sketch of the edited file follows this list):
- grafana_admin_user: Leave unchanged (defaults to admin).
- grafana_admin_password: Set to the password of your choice.
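For example, a minimal sketch of the edited zdm_monitoring_config.yml, where the password value is a placeholder to replace with your own:
grafana_admin_user: admin
grafana_admin_password: "choose-a-strong-password"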
Run the monitoring playbook
Ensure that you are in /home/ubuntu/zdm-proxy-automation/ansible and then run the following command:
ansible-playbook deploy_zdm_monitoring.yml -i zdm_ansible_inventory
Check the Grafana dashboard
In a browser, open http://<jumphost_public_ip>:3000.
Log in with:
- Username: admin
- Password: the password you configured
Details about the metrics you can observe are available in Leverage metrics provided by ZDM Proxy.