Change read routing to Target
This topic explains how you can configure the ZDM Proxy to route all reads to Target instead of Origin.
You would typically do this once you have migrated all the existing data from Origin and completed any validation checks and reconciliation. This is a configuration change, carried out as follows.
If you’re not there already, ssh back into the jumphost:
ssh -F ~/.ssh/zdm_ssh_config jumphost
On the jumphost, connect to the Ansible Control Host container:
docker exec -it zdm-ansible-container bash
You will see a shell prompt from within the Ansible Control Host container.
Now open the configuration file vars/zdm_proxy_core_config.yml for editing.
Change the variable that determines which cluster is the primary cluster (the cluster to which reads are routed) so that it points to Target, then save the file.
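As a sketch, assuming the variable is named primary_cluster (the name used in common ZDM Proxy configuration templates; verify it against the template shipped with your version), the edited file would contain:

```yaml
# vars/zdm_proxy_core_config.yml
# Cluster to which the ZDM Proxy routes all reads.
# Variable name assumed from common ZDM Proxy templates; verify in your file.
primary_cluster: TARGET   # previously ORIGIN
```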
Run the playbook that changes the configuration of the existing ZDM Proxy deployment:
ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory
Wait for the ZDM Proxy instances to be restarted by Ansible, one by one. All instances will now send all reads to Target instead of Origin. In other words, Target is now the primary cluster, but the ZDM Proxy is still keeping Origin up-to-date via dual writes.
Once the read routing configuration change has been rolled out, you may want to verify that reads are correctly sent to Target as expected. This is not a required step, but you may wish to do it for peace of mind.
The ZDM Proxy handles reads of system tables differently: it intercepts them and always routes them to Origin, in some cases partly populating the results at the proxy level.
This means that system reads are not representative of how the ZDM Proxy routes regular user reads: even after you have switched the configuration to read from Target as the primary cluster, all system reads will still go to Origin.
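For example, a read of a standard system table like the following will always be served from Origin (partly populated by the proxy), regardless of the primary cluster setting, so it cannot be used to verify routing:

```cql
-- Always intercepted by the ZDM Proxy and served from Origin,
-- even when Target is the primary cluster.
SELECT cluster_name, release_version FROM system.local;
```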
Verifying that the correct routing is taking place is slightly cumbersome, because the very purpose of the ZDM process is to keep the clusters aligned: by definition, the data will be identical on both sides.
For this reason, the only way to do a manual verification test is to force a discrepancy of some test data between the clusters. To do this, you could consider using the Themis sample client application. This client application connects directly to Origin, Target and the ZDM Proxy, inserts some test data in its own table and allows you to view the results of reads from each source. Please refer to its README for more information.
Alternatively, you could follow this manual procedure:
Create a small test table on both clusters, for example a simple key/value table (it could be in an existing keyspace, or in one that you create specifically for this test). For example
CREATE TABLE test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);.
Use cqlsh to connect directly to Origin. Insert a row with any key, and with a value specific to Origin, for example
INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Origin!');.
Use cqlsh to connect directly to Target. Insert a row with the same key as above, but with a value specific to Target, for example
INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Target!');.
Use cqlsh to connect to the ZDM Proxy (see here for how to do this) and issue a read request for this test table:
SELECT * FROM test_keyspace.test_table WHERE k = '1';. The result will clearly show you where the read actually comes from.
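Putting the manual procedure together, a verification session might look like the sketch below. The keyspace definition and replication settings are illustrative only; adapt them to your clusters.

```cql
-- 1. On both clusters, create the test keyspace and table
--    (replication settings are illustrative):
CREATE KEYSPACE IF NOT EXISTS test_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE IF NOT EXISTS test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);

-- 2. Connected directly to Origin:
INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Origin!');

-- 3. Connected directly to Target:
INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Target!');

-- 4. Connected to the ZDM Proxy:
SELECT * FROM test_keyspace.test_table WHERE k = '1';
-- With Target as the primary cluster, v should read 'Hello from Target!'.
```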