Multiple datacenter deployment per workload type

Steps for configuring nodes in a deployment scenario in a mixed workload cluster that has more than one datacenter for each type of node.

In this scenario, a mixed workload cluster has more than one datacenter for each type of node. For example, if the cluster has 4 analytics nodes, 4 Cassandra nodes, and 2 DSE Search nodes, the cluster could have 5 datacenters: 2 datacenters for the analytics nodes, 2 datacenters for the Cassandra nodes, and 1 datacenter for the DSE Search nodes. By contrast, a single datacenter cluster has only one datacenter for each type of node.

In Cassandra, a datacenter can be a physical datacenter or virtual datacenter. Different workloads must always use separate datacenters, either physical or virtual.

Uses for multiple datacenter deployments include:
  • Isolating replicas from external infrastructure failures, such as networking between datacenters and power outages.
  • Distributing data replication across multiple, geographically dispersed nodes.
  • Replicating data between different physical racks in a physical datacenter.
  • Replicating data between public cloud providers and on-premises managed datacenters.
  • Preventing a development cluster that runs analytics jobs on live data from slowing down a real-time analytics cluster.
  • Ensuring that reads from a specific datacenter stay local to the requests, especially when using a consistency level greater than ONE. Using virtual datacenters within a physical datacenter keeps latency low because it avoids, for example, sending one read to a node in New York and another to a node in Los Angeles.
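Replication across datacenters is controlled per keyspace with NetworkTopologyStrategy. The following sketch is illustrative only: the keyspace name, host, and replication factors are assumptions, and the datacenter names must match the names you assign later in this procedure.

    $ cqlsh node_hostname -e "CREATE KEYSPACE demo_keyspace
        WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"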

Prerequisites

To configure a multi-node cluster with multiple datacenters:

  • A good understanding of how Cassandra works. Be sure to read at least Understanding the architecture, Data Replication, and Cassandra's rack feature.
  • Ensure DataStax Enterprise is installed on each node.
  • Choose a name for the cluster.
  • For a mixed-workload cluster, determine the purpose of each node.
  • Determine the snitch and replication strategy. The GossipingPropertyFileSnitch and NetworkTopologyStrategy are recommended for production environments.
  • Get the IP address of each node.
  • Determine which nodes are seed nodes. Do not make all nodes seed nodes. Seed nodes are not required for DSE Search datacenters. Read Internode communications (gossip).
  • Develop a naming convention for each datacenter and rack, for example: DC1, DC2 or 100, 200 and RAC1, RAC2 or R101, R102.
  • Use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml configuration file (see the example after this list).
    The location of the cassandra.yaml file depends on the type of installation:
    Package installations: /etc/dse/cassandra/cassandra.yaml
    Tarball installations: install_location/resources/cassandra/conf/cassandra.yaml
  • Set virtual nodes correctly for the type of datacenter. DataStax does not recommend using virtual nodes on datacenters running BYOH or DSE Hadoop. See Virtual nodes.
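For example, a sketch of using yaml_diff to compare a backup of your previous configuration against the edited file (the tool ships in the DSE tools directory; the backup file name and paths are assumptions, so adjust them for your installation):

    $ cd install_location/tools
    $ ./yaml_diff /etc/dse/cassandra/cassandra.yaml.bak /etc/dse/cassandra/cassandra.yaml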

Procedure

This configuration example describes installing a six-node cluster spanning two datacenters. The default consistency level is QUORUM.

  1. Suppose you install DataStax Enterprise on these nodes:
    • node0 10.168.66.41 (seed1)
    • node1 10.176.43.66
    • node2 10.168.247.41
    • node3 10.176.170.59 (seed2)
    • node4 10.169.61.170
    • node5 10.169.30.138
  2. If the nodes are behind a firewall, open the required ports for internal/external communication. See Configuring firewall port access.
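    For example, a minimal iptables sketch that opens the default internode, JMX, and client ports between nodes (the source network and the port list are assumptions; see Configuring firewall port access for the ports your workloads actually require):

      $ sudo iptables -A INPUT -p tcp -s 10.0.0.0/8 -m multiport --dports 7000,7001,7199,9042,9160 -j ACCEPT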
  3. If DataStax Enterprise is running, stop the nodes and clear the data:
    • Installer-Services and Package installations:
      $ sudo service dse stop
      $ sudo rm -rf /var/lib/cassandra/*  # Clears the data from the default directories
    • Installer-No Services and Tarball installations:

      From the install directory:

      $ sudo bin/dse cassandra-stop
      $ sudo rm -rf /var/lib/cassandra/*  # Clears the data from the default directories
      Note: If you are clearing data from an AMI installation for restart, you need to preserve the log files.
  4. Set the properties in the cassandra.yaml file for each node:
    The location of the cassandra.yaml file depends on the type of installation:
    Package installations: /etc/dse/cassandra/cassandra.yaml
    Tarball installations: install_location/resources/cassandra/conf/cassandra.yaml
    Important: After making any changes in cassandra.yaml, you must restart the node for the changes to take effect.

    Properties to set:

    Note: If the nodes in the cluster are identical in terms of disk layout, shared libraries, and so on, you can use the same copy of the cassandra.yaml file on all of them.
    • num_tokens: 256 for Cassandra nodes
    • num_tokens: 1 for BYOH and DSE Hadoop nodes
    • num_tokens: 64 to 256 for DSE Search nodes when using vnodes; otherwise num_tokens: 1
    • -seeds: internal_IP_address of each seed node
    • listen_address: empty

      If not set, Cassandra asks the system for the local address, the one associated with its host name. In some cases Cassandra doesn't produce the correct address and you must specify the listen_address.

    • endpoint_snitch: snitch

      See endpoint_snitch and About Snitches. If you are changing snitches, see Switching snitches.

    • auto_bootstrap: false

      Add the bootstrap setting only when initializing a fresh cluster with no data.

    • If you are using a cassandra.yaml file from a previous version, remove the following options, as they are no longer supported by DataStax Enterprise:
      ## Replication strategy to use for the auth keyspace.
      auth_replication_strategy: org.apache.cassandra.locator.SimpleStrategy
      
      auth_replication_options:
          replication_factor: 1

    Example:

    You must include at least one seed node from each datacenter. It is a best practice to have more than one seed node per datacenter.

    cluster_name: 'MyDemoCluster'
    num_tokens: 256
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.168.66.41,10.176.170.59"
    listen_address:
    endpoint_snitch: GossipingPropertyFileSnitch
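    As a quick sanity check before restarting each node, you can confirm the edited top-level values (the path assumes a package installation):

      $ grep -E '^(cluster_name|num_tokens|listen_address|endpoint_snitch)' /etc/dse/cassandra/cassandra.yaml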
  5. In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, use your naming convention to assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.
    The default location of the cassandra-topology.properties file depends on the type of installation:
    Installer-Services and Package installations: /etc/dse/cassandra/cassandra-topology.properties
    Installer-No Services and Tarball installations: install_location/resources/cassandra/conf/cassandra-topology.properties
    The default location of the cassandra-rackdc.properties file depends on the type of installation:
    Installer-Services and Package installations: /etc/dse/cassandra/cassandra-rackdc.properties
    Installer-No Services and Tarball installations: install_location/resources/cassandra/conf/cassandra-rackdc.properties

    Example (cassandra-topology.properties):

    # Cassandra Node IP=Data Center:Rack
    10.168.66.41=DC1:RAC1
    10.176.43.66=DC2:RAC1
    10.168.247.41=DC1:RAC1
    10.176.170.59=DC2:RAC1
    10.169.61.170=DC1:RAC1
    10.169.30.138=DC2:RAC1
    
    # default for unknown nodes
    default=DC1:RAC1
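    If you use the GossipingPropertyFileSnitch, as in the cassandra.yaml example above, each node declares only its own datacenter and rack in its local cassandra-rackdc.properties file. A minimal sketch for one of the DC1 nodes (set the values on each node to match your naming convention):

      dc=DC1
      rack=RAC1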
  6. After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes:
    Note: If the node has restarted because of automatic restart, you must stop the node and clear the data directories, as described above.
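    For example, a sketch of the start commands for a plain Cassandra node (analytics and DSE Search nodes are started with their workload-specific startup options):
    • Installer-Services and Package installations:
      $ sudo service dse start
    • Installer-No Services and Tarball installations, from the install directory:
      $ bin/dse cassandra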
  7. Check that your cluster is up and running:
    • Installer-Services and Package installations: $ nodetool status
    • Installer-No Services and Tarball installations: $ install_location/bin/nodetool status

Results

Datacenter: DC1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address          Load        Tokens    Owns    Host ID             Rack
UN 10.168.66.41     45.96 KB    256       27.4%   c885aac7-f2c0-...   RAC1
UN 10.168.247.41    66.34 KB    256       36.6%   fa31416c-db22-...   RAC1
UN 10.169.61.170    55.72 KB    256       33.0%   f488367f-c14f-...   RAC1
Datacenter: DC2
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address          Load        Tokens    Owns    Host ID             Rack
UN 10.176.43.66     45.96 KB    256       27.4%   f9fa31c7-f3c0-...   RAC1
UN 10.176.170.59    66.34 KB    256       36.6%   a5bb526c-db51-...   RAC1
UN 10.169.30.138    55.72 KB    256       33.0%   b836478f-c49f-...   RAC1

What's next