Multiple data center deployment per workload type

Steps for configuring nodes in a deployment scenario in a mixed workload cluster that has more than one data center for each type of node.

In this scenario, a mixed workload cluster has more than one data center for each type of node. For example, if the cluster has 4 Hadoop nodes, 4 Cassandra nodes, and 2 Solr nodes, the cluster could have 5 data centers: 2 data centers for Hadoop nodes, 2 data centers for Cassandra nodes, and 1 data center for the Solr nodes. A single data-center cluster has only one data center for each type of node.

Uses for multiple data center deployments include:
  • Isolating replicas from external infrastructure failures, such as networking between data centers and power outages.
  • Distributing data replication across multiple, geographically dispersed nodes.
  • Replicating between different physical racks in a physical data center.
  • Replicating between public cloud providers and on-premises managed data centers.
  • Preventing a development cluster that runs analytics jobs on live data from slowing down a real-time analytics cluster.
  • Keeping reads local to the requesting data center, especially at consistency levels greater than ONE, by using virtual data centers within a physical data center. This strategy ensures lower latency because it avoids, for example, one read going to a node in New York and another to a node in Los Angeles.
In a multiple data center cluster, replication is set per keyspace using NetworkTopologyStrategy, which takes a replica count for each data center name defined by the snitch.
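For example, a keyspace that keeps three replicas in each of two data centers could be defined as follows in cqlsh. This is a minimal sketch: the keyspace name demo_ks and the replica counts are illustrative only, and the data center names must match the names assigned in the snitch configuration (step 5 of the procedure below).

  CREATE KEYSPACE demo_ks
    WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy',
                         'DC1' : 3, 'DC2' : 3 };

Clients that should stay within their local data center can then read and write at LOCAL_QUORUM instead of QUORUM.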

Prerequisites

To configure a multi-node cluster with multiple data centers, complete the following procedure.

Procedure

This configuration example describes installing a 6-node cluster spanning 2 data centers. The default consistency level is QUORUM.

Important: DataStax Enterprise 4.6 uses Cassandra 2.0.

  1. Suppose you install DataStax Enterprise on these nodes:
    • node0 10.168.66.41 (seed1)
    • node1 10.176.43.66
    • node2 10.168.247.41
    • node3 10.176.170.59 (seed2)
    • node4 10.169.61.170
    • node5 10.169.30.138
  2. If the nodes are behind a firewall, open the required ports for internal/external communication. See Configuring firewall port access.
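    As a minimal sketch, the core Cassandra ports can be opened with iptables; DSE Hadoop and Solr nodes need additional ports, which are listed in the topic referenced above. In production, restrict sources to your cluster's subnets:

      $ # 7000 internode, 7001 internode SSL, 7199 JMX, 9042 CQL native, 9160 Thrift
      $ sudo iptables -A INPUT -p tcp -m multiport --dports 7000,7001,7199,9042,9160 -j ACCEPT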
  3. If DataStax Enterprise is running, stop the nodes and clear the data:
    • Installer-Services and Package installations:
      $ sudo service dse stop
      $ sudo rm -rf /var/lib/cassandra/*  # Clears the data from the default directories
    • Installer-No Services and Tarball installations:

      From the install directory:

      $ sudo bin/dse cassandra-stop
      $ sudo rm -rf /var/lib/cassandra/*  # Clears the data from the default directories
      Note: If you are clearing data from an AMI installation for restart, you need to preserve the log files.
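      A minimal sketch for preserving the logs before clearing data, assuming the default log location /var/log/cassandra:

        $ sudo cp -a /var/log/cassandra /var/log/cassandra.backup.$(date +%F)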
  4. Set the properties in the cassandra.yaml file for each node. The location of the cassandra.yaml file depends on the type of installation:
    • Package installations: /etc/cassandra/cassandra.yaml
    • Tarball installations: install_location/resources/cassandra/conf/cassandra.yaml
    Important: After making any changes in the cassandra.yaml file, you must restart the node for the changes to take effect.

    Properties to set:

    Note: If the nodes in the cluster are identical in terms of disk layout, shared libraries, and so on, you can use the same copy of the cassandra.yaml file on all of them.
    • num_tokens: 256 for Cassandra nodes
    • num_tokens: 1 for Hadoop nodes
    • num_tokens: 32 to 64 for Solr nodes when using vnodes; otherwise num_tokens: 1 (see the per-workload fragments after the example below)
    • -seeds: internal_IP_address of each seed node
    • listen_address: empty

      If not set, Cassandra asks the system for the local address (the address associated with its host name). In some cases, this does not produce the correct address, and you must specify the listen_address explicitly.

    • endpoint_snitch: snitch

      For more information, see endpoint_snitch and About Snitches. If you are changing snitches, see Switching snitches.
    • auto_bootstrap: false

      Add the bootstrap setting only when initializing a fresh cluster with no data.

    • If you are using a cassandra.yaml from a previous version, remove the following options, as they are no longer supported by DataStax Enterprise:
      ## Replication strategy to use for the auth keyspace.
      auth_replication_strategy: org.apache.cassandra.locator.SimpleStrategy
      
      auth_replication_options:
          replication_factor: 1

    Example:

    You must include at least one seed node from each data center. It is a best practice to have more than one seed node per data center.

    cluster_name: 'MyDemoCluster'
    num_tokens: 256
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.168.66.41,10.176.170.59"
    listen_address:
    endpoint_snitch: GossipingPropertyFileSnitch
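    The example above shows a Cassandra node. Hadoop and Solr nodes in the same cluster use the same file with a different num_tokens value, for example:

    # Hadoop node:
    num_tokens: 1

    # Solr node (when using vnodes; otherwise 1):
    num_tokens: 32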
  5. In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, use your naming convention to assign data center and rack names to the IP addresses of each node, and assign a default data center name and rack name for unknown nodes.
    • Installer-Services and Package installations: /etc/dse/cassandra
    • Installer-No Services and Tarball installations: install_location/resources/cassandra/conf

    Example:

    # Cassandra Node IP=Data Center:Rack
    10.168.66.41=DC1:RAC1
    10.176.43.66=DC2:RAC1
    10.168.247.41=DC1:RAC1
    10.176.170.59=DC2:RAC1
    10.169.61.170=DC1:RAC1
    10.169.30.138=DC2:RAC1
    
    # default for unknown nodes
    default=DC1:RAC1
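    The IP=DC:Rack format above applies to cassandra-topology.properties (PropertyFileSnitch). With GossipingPropertyFileSnitch, as in the cassandra.yaml example above, each node instead declares only its own location in its local cassandra-rackdc.properties file. For example, on node0 (10.168.66.41):

    dc=DC1
    rack=RAC1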
  6. After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes:
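    A sketch mirroring the stop commands in step 3. In DataStax Enterprise 4.6 the workload type is chosen at start time; the -t (Hadoop) and -s (Solr) flags and the HADOOP_ENABLED/SOLR_ENABLED variables in /etc/default/dse are assumed here:
    • Installer-Services and Package installations (on Hadoop and Solr nodes, first set HADOOP_ENABLED=1 or SOLR_ENABLED=1 in /etc/default/dse):
      $ sudo service dse start
    • Installer-No Services and Tarball installations, from the install directory:
      $ bin/dse cassandra        # real-time (Cassandra) node
      $ bin/dse cassandra -t     # Hadoop (analytics) node
      $ bin/dse cassandra -s     # Solr (search) node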
    Note: If a node has restarted because of automatic restart, you must stop it and clear the data directories, as described above.
  7. Check that your cluster is up and running:
    • Installer-Services and Package installations: $ nodetool status
    • Installer-No Services and Tarball installations: $ install_location/bin/nodetool status

Results

Datacenter: DC1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address          Load        Tokens    Owns    Host ID             Rack
UN 10.168.66.41     45.96 KB    256       27.4%   c885aac7-f2c0-...   RAC1
UN 10.168.247.41    66.34 KB    256       36.6%   fa31416c-db22-...   RAC1
UN 10.169.61.170    55.72 KB    256       33.0%   f488367f-c14f-...   RAC1
Datacenter: DC2
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address          Load        Tokens    Owns    Host ID             Rack
UN 10.176.43.66     45.96 KB    256       27.4%   f9fa31c7-f3c0-...   RAC1
UN 10.176.170.59    66.34 KB    256       36.6%   a5bb526c-db51-...   RAC1
UN 10.169.30.138    55.72 KB    256       33.0%   b836478f-c49f-...   RAC1
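Ownership percentages are meaningful only relative to a keyspace's replication settings. After creating a keyspace, you can pass its name to nodetool status to see effective ownership; a sketch using the hypothetical demo_ks keyspace from the replication example above:

  $ nodetool status demo_ks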
