Switching snitches

Steps for switching snitches.

Because snitches determine how Cassandra distributes replicas, the procedure for switching snitches depends on whether the topology of the cluster will change:

  • If data has not been inserted into the cluster, there is no change in the network topology. This means that you only need to set the snitch; no other steps are necessary.
  • If data has been inserted into the cluster, it's possible that the topology has changed and you will need to perform additional steps.
  • If data has been inserted into the cluster that must be kept, change the snitch without changing the topology. Then add a new datacenter with new nodes and racks as desired. Finally, remove nodes from the old datacenters and racks. Simply altering the snitch and replication to move some nodes to a new datacenter will result in data being replicated incorrectly.

A change in topology means that there is a change in the datacenters and/or racks where the nodes are placed. A topology change occurs when the new snitch causes replicas to be placed in different locations, because the replication strategy places replicas based on the information the snitch provides. The following examples demonstrate the differences:

  • No topology change

    Change from: 5 nodes using the RackInferringSnitch in a single datacenter

    To: 5 nodes in 1 datacenter and 1 rack using a network snitch such as the GossipingPropertyFileSnitch

  • Topology changes
    • Change from: 5 nodes using the RackInferringSnitch in a single datacenter

      To: 5 nodes in 2 datacenters using the PropertyFileSnitch (add a datacenter).
      Note: If "splitting" one datacenter into two, create a new datacenter with new nodes. Alter the keyspace replication settings for the keyspace that originally existed to reflect that two datacenters now exist. Once data is replicated to the new datacenter, remove the number of nodes from the original datacenter that have "moved" to the new datacenter.
    • Change from: 5 nodes using the PropertyFileSnitch in a single datacenter

      To: 5 nodes in 1 datacenter and 2 racks using the RackInferringSnitch (add rack information).
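For example, the keyspace change described in the note above might be issued in cqlsh as follows; the keyspace name, datacenter names, and replication factors here are placeholders to adapt to your cluster:

    ALTER KEYSPACE cycling
    WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};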

Procedure

Steps for switching snitches:

  1. Create a properties file with datacenter and rack information.
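    For example, a cassandra-rackdc.properties file for the GossipingPropertyFileSnitch typically holds only the local node's datacenter and rack, while a cassandra-topology.properties file for the PropertyFileSnitch maps node IP addresses to datacenters and racks. The names and addresses below are placeholders:
      # cassandra-rackdc.properties
      dc=DC1
      rack=RAC1

      # cassandra-topology.properties
      110.82.155.0=DC1:RAC1
      110.82.155.1=DC1:RAC2
      default=DC1:RAC1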
  2. Copy the cassandra-rackdc.properties or cassandra-topology.properties file to the Cassandra configuration directory on all of the cluster's nodes. The file is not used until the new snitch is enabled.
    The location of the cassandra-topology.properties file depends on the type of installation:
      • DataStax Enterprise 5.0 Installer-Services and package installations: /etc/dse/cassandra/cassandra-topology.properties
      • DataStax Enterprise 5.0 Installer-No Services and tarball installations: install_location/resources/cassandra/conf/cassandra-topology.properties
      • Cassandra package installations: /etc/cassandra/cassandra-topology.properties
      • Cassandra tarball installations: install_location/conf/cassandra-topology.properties
    The location of the cassandra-rackdc.properties file depends on the type of installation:
      • DataStax Enterprise 5.0 Installer-Services and package installations: /etc/dse/cassandra/cassandra-rackdc.properties
      • DataStax Enterprise 5.0 Installer-No Services and tarball installations: install_location/resources/cassandra/conf/cassandra-rackdc.properties
      • Cassandra package installations: /etc/cassandra/cassandra-rackdc.properties
      • Cassandra tarball installations: install_location/conf/cassandra-rackdc.properties
    The location of the cassandra.yaml file depends on the type of installation:
      • DataStax Enterprise 5.0 Installer-Services and package installations: /etc/dse/cassandra/cassandra.yaml
      • DataStax Enterprise 5.0 Installer-No Services and tarball installations: install_location/resources/cassandra/conf/cassandra.yaml
      • Cassandra package installations: /etc/cassandra/cassandra.yaml
      • Cassandra tarball installations: install_location/conf/cassandra.yaml
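    As a rough sketch, assuming a Cassandra package installation and nodes reachable as node1 through node5, the file could be pushed to every node with scp:
      for host in node1 node2 node3 node4 node5; do
        scp cassandra-rackdc.properties $host:/etc/cassandra/
      done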
  3. Change the snitch for each node in the cluster in the node's cassandra.yaml file. For example:
    endpoint_snitch: GossipingPropertyFileSnitch
  4. If the topology has not changed, you can restart each node one at a time.

    Any change in the cassandra.yaml file requires a node restart.
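    A rolling restart on a Cassandra package installation might look like the following on each node in turn (service names and commands vary by platform and installation type); nodetool describecluster then reports the snitch in use:
      nodetool drain
      sudo service cassandra restart
      nodetool describecluster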

  5. If the topology of the network has changed, but no datacenters are added:
    1. Shut down all the nodes, then restart them.
    2. Run a sequential repair and nodetool cleanup on each node.
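      On each node, the repair and cleanup above might be run as follows (the -seq flag requests a sequential repair):
        nodetool repair -seq
        nodetool cleanup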
  6. If the topology of the network has changed and a datacenter is added:
    1. Create a new datacenter.
    2. Replicate data into the new datacenter, then remove nodes from the old datacenter.
    3. Run a sequential repair and nodetool cleanup on each node.
      CAUTION:
      DataStax recommends stopping repair operations during topology changes; the Repair Service does this automatically. Repairs running during a topology change are likely to error when they involve moving ranges.
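    As a rough sketch of sub-steps 2 and 3 with nodetool, assuming the old datacenter is named DC1 and the keyspace replication has already been updated to include the new datacenter (as in the earlier ALTER KEYSPACE example):
      # On each node in the new datacenter, stream existing data from the old datacenter:
      nodetool rebuild -- DC1
      # On each node being removed from the old datacenter, once rebuilds complete:
      nodetool decommission
      # Finally, on each remaining node:
      nodetool repair -seq
      nodetool cleanup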
  7. If migrating from the PropertyFileSnitch to the GossipingPropertyFileSnitch, remove the cassandra-topology.properties file from each node after the migration is complete.
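    For example, on a Cassandra package installation:
      sudo rm /etc/cassandra/cassandra-topology.properties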