Purging gossip state on a node

Correcting a problem in the gossip state.

Each node persists gossip information locally so that it is available immediately on restart, without having to wait for gossip communication with other nodes.


In the unlikely event you need to correct a problem in the gossip state:
  1. Use nodetool assassinate to forcibly remove the problem node from the cluster.

    This takes approximately 35 seconds to complete, so wait for confirmation that the node has been removed.
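A minimal sketch of this step is shown below; the address 10.0.0.5 is a hypothetical example, so substitute the IP of the actual problem node.

```shell
# Hypothetical address of the problem node -- substitute your own.
PROBLEM_NODE="10.0.0.5"

assassinate_node() {
  # Forcibly removes the node from gossip; unlike `nodetool removenode`,
  # this does not re-replicate the node's data first.
  nodetool assassinate "$1"
}

# Uncomment to run against a live cluster:
# assassinate_node "$PROBLEM_NODE"
```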

  2. If this method doesn't solve the problem, stop your client application from sending writes to the cluster.
  3. Take the entire cluster offline:
    1. Drain each node.
      nodetool drain
    2. Stop each node:
      • Cassandra package installations:
        sudo service cassandra stop
      • Cassandra tarball installations: find the Cassandra process ID (pid), then kill it:
        ps auwx | grep cassandra
        sudo kill pid
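Steps 3.1 and 3.2 can be sketched as a single loop over the cluster's nodes; the host names and passwordless ssh access below are assumptions for illustration.

```shell
# Hypothetical host list -- substitute the nodes of your cluster.
NODES="node1.example.com node2.example.com node3.example.com"

drain_and_stop() {
  for host in $NODES; do
    # Flush memtables and stop accepting writes before shutting down.
    ssh "$host" "nodetool drain && sudo service cassandra stop"
  done
}

# Uncomment to take the cluster offline:
# drain_and_stop
```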
  4. Clear the data from the peers table by removing everything in the peers-UUID directory, where UUID is the directory that corresponds to the appropriate node:
    sudo rm -r /var/lib/cassandra/data/system/peers-UUID/*
    Use caution when performing this step: it clears internal system data from Cassandra and can cause an application outage if it is not executed and validated carefully. To validate the results, run the following query individually on each node and confirm that all of the nodes can see all of the other nodes:
    select * from system.peers;
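One way to run the validation query on every node is a cqlsh loop like the following sketch; the host names are hypothetical placeholders.

```shell
# Hypothetical host list -- substitute the nodes of your cluster.
NODES="node1.example.com node2.example.com node3.example.com"

check_peers() {
  for host in $NODES; do
    echo "== peers as seen by $host =="
    # Each node should list every other node in the cluster.
    cqlsh "$host" -e "select peer, host_id from system.peers;"
  done
}

# check_peers   # uncomment once the nodes are reachable
```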
  5. Clear the gossip state when the node starts:
    • For tarball installations, you can use a command line option or edit the cassandra-env.sh. To use the command line:
      install_location/bin/cassandra -Dcassandra.load_ring_state=false #Starts Cassandra
    • For package installations or if you are not using the command line option above, add the following line to the cassandra-env.sh file:
      JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"
    The location of the cassandra-env.sh file depends on the type of installation:
      • Cassandra package installations: /etc/cassandra/cassandra-env.sh
      • Cassandra tarball installations: install_location/conf/cassandra-env.sh
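The edit itself can be scripted. The sketch below works on a temporary stand-in file so it is safe to try; in practice you would substitute the real cassandra-env.sh path for your installation type.

```shell
# Temporary stand-in for cassandra-env.sh, with one pre-existing option.
ENV_FILE=$(mktemp)
printf '%s\n' 'JVM_OPTS="$JVM_OPTS -Xms4G"' > "$ENV_FILE"

LINE='JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"'
# Append the flag only if it is not already present (idempotent).
grep -qxF "$LINE" "$ENV_FILE" || printf '%s\n' "$LINE" >> "$ENV_FILE"

grep -c 'load_ring_state' "$ENV_FILE"   # prints 1
```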
  6. Bring the cluster online one node at a time, starting with the seed nodes.
    • Cassandra package installations:
      sudo service cassandra start #Starts Cassandra
    • Cassandra tarball installations:
      cd install_location
      bin/cassandra #Starts Cassandra
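Step 6 can be sketched as an ordered loop that starts the seed nodes before the rest; the host lists, ssh access, and the fixed pause between starts are illustrative assumptions.

```shell
# Hypothetical lists -- seed nodes must come up before the remaining nodes.
SEEDS="seed1.example.com seed2.example.com"
OTHERS="node3.example.com node4.example.com"

start_cluster() {
  for host in $SEEDS $OTHERS; do
    ssh "$host" "sudo service cassandra start"
    # Give each node time to join gossip before starting the next.
    sleep 120
  done
}

# start_cluster   # uncomment to bring the cluster back online
```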

What's next

Remove the line you added to the cassandra-env.sh file so that the node loads its saved ring state on subsequent restarts.
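Removing the flag can likewise be scripted, assuming the line was added verbatim as shown in step 5. This sketch deletes it from a temporary stand-in file; substitute the real cassandra-env.sh path in practice.

```shell
# Temporary stand-in for cassandra-env.sh containing the added line.
ENV_FILE=$(mktemp)
printf '%s\n' 'JVM_OPTS="$JVM_OPTS -Xms4G"' \
              'JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"' > "$ENV_FILE"

# Delete the ring-state line; other JVM_OPTS lines are left untouched.
sed -i '/-Dcassandra.load_ring_state=false/d' "$ENV_FILE"

grep -c 'load_ring_state' "$ENV_FILE" || true   # prints 0
```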