nodetool resetlocalschema

Where is the file?

The location of the file depends on the type of installation:

Installation Type | Location
Package installations + Installer-Services installations |
Tarball installations + Installer-No Services installations |

Resets the node's local schema and resynchronizes it from another node.


nodetool [options] resetlocalschema [args]

Tarball and Installer No-Services path:

Connection options
Short | Long            | Description
-h    | --host          | Hostname or IP address.
-p    | --port          | Port number.
-pwf  | --password-file | Password file path.
-pw   | --password      | Password.
-u    | --username      | Remote JMX agent username.
--    |                 | Separates an option from an argument that could be mistaken for an option.

  • For tarball installations, execute the command from the <installation_location>/bin directory.

  • If a username and password for RMI authentication are set explicitly in the file for the host, then you must specify credentials.

  • nodetool resetlocalschema operates on the local node unless the -h option identifies one or more other nodes. If the node from which you issue the command is the intended target, the -h option is unnecessary; for remote invocation, identify the target node or nodes with -h.
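The points above can be illustrated with a sketch of the two invocation styles; the host address and JMX credentials shown are placeholder values, not ones taken from this page:

```shell
# Run against the local node (no -h needed when the target is the node
# you are logged in to):
nodetool resetlocalschema

# Run against a remote node, supplying JMX credentials when RMI
# authentication is enabled; 192.168.1.22 and the cassandra/cassandra
# credentials are placeholders.
nodetool -h 192.168.1.22 -u cassandra -pw cassandra resetlocalschema
```

For a tarball installation, prefix the command with the path to the bin directory as noted above.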


Normally, this command is used to rectify schema disagreements on different nodes. It can be useful if table schema changes have generated too many tombstones, on the order of 100,000s.

nodetool resetlocalschema drops the schema information of the local node and resynchronizes the schema from another node. To drop the schema, the tool truncates all the system schema tables. The node temporarily loses metadata about the tables on the node but rewrites the information from another node. If the node is experiencing problems with too many tombstones, the truncation of the tables eliminates the tombstones.

This command is useful when you have one node that is out of sync with the cluster; the system schema tables must have another node from which to fetch the tables. It is not useful when all or many of your nodes are in an incorrect state. If there is only one node in the cluster (replication factor of 1), the command does not perform the operation, because another node from which to fetch the tables does not exist. Run the command on the node experiencing difficulty.
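A typical workflow, sketched below, is to confirm the schema disagreement first and then verify that it is resolved after the reset; run the first and last steps from any node:

```shell
# 1. Check whether all nodes report the same schema version; more than
#    one version listed under "Schema versions" indicates disagreement.
nodetool describecluster

# 2. On the out-of-sync node, drop the local schema and resynchronize
#    it from the rest of the cluster.
nodetool resetlocalschema

# 3. Confirm that the cluster has converged on a single schema version.
nodetool describecluster
```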


© 2024 DataStax | Privacy policy | Terms of use
