About NodeSync

NodeSync is an easy-to-use continuous background repair that provides consistent performance and virtually eliminates the manual effort of running repair operations in a DataStax cluster.

  • Continuously validates that data is in sync on all replicas.

  • Always running but has low impact on cluster performance.

  • Fully automatic with no manual intervention needed.

  • Completely replaces anti-entropy repairs.

For write-heavy workloads, where more than 20% of the operations are writes, you may notice CPU consumption overhead associated with NodeSync. If that’s the case for your environment, DataStax recommends using nodetool repair instead of enabling NodeSync. See nodetool repair.
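
In that scenario, a scheduled anti-entropy repair can be run with nodetool; the keyspace name below is a placeholder:

```shell
# Run a full anti-entropy repair on a hypothetical keyspace.
nodetool repair my_keyspace
```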

NodeSync service

By default, each node runs the NodeSync service. The service is idle unless it has something to validate. NodeSync is enabled on a per-table basis. The service continuously validates local data ranges for NodeSync-enabled tables and repairs any inconsistency found. The local data ranges are split into small segments, which act as validation save points. Segments are prioritized in order to try to meet the per-table deadline target.
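
NodeSync is turned on per table through the table's nodesync option in CQL. A minimal sketch, using a hypothetical keyspace and table name:

```shell
# Enable NodeSync on one table via the DSE 'nodesync' table option.
cqlsh -e "ALTER TABLE cycling.comments WITH nodesync = {'enabled': 'true'};"
```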


A segment is a small local token range of a table. NodeSync recursively splits local ranges in half a certain number of times (depth) to create segments. The depth is calculated using the total table size, assuming equal distribution of data. Typically segments cover no more than 200 MB. The token ranges can be no smaller than a single partition, so very large partitions can result in segments larger than the configured size.
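
The halving logic can be sketched as follows. This illustrates only the depth calculation under the stated uniform-distribution assumption, with an example 1 GiB local range; it is not the exact DSE implementation:

```shell
# Halve an example local range until each segment is at most ~200 MB.
target=$((200 * 1024 * 1024))   # ~200 MB segment size ceiling
size=$((1024 * 1024 * 1024))    # example: 1 GiB of local data
depth=0
while [ "$size" -gt "$target" ]; do
  size=$((size / 2))
  depth=$((depth + 1))
done
echo "$depth"                   # 3 halvings -> 2^3 = 8 segments of ~128 MiB
```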

Incremental NodeSync

When incremental NodeSync is enabled on tables, new validations do not re-validate previously validated data, drastically lowering the NodeSync workload and its impact on cluster performance.
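
Incremental validation is set alongside the enabled flag in the same nodesync table option (an assumption here: the 'incremental' key as introduced in DSE 6.8, with a hypothetical table name):

```shell
# Enable incremental NodeSync on one table (DSE 6.8+ table option).
cqlsh -e "ALTER TABLE cycling.comments WITH nodesync = {'enabled': 'true', 'incremental': 'true'};"
```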

Validation process and status

After a segment is selected for validation, NodeSync reads the entirety of the data it covers from all replicas (using paging), checks for inconsistencies, and repairs them if needed. When a node validates a segment, it locks the segment in a system table so that other nodes do not duplicate the work. The lock is not race-free: a small amount of duplicated work is possible, a trade-off that avoids the complexity and cost of true distributed locking.

The outcome of each segment validation is recorded on completion in the system_distributed.nodesync_status table, which is used internally for resuming after failure, prioritization, and segment locking, and by tools. It is not meant to be read directly.

  • The validation status is one of the following:

    • successful: All replicas responded and all inconsistencies (if any) were properly repaired.

      • full_in_sync: All replicas were already in sync.

      • full_repaired: Some replicas were repaired.

    • unsuccessful: Either some replicas did not respond or repairs on inconsistent replicas failed.

      • partial_in_sync: Not all replicas responded, but those that did respond were in sync.

      • partial_repaired: Not all replicas responded, and some that did respond were repaired.

      • uncompleted: At most one node was available or responded, so no validation happened.

      • failed: Some unexpected errors occurred. (Check the node logs.)

        If validation of a large segment is interrupted, the entire segment must be validated again, which increases the amount of redundant work.


  • For debugging and tuning, knowledge of traditional repair internals is mostly unhelpful, because NodeSync relies on the read repair path instead.

  • NodeSync has no special optimizations for remote datacenters (DCs), so it may perform poorly over especially bad WAN links.

  • In aggregate, the CPU consumption of NodeSync might exceed that of traditional repair.

  • NodeSync only makes internal adjustments to try to hit the configured rate. Operators must ensure that the configured throughput is sufficient to meet the gc_grace_seconds commitment and is achievable by the hardware.

  • When incremental NodeSync is enabled, run a manual validation if a node loses an SSTable. A node can lose an SSTable when the file is corrupted and either must be deleted entirely, or sstablescrub cannot recover all of its data. See Manually starting NodeSync validation.
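
The rate and manual-validation adjustments above are made with the nodetool nodesyncservice subcommands and the standalone nodesync tool; the rate value and the keyspace/table name below are placeholders:

```shell
# Inspect and raise the per-node validation rate (in KB per second).
nodetool nodesyncservice getrate
nodetool nodesyncservice setrate 2048

# Submit a user-triggered validation of one table, for example after
# a node lost an SSTable.
nodesync validation submit cycling.comments
```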

Repair operations run against all or specific keyspaces skip tables that have NodeSync enabled. Running the repair command directly against an individual table is rejected when NodeSync is enabled on that table.


© 2024 DataStax | Privacy policy | Terms of use
