About NodeSync

NodeSync is an easy-to-use continuous background repair service that has low overhead, provides consistent performance, and virtually eliminates the manual effort of running repair operations in a DataStax cluster. NodeSync:

  • Continuously validates that data is in sync on all replicas.
  • Runs continuously, but with low impact on cluster performance.
  • Is fully automatic; no manual intervention is needed.
  • Completely replaces anti-entropy repairs.
Important: For write-heavy workloads, where more than 20% of operations are writes, you may notice CPU overhead associated with NodeSync. If that is the case for your environment, DataStax recommends using nodetool repair instead of enabling NodeSync. See nodetool repair.
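
For example, a manual repair can be run per keyspace with nodetool; the keyspace name my_keyspace below is illustrative, and the -pr option limits the run to the node's primary token ranges so that repeating the command on every node avoids redundant work:

    # Full repair of a single keyspace (keyspace name is illustrative)
    nodetool repair my_keyspace

    # Repair only this node's primary token ranges; run on each node in turn
    nodetool repair -pr my_keyspace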

NodeSync service

By default, each node runs the NodeSync service. The service is idle unless it has something to validate. NodeSync is enabled on a per-table basis. The service continuously validates local data ranges for NodeSync-enabled tables and repairs any inconsistencies it finds. The local data ranges are split into small segments, which act as validation save points. Segments are prioritized in order to try to meet the per-table deadline target.
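
For example, NodeSync is turned on or off for a table through its nodesync table option in CQL; this is a sketch, and the keyspace and table names below are illustrative:

    -- Enable NodeSync on an existing table (keyspace and table names are illustrative)
    ALTER TABLE cycling.comments WITH nodesync = { 'enabled' : 'true' };

    -- Disable NodeSync on the same table
    ALTER TABLE cycling.comments WITH nodesync = { 'enabled' : 'false' };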

Segments

A segment is a small local token range of a table. To create segments, NodeSync recursively splits each local range in half a certain number of times (the depth). The depth is calculated from the total table size, assuming an equal distribution of data, so that segments typically cover no more than 200 MB. For example, a 12.8 GB local range split to a depth of 6 yields 64 segments of roughly 200 MB each. A token range can be no smaller than a single partition, so very large partitions can result in segments larger than the configured size.

Validation process and status

After a segment is selected for validation, NodeSync reads the entirety of the data the segment covers from all replicas (using paging), checks for inconsistencies, and repairs them if needed. When a node validates a segment, it “locks” the segment in a system table to avoid duplicating work done by other nodes. The lock is not race-free, so some work may occasionally be duplicated; this trade-off avoids the complexity and cost of true distributed locking.

The result of each segment validation is saved on completion in the system_distributed.nodesync_status table, which is used internally for resuming on failure, prioritization, and segment locking, and by tools. It is not meant to be read directly.
  • Validation status is one of the following:
    • successful: All replicas responded and all inconsistencies (if any) were properly repaired.
      • full_in_sync: All replicas were already in sync.
      • full_repaired: Some replicas were repaired.
    • unsuccessful: Either some replicas did not respond or repairs on inconsistent replicas failed.
      • partial_in_sync: Not all replicas responded, but those that did respond were in sync.
      • partial_repaired: Not all replicas responded, and some that did respond were repaired.
      • uncompleted: At most one node was available or responded, so no validation happened.
      • failed: Some unexpected errors occurred. (Check the node logs.)
Note: If a validation is interrupted, it resumes from the last completed segment, so large segments increase the amount of work that must be redone.
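
Although the table is not meant to be read directly, a read-only query in cqlsh can confirm that validations are being recorded; this is only an inspection sketch, and the table's column layout varies by version, so no specific columns are assumed:

    -- Read-only peek at recent NodeSync validation records (inspection only;
    -- column layout varies by DSE version)
    SELECT * FROM system_distributed.nodesync_status LIMIT 10;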

Limitations

  • For debugging and tuning, knowledge of traditional repair is mostly unhelpful, because NodeSync relies on the read repair path instead.
  • There are no special optimizations for remote datacenters, so NodeSync may perform poorly over particularly bad WAN links.
  • In aggregate, the CPU consumption of NodeSync might exceed that of traditional repair.
  • NodeSync only makes internal adjustments to try to hit the configured rate. Operators must ensure that the configured throughput is sufficient to meet the gc_grace_seconds commitment and that the hardware can achieve it (see the example after the note below).
Important: Tables with NodeSync enabled are skipped by repair operations that run against all keyspaces or specific keyspaces. Running the repair command against an individual table is rejected when NodeSync is enabled on that table.
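
As a sketch of checking and adjusting the NodeSync rate, the nodetool nodesyncservice subcommands below reflect DSE 6.x and may differ by version; the rate is expressed in KB per second, and the value shown is illustrative, not a recommendation:

    # Show the currently configured NodeSync rate (KB per second)
    nodetool nodesyncservice getrate

    # Raise the rate if the current throughput cannot cover all NodeSync-enabled
    # tables within gc_grace_seconds (the value below is illustrative)
    nodetool nodesyncservice setrate 2048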