nodetool tpstats

Returns usage statistics of thread pools.

The database separates different tasks into stages connected by a messaging service. Each stage has a queue and a thread pool. Some stages skip the messaging service and queue tasks immediately on a different stage when it exists on the same node. If the next stage is too busy, its queue can back up and lead to performance bottlenecks, as described in Monitor an HCD cluster.
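
Because a backed-up stage usually shows up as a growing Pending count, it can help to sample the report repeatedly rather than once. The following is a minimal sketch that assumes the standard watch utility is available on the node; the 10-second interval is only an example.

# Refresh the thread pool report every 10 seconds to watch for growing Pending or Blocked counts
watch -n 10 nodetool tpstats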

Reports are updated after SSTables change through compaction or flushing.

HCD is built on Apache Cassandra® and includes additional capabilities for vector search. Vector search operations may impact thread pool usage patterns compared to standard Cassandra deployments.

Report columns

The nodetool tpstats command report includes the following columns:

Active

The number of Active threads.

Pending

The number of Pending requests waiting to be executed by this thread pool.

Completed

The number of tasks Completed by this thread pool.

Blocked

The number of requests that are currently Blocked because the thread pool for the next step in the service is full.

All-Time Blocked

The total number of All-Time Blocked requests: all requests that have been blocked in this thread pool since the node started.
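
As a quick way to surface pools that currently have work queued or blocked, you can filter the report with standard shell tools. The following is a sketch that assumes the default plain-text layout shown in the Examples section, where the pool table ends at the first blank line; adjust the field positions if the layout differs.

# Print pools whose Pending or Blocked counts are nonzero
nodetool tpstats | sed -n '1,/^$/p' | awk 'NR > 1 && ($3 + $5) > 0 {print $1, "pending=" $3, "blocked=" $5}'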

Report rows

The following list describes the tasks and properties reported in the nodetool tpstats output.

General metrics

The following rows report aggregated statistics for tasks on the local node:

BackgroundIoStage

Completes background tasks like submitting hints and deserializing the row cache.

CompactionExecutor

Running compaction.

GossipStage

Distributing node information via Gossip. Out-of-sync schemas can cause issues; you may need to resynchronize them using nodetool resetlocalschema.
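
If you suspect a schema disagreement, you can confirm it before resetting the local schema. Both commands below are standard nodetool commands; connection options are omitted for brevity.

# List the schema versions seen across the cluster; more than one version indicates disagreement
nodetool describecluster

# If only this node disagrees, rebuild its schema from the rest of the cluster
nodetool resetlocalschema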

HintsDispatcher

Dispatches a single hints file to a specified node in a batched manner.

InternalResponseStage

Responding to non-client initiated messages, including bootstrapping and schema checking.

MemtableFlushWriter

Writing memtable contents to disk. May back up if the queue overruns the disk I/O, or because of sorting processes.

nodetool tpstats no longer reports blocked threads in the MemtableFlushWriter pool. Check the Pending Flushes metric reported by nodetool tablestats.
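
For example, you can read the flush backlog for a single table directly from the table statistics report. The keyspace and table names below are placeholders, and the grep filter is only a convenience.

# Show the pending flush count for a specific table (replace the keyspace and table names)
nodetool tablestats my_keyspace.my_table | grep -i 'pending flushes'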

MemtablePostFlush

Cleaning up after flushing the memtable (discarding commit logs and secondary indexes as needed).

MemtableReclaimMemory

Making unused memory available.

PendingRangeCalculator

Calculating pending token ranges resulting from bootstrapping and departed nodes. Reporting by this tool is not useful; see Developer notes.

PerDiskMemtableFlushWriter_N

Activity for the memtable flush writer of each disk.

ReadRepairStage

Performing read repairs. Usually fast if there is good connectivity between replicas.

The main thread pool stages include:

  • ReadStage - Handles read requests and range queries. When a read request comes in, it’s queued on the ReadStage and processed by one of its threads. Vector search operations and SAI queries are also processed through this stage. See the Pythian guide to Cassandra thread pools for detailed information about read request flow.

  • WriteStage - Processes write operations including mutations, counter mutations, and read repairs. This stage handles all write-related tasks that need to be persisted to the database. Vector data writes and SAI index updates are processed through this stage.

  • RequestResponseStage - Manages the response handling for completed requests. This stage processes the final responses before sending them back to clients.

  • ReadRepairStage - Handles asynchronous read repair operations. This stage processes read repair tasks that are not in the client feedback loop, so they can queue up without affecting client response times.

  • MutationStage - Processes mutation operations. This stage handles the actual data mutations and writes to the database.

  • GossipStage - Manages cluster membership and gossip communication between nodes.

  • AntiEntropyStage - Handles anti-entropy operations for data consistency across the cluster.

  • MigrationStage - Manages schema changes and data migration operations.

  • CompactionExecutor - Handles SSTable compaction operations to optimize storage and performance.

For HCD deployments with vector search and SAI indexing, the CompactionExecutor may experience increased load due to the additional indexing overhead required for vector data and SAI indexes.

For a comprehensive understanding of how these stages work together in the Staged Event Driven Architecture (SEDA), see the Pythian guide to Cassandra thread pools.
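
When you are interested only in the stages listed above, you can filter the report down to those pools. This is a simple sketch based on the plain-text output; it keeps the header row so the columns stay labeled.

# Focus on the read, write, and compaction pools
nodetool tpstats | grep -E 'Pool Name|ReadStage|WriteStage|RequestResponseStage|MutationStage|CompactionExecutor'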

Droppable messages

The database generates the messages listed below, but discards them after a timeout. The nodetool tpstats command reports the number of messages of each type that have been dropped. You can view the messages themselves using a JMX client.

Each entry below lists the message type, the stage that processes it, and notes about its timeout behavior.

BINARY (stage: n/a)

Deprecated.

_TRACE (stage: n/a, special)

Used for recording traces (nodetool settraceprobability). Has a special executor (1 thread, 1000 queue depth) that discards messages on insertion instead of during execution.

MUTATION (stage: WriteStage)

Times out after write_request_timeout_in_ms. If the timeout is exceeded, the write may fail or rely on hinted handoff for recovery.

COUNTER_MUTATION (stage: WriteStage)

Times out after write_request_timeout_in_ms. If the timeout is exceeded, the write may fail or rely on hinted handoff for recovery.

READ_REPAIR (stage: WriteStage)

Times out after read_request_timeout_in_ms because read repair occurs during read operations.

READ (stage: ReadStage)

Times out after read_request_timeout_in_ms. There is no point in servicing the read after that because an error has already been returned to the client.

RANGE_SLICE (stage: ReadStage)

Times out after range_request_timeout_in_ms.

PAGED_RANGE (stage: ReadStage)

Times out after range_request_timeout_in_ms.

REQUEST_RESPONSE (stage: RequestResponseStage)

Times out after the appropriate request timeout based on the original operation type. The response was completed and sent back, but not before the timeout expired.
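
The dropped-message counters are also exposed over JMX under the org.apache.cassandra.metrics DroppedMessage MBeans, so any JMX client can read them. The snippet below is a hypothetical sketch using the jmxterm client; the jar name and location are assumptions that depend on your installation, and 7199 is the default JMX port.

# Read the cumulative dropped MUTATION count over JMX (jmxterm jar name and path are illustrative)
echo 'get -b org.apache.cassandra.metrics:type=DroppedMessage,scope=MUTATION,name=Dropped Count' | \
  java -jar jmxterm.jar -l localhost:7199 -n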

HCD specific considerations

When using HCD with vector search and SAI indexing, consider the following:

  • Vector search workloads: Vector similarity searches and ANN (Approximate Nearest Neighbor) queries may require additional processing time and can impact thread pool performance.

  • SAI indexing: Storage Attached Indexes, especially vector indexes, can increase the load on the CompactionExecutor and other stages during index maintenance operations.

  • Capacity planning: For production deployments with vector search, ensure adequate CPU resources as recommended in Vector search capacity planning.

  • Monitoring: Pay attention to thread pool metrics when running vector search workloads, as these operations may have different performance characteristics than traditional NoSQL workloads. A simple capture sketch follows this list.
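
To follow these metrics over the course of a vector search load test, one simple approach is to append the report to a log file at a fixed interval. This is a minimal sketch; the 60-second interval and the log file name are only examples.

# Append a timestamped thread pool report every 60 seconds (stop with Ctrl+C)
while true; do
  date
  nodetool tpstats
  sleep 60
done >> tpstats-vector-load.log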

Synopsis

nodetool [<connection_options>] tpstats
[-C] [-F json | yaml]

Definition

include::ROOT:partial$cql/CqlWHlistShortLongFormNodetoolShort.adoc[]

include::ROOT:partial$cql/CqlWHconnOptsPara.adoc[]

include::ROOT:partial$cql/CqlWHnodetoolConnectOpts.adoc[]

include::ROOT:partial$cql/CqlWHcmdOptsPara.adoc[]

include::ROOT:partial$cql/CqlWHC_cores.adoc[]

include::ROOT:partial$cql/CqlWHF_format.adoc[]

Examples

Get usage statistics for thread pools

nodetool tpstats
Result
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
CompactionExecutor              0          0         427073         0                 0
GossipStage                     0          0        1565077         0                 0
InternalResponseStage           0          0              2         0                 0
MemtableFlushWriter             0          0            183         0                 0
MemtablePostFlush               0          0            700         0                 0
MemtableReclaimMemory           0          0            183         0                 0
MigrationStage                  0          0              2         0                 0
PendingRangeCalculator          0          0              2         0                 0
PerDiskMemtableFlushWriter_0    0          0            181         0                 0
ReadRepairStage                 0          0           1779         0                 0
ReadStage                       0          0        1062638         0                 0
RequestResponseStage            0          0        536269         0                 0
ViewBuildExecutor               0          0             32         0                 0
WriteStage                      0          0         262963         0                 0

Message type            Dropped                  Latency waiting in queue (micros)
                                              50%               95%               99%               Max
RANGE_SLICE                   0           3670.02           3670.02           3670.02           4194.30
SNAPSHOT                      0               N/A               N/A               N/A               N/A
HINT                          0               N/A               N/A               N/A               N/A
COUNTER_MUTATION              0               N/A               N/A               N/A               N/A
LWT                           0               N/A               N/A               N/A               N/A
BATCH_STORE                   0               N/A               N/A               N/A               N/A
VIEW_MUTATION                 0              0.00           2621.44           8388.61          25165.82
READ                          0              0.00              0.00              0.00              0.00
OTHER                         0           1835.01           3670.02           3670.02           4194.30
REPAIR                        0               N/A               N/A               N/A               N/A
SCHEMA                        0              0.00          50331.65          50331.65          58720.26
MUTATION                      0              0.00           5242.88           5242.88           6291.46
READ_REPAIR                   0               N/A               N/A               N/A               N/A
TRUNCATE                      0               N/A               N/A               N/A               N/A

Get usage statistics for thread pools with information about each core

nodetool tpstats -C
Result
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
CompactionExecutor              0          0         427751         0                 0
GossipStage                     0          0        1567102         0                 0
InternalResponseStage           0          0              2         0                 0
MemtableFlushWriter             0          0            189         0                 0
MemtablePostFlush               0          0            712         0                 0
MemtableReclaimMemory           0          0            189         0                 0
MigrationStage                  0          0              2         0                 0
PendingRangeCalculator          0          0              2         0                 0
PerDiskMemtableFlushWriter_0    0          0            187         0                 0
ReadRepairStage                 0          0           1782         0                 0
ReadStage                       0          0        1064206         0                 0
RequestResponseStage            0          0        536301         0                 0
ViewBuildExecutor               0          0             32         0                 0
WriteStage                      0          0         262963         0                 0

Meters                                   One Minute Rate    Five Minute Rate     Fifteen Minute Rate      Mean Rate          Count    Connections
CLIENT_REQUEST                                0.00                0.00                    0.00           0.00              0              0
INTERNODE_MESSAGE                             0.00                0.00                    0.00           0.00              0              0

Message type            Dropped                  Latency waiting in queue (micros)
                                              50%               95%               99%               Max
RANGE_SLICE                   0           3670.02           3670.02           3670.02           4194.30
SNAPSHOT                      0               N/A               N/A               N/A               N/A
HINT                          0               N/A               N/A               N/A               N/A
COUNTER_MUTATION              0               N/A               N/A               N/A               N/A
LWT                           0               N/A               N/A               N/A               N/A
BATCH_STORE                   0               N/A               N/A               N/A               N/A
VIEW_MUTATION                 0              0.00           2621.44           8388.61          25165.82
READ                          0              0.00              0.00              0.00              0.00
OTHER                         0           1835.01           3670.02           3670.02           4194.30
REPAIR                        0               N/A               N/A               N/A               N/A
SCHEMA                        0              0.00          50331.65          50331.65          58720.26
MUTATION                      0              0.00           3670.02           3670.02           4194.30
READ_REPAIR                   0               N/A               N/A               N/A               N/A
TRUNCATE                      0               N/A               N/A               N/A               N/A
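
Get usage statistics for thread pools in YAML format

The -F option shown in the synopsis formats the report as JSON or YAML, which is convenient for feeding the statistics into monitoring scripts. The following command is a brief example based on that option; it reports the same pools and dropped-message statistics as the plain-text output above.

nodetool tpstats -F yaml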
