nodetool tpstats

Provides usage statistics of thread pools.


nodetool <options> tpstats

Tarball and Installer No-Services path:

Short  Long               Description

-h     --host             Hostname or IP address.

-F     --format           Output format: json or yaml.

-p     --port             Port number.

-pwf   --password-file    Password file path.

-pw    --password         Remote JMX agent password.

-u     --username         Remote JMX agent username.

--                        Separates an option from an argument that could be mistaken for an option.


The DataStax Enterprise (DSE) database is based on a Staged Event-Driven Architecture (SEDA): the database separates different tasks into stages connected by a messaging service, and each stage has a queue and a thread pool. Some stages skip the messaging service and queue tasks immediately on another stage when that stage exists on the same node. A queue can back up when the next stage is too busy, which leads to a performance bottleneck, as described in Monitoring a DataStax Enterprise cluster.
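The stage-queue-pool idea can be sketched in a few lines of Python. This is illustrative only, not DSE code: each hypothetical stage owns a queue and a worker thread, and hands its result to the next stage's queue, just as SEDA stages hand tasks to one another.

```python
# Minimal SEDA-style sketch (illustrative, not DSE internals): each stage
# has its own queue and worker thread; results flow to the downstream stage.
import queue
import threading

def make_stage(work, downstream=None):
    q = queue.Queue()
    def worker():
        while True:
            task = q.get()
            if task is None:                 # sentinel: shut the stage down
                if downstream is not None:
                    downstream.put(None)     # propagate shutdown downstream
                break
            result = work(task)
            if downstream is not None:
                downstream.put(result)       # hand off to the next stage
    threading.Thread(target=worker, daemon=True).start()
    return q

results = queue.Queue()
write_stage = make_stage(lambda m: m.upper(), downstream=results)
mutation_stage = make_stage(lambda m: m + "!", downstream=write_stage)

mutation_stage.put("insert")                 # task enters the first stage
mutation_stage.put(None)                     # then shut the pipeline down
out = results.get()
print(out)                                   # INSERT!
```

If the second stage's worker were slow, tasks would accumulate in its queue; that accumulation is exactly what the Pending column of nodetool tpstats reports per stage.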

The nodetool tpstats command reports on each stage of database operations by thread pool:

  • The number of Active threads

  • The number of Pending requests waiting to be executed by this thread pool

  • The number of tasks Completed by this thread pool

  • The number of requests that are currently Blocked because the thread pool for the next step in the service is full

  • The total number of All-Time Blocked requests, which are all requests blocked in this thread pool up to now.

Reports are updated when SSTables change through compaction or flushing.
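The five counters above appear as whitespace-separated columns in each pool row of the command's output. A small parsing sketch, assuming the column layout shown in the example output later on this page:

```python
# Hedged sketch: parse one pool row of `nodetool tpstats` output into the
# five counters described above. Column order is assumed from the example
# output on this page: Active, Pending, Completed, Blocked, All time blocked.
def parse_pool_line(line):
    parts = line.split()
    name = " ".join(parts[:-5])   # pool name is everything before the counters
    active, pending, completed, blocked, all_time = (int(p) for p in parts[-5:])
    return {"pool": name, "active": active, "pending": pending,
            "completed": completed, "blocked": blocked,
            "all_time_blocked": all_time}

row = parse_pool_line("ReadStage                0    0    7    0    0")
print(row["pool"], row["completed"])   # ReadStage 7
```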

Run nodetool tpstats on a local node to get statistics for the thread pools used by the DSE instance running on that node.

Run nodetool tpstats with the appropriate options to check the thread pool statistics for a remote node. For setup instructions, see Enabling DSE Unified Authentication.
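For example, a remote invocation might look like the following. The flags follow standard nodetool usage; the host, port, and JMX credentials are placeholders to substitute for your cluster.

```shell
# Placeholder host, port, and JMX credentials -- substitute your own values.
HOST=10.10.1.5
PORT=7199
CMD="nodetool -h $HOST -p $PORT -u jmx_user -pw jmx_password tpstats"
echo "$CMD"
```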

nodetool tpstats pool names and tasks

Task or property associated with each pool name reported in the nodetool tpstats output:

AntiEntropyStage
    Processing repair messages and streaming. For details, see Nodetool repair.

CacheCleanupExecutor
    Clearing the cache.

CommitLogArchiver
    Copying or archiving commitlog files for recovery.

CompactionExecutor
    Running compaction.

CounterMutationStage
    Processing local counter changes. Backs up if the write rate exceeds the mutation rate; a high pending count is seen when the consistency level is set to ONE and there is a heavy counter-increment workload.

GossipStage
    Distributing node information via Gossip. Out-of-sync schemas can cause issues; you may have to sync them using nodetool resetlocalschema.

HintsDispatcher
    Sending missed mutations to other nodes. A backlog here is usually a symptom of a problem elsewhere; use nodetool disablehandoff and run repair.

InternalResponseStage
    Responding to non-client-initiated messages, including bootstrapping and schema checking.

MemtableFlushWriter
    Writing memtable contents to disk. May back up if flushes overrun the disk I/O, or because of sorting processes. nodetool tpstats no longer reports blocked threads in the MemtableFlushWriter pool; check the Pending Flushes metric reported by nodetool tablestats instead.

MemtablePostFlush
    Cleaning up after flushing the memtable (discarding commit logs and secondary indexes as needed).

MemtableReclaimMemory
    Making unused memory available.

MigrationStage
    Processing schema changes.

MiscStage
    Snapshotting; replicating data after a node removal completes.

MutationStage
    Performing local inserts/updates, schema merges, commit log replays, or hints in progress. A high number of pending write requests indicates that the node is having trouble handling them. Fix this by adding a node, tuning hardware and configuration, and/or updating the data model.

Native-Transport-Requests
    Processing CQL requests to the server.

PendingRangeCalculator
    Calculating pending ranges for bootstrapping and departed nodes. Reporting by this tool is not useful; see Developer notes.

ReadRepairStage
    Performing read repairs. Usually fast when there is good connectivity between replicas. If Pending grows too large, attempt to lower the rate for high-read tables by altering the table to use a smaller read_repair_chance value, such as 0.11.

ReadStage
    Performing local reads; also includes deserializing data from the row cache. High pending values can cause increased read latency. Generally resolved by adding nodes or tuning the system.

RequestResponseStage
    Handling responses from other nodes.

ValidationExecutor
    Validating schema.
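Since several of the entries above advise acting when Pending or Blocked counts grow, a simple monitoring sketch can scan the pool section of the output and flag busy pools. The sample text and the thresholds are illustrative assumptions, not recommended values:

```python
# Hedged sketch: flag pools whose Pending or Blocked counts exceed a
# threshold. SAMPLE and the threshold values are illustrative only.
SAMPLE = """\
Pool Name                  Active  Pending  Completed  Blocked  All time blocked
ReadStage                       0      120          7        0                 0
MutationStage                   2        0        900        1                 3
GossipStage                     0        0        400        0                 0
"""

def flag_busy_pools(text, pending_max=100, blocked_max=0):
    flagged = []
    for line in text.splitlines()[1:]:       # skip the header row
        parts = line.split()
        name = parts[0]
        active, pending, completed, blocked, _ = (int(p) for p in parts[1:])
        if pending > pending_max or blocked > blocked_max:
            flagged.append(name)
    return flagged

print(flag_busy_pools(SAMPLE))   # ['ReadStage', 'MutationStage']
```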

nodetool tpstats droppable messages

The database generates the messages listed below, but discards them after a timeout. The nodetool tpstats command reports the number of messages of each type that have been dropped. You can view the messages themselves using a JMX client.

Message type, stage, and notes:

_TRACE
    Stage: n/a (special). Used for recording traces (nodetool settraceprobability). Has a special executor (1 thread, 1000 queue depth) that throws away messages on insertion instead of during execution.

MUTATION
    Stage: MutationStage. If a write message is processed after its timeout (write_request_timeout_in_ms), the coordinator has either sent a failure to the client or met the requested consistency level, and relies on hinted handoff and read repair to apply the mutation.

COUNTER_MUTATION
    Stage: MutationStage. Same behavior as MUTATION: past write_request_timeout_in_ms, the coordinator has either sent a failure to the client or met the requested consistency level, and relies on hinted handoff and read repair to apply the mutation.

READ_REPAIR
    Stage: MutationStage. Times out after write_request_timeout_in_ms.

READ
    Stage: ReadStage. Times out after read_request_timeout_in_ms. There is no point in servicing a read after that, because an error has already been returned to the client.

RANGE_SLICE
    Stage: ReadStage. Times out after range_request_timeout_in_ms.

PAGED_RANGE
    Stage: ReadStage. Times out after request_timeout_in_ms.

REQUEST_RESPONSE
    Stage: RequestResponseStage. Times out after request_timeout_in_ms. The response was completed and sent back, but not before the timeout.
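The per-type dropped counts appear in the second section of the command's output. A parsing sketch, assuming the "Message type / Dropped" column layout shown in the example output below (the sample counts are made up):

```python
# Hedged sketch: extract per-type dropped counts from the "Message type"
# section of nodetool tpstats output. SAMPLE layout follows the example
# output on this page; the counts are invented for illustration.
SAMPLE = """\
Message type           Dropped
READ                         0
MUTATION                     5
READ_REPAIR                  2
"""

def dropped_counts(text):
    counts = {}
    for line in text.splitlines()[1:]:       # skip the header row
        mtype, dropped = line.split()[:2]
        counts[mtype] = int(dropped)
    return counts

print(dropped_counts(SAMPLE))   # {'READ': 0, 'MUTATION': 5, 'READ_REPAIR': 2}
```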


Running nodetool tpstats:

nodetool tpstats

Example output is:

Pool Name                         Active   Pending      Completed   Blocked  All time blocked
ReadStage                              0         0              7         0                 0
ContinuousPagingStage                  0         0              0         0                 0
MiscStage                              0         0              0         0                 0
CompactionExecutor                     0         0             76         0                 0
MutationStage                          0         0              2         0                 0
GossipStage                            0         0              0         0                 0
RequestResponseStage                   0         0              0         0                 0
ReadRepairStage                        0         0              0         0                 0
CounterMutationStage                   0         0              0         0                 0
MemtablePostFlush                      0         0            182         0                 0
ValidationExecutor                     0         0              0         0                 0
MemtableFlushWriter                    0         0             52         0                 0
ViewMutationStage                      0         0              0         0                 0
CacheCleanupExecutor                   0         0              0         0                 0
MemtableReclaimMemory                  0         0             52         0                 0
PendingRangeCalculator                 0         0              1         0                 0
AntiCompactionExecutor                 0         0              0         0                 0
SecondaryIndexManagement               0         0              0         0                 0
HintsDispatcher                        0         0              0         0                 0
Native-Transport-Requests              0         0              0         0                 0
MigrationStage                         0         0             12         0                 0
PerDiskMemtableFlushWriter_0           0         0             51         0                 0
Sampler                                0         0              0         0                 0
InternalResponseStage                  0         0              0         0                 0
AntiEntropyStage                       0         0              0         0                 0

Message type           Dropped                  Latency waiting in queue (micros)
                                             50%               95%               99%               Max
READ                         0               N/A               N/A               N/A               N/A
RANGE_SLICE                  0               N/A               N/A               N/A               N/A
_TRACE                       0               N/A               N/A               N/A               N/A
HINT                         0               N/A               N/A               N/A               N/A
MUTATION                     0               N/A               N/A               N/A               N/A
COUNTER_MUTATION             0               N/A               N/A               N/A               N/A
BATCH_STORE                  0               N/A               N/A               N/A               N/A
BATCH_REMOVE                 0               N/A               N/A               N/A               N/A
REQUEST_RESPONSE             0               N/A               N/A               N/A               N/A
PAGED_RANGE                  0               N/A               N/A               N/A               N/A
READ_REPAIR                  0               N/A               N/A               N/A               N/A


© 2024 DataStax
