nodetool tpstats
Provides usage statistics of thread pools.
Synopsis
nodetool <options> tpstats
Tarball and Installer No-Services path:
<installation_location>/resources/cassandra/bin
Short | Long | Description |
---|---|---|
-h | --host | Hostname or IP address. |
-F | --format | Output format. |
-p | --port | Port number. |
-pwf | --password-file | Password file path. |
-pw | --password | Password. |
-u | --username | Remote JMX agent username. |
-- | | Separates an option from an argument that could be mistaken for an option. |
Description
The DataStax Enterprise (DSE) database is based on a Staged Event-Driven Architecture (SEDA), which separates different tasks into stages connected by a messaging service. Each stage has a queue and a thread pool. Some stages skip the messaging service and queue tasks directly on the next stage when that stage exists on the same node. A queue can back up when the next stage is too busy, which leads to a performance bottleneck, as described in Monitoring a DataStax Enterprise cluster.
The nodetool tpstats command reports the following for each stage of database operations, by thread pool (a quick way to watch these counters over time is shown after the list):
- The number of Active threads
- The number of Pending requests waiting to be executed by this thread pool
- The number of tasks Completed by this thread pool
- The number of requests that are currently Blocked because the thread pool for the next step in the service is full
- The total number of All-Time Blocked requests, which are all requests blocked in this thread pool up to now
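A queue whose Pending or Blocked counts keep growing between samples usually marks the bottleneck stage. One simple way to watch the counters change over time is to wrap the command in the standard Linux watch utility (an assumption about your environment, not part of DSE):
watch -d -n 10 nodetool tpstats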
Reports are updated when SSTables change through compaction or flushing.
Run nodetool tpstats on a local node to get statistics for the thread pools used by the DSE instance running on that node. Run nodetool tpstats with the appropriate connection options to check the thread pool statistics of a remote node.
For setup instructions, see Enabling DSE Unified Authentication.
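For example, to query a remote node over JMX with authentication enabled (the address, port, and credentials below are placeholders for illustration only):
nodetool -h 203.0.113.10 -p 7199 -u jmx_username -pw jmx_password tpstats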
nodetool tpstats pool names and tasks
Pool Name | Associated tasks | Related information |
---|---|---|
AntiEntropyStage | Processing repair messages and streaming | For details, see nodetool repair. |
CacheCleanupExecutor | Clearing the cache | |
CommitLogArchiver | Copying or archiving commitlog files for recovery | |
CompactionExecutor | Running compaction | |
CounterMutationStage | Processing local counter changes | Backs up if the write rate exceeds the mutation rate. A high pending count is seen if consistency level ONE is used for counters. |
GossipStage | Distributing node information via Gossip | Out-of-sync schemas can cause issues. You may have to sync using nodetool resetlocalschema. |
HintsDispatcher | Sending missed mutations to other nodes | Usually a symptom of a problem elsewhere. Use nodetool disablehandoff and run a repair. |
InternalResponseStage | Responding to non-client initiated messages, including bootstrapping and schema checking | |
MemtableFlushWriter | Writing memtable contents to disk | May back up if the queue overruns the disk I/O, or because of sorting processes. |
MemtablePostFlush | Cleaning up after flushing the memtable (discarding commit logs and secondary indexes as needed) | |
MemtableReclaimMemory | Making unused memory available | |
MigrationStage | Processing schema changes | |
MiscStage | Snapshotting and replicating data after a node removal completes | |
MutationStage | Performing local inserts/updates, schema merges, commit log replays, or hints in progress | A high number of pending write requests indicates a problem handling them. Adding a node, tuning hardware and configuration, or updating the data model can improve handling. |
Native-Transport-Requests | Processing CQL requests to the server | |
PendingRangeCalculator | Calculating pending ranges per bootstraps and departed nodes | Reporting by this tool is not useful; see Developer notes. |
ReadRepairStage | Performing read repairs | Usually fast, if there is good connectivity between replicas. If the pending count grows too large, consider lowering the read_repair_chance on high-read tables. |
ReadStage | Performing local reads | Also includes deserializing data from the row cache. Pending values can cause increased read latency. Generally resolved by adding nodes or tuning the system. |
RequestResponseStage | Handling responses from other nodes | |
ValidationExecutor | Validating schema | |
nodetool tpstats droppable messages
The database generates the messages listed below, but discards them after a timeout.
The nodetool tpstats
command reports the number of messages of each type that have been dropped.
You can view the messages themselves using a JMX client.
Message Type | Stage | Notes |
---|---|---|
BINARY | n/a | Deprecated |
_TRACE | n/a (special) | Used for recording traces (nodetool settraceprobability). Has a special executor (1 thread, 1000 queue depth) that throws away messages on insertion instead of during execution. |
MUTATION | MutationStage | If a write message is processed after its timeout (write_request_timeout_in_ms), it either sent a failure to the client or it met its requested consistency level and relies on hinted handoff and read repair to complete the mutation. |
COUNTER_MUTATION | MutationStage | If a write message is processed after its timeout (write_request_timeout_in_ms), it either sent a failure to the client or it met its requested consistency level and relies on hinted handoff and read repair to complete the mutation. |
READ_REPAIR | MutationStage | Times out after write_request_timeout_in_ms. |
READ | ReadStage | Times out after read_request_timeout_in_ms. |
RANGE_SLICE | ReadStage | Times out after range_request_timeout_in_ms. |
PAGED_RANGE | ReadStage | Times out after request_timeout_in_ms. |
REQUEST_RESPONSE | RequestResponseStage | Times out after request_timeout_in_ms. |
Example
Running nodetool tpstats:
nodetool tpstats
Example output is:
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 7 0 0
ContinuousPagingStage 0 0 0 0 0
MiscStage 0 0 0 0 0
CompactionExecutor 0 0 76 0 0
MutationStage 0 0 2 0 0
GossipStage 0 0 0 0 0
RequestResponseStage 0 0 0 0 0
ReadRepairStage 0 0 0 0 0
CounterMutationStage 0 0 0 0 0
MemtablePostFlush 0 0 182 0 0
ValidationExecutor 0 0 0 0 0
MemtableFlushWriter 0 0 52 0 0
ViewMutationStage 0 0 0 0 0
CacheCleanupExecutor 0 0 0 0 0
MemtableReclaimMemory 0 0 52 0 0
PendingRangeCalculator 0 0 1 0 0
AntiCompactionExecutor 0 0 0 0 0
SecondaryIndexManagement 0 0 0 0 0
HintsDispatcher 0 0 0 0 0
Native-Transport-Requests 0 0 0 0 0
MigrationStage 0 0 12 0 0
PerDiskMemtableFlushWriter_0 0 0 51 0 0
Sampler 0 0 0 0 0
InternalResponseStage 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
Message type Dropped Latency waiting in queue (micros)
50% 95% 99% Max
READ 0 N/A N/A N/A N/A
RANGE_SLICE 0 N/A N/A N/A N/A
_TRACE 0 N/A N/A N/A N/A
HINT 0 N/A N/A N/A N/A
MUTATION 0 N/A N/A N/A N/A
COUNTER_MUTATION 0 N/A N/A N/A N/A
BATCH_STORE 0 N/A N/A N/A N/A
BATCH_REMOVE 0 N/A N/A N/A N/A
REQUEST_RESPONSE 0 N/A N/A N/A N/A
PAGED_RANGE 0 N/A N/A N/A N/A
READ_REPAIR 0 N/A N/A N/A N/A
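The same statistics can be produced in a machine-readable form by adding the output format option from the options table above, for example (assuming your DSE version accepts the -F/--format flag for this command):
nodetool tpstats -F json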