OpsCenter Metrics Tooltips Reference

Comprehensive reference of performance metrics available in OpsCenter.

Metrics are available to add to any graph. View the description of any metric by hovering over it in the Add Metric dialog or over a graph legend.

For convenience, the metric descriptions available in tooltips are listed below:

Write Requests 
The number of write requests per second on the coordinator nodes, analogous to client writes. Monitoring the number of requests over a given time period reveals system write workload and usage patterns.
Write Request Latency (percentiles) 
The min, median, max, 90th, and 99th percentiles of client writes. The time period starts when a node receives a client write request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from writing to the replicas.
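The percentile latency metrics in this reference summarize raw request timings. As a rough sketch of how such a summary can be derived from latency samples (a nearest-rank approach with illustrative data; OpsCenter's actual implementation may differ):

```python
# Sketch: condense raw latency samples (ms) into the percentile
# statistics shown in the tooltips. Sample data and the helper name
# are illustrative, not part of OpsCenter.
def latency_summary(samples_ms):
    ordered = sorted(samples_ms)
    n = len(ordered)

    def pct(p):
        # Nearest-rank percentile, clamped to valid indices.
        idx = max(0, min(n - 1, int(round(p / 100.0 * n)) - 1))
        return ordered[idx]

    return {
        "min": ordered[0],
        "median": pct(50),
        "90th": pct(90),
        "99th": pct(99),
        "max": ordered[-1],
    }

summary = latency_summary([1, 2, 2, 3, 5, 8, 13, 21, 34, 100])
```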
Write Failures 
The number of write requests on the coordinator nodes that fail due to errors returned from replicas.
Write Timeouts 
The number of server write timeouts per second on the coordinator nodes.
Write Unavailable Errors 
The number of write requests per second on the coordinator nodes that fail because not enough replicas are available.
Read Requests 
The number of read requests per second on the coordinator nodes, analogous to client reads. Monitoring the number of requests over a given time period reveals system read workload and usage patterns.
Read Request Latency (percentiles) 
The min, median, max, 90th, and 99th percentiles of client reads. The time period starts when a node receives a client read request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from requesting the data’s replicas.
Read Failures 
The number of read requests on the coordinator nodes that fail due to errors returned from replicas.
Read Timeouts 
The number of server read timeouts per second on the coordinator nodes.
Read Unavailable Errors 
The number of read requests per second on the coordinator nodes that fail because not enough replicas are available.
Non Heap Committed 
Allocated memory guaranteed for the Java nonheap.
Non Heap Max 
Maximum amount that the Java nonheap can grow.
Non Heap Used 
Average amount of Java nonheap memory used.
Heap Committed 
Allocated memory guaranteed for the Java heap.
Heap Max 
Maximum amount that the Java heap can grow.
Heap Used 
Average amount of Java heap memory used.
JVM CMS Collection Count 
Number of concurrent mark sweep garbage collections performed per second.
JVM ParNew Collection Count 
Number of ParNew garbage collections performed per second. ParNew collections pause all work in the JVM but should finish quickly.
JVM CMS Collection Time 
Average number of milliseconds spent performing CMS garbage collections per second.
JVM ParNew Collection Time 
Average number of milliseconds spent performing ParNew garbage collections per second. ParNew collections pause all work in the JVM but should finish quickly.
JVM G1 Old Collection Count 
Number of G1 old generation garbage collections performed per second.
JVM G1 Old Collection Time 
Average number of milliseconds spent performing G1 old generation garbage collections per second.
JVM G1 Young Collection Count 
Number of G1 young generation garbage collections performed per second.
JVM G1 Young Collection Time 
Average number of milliseconds spent performing G1 young generation garbage collections per second.
Data Size 
The live disk space used by all tables on a node.
Total Bytes Compacted 
Number of bytes compacted per second.
Total Compactions Completed 
Number of compaction tasks completed per second.
Total Compactions 
Number of SSTable scans per second that could result in a compaction.
Compactions Pending 
Estimated number of compactions required to achieve the desired state. This includes the pending queue to the compaction executor and additional tasks that may be created from their completion.
Task Queues 
Aggregate of the thread pools' pending queues, useful for identifying where work is backing up internally. This does not include pending compactions, which include an estimate of tasks beyond the task queue, or the hinted handoff queue, which can be in a constant state of activity.
TP: Dropped Tasks 
Aggregate of the different types of messages that might be thrown away.
TP: Dropped Counter Mutations 
A counter mutation was received after the timeout (write_request_timeout_in_ms) and was thrown away. The client might have timed out before the required consistency level was met, but the write might also have succeeded. Hinted handoffs and read repairs should resolve inconsistencies, but running a repair ensures it.
TP: Dropped Mutations 
A mutation was received after the timeout (write_request_timeout_in_ms) and was thrown away. The client might have timed out before the required consistency level was met, but the write might also have succeeded. Hinted handoffs and read repairs should resolve inconsistencies, but running a repair ensures it.
TP: Dropped Reads 
A local read request was received after the timeout (read_request_timeout_in_ms) and was thrown away, because the request would already have either been completed and sent to the client or answered with a timeout error.
TP: Dropped Ranged Slice Reads 
A local ranged read request was received after the timeout (range_request_timeout_in_ms) and was thrown away, because the request would already have either been completed and sent to the client or answered with a timeout error.
TP: Dropped Paged Range Reads 
A local paged read request was received after the timeout (request_timeout_in_ms) and was thrown away, because the request would already have either been completed and sent to the client or answered with a timeout error.
TP: Dropped Request Responses 
A response to a request was received after the timeout (request_timeout_in_ms) and was thrown away, because the request would already have either been completed and sent to the client or answered with a timeout error.
TP: Dropped Read Repairs 
A read repair mutation was received after the timeout (write_request_timeout_in_ms) and was thrown away. Because the read repair timed out, the node remains in an inconsistent state.
TP: Flushes Pending 
Number of memtables queued for the flush process. A flush sorts and writes the memtables to disk.
TP: Gossip Tasks Pending 
Number of gossip messages and acknowledgments queued and waiting to be sent or received.
TP: Internal Responses Pending 
Number of pending tasks from internal tasks, such as nodes joining and leaving the cluster.
TP: Manual Repair Tasks Pending 
Repair tasks pending, such as handling the merkle tree transfer after the validation compaction.
TP: Cache Cleaning Pending 
Tasks pending to clean row caches during a cleanup compaction.
TP: Post Flushes Pending 
Tasks related to the last step in flushing memtables to disk as SSTables. Includes removing unnecessary commitlog files and committing Solr-based secondary indexes.
TP: Migrations Pending 
Number of pending tasks from system methods that modified the schema.
TP: Misc. Tasks Pending 
Number of pending tasks from infrequently run operations, such as taking a snapshot or processing the notification of a completed replication.
TP: Read Requests Pending 
Number of pending read requests. Read requests read data off of disk and deserialize cached data.
TP: Read Repair Tasks Pending 
Number of read repair operations in the queue waiting to run.
TP: Request Responses Pending 
Number of pending callbacks to execute after a task on a remote node completes.
TP: Write Requests Pending 
Number of write requests received by the cluster and waiting to be handled.
TP: Validation Executor Pending 
Pending tasks to read data from SSTables and generate a merkle tree for a repair.
TP: Compaction Executor Pending 
Pending compactions that are known. This may deviate from "pending compactions" which includes an estimate of tasks that these pending tasks may create after completion.
TP: Pending Range Calculator Pending 
Pending tasks to calculate the ranges according to bootstrapping and leaving nodes.
TP: Native Transport Requests Pending 
Number of pending requests on the native transport (CQL) request pool.
TP: Flushes Active 
Up to memtable_flush_writers concurrent tasks to flush and write the memtables to disk.
TP: Gossip Tasks Active 
Number of gossip messages and acknowledgments actively being sent or received.
TP: Internal Responses Active 
Number of active tasks from internal tasks, such as nodes joining and leaving the cluster.
TP: Manual Repair Tasks Active 
Repair tasks active, such as handling the merkle tree transfer after the validation compaction.
TP: Cache Cleaning Active 
Tasks to clean row caches during a cleanup compaction.
TP: Post Flushes Active 
Tasks related to the last step in flushing memtables to disk as SSTables. Includes removing unnecessary commitlog files and committing Solr-based secondary indexes.
TP: Migrations Active 
Number of active tasks from system methods that modified the schema.
TP: Misc. Tasks Active 
Number of active tasks from infrequently run operations, such as taking a snapshot or processing the notification of a completed replication.
TP: Read Requests Active 
Number of active read requests. Read requests read data off of disk and deserialize cached data.
TP: Read Repair Tasks Active 
Number of read repair operations actively being run.
TP: Request Responses Active 
Number of callbacks being executed after tasks on remote nodes complete.
TP: Write Requests Active 
Number of write requests being handled.
TP: Validation Executor Active 
Active tasks to read data from SSTables and generate a merkle tree for a repair.
TP: Compaction Executor Active 
Active compactions that are known.
TP: Pending Range Calculator Active 
Active tasks to calculate the ranges according to bootstrapping and leaving nodes.
TP: Native Transport Requests Active 
Number of native transport (CQL) requests actively being handled.
TP: Flushes Completed 
Number of memtables flushed to disk since the node started.
TP: Gossip Tasks Completed 
Number of gossip messages and acknowledgments recently sent or received.
TP: Internal Responses Completed 
Number of recently completed tasks from internal tasks, such as nodes joining and leaving the cluster.
TP: Manual Repair Tasks Completed 
Repair tasks recently completed, such as handling the merkle tree transfer after the validation compaction.
TP: Cache Cleaning Completed 
Tasks to clean row caches during a cleanup compaction.
TP: Post Flushes Completed 
Tasks related to the last step in flushing memtables to disk as SSTables. Includes removing unnecessary commitlog files and committing Solr-based secondary indexes.
TP: Migrations Completed 
Number of completed tasks from system methods that modified the schema.
TP: Misc. Tasks Completed 
Number of completed tasks from infrequently run operations, such as taking a snapshot or processing the notification of a completed replication.
TP: Read Requests Completed 
Number of completed read requests. Read requests read data off of disk and deserialize cached data.
TP: Read Repair Tasks Completed 
Number of read repair operations recently completed.
TP: Request Responses Completed 
Number of completed callbacks executed after a task on a remote node is completed.
TP: Write Requests Completed 
Number of write requests received by the cluster that have been handled.
TP: Validation Executor Completed 
Completed tasks to read data from sstables and generate a merkle tree for a repair.
TP: Compaction Executor Completed 
Completed compactions.
TP: Pending Range Calculator Completed 
Completed tasks to calculate the ranges according to bootstrapping and leaving nodes.
TP: Native Transport Requests Completed 
Number of native transport (CQL) requests that have completed.
TP: Native Transport Requests Blocked 
Number of native transport (CQL) requests blocked because the request pool was full.
TP: Total Native Transport Requests Blocked 
Cumulative number of native transport (CQL) requests that have been blocked.
KeyCache Hits 
The number of key cache hits per second. A hit avoids a possible disk seek when finding a partition in an SSTable.
KeyCache Requests 
The number of key cache requests per second.
KeyCache Hit Rate 
The percentage of key cache lookups that resulted in a hit.
RowCache Hits 
The number of row cache hits per second.
RowCache Requests 
The number of row cache requests per second.
RowCache Hit Rate 
The percentage of row cache lookups that resulted in a hit.
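The Hit Rate metrics above are simple ratios of hits to lookups. A minimal sketch of the calculation, with illustrative numbers (not OpsCenter code):

```python
# Sketch: cache hit rate as described in the tooltips above -- the
# percentage of cache lookups that found their entry in the cache.
def hit_rate(hits, requests):
    # Guard against division by zero when no lookups occurred.
    return 100.0 * hits / requests if requests else 0.0

rate = hit_rate(hits=850, requests=1000)  # 85.0 (%)
```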
Native Clients 
The number of clients connected using the native protocol.
Thrift Clients 
The number of clients connected via thrift.
Read Repairs Attempted 
Number of read requests where the number of nodes queried possibly exceeds the consistency level requested in order to check for a possible digest mismatch.
Asynchronous Read Repairs 
Corresponds to a digest mismatch that occurred after a completed read, outside of the client read loop.
Synchronous Read Repairs 
Corresponds to the number of times there was a digest mismatch within the requested consistency level and a full data read was started.
TBL: Local Writes 
Local write requests per second. Local writes update the table's memtable and appends to a commitlog.
TBL: Local Write Latency (percentiles) 
The min, median, max, 90th, and 99th percentiles of the response time to write data to a table's memtable. The elapsed time from when the replica receives the request from a coordinator until it returns a response.
TBL: Local Reads 
Local read requests per second. Local reads retrieve data from a table's memtable and any necessary SSTables on disk.
TBL: Local Read Latency (percentiles) 
The min, median, max, 90th, and 99th percentiles of the response time to read data from the memtable and SSTables for a specific table. The elapsed time from when the replica receives the request from a coordinator until it returns a response.
TBL: Live Disk Used 
Disk space used by live SSTables. There might be obsolete SSTables not included.
TBL: Total Disk Used 
Disk space used by a table's SSTables, including obsolete ones waiting to be garbage collected.
TBL: SSTable Count 
Total number of SSTables for a table.
TBL: SSTables per Read (percentiles) 
The min, median, max, 90th, and 99th percentiles of how many SSTables are accessed during a read. Includes SSTables that undergo bloom filter checks, even if no data is read from the SSTable.
TBL: Partition Size (percentiles) 
The min, median, max, 90th, and 99th percentile of the size (in bytes) of partitions of this table.
TBL: Cell Count (percentiles) 
The min, median, max, 90th, and 99th percentile of how many cells exist in partitions for this table.
TBL: Bloom Filter Space Used 
The total size of all the SSTables' bloom filters for this table.
TBL: Bloom Filter False Positives 
Number of bloom filter false positives per second.
TBL: Bloom Filter False Positive Ratio 
Percentage of bloom filter lookups that resulted in a false positive.
Search: Requests 
Requests per second made to a specific Solr core/index.
Search: Request Latency 
Average time a search query takes in a DSE cluster using DSE Search.
Search: Errors 
Errors per second that occur for a specific Solr core/index.
Search: Timeouts 
Timeouts per second on a specific Solr core/index.
Search: Core Size 
Size of the Solr core on disk.
OS: Memory (stacked) 
Stacked graph of used, cached, and free memory.
OS: Memory (stacked) 
Stacked graph of used and free memory.
OS: Memory Free 
Total system memory currently free.
OS: Memory Used 
Total system memory currently used.
OS: Memory Shared 
Total amount of memory in shared memory space.
OS: Memory Buffered 
Total system memory currently buffered.
OS: Memory Cached 
Total system memory currently cached.
OS: Memory (stacked) 
Stacked graph of committed, cached, paged, non-paged, and free memory.
OS: Memory Available 
Available physical memory.
OS: Memory Committed 
Memory in use by the operating system.
OS: Pool Paged Resident Memory 
Allocated pool-paged-resident memory.
OS: Pool Nonpaged Memory 
Allocated pool-nonpaged memory.
OS: System Cache Resident Memory 
Memory used by the file cache.
OS: CPU (stacked) 
Stacked graph of iowait, steal, nice, system, user, and idle CPU usage.
OS: CPU (stacked) 
Stacked graph of idle, user, and system CPU usage.
OS: CPU (stacked) 
Stacked graph of user, privileged, and idle CPU usage.
OS: CPU User 
Time the CPU devotes to user processes.
OS: CPU System 
Time the CPU devotes to system processes.
OS: CPU Idle 
Time the CPU is idle.
OS: CPU Iowait 
Time the CPU devotes to waiting for I/O to complete.
OS: CPU Steal 
Time the CPU devotes to tasks stolen by virtual operating systems.
OS: CPU Nice 
Time the CPU devotes to processing nice tasks.
OS: CPU Privileged 
Time the CPU devotes to processing privileged instructions.
OS: Load 
Operating system load average.
OS: Disk Usage (%) 
Percentage of disk space used by Cassandra at a given time.
OS: Disk Free 
Free space on a specific disk partition.
OS: Disk Used 
Disk space used by Cassandra at a given time.
OS: Disk Read Throughput 
Average disk throughput for read operations.
OS: Disk Write Throughput 
Average disk throughput for write operations.
OS: Disk Throughput 
Average disk throughput for read and write operations.
OS: Disk Read Rate 
Rate of reads per second to the disk.
OS: Disk Writes Rate 
Rate of writes per second to the disk.
OS: Disk Latency 
Average completion time of each request to the disk.
OS: Disk Request Size 
Average size of read requests issued to the disk.
OS: Disk Queue Size 
Average number of requests queued due to disk latency issues.
OS: Disk Utilization 
CPU time consumed by disk I/O.
OS: Net Received 
Speed of data received from the network.
OS: Net Sent 
Speed of data sent across the network.
Speculative Retries 
Number of speculative retries for all column families.
TBL: Speculative Retries 
Number of speculative retries for this table.
Stream Data Out - Total 
Data streamed out from this node to all other nodes, for all tables.
Stream Data In - Total 
Data streamed in to this node from all other nodes, for all tables.
TBL: Bloom Filter Off Heap 
Total off heap memory used by bloom filters from all live SSTables in a table.
TBL: Index Summary Off Heap 
Total off heap memory used by the index summary of all live SSTables in a table.
TBL: Compression Metadata Off Heap 
Total off heap memory used by the compression metadata of all live SSTables in a table.
TP: Counter Mutations Pending 
Pending tasks to execute local counter mutations.
TP: Counter Mutations Active 
Up to concurrent_counter_writes running tasks that execute local counter mutations.
TP: Counter Mutations Completed 
Number of local counter mutations that have been executed.
TP: Memtable Reclaims Pending 
Pending tasks that wait for current reads to complete and then free the memory formerly used by the obsoleted memtables.
TP: Memtable Reclaims Active 
Active tasks that wait for current reads to complete and then free the memory formerly used by the obsoleted memtables.
TP: Memtable Reclaims Completed 
Completed tasks that waited for current reads to complete and then freed the memory formerly used by the obsoleted memtables.
TBL: Memtable Off Heap 
Off heap memory used by a table's current memtable.
TBL: Total Memtable Heap Size 
An estimate of the space used in JVM heap memory for all memtables. This includes ones that are currently being flushed and related secondary indexes.
TBL: Total Memtable Live Data Size 
An estimate of the space used for 'live data' (off-heap, excluding overhead) for all memtables. This includes ones that are currently being flushed and related secondary indexes.
TBL: Total Memtable Off-Heap Size 
An estimate of the space used in off-heap memory for all memtables. This includes ones that are currently being flushed and related secondary indexes.
In-Memory Percent Used 
The percentage of memory allocated for in-memory tables currently in use.
TBL: Partition Count 
Approximate number of partitions. This may be inaccurate because duplicates in memtables and SSTables are both counted, and the HyperLogLog data structure introduces a very small error percentage.
Write Request Latency 
Deprecated. The median response times (in milliseconds) of a client write. The time period starts when a node receives a client write request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from writing to the replicas.
Read Request Latency 
Deprecated. The median response times (in milliseconds) of a client read. The time period starts when a node receives a client read request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from requesting the data's replicas.
View Write Latency (percentiles) 
The min, median, max, 90th, and 99th percentiles of the time from when a base mutation is applied to the memtable until consistency level ONE is achieved on the asynchronous write to the table's materialized views. An estimate of the lag between base table mutations and view consistency.
View Write Successes 
Number of view mutations sent to replicas that have been acknowledged.
View Write Pending 
Number of view mutations sent to replicas for which the replica's acknowledgement hasn't been received.
TP: View Mutations Pending 
Number of mutations to apply locally after modifications to a base table.
TP: View Mutations Active 
Number of mutations being applied locally after modifications to a base table.
TP: View Mutations Completed 
Number of mutations applied locally after modifications to a base table.
TP: Hint Dispatcher Pending 
Pending tasks to send the stored hinted handoffs to a host.
TP: Hint Dispatcher Active 
Up to max_hints_delivery_threads tasks, each dispatching all hinted handoffs to a host.
TP: Hint Dispatcher Completed 
Number of tasks to transfer hints to a host that have completed.
TP: Index Management Pending 
Pending initialization work for newly created index instances. This may involve costly operations such as (re)building the index.
TP: Index Management Active 
Active initialization work for newly created index instances. This may involve costly operations such as (re)building the index.
TP: Index Management Completed 
Completed initialization work for newly created index instances. This may involve costly operations such as (re)building the index.
TBL: Tombstones per Read (percentiles) 
The min, median, max, 90th, and 99th percentile of how many tombstones are read during a read.
TBL: Local Write Latency 
Deprecated. Median response time to write data to a table's memtable. The elapsed time from when the replica receives the request from a coordinator until it returns a response.
TBL: Local Read Latency 
Deprecated. Median response time to read data from the memtable and SSTables for a specific table. The elapsed time from when the replica receives the request from a coordinator until it returns a response.
TBL: Coordinator Read Latency (percentiles) 
The min, median, max, 90th, and 99th percentiles of client reads on this table. The time period starts when a node receives a client read request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from requesting the data's replicas.
TBL: Coordinator Read Requests 
The number of read requests per second for a particular table on the coordinator nodes. Monitoring the number of requests over a given time period reveals table read workload and usage patterns.
Cells Scanned (percentiles) 
The min, median, max, 90th, and 99th percentile of how many cells were scanned during a read.
TBL: Cells Scanned (percentiles) 
The min, median, max, 90th, and 99th percentile of how many cells were scanned during a read.
TIER: Total Disk Used 
Disk space used by a table's SSTables in this tier.
TIER: sstables 
Number of SSTables in a tier for a table.
TIER: Max Data Age 
Timestamp in local server time that represents an upper bound to the newest piece of data stored in the SSTable. When a new SSTable is flushed, it is set to the time of creation. When an SSTable is created from compaction, it is set to the max of all merged SSTables.
Graph: Adjacency Cache Hits 
Number of hits against the adjacency cache for this graph.
Graph: Adjacency Cache Misses 
Number of misses against the adjacency cache for this graph.
Graph: Index Cache Hits 
Number of hits against the index cache for this graph.
Graph: Index Cache Misses 
Number of misses against the index cache for this graph.
Graph: Request Latencies 
The min, median, max, 90th, and 99th percentile of request latencies during the period.
Graph TP: Graph Query Threads Pending 
Number of pending tasks in the GraphQueryThreads thread pool.
Graph TP: Graph Query Threads Active 
Number of active tasks in the GraphQueryThreads thread pool.
Graph TP: Graph Query Threads Completed 
Number of tasks completed by the GraphQueryThreads thread pool.
Graph TP: Graph Scheduled Threads Pending 
Number of pending tasks in the GraphScheduledThreads thread pool.
Graph TP: Graph Scheduled Threads Active 
Number of active tasks in the GraphScheduledThreads thread pool.
Graph TP: Graph Scheduled Threads Completed 
Number of tasks completed by the GraphScheduledThreads thread pool.
Graph TP: Graph System Threads Pending 
Number of pending tasks in the GraphSystemThreads thread pool.
Graph TP: Graph System Threads Active 
Number of active tasks in the GraphSystemThreads thread pool.
Graph TP: Graph System Threads Completed 
Number of tasks completed by the GraphSystemThreads thread pool.
Graph TP: Gremlin Worker Threads Pending 
Number of pending tasks in the GremlinWorkerThreads thread pool.
Graph TP: Gremlin Worker Threads Active 
Number of active tasks in the GremlinWorkerThreads thread pool.
Graph TP: Gremlin Worker Threads Completed 
Number of tasks completed by the GremlinWorkerThreads thread pool.
Node Messaging Latency 
The min, median, max, 90th, and 99th percentiles of the latency of messages between nodes. The time period starts when a node sends a message and ends when the current node receives it.
Datacenter Messaging Latency 
The min, median, max, 90th, and 99th percentiles of the message latency between nodes in the same or different destination datacenter. This metric measures how long it takes a message from a node in the source datacenter to reach a node in the destination datacenter. Selecting a destination node within the source datacenter yields lower latency values.
SSTables Repaired 
The percentage of bytes that have been repaired. Calculated as the ratio of the uncompressed size of repaired SSTables to the total size of all SSTables.
TBL: SSTables Repaired 
The percentage of bytes that have been repaired for this table. Calculated as the ratio of the uncompressed size of repaired SSTables to the total size of all SSTables.
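The repaired-bytes percentage described by the two metrics above can be sketched as follows (SSTable sizes and the helper name are illustrative; this is not OpsCenter's implementation):

```python
# Sketch: percentage of bytes repaired -- uncompressed size of
# repaired SSTables over the total uncompressed size of all SSTables.
def repaired_percent(sstables):
    # sstables: list of (uncompressed_size_bytes, is_repaired) pairs
    total = sum(size for size, _ in sstables)
    repaired = sum(size for size, ok in sstables if ok)
    return 100.0 * repaired / total if total else 0.0

pct = repaired_percent([(400, True), (400, True), (200, False)])  # 80.0
```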