Pending task metrics for reads
Pending read, read repair, and compaction tasks indicate I/O contention and can manifest as degraded read performance.
- Read Requests Pending
- The number of read requests that have arrived in the cluster but are waiting to be handled. During low or moderate read load, you should see 0 pending read operations (or at most a very low number). A continuously high number of pending reads signals a need to add capacity to the cluster or to investigate disk I/O contention (a JMX polling sketch for these counters appears after this list). Pending reads can also indicate an application design that does not access data in the most efficient way possible.
- Read Repair Tasks Pending
- The number of read repair operations that are queued and waiting for system resources in order to run. The optimal number of pending read repairs is 0 (or at most a very small number). A value greater than 0 indicates that read repair operations are in I/O contention with other operations. If this graph shows high values for pending tasks, this may suggest the need to run a node repair to make nodes consistent. Or, for tables where your requirements can tolerate a certain degree of stale data, you can lower the value of the table parameter read_repair_chance (a schema-change sketch follows this list).
- Compactions Pending
- An upper bound on the number of compactions that are queued and waiting for system resources in order to run. Because this is a worst-case estimate, the metric can be misleading and often reads unrealistically high. The optimal number of pending compactions is 0 (or at most a very small number). A value greater than 0 indicates that read operations are in I/O contention with compaction operations, which usually manifests as declining read performance. This is typically caused by applications that perform frequent small writes in combination with a steady stream of reads. If a node or cluster frequently displays pending compactions, you might need to increase I/O capacity by adding nodes to the cluster. You can also try to reduce I/O contention by reducing the number of insert/update requests (for example, by having your application batch writes, as sketched after this list), or by reducing the number of SSTables created, increasing the memtable size so that tables flush less frequently.
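The three counters above can also be polled directly over JMX. Below is a minimal sketch in Java, assuming a node with JMX exposed on the default port 7199 and the standard Cassandra metrics MBean names (ReadStage and ReadRepairStage thread-pool pending tasks, plus compaction pending tasks); verify the MBean names against your Cassandra version.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PendingTaskPoller {
    public static void main(String[] args) throws Exception {
        // Cassandra's default JMX port is 7199; localhost is an assumption.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Metric name -> MBean object name (Cassandra 2.x+ metrics naming).
            String[][] metrics = {
                {"Read Requests Pending",
                 "org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=PendingTasks"},
                {"Read Repair Tasks Pending",
                 "org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=PendingTasks"},
                {"Compactions Pending",
                 "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks"}
            };
            for (String[] m : metrics) {
                // Each of these MBeans is a gauge exposing a "Value" attribute.
                Object value = mbs.getAttribute(new ObjectName(m[1]), "Value");
                System.out.printf("%s: %s%n", m[0], value);
            }
        }
    }
}
```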
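For the read_repair_chance adjustment mentioned above, the change is a single ALTER TABLE statement. The sketch below issues it through the DataStax Java driver (4.x); my_keyspace.my_table is a placeholder, and note that the read_repair_chance option exists only in Cassandra versions before 4.0, where it was removed.

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class LowerReadRepairChance {
    public static void main(String[] args) {
        // With no explicit contact points, the driver connects to 127.0.0.1:9042.
        try (CqlSession session = CqlSession.builder().build()) {
            // 0.0 disables chance-based read repair for this table entirely;
            // use a low non-zero value to merely reduce the repair rate.
            session.execute(
                "ALTER TABLE my_keyspace.my_table WITH read_repair_chance = 0.0");
        }
    }
}
```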
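For the write-batching suggestion under Compactions Pending, one approach is to group inserts that target the same partition into an unlogged batch, so the cluster handles fewer individual write requests. Below is a minimal sketch with the DataStax Java driver (4.x); the events table and its schema are assumptions. Batches only reduce overhead when the grouped rows share a partition key; cross-partition batches add coordinator work instead.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.BatchStatement;
import com.datastax.oss.driver.api.core.cql.BatchStatementBuilder;
import com.datastax.oss.driver.api.core.cql.DefaultBatchType;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;

public class BatchedWrites {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Hypothetical table:
            //   events (sensor_id text, ts timestamp, reading double,
            //           PRIMARY KEY (sensor_id, ts))
            PreparedStatement insert = session.prepare(
                "INSERT INTO my_keyspace.events (sensor_id, ts, reading) VALUES (?, ?, ?)");

            // UNLOGGED skips the batch log; appropriate when all rows
            // share the same partition key ("sensor-1" here).
            BatchStatementBuilder batch =
                BatchStatement.builder(DefaultBatchType.UNLOGGED);
            for (int i = 0; i < 10; i++) {
                batch.addStatement(insert.bind(
                    "sensor-1", java.time.Instant.now().plusMillis(i), 42.0 + i));
            }
            session.execute(batch.build());
        }
    }
}
```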