Use tracing to test consistency level performance

In a distributed database such as Apache Cassandra, the most recent value of a piece of data is not necessarily on every node at all times. The client application sets the consistency level per request to balance response time against data accuracy.

Consistency levels for read requests

 Consistency level | Description
-------------------+----------------------------------------------------------
 ONE               | Returns data from the nearest replica.
 QUORUM            | Returns the most recent data from a majority of replicas.
 ALL               | Returns the most recent data from all replicas.

Changing the consistency level can affect read performance.

Testing performance impact using tracing

To test consistency level performance, use tracing to record all activity related to a request. The following steps use tracing to show the performance of queries on a keyspace with a replication factor of 3 using different consistency levels (CL).

In the example run below, as the consistency level increases from ONE to QUORUM to ALL, read latency grows from 1714 to 1887 to 2391 microseconds, respectively. Your actual results might vary depending on your data model, cluster configuration, and network conditions.
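As a quick sanity check, the sample timings above can be compared relative to ONE. This is a minimal sketch; the microsecond values come from this example run and are not representative of any particular cluster:

```python
# Sample read latencies (microseconds) from the example run in this page.
latencies = {"ONE": 1714, "QUORUM": 1887, "ALL": 2391}

base = latencies["ONE"]
for level, us in latencies.items():
    # Show each latency and its slowdown relative to consistency level ONE.
    print(f"{level}: {us} us ({us / base:.2f}x ONE)")
```

In this run, QUORUM costs about 10% more than ONE, and ALL about 39% more.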

  1. Use cqlsh to create a keyspace that specifies using three replicas for data distribution in the cluster:

    CREATE KEYSPACE IF NOT EXISTS cycling_alt 
    WITH REPLICATION = {
      'class': 'SimpleStrategy', 
      'replication_factor': 3
    };

    For Astra DB, triple replication is enabled by default and cannot be changed.

  2. Create a table and insert a row:

    USE cycling_alt;
    CREATE TABLE IF NOT EXISTS cycling_alt.cyclist_name (
      id int PRIMARY KEY,
      lastname text,
      firstname text
    );
    INSERT INTO cycling_alt.cyclist_name (id, lastname, firstname)
    VALUES (1, 'HOSKINS', 'Melissa');

  3. Turn on tracing, and then use the CONSISTENCY command to ensure the consistency level is ONE, which is the default:

    TRACING ON;
    CONSISTENCY;
    Results
    Current consistency level is ONE.

    A consistency level of ONE processes the response from only one of the three replicas.

  4. Query the table to read the row:

    SELECT * FROM cycling_alt.cyclist_name WHERE id = 1;
    Results
     id | firstname | lastname
    ----+-----------+----------
      1 |   Melissa |  HOSKINS
    
    (1 rows)

    The tracing results list all the actions taken to complete the SELECT statement.

  5. Change the consistency level to QUORUM to trace what happens during a read with a QUORUM consistency level:

    CONSISTENCY QUORUM;
    SELECT * FROM cycling_alt.cyclist_name WHERE id = 1;

    Consistency level of QUORUM waits for responses from a majority of the replicas. In this case, it waits for responses from two of the three replicas.

    Results
    Consistency level set to QUORUM.

    If QUORUM inadvertently selects a slow node as part of the quorum, it can take longer to process the request.

  6. Change the consistency level to ALL, and then run the SELECT statement again:

    CONSISTENCY ALL;
    SELECT * FROM cycling_alt.cyclist_name WHERE id = 1;

    Consistency level of ALL processes responses from all three replicas. This can take the longest time to process because it waits for responses from all replicas. ALL can be much slower than QUORUM when querying especially large data sets or if one node is slower than the others.

    Results
    Consistency level set to ALL.
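The replica counts in the steps above follow directly from the replication factor: QUORUM is a strict majority of replicas. A minimal sketch of that calculation (the function name is illustrative, not part of any driver API):

```python
def quorum_size(replication_factor: int) -> int:
    """Replicas that must respond for a QUORUM read: floor(RF / 2) + 1."""
    return replication_factor // 2 + 1

# With a replication factor of 3, QUORUM waits for 2 of the 3 replicas.
print(quorum_size(3))  # -> 2
```

This is why, with three replicas, ONE waits for one response, QUORUM for two, and ALL for three.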

Tracing queries on large datasets

You can use probabilistic tracing on databases with at least ten rows. This capability is intended for much larger datasets, where it's impractical to trace every request.

After configuring probabilistic tracing using the nodetool settraceprobability command, you query the system_traces keyspace. For example:

SELECT * FROM system_traces.events;
Results
 session_id                           | event_id                             | activity  | source     | source_elapsed | thread
--------------------------------------+--------------------------------------+-----------+------------+----------------+---------------
 a20badc0-37d9-11ef-81ed-f92c3c7170c3 | a20ff380-37d9-11ef-81ed-f92c3c7170c3 | Parsing ; | 172.26.0.2 |          20987 |  CoreThread-6
 0443ce20-373d-11ef-b699-4d3badfe036f | 044c59a0-373d-11ef-b699-4d3badfe036f | Parsing ; | 172.26.0.3 |          32010 | CoreThread-10
 3e951740-3734-11ef-9db4-4d3badfe036f | 3e99ab20-3734-11ef-9db4-4d3badfe036f | Parsing ; | 172.26.0.3 |          27999 |  CoreThread-0

(3 rows)
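The nodetool configuration mentioned above can be sketched as follows. The 0.001 sample rate is an illustrative value, not a recommendation; run these against a node in your cluster:

```shell
# Trace roughly 0.1% of requests (accepts a probability from 0 to 1.0).
nodetool settraceprobability 0.001

# Confirm the current trace probability.
nodetool gettraceprobability
```

Keep the probability low on busy clusters: every traced request writes rows to the system_traces keyspace, so tracing itself adds load.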

© 2025 DataStax, an IBM Company | Privacy policy | Terms of use | Manage Privacy Choices