How is the consistency level configured?
Consistency levels in DataStax Enterprise (DSE) can be configured to manage availability versus data accuracy. Configure consistency for a session or per individual read or write operation.
Within cqlsh, use the CONSISTENCY command to set the consistency level for all queries in the current cqlsh session. For programming client applications, set the consistency level using an appropriate driver. For example, using the Java driver, build a statement with QueryBuilder.insertInto and call setConsistencyLevel to set a per-insert consistency level.
The consistency level defaults to ONE
for all write and read operations.
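
For example, a minimal cqlsh session (the CONSISTENCY command is session-scoped):

```
cqlsh> CONSISTENCY QUORUM;   -- set the level for all queries in this session
cqlsh> CONSISTENCY;          -- with no argument, show the current level
```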
Write consistency levels
This table describes the write consistency levels.
Level | Description | Usage |
---|---|---|
ALL | A write must be written to the commit log and memtable on all replica nodes in the cluster for that partition. | Provides the highest consistency and the lowest availability of any level. |
EACH_QUORUM | A write must be written to the commit log and memtable on a quorum of replica nodes in each datacenter. | Use in multiple-datacenter clusters to strictly maintain consistency at the same level in each datacenter. For example, choose this level if you want a write to fail when a datacenter is down and the QUORUM cannot be reached in that datacenter. Strong consistency. |
QUORUM | A write must be written to the commit log and memtable on a quorum of replica nodes across all datacenters. | Use in single or multiple-datacenter clusters to maintain strong consistency across the cluster. Use if you can tolerate some level of failure. |
LOCAL_QUORUM | A write must be written to the commit log and memtable on a quorum of replica nodes in the same datacenter as the coordinator node. Avoids the latency of inter-datacenter communication. | Use to maintain consistency within a single datacenter in multiple-datacenter clusters with a rack-aware replica placement strategy, such as NetworkTopologyStrategy, and a properly configured snitch. Strong consistency. |
ONE | A write must be written to the commit log and memtable of at least one replica node. | Satisfies the needs of most users because consistency requirements are not stringent. |
TWO | A write must be written to the commit log and memtable of at least two replica nodes. | Similar to ONE. |
THREE | A write must be written to the commit log and memtable of at least three replica nodes. | Similar to TWO. |
LOCAL_ONE | A write must be sent to, and successfully acknowledged by, at least one replica node in the local datacenter. | Achieves a consistency level of ONE without cross-datacenter traffic, which is desirable for multiple-datacenter clusters. For security and quality reasons, use this consistency level in an offline datacenter: if an offline node goes down, LOCAL_ONE prevents automatic connection to online nodes in other datacenters. |
ANY | A write must be written to at least one node. If all replica nodes for the given partition key are down, the write can still succeed after a hinted handoff has been written. If all replica nodes are down at write time, an ANY write is not readable until the replica nodes for that partition have recovered. | Provides low latency and a guarantee that a write never fails. Delivers the lowest consistency and highest availability. |
Read consistency levels
This table describes read consistency levels.
Level | Description | Usage |
---|---|---|
ALL | Returns the record after all replicas have responded. The read operation fails if a replica does not respond. | Provides the highest consistency of all levels and the lowest availability of all levels. |
EACH_QUORUM | Returns the record after a quorum of replica nodes in each datacenter has responded. | Use in multiple datacenters to provide strict read consistency for data returned from each datacenter. |
QUORUM | Returns the record after a quorum of replicas from all datacenters has responded. | Use in single or multiple-datacenter clusters to maintain strong consistency across the cluster. Ensures strong consistency if you can tolerate some level of failure. |
LOCAL_QUORUM | Returns the record after a quorum of replicas in the same datacenter as the coordinator node has responded. Avoids the latency of inter-datacenter communication. | Use to maintain consistency within a single datacenter in multiple-datacenter clusters with a rack-aware replica placement strategy, such as NetworkTopologyStrategy, and a properly configured snitch. |
LOCAL_ONE | Returns a response from the closest replica in the local datacenter. | Achieves a consistency level of ONE without cross-datacenter traffic, which is desirable for multiple-datacenter clusters. For security and quality reasons, use this consistency level in an offline datacenter: if an offline node goes down, LOCAL_ONE prevents automatic connection to online nodes in other datacenters. |
SERIAL | Allows reading the current (and possibly uncommitted) state of data without proposing a new addition or update. If a SERIAL read finds an uncommitted transaction in progress, it commits the transaction as part of the read. Similar to QUORUM. | Use to read the latest value of a column after a user has invoked a lightweight transaction to write to the column. The database then checks the in-flight lightweight transaction for updates and, if found, returns the latest data. |
LOCAL_SERIAL | Same as SERIAL, but confined to the local datacenter. Similar to LOCAL_QUORUM. | Use to achieve linearizable consistency for lightweight transactions. |
ONE | Returns a response from the closest replica, as determined by the snitch. By default, a read repair runs in the background to make the other replicas consistent. | Provides the highest availability of all the levels if you can tolerate a comparatively high probability of stale data being read. The replicas contacted for reads may not always have the most recent write. |
TWO | Returns the most recent data from two of the closest replicas. | Similar to ONE. |
THREE | Returns the most recent data from three of the closest replicas. | Similar to TWO. |
How QUORUM is calculated
The QUORUM level writes to the number of nodes that make up a quorum. A quorum is calculated, and then rounded down to a whole number, as follows:

quorum = (sum_of_replication_factors / 2) + 1

The sum of all the replication_factor settings for each datacenter is the sum_of_replication_factors:

sum_of_replication_factors = datacenter1_RF + datacenter2_RF + . . . + datacentern_RF

For example:

- Using a replication factor of 3, a quorum is 2 nodes ((3 / 2) + 1 = 2). The cluster can tolerate one replica down.
- Using a replication factor of 6, a quorum is 4 nodes ((6 / 2) + 1 = 4). The cluster can tolerate 2 replicas down.
- In a two-datacenter cluster where each datacenter has a replication factor of 3, a quorum is 4 nodes ((6 / 2) + 1 = 4). The cluster can tolerate 2 replica nodes down.
- In a five-datacenter cluster where two datacenters have a replication factor of 3 and three datacenters have a replication factor of 2, a quorum is 7 nodes ((12 / 2) + 1 = 7).
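
The arithmetic above can be checked with a short sketch (the `quorum` function name is illustrative, not part of any DSE API):

```python
def quorum(*replication_factors):
    """Quorum = (sum of replication factors / 2) + 1, rounded down."""
    return sum(replication_factors) // 2 + 1

print(quorum(3))             # single DC, RF 3  -> 2 (tolerates 1 replica down)
print(quorum(6))             # single DC, RF 6  -> 4 (tolerates 2 replicas down)
print(quorum(3, 3))          # two DCs, RF 3 each -> 4
print(quorum(3, 3, 2, 2, 2)) # five DCs, RF 3+3+2+2+2 -> 7
```

Integer floor division gives the same result as computing the fraction and rounding down, since the +1 term is a whole number.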
The more datacenters a cluster has, the more replica nodes must respond for an operation to succeed.
Similar to QUORUM, the LOCAL_QUORUM level is calculated based on the replication factor of the same datacenter as the coordinator node. Even if the cluster has more than one datacenter, the quorum is calculated from only the local replica nodes.
With EACH_QUORUM, every datacenter in the cluster must reach a quorum, based on that datacenter's own replication factor, for the write request to succeed: a quorum of replica nodes in each datacenter must respond to the coordinator node.
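
The per-datacenter behavior of LOCAL_QUORUM and EACH_QUORUM can be sketched as follows. This is a hypothetical model, not a DSE API: the acknowledgement counts per datacenter are assumed inputs, and the function names are invented for illustration.

```python
def local_quorum(local_rf):
    # LOCAL_QUORUM counts only replicas in the coordinator's datacenter.
    return local_rf // 2 + 1

def each_quorum_ok(acks_per_dc, rf_per_dc):
    # EACH_QUORUM succeeds only if every datacenter reaches its own quorum.
    return all(acks >= rf // 2 + 1 for acks, rf in zip(acks_per_dc, rf_per_dc))

# Two datacenters, RF 3 each: LOCAL_QUORUM needs 2 local acks;
# EACH_QUORUM needs 2 acks in *both* datacenters.
print(local_quorum(3))                 # 2
print(each_quorum_ok([2, 2], [3, 3]))  # True: both DCs reached quorum
print(each_quorum_ok([3, 1], [3, 3]))  # False: second DC missed its quorum
```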