Server errors

Server errors originate at the server and are sent back to the driver. Common server errors include overloaded exceptions, read timeouts, write timeouts, and unavailable exceptions.

This page provides troubleshooting advice for server errors that you might encounter with DataStax drivers. These errors are defined by the Cassandra native protocol.

Already exists

Cause

The query attempted to create a keyspace or table that already exists.

Error message content

This error includes the following information:

  • ks: The keyspace that already exists, or the keyspace of the table that already exists.

  • table: The name of the table that already exists. If no table is involved, then this is empty.

Resolution

Make sure the keyspace or table does not exist before trying to create it. You can do this with a manual check or use the IF NOT EXISTS CQL syntax.
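
In CQL, IF NOT EXISTS makes the create statement a no-op when the object is already there. The following is a minimal sketch with the DataStax Java driver 4.x, assuming a locally reachable cluster; the keyspace and table names (my_ks, users) are placeholders:

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class CreateIfNotExists {
  public static void main(String[] args) {
    // Assumes a cluster reachable at the driver's default contact point.
    try (CqlSession session = CqlSession.builder().build()) {
      // IF NOT EXISTS suppresses the "already exists" error and leaves existing objects untouched.
      session.execute("CREATE KEYSPACE IF NOT EXISTS my_ks WITH replication = "
          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
      session.execute(
          "CREATE TABLE IF NOT EXISTS my_ks.users (id uuid PRIMARY KEY, name text)");
    }
  }
}
```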

Authentication error

Cause

Authentication errors occur when the server requires authentication and the authentication attempt fails.

This error has multiple possible causes. Check the error message for more details.

Resolution

To resolve this error, investigate the authentication mechanisms and credentials used by your database clusters and your application. For more information, see Authentication in DataStax drivers.
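
As one hedged example, with the DataStax Java driver 4.x you can supply plain-text credentials on the session builder. The contact point, datacenter, username, and password below are placeholders:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import java.net.InetSocketAddress;

public class AuthenticatedSession {
  public static void main(String[] args) {
    // Replace the placeholder contact point, datacenter, and credentials with your own values.
    try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
        .withLocalDatacenter("datacenter1")
        .withAuthCredentials("my_user", "my_password")
        .build()) {
      System.out.println(
          session.execute("SELECT release_version FROM system.local").one().getString("release_version"));
    }
  }
}
```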

Function failure

Cause

A user-defined function (UDF) failed during execution.

Error message content

The error message contains the following information:

  • keyspace: The keyspace of the failed function.

  • function: The name of the failed function.

  • arg_types: A list of argument types of the failed function.

Resolution

Typically, this error indicates a logic error in the UDF, such as an infinite loop or syntax error. Scrutinize the UDF definition to find the issue.
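
If you need to detect this case in application code, a minimal sketch is to catch the driver-side exception and log its message, which reports the keyspace, function, and argument types. The class name assumes the DataStax Java driver 4.x, and the UDF, keyspace, and table names are placeholders:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.servererrors.FunctionFailureException;

public class UdfFailureLogging {
  static void runAndLog(CqlSession session) {
    try {
      // Hypothetical query that calls a UDF named my_ks.my_udf.
      session.execute("SELECT my_ks.my_udf(value) FROM my_ks.data");
    } catch (FunctionFailureException e) {
      // The message includes the keyspace, function name, and argument types of the failed UDF.
      System.err.println("UDF execution failed: " + e.getMessage());
    }
  }
}
```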

Invalid

Cause

The submitted query is syntactically correct, but it isn’t a valid query.

For example, a query can be well-formed but invalid because it references a keyspace or table that doesn’t exist.

Resolution

Make sure the query is valid. You can do this by manually checking the query or adding validations in your application.
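
One way to add such a validation is to consult the driver’s schema metadata before running the query. The following is a rough sketch with the DataStax Java driver 4.x; the keyspace and table arguments are whatever names your application expects to exist:

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class SchemaCheck {
  /** Returns true if the given keyspace and table exist according to the driver's metadata. */
  static boolean tableExists(CqlSession session, String keyspace, String table) {
    return session.getMetadata()
        .getKeyspace(keyspace)
        .flatMap(ks -> ks.getTable(table))
        .isPresent();
  }
}
```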

Overloaded

Cause

Overloaded exceptions happen when the cluster can’t handle the incoming traffic from clients. Specifically, a node has reached the maximum number of requests per second that it can process.

Overloaded exceptions can happen during traffic spikes or when processing computationally expensive queries that exhaust the node’s resources.

Resolution

This error can indicate an under provisioned cluster or suboptimal queries. For example:

  • Make sure clusters have sufficient resources to handle regular traffic levels from your application.

  • Use throttling or batching to meter your request traffic.

  • Avoid resource-intensive operations like ALLOW FILTERING.
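
For the throttling point above, one client-side option is to back off and retry when the server reports overload. The following is a rough sketch, assuming the DataStax Java driver 4.x OverloadedException; the retry count and delays are arbitrary placeholders:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.servererrors.OverloadedException;

public class OverloadBackoff {
  /** Retries an overloaded query a few times with a simple exponential backoff. */
  static ResultSet executeWithBackoff(CqlSession session, String cql) throws InterruptedException {
    long delayMs = 100; // arbitrary starting delay
    for (int attempt = 0; ; attempt++) {
      try {
        return session.execute(cql);
      } catch (OverloadedException e) {
        if (attempt >= 3) {
          throw e; // give up after a few attempts
        }
        Thread.sleep(delayMs);
        delayMs *= 2;
      }
    }
  }
}
```

The Java driver also ships configurable request throttlers (the advanced.throttler section of the driver configuration), which can cap concurrent or per-second requests before they reach the cluster.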

Read failure

Cause

A read failure is a non-timeout exception encountered during a read request.

This error is rare. The most common cause for this error is reading too many tombstones per request.

Error message content

This error includes the following information:

  • CL: The consistency level of the query that triggered the error.

  • received: An integer representing the number of nodes that acknowledged the request.

  • blockfor: An integer representing the number of replicas required to meet the consistency level.

  • reasonmap: A map of endpoint-to-failure reason codes. This maps the endpoints of the replicas that failed to execute the request to the code representing the reason for the failure.

  • data_present: The coordinator asks a single node for the data, and then it uses a checksum from the other nodes to determine if the data is consistent. If this value is 0, then the replica that was asked for the data didn’t respond. If the value is not 0, then the replica did respond.

Resolution

Examine the reasonmap to find the root cause, and then troubleshoot accordingly.
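
From the driver side, the following is a minimal sketch for surfacing the reasonmap; the accessor names assume the DataStax Java driver 4.x ReadFailureException:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.servererrors.ReadFailureException;

public class ReadFailureLogging {
  static void runAndLog(CqlSession session, String cql) {
    try {
      session.execute(cql);
    } catch (ReadFailureException e) {
      // Map of replica address to failure reason code, as reported by the coordinator.
      e.getReasonMap().forEach((replica, reasonCode) ->
          System.err.printf("replica=%s reason_code=%d%n", replica, reasonCode));
      throw e; // rethrow after logging so callers still see the failure
    }
  }
}
```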

Read timeout

Cause

A read timeout is a type of query timeout. It means that a server-side timeout exception occurred during the read request.

Although read timeouts have various causes, they typically indicate issues with the data model or query patterns that cause poor performance on the server. Examples include fetching large amounts of data at once or reading during long server-side garbage collection events.

Error message content

The error message includes the following information:

  • CL: The consistency level of the query that triggered the read timeout.

  • received: The number of nodes that acknowledged the request.

  • blockfor: An integer that represents the number of replicas required to meet the consistency level.

  • data_present: The coordinator asks a single node for the data, and then it uses a checksum from the other nodes to determine if the data is consistent. If this value is 0, then the replica that was asked for the data didn’t respond. If the value is not 0, then the replica did respond.
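
The driver surfaces these fields on the exception it raises. The following is a rough sketch of reading them; the accessor names assume the DataStax Java driver 4.x ReadTimeoutException:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.servererrors.ReadTimeoutException;

public class ReadTimeoutLogging {
  static void runAndLog(CqlSession session, String cql) {
    try {
      session.execute(cql);
    } catch (ReadTimeoutException e) {
      System.err.printf("read timeout: cl=%s received=%d blockfor=%d data_present=%b%n",
          e.getConsistencyLevel(), e.getReceived(), e.getBlockFor(), e.wasDataPresent());
      throw e;
    }
  }
}
```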

Resolution

Depending on your driver’s retry policy, the request might be retried automatically.

To debug the read timeouts, check the server logs to verify that garbage collection times are acceptable, and then examine the data model and access patterns for inefficiencies. For example, you might need to batch, throttle, or optimize your read requests.

If you can’t identify any other underlying cause for the timeouts in Cassandra, HCD, or DSE clusters, then you can adjust the server-side read_request_timeout_in_ms setting in cassandra.yaml. For information about configuring cassandra.yaml, see the documentation for your database type and version.

Retry policy

Read timeouts can be eligible for automatic retry under the default retry policy.
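
The retry policy is part of the driver configuration. As a hedged sketch, the following explicitly selects the built-in default policy with the Java driver 4.x programmatic configuration API, which is equivalent to leaving the setting untouched; other drivers expose similar options:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class RetryPolicyConfig {
  public static void main(String[] args) {
    // Explicitly selects DefaultRetryPolicy; this mirrors the driver's out-of-the-box behavior.
    DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
        .withString(DefaultDriverOption.RETRY_POLICY_CLASS, "DefaultRetryPolicy")
        .build();
    try (CqlSession session = CqlSession.builder().withConfigLoader(loader).build()) {
      session.execute("SELECT release_version FROM system.local");
    }
  }
}
```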

Syntax error

Cause

The submitted query contains invalid syntax.

Resolution

Ensure your query has correct CQL syntax.

Unavailable exception

Cause

An unavailable exception occurs when the driver passes a query to a coordinator node but there aren’t enough replicas available to serve the request. Specifically, the query can’t be executed because its consistency level requires more replicas than are currently available.

This error often indicates that replica nodes are down or unreachable from the coordinator.

For example, one possible cause is that the target Cassandra, HCD, or DSE cluster is in the middle of a rolling restart or upgrade, and the prior node wasn’t ready to receive requests before the next node was restarted.

Error message content

This exception includes the following information:

  • CL: The consistency level of the query that triggered the exception.

  • Required: An integer representing the number of nodes that must be alive to meet the CL.

  • Alive: An integer representing the number of replicas that were known to be alive when the coordinator attempted to process the request.
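
The following is a minimal driver-side sketch for logging these values; the accessor names assume the DataStax Java driver 4.x UnavailableException:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.servererrors.UnavailableException;

public class UnavailableLogging {
  static void runAndLog(CqlSession session, String cql) {
    try {
      session.execute(cql);
    } catch (UnavailableException e) {
      System.err.printf("unavailable: cl=%s required=%d alive=%d%n",
          e.getConsistencyLevel(), e.getRequired(), e.getAlive());
      throw e;
    }
  }
}
```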

Resolution

Ensure that a sufficient number of replicas are available for your consistency level.

For example, during a rolling upgrade or restart, ensure that the previous node is fully up and ready to receive query requests before you restart the next node.

Retry policy

If this error occurs, you can assume that data wasn’t mutated. The coordinator fails fast before performing any writes, so this error can be eligible for automatic retry.

Unprepared

Cause

The request attempted to execute a prepared statement that was not prepared in advance.

Resolution

Prepare the statement before executing it.
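
For example, with the DataStax Java driver 4.x, prepare the statement once through the session, then bind and execute it. The keyspace, table, and column names below are placeholders:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.BoundStatement;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import java.util.UUID;

public class PreparedInsert {
  static void insertUser(CqlSession session, UUID id, String name) {
    // Prepare once (ideally cache and reuse the PreparedStatement across requests),
    // then bind values and execute.
    PreparedStatement prepared =
        session.prepare("INSERT INTO my_ks.users (id, name) VALUES (?, ?)");
    BoundStatement bound = prepared.bind(id, name);
    session.execute(bound);
  }
}
```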

Write failure

Cause

A write failure is a non-timeout exception encountered during a write request.

This error is rare. The most common cause for this error is batch sizes that are too large.

Error message content

This error includes the following information:

  • CL: The consistency level of the query that triggered the error.

  • received: An integer representing the number of nodes that acknowledged the request.

  • blockfor: An integer representing the number of replicas required to satisfy the consistency level.

  • reasonmap: A map of endpoint-to-failure reason codes. This maps the endpoints of the replicas that failed to execute the request to the code representing the reason for the failure.

  • writeType: An enum representing the type of write that failed:

    • SIMPLE

    • BATCH

    • BATCH_LOG

    • UNLOGGED_BATCH

    • COUNTER

    • CAS

    • VIEW (MV)

    • CDC

Resolution

Examine the reasonmap to find the root cause, and then troubleshoot accordingly.
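
The following is a driver-side sketch for logging the write type and reasonmap; the accessor names assume the DataStax Java driver 4.x WriteFailureException:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Statement;
import com.datastax.oss.driver.api.core.servererrors.WriteFailureException;

public class WriteFailureLogging {
  static void runAndLog(CqlSession session, Statement<?> statement) {
    try {
      session.execute(statement);
    } catch (WriteFailureException e) {
      System.err.printf("write failure: write_type=%s%n", e.getWriteType());
      e.getReasonMap().forEach((replica, reasonCode) ->
          System.err.printf("  replica=%s reason_code=%d%n", replica, reasonCode));
      throw e;
    }
  }
}
```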

Write timeout

Cause

A write timeout is a type of query timeout. It means that a server-side timeout exception occurred during the write request.

Write timeouts occur for various reasons, including inadequate resources for query execution or transient failures. For example, write timeouts are common when batches are too large or span multiple partitions.

Error message content

The error message includes the following information:

  • CL: The consistency level of the query that triggered the write timeout.

  • received: The number of nodes that acknowledged the request.

  • blockfor: An integer that represents the number of replicas required to meet the consistency level.

  • writetype: A description of the type of write that timed out:

    • SIMPLE

    • BATCH

    • BATCH_LOG

    • UNLOGGED_BATCH

    • COUNTER

    • CAS

    • VIEW (MV)

    • CDC

Resolution

Depending on your driver’s retry policy, the request might be retried automatically.

If timeouts occur predictably, you might need to adjust server-side or client-side configuration, or batch, throttle, or optimize your requests. If you already batch your requests, consider decreasing the batch size or limiting each batch to a single partition.

For example, depending on SLAs and application requirements, the default server-side write timeout might not be adequate for Cassandra and DSE clusters. In this case, you can adjust write_request_timeout_in_ms in the cassandra.yaml configuration file. For information about configuring cassandra.yaml, see the documentation for your database type and version.

Retry policy

In the event of a write timeout on INSERT, UPDATE, and DELETE statements, the request could have been partially executed, and data may or may not have been written to the table by the node.

For idempotent statements and batchlog writes, the request is eligible for automatic retry under the default retry policy.

If the request isn’t idempotent, such as incrementing a counter, then you must use caution if you retry the request. There can be unexpected changes or duplicates that result from retrying a non-idempotent request.
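
For example, with the DataStax Java driver 4.x you can mark a statement that you know is idempotent, which allows the retry policy to retry it after a write timeout. Whether a statement is truly idempotent is a decision only your application can make; the keyspace and table names below are placeholders:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import java.util.UUID;

public class IdempotentWrite {
  static void upsertUser(CqlSession session, UUID id, String name) {
    // Replaying this INSERT overwrites the same row with the same values,
    // so it is safe to mark it idempotent and let the retry policy retry it.
    SimpleStatement statement = SimpleStatement
        .newInstance("INSERT INTO my_ks.users (id, name) VALUES (?, ?)", id, name)
        .setIdempotent(true);
    session.execute(statement);
  }
}
```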
