Managing concurrency in asynchronous query execution

The DSE drivers support sending multiple concurrent requests on a single connection to improve overall query performance. This is also known as request pipelining. The server processes these requests concurrently and sends responses back to the client driver as they complete, without strict ordering. This improves overall performance when a single operation is slow, because the remaining operations can proceed without delay. For example, a query that consolidates data from multiple partitions will be much slower than a query that retrieves data from a single partition.
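
The following minimal sketch illustrates pipelining from the application's point of view, using the DataStax Python driver (cassandra-driver); the contact point 127.0.0.1, the keyspace my_keyspace, and the table my_table are assumptions for illustration. Several requests are submitted on the same session without waiting for any single one, and the results are collected afterwards.

    from cassandra.cluster import Cluster

    # Assumed contact point and keyspace for illustration.
    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # Submit several requests without waiting for any single one to
    # complete; the driver pipelines them on its open connections.
    futures = [session.execute_async("SELECT * FROM my_table WHERE id = %s", (i,))
               for i in range(10)]

    # Collect the results; each future blocks only until its own
    # response arrives, not until all responses have arrived.
    for future in futures:
        rows = future.result()

    cluster.shutdown()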

DSE deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, adding load beyond a certain threshold increases overall latency.

On the client side, the driver limits the number of in-flight requests (simultaneous requests that haven't completed yet) to between 1,024 and 2,048 per connection by default, depending on the driver language. Above that limit, the driver immediately throws an exception indicating that the connections to the cluster are busy. You may reach this limit as a result of the incoming load your application handles. If your application is hitting the in-flight request limit, add capacity to your DSE cluster.

Note: The limit of in-flight requests can be raised through the driver configuration settings, but doing so is not recommended. A server node usually takes less than a millisecond to fulfil a request, so by Little's law, roughly 2,048 requests in flight with a 1 ms service time already amounts to about 2,048 / 0.001 s ≈ 2 million requests per second on a single connection. Exceeding the driver-side limit would therefore mean trying to support millions of requests per second with only a few server nodes.

Limiting simultaneous requests in your application code

When submitting several requests in parallel, the requests are queued at one of three levels: on the driver side, in the network stack, or on the server side. Excessive queueing at any of these levels increases the total time each operation takes to complete. Adjust the concurrency level, or number of simultaneous requests, to reduce queueing and achieve high throughput and low latency.

The optimal concurrency level depends on the client and server hardware specifications, as well as other factors such as:
  • the server cluster size
  • the number of instances of the application accessing the database
  • the complexity of the queries

When implementing an application, launch a fixed number of asynchronous operations, using the concurrency level as the maximum. As each operation completes, add a new one. This ensures that the number of simultaneous asynchronous operations in your application never exceeds the concurrency level.

The following code example shows how to launch asynchronous operations in a loop while controlling the concurrency level.

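A minimal sketch of this pattern with the DataStax Python driver (cassandra-driver) follows; the contact point 127.0.0.1, keyspace my_keyspace, table my_table, and the constants CONCURRENCY_LEVEL and TOTAL_QUERIES are assumptions for illustration. The same launch-and-replace pattern applies in the other driver languages.

    import threading
    from cassandra.cluster import Cluster

    CONCURRENCY_LEVEL = 32   # assumed maximum number of simultaneous requests
    TOTAL_QUERIES = 10000    # assumed total amount of work

    # Assumed contact point, keyspace, and table for illustration.
    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')
    prepared = session.prepare("INSERT INTO my_table (id, value) VALUES (?, ?)")

    lock = threading.Lock()
    started = 0
    completed = 0
    finished = threading.Event()

    def launch_next():
        # Start one more request if there is still work left to submit.
        global started
        with lock:
            if started >= TOTAL_QUERIES:
                return
            i = started
            started += 1
        future = session.execute_async(prepared, (i, 'value-%d' % i))
        future.add_callbacks(on_success, on_error)

    def on_success(_rows):
        global completed
        with lock:
            completed += 1
            done = (completed == TOTAL_QUERIES)
        if done:
            finished.set()
        else:
            # Replace the completed operation with a new one, keeping the
            # number of in-flight requests at or below CONCURRENCY_LEVEL.
            launch_next()

    def on_error(exc):
        print('Query failed: %r' % exc)
        finished.set()   # stop on the first failure; production code may retry

    # Prime the pipeline with at most CONCURRENCY_LEVEL simultaneous requests;
    # each completion then launches the next one.
    for _ in range(min(CONCURRENCY_LEVEL, TOTAL_QUERIES)):
        launch_next()

    finished.wait()
    cluster.shutdown()

The Python driver also ships a helper, cassandra.concurrent.execute_concurrent_with_args, that implements this launch-and-replace pattern internally; where it fits your use case, prefer it over hand-rolled callback management.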

Using specialized tools to avoid problems in custom applications

Unbounded concurrency issues often arise when performing bulk operations in custom code. Avoid them by using the appropriate tool for the task. If you are importing data from other sources, use the DataStax Bulk Loader (dsbulk), as shown below. If you are performing transformations of data from external data sources, use Apache Spark.
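
For instance, a CSV import that would otherwise require hand-tuned concurrency in custom code can be delegated to dsbulk; the file path, keyspace, and table below are assumptions for illustration.

    dsbulk load -url /path/to/export.csv -k my_keyspace -t my_table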