Tuning DSE Search for maximum indexing throughput

cassandra.yaml

The location of the cassandra.yaml file depends on the type of installation:
  • Package installations: /etc/dse/cassandra/cassandra.yaml
  • Tarball installations: installation_location/resources/cassandra/conf/cassandra.yaml

dse.yaml

The location of the dse.yaml file depends on the type of installation:
  • Package installations: /etc/dse/dse.yaml
  • Tarball installations: installation_location/resources/dse/conf/dse.yaml

To tune DataStax Enterprise (DSE) Search for maximum indexing throughput, follow the recommendations in this topic. Also see the related topics in DSE Search performance tuning and monitoring. If search throughput improves in your development environment, consider using the recommendations in production.

Locate transactional and search data on separate SSDs

Attention: It is critical that you locate DSE Cassandra transactional data and Solr-based DSE Search data on separate Solid State Drives (SSDs). Failure to do so will very likely result in sub-optimal search indexing performance.

For the steps to accomplish this task, refer to Set the location of search indexes.
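
As a rough illustration only (the linked topic has the authoritative steps), one common approach is to point the search index data at a dedicated SSD mount through the dse.solr.data.dir system property. The mount path below is hypothetical:
# Hypothetical mount point for dedicated search SSDs
JVM_OPTS="$JVM_OPTS -Ddse.solr.data.dir=/mnt/search-ssd/solr.data"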

In addition, plan for sufficient memory resources and disk space to meet operational requirements. Refer to Capacity planning for DSE Search.

Determine physical CPU resources

Before you tune anything, determine how many physical CPUs you have. The JVM does not know whether CPUs are using hyper-threading.
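
For example, on Linux, lscpu reports sockets, cores per socket, and threads per core; the physical CPU count is Socket(s) multiplied by Core(s) per socket:
lscpu | grep -E '^(Socket|Core|Thread)'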

Assess the IO throughput

DSE Search can be very IO intensive. Performance is affected by the Thread Per Core (TPC) architecture's asynchronous read and write paths. In your development environment, check the iowait system metric with the iostat command during peak load. For example, on Linux:
iostat -x -c -d -t 1 600

IOwait measures the time, over a given period, that a CPU (or all CPUs) spent idle because all runnable tasks were waiting for an IO operation to complete. While each environment is unique, a general guideline is to check whether iowait is above 5% more than 5% of the time. If it is, try upgrading to faster SSDs or tuning the machine to use less IO, and test again. Again, it is important to locate the search data on dedicated SSDs, separate from the transactional data.
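
As a rough sketch, assuming the sysstat version of iostat (where %iowait is the fourth field of the line that follows each avg-cpu header), the following one-liner counts how many of 600 one-second samples exceed 5% iowait:
iostat -c 1 600 | awk '/avg-cpu/ { getline; total++; if ($4+0 > 5) high++ }
  END { printf "%d of %d samples had iowait above 5%%\n", high, total }'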

Disable AIO

All DSE Search index updates first perform a read-before-write against the partition or row being indexed. This functionality means DSE uses the core database's internal read path, which in turn uses the asynchronous I/O (AIO) chunk cache apparatus.

If you are experiencing poor performance during search indexing, or during read or write queries of frequently used datasets, DataStax recommends that you try the following steps, starting in your development environment:
  1. Disable AIO.
  2. Set file_cache_size_in_mb to 512.

To disable AIO, pass -Ddse.io.aio.enabled=false to DSE at startup. With AIO disabled, SSTables and Lucene segments, as well as other minor off-heap elements, reside in the OS page cache and are managed by the kernel.
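
For example, assuming the configuration file locations listed at the top of this topic, the two changes from the preceding steps might look like the following. Where the JVM system property is set varies by installation; jvm.options is shown here only as one common place:
# cassandra.yaml: cap the file cache at 512 MB
file_cache_size_in_mb: 512

# jvm.options (or your installation's equivalent JVM startup options): disable AIO
-Ddse.io.aio.enabled=false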

Disabling AIO will generate a WARN entry in system.log. Example:
WARN  [main] 2019-10-01 21:37:16,563 StartupChecks.java:632
          - Asynchronous I/O has been manually disabled (through the 'dse.io.aio.enabled'
          system property). This may result in subpar performance.
If performance improves, consider using these settings in production.

DSE 6.0 and later use AIO and a custom chunk cache that replaces the OS page cache for SSTable data. However, in DSE 6.0.7 and later 6.0.x releases, and in DSE 6.7.3 and later 6.7.x releases, AIO is disabled automatically if the file cache size is less than one-eighth (⅛) of the system memory. By default, the chunk cache is configured to use one-half (½) of the max direct memory for the DSE process.

For related information, refer to increasing the max direct memory.

The only scenario where you could consider leaving AIO enabled is when you have mostly DSE/Cassandra database workloads and your DSE Search usage is very light.

Differences between indexing modes

There are two indexing modes in DSE Search:
  • Near-real-time (NRT) indexing is the default indexing mode for Apache Solr and Apache Lucene®.
  • Live indexing, also called real-time (RT) indexing, supports searching directly against the Lucene RAM buffer and allows more frequent, cheaper soft-commits, which provide earlier visibility to newly indexed data. However, RT indexing requires a larger RAM buffer and more memory than an otherwise equivalent NRT setup.

Tune NRT reindexing

DSE Search provides multi-threaded asynchronous indexing with a back pressure mechanism to avoid saturating available memory and to maintain stable performance. Multi-threaded indexing improves performance on machines that have multiple CPU cores.

For reindexing only, the IndexPool MBean provides operational visibility and tuning through JMX.

To maximize NRT throughput during a manual reindex, adjust the following settings in the search index config:
  • Increase the soft commit time, which is set to 10 seconds (10000 ms) by default. For example, increase the time to 60 seconds and then reload the search index:
    ALTER SEARCH INDEX CONFIG ON demo.health_data SET autoCommitTime = 60000;
    To make the pending changes active:
    RELOAD SEARCH INDEX ON demo.health_data;
A disadvantage of increasing autoCommitTime is that newly updated rows take longer than the default 10000 ms to appear in search results.

Tune RT indexing

Live indexing reduces the time it takes for documents to become searchable.
  1. To enable live indexing (also known as RT):
    ALTER SEARCH INDEX CONFIG ON demo.health_data SET realtime = true;
  2. To configure live indexing, set autoCommitTime to a value between 100 and 1000 ms:
    ALTER SEARCH INDEX CONFIG ON demo.health_data SET autoCommitTime = 1000;

    Test with tuning values of 100-1000 ms. An optimal setting in this range depends on your hardware and environment. For live indexing (RT), this refresh interval saturates at 1000 ms. A value higher than 1000 ms is not recognized.

  3. Ensure that search nodes have at least 14 GB of heap; a sample heap setting is shown after this list.
  4. If you change the heap, restart DSE to use live indexing with the changed heap size.
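
As a hypothetical example, a 14 GB heap can be set through the standard JVM heap options; depending on your installation, these might live in jvm.options or be set by cassandra-env.sh:
# Hypothetical jvm.options entries; adjust to how your installation sets heap size
-Xms14G
-Xmx14G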

Tune TPC cores

DSE Search workloads do not benefit from hyper-threading for writes (indexing). To optimize DSE Search for indexing throughput for both modes (NRT and RT), change tpc_cores in cassandra.yaml from the default to the number of physical CPUs. Change this setting only on search nodes, because this change might degrade throughput for workloads other than search.
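
For example, on a hypothetical search node with 16 physical cores (as reported by the lscpu check earlier in this topic), set the following in cassandra.yaml:
# 16 is a hypothetical physical core count; use your node's actual value
tpc_cores: 16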

Size RAM buffer

The default RAM buffer settings in dse.yaml are:
  • ram_buffer_heap_space_in_mb: 1024
  • ram_buffer_offheap_space_in_mb: 1024

    Because NRT does not use the off-heap buffer, ram_buffer_offheap_space_in_mb applies only to RT.

Adjust these settings to configure how much global memory all Solr cores use to accumulate updates before flushing segments. Setting this value too low can induce a state of constant flushing during periods of ongoing write activity. For NRT, these forced segment flushes will also de-schedule pending auto-soft commits to avoid potentially flushing too many small segments.
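
As a hypothetical example, if monitoring shows constant flushing on an RT node, you might raise both buffers in dse.yaml and retest; the 2048 MB values below are illustrative, not a recommendation:
# Illustrative values only; size against available memory and test under load
ram_buffer_heap_space_in_mb: 2048
ram_buffer_offheap_space_in_mb: 2048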

JMX MBean path: com.datastax.bdp.metrics.search.RamBufferSize

Check back pressure setting

The back_pressure_threshold_per_core setting in dse.yaml affects only index rebuilding and reindexing. If you upgraded to DSE 6.0 from an earlier version, ensure that you use the new default value of 1024.
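
In dse.yaml, the setting looks like this:
# Applies only to rebuild/reindex operations
back_pressure_threshold_per_core: 1024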

Use default mergeScheduler

The mergeScheduler settings are set automatically. Do not adjust them in DSE 6.0 and later. In earlier versions, the defaults were different and might have required tuning.