Configuring and tuning indexing performance

Tuning DSE Search for maximum indexing throughput

DSE Search provides multi-threaded asynchronous indexing with a back-pressure mechanism to avoid saturating available memory and to maintain stable performance. Multi-threaded indexing improves performance on machines with multiple CPU cores. The IndexPool MBean provides operational visibility and tuning through JMX.

There are two indexing modes in DSE Search:
  • Near-real-time (NRT) indexing is the default indexing mode for Apache Solr™ and Apache Lucene®.
  • Live indexing, also called real-time (RT) indexing, supports searching directly against the Lucene RAM buffer and more frequent, cheaper soft commits, which provide earlier visibility to newly indexed data. However, RT indexing requires a larger RAM buffer and uses more memory than an otherwise equivalent NRT setup.

How many CPUs do you have?

Before you tune anything, determine how many physical CPUs you have. The JVM cannot tell whether the CPUs it sees are physical cores or hyper-threaded siblings.
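On Linux, the following commands (standard tooling, assumed available on your nodes) report the logical CPU count and the topology you need to work out the number of physical, non-hyper-threaded cores:

```shell
# Count logical CPUs visible to the OS (includes hyper-threads).
grep -c '^processor' /proc/cpuinfo

# Show sockets, cores per socket, and threads per core so you can
# compute the number of physical (non-HT) cores.
lscpu | grep -E '^(Socket|Core|Thread)'
```

Physical cores = sockets × cores per socket; divide the logical CPU count by threads per core if `lscpu` is unavailable.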

DSE 4.8.0 only

Challenge: The default mergeScheduler settings are not appropriate for production use of DSE Search near-real-time (NRT) indexing on a typically sized server. Lucene merge scheduling and the lack of parallelism can cause periods of zero indexing throughput.


  • First, set the size of the indexing thread-pool per core to the number of physical CPUs available:
    max_solr_concurrency_per_core: min(2, num physical non-HT CPU cores / num actively indexing Solr cores) 
    Note: If max_solr_concurrency_per_core is set to 1, DSE Search uses the legacy synchronous indexing implementation.
  • Next, set a maximum queue depth before back pressure is activated. A good rule of thumb is 1000 updates per worker. The back_pressure_threshold_per_core option defines the number of buffered asynchronous index updates per Solr core before back pressure is activated. The default value is 1000 times max_solr_concurrency_per_core.
    back_pressure_threshold_per_core: 1000 * max_solr_concurrency_per_core
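Putting the two rules together, a dse.yaml sketch for a hypothetical node with 8 physical (non-hyper-threaded) cores and 4 actively indexing Solr cores might look like:

```yaml
# dse.yaml (illustrative values; the 8-core / 4-Solr-core topology
# is an assumed example)
max_solr_concurrency_per_core: 2         # min(2, 8 / 4)
back_pressure_threshold_per_core: 2000   # 1000 * max_solr_concurrency_per_core
```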


Apply your maximum concurrency per core to the merge scheduler, where:
  • maxThreadCount - maximum number of Lucene merge threads
  • maxMergeCount - number of outstanding merges before Lucene throttles incoming indexing workers. If maxMergeCount is too low, you will observe periods of zero indexing throughput.
    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
        <int name="maxThreadCount">your max_solr_concurrency_per_core</int>
        <int name="maxMergeCount">your max_solr_concurrency_per_core * 2</int>
    </mergeScheduler>
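For example, with max_solr_concurrency_per_core set to 2 (an assumed value, not a recommendation for every node), the scheduler fragment in solrconfig.xml becomes:

```xml
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxThreadCount">2</int>
    <!-- 2 * max_solr_concurrency_per_core -->
    <int name="maxMergeCount">4</int>
</mergeScheduler>
```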

DSE 4.8.13 and later

The dse.yaml and solrconfig.xml settings are automatically tuned optimally for nodes with a single active Solr core. This automatic tuning is especially useful during auto-generated core creation.

Even with auto tuning, consider:
  • When multiple Solr cores are in play, for example, with DSE Graph, it still makes sense to scale down the resources assigned to them. However, rather than re-configuring every parameter, modify only max_solr_concurrency_per_core. All other tuning options, if not explicitly specified, are modified appropriately on restart.
  • The number of hyper-threads per physical core assumed for auto-tuning maxes out at 2, even when /proc/cpuinfo reports a higher thread count. In that case, the logs show a message such as:
    /proc/cpuinfo is reporting 8 threads per CPU core, but DSE will assume a value of 2 for auto-tuning...
  • Auto-tuning applies only to options that are not already specified. The auto-tuned values are still only defaults and never override options set manually in dse.yaml (globally) or solrconfig.xml (per core).
  • 4.8.15 and later: DSE Search optimizes for SSDs by default; the spinning-disk detection logic is removed.
  • 4.8.1-4.8.14: Spinning-disk detection logic is used. When the Lucene index files for your Solr core reside on a spinning (rotational) hard disk, the solrconfig defaults are maxThreadCount 1 and maxMergeCount 6.
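As a sketch of the multi-core case above: with several active Solr cores on one node (the value 2 here is an assumed example, not a recommendation), you would set only the concurrency option in dse.yaml and let the dependent options be derived on restart:

```yaml
# dse.yaml — only this option is set explicitly; back-pressure and
# related tuning options are derived from it on restart
max_solr_concurrency_per_core: 2
```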

Tuning NRT reindexing

For NRT only, to maximize throughput during a manual re-index, adjust these settings in the solrconfig.xml file:
  • Increase the size of the RAM buffer, which is set to 512 MB by default. For example, increase to 1000:
    . . .
    The RAM buffer holds uncommitted documents for NRT. A larger RAM buffer means fewer flushes, and segments are larger when flushed. Fewer flushes reduce I/O pressure, which is ideal for write-heavy workloads.
  • Increase the soft commit time, which is set to 10 seconds (10000 ms) by default, to a larger value. For example, increase the time to 60 seconds:
A disadvantage of increasing the autoSoftCommit maxTime is that newly updated rows take longer than the default (10000 ms) to appear in search results.
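Combining the two NRT adjustments above, a solrconfig.xml sketch might look as follows (element placement follows standard Solr conventions; the exact surrounding context in your file may differ):

```xml
<indexConfig>
    <!-- Raise the NRT RAM buffer from the 512 MB default -->
    <ramBufferSizeMB>1000</ramBufferSizeMB>
</indexConfig>

<updateHandler class="solr.DirectUpdateHandler2">
    <autoSoftCommit>
        <!-- Raise the soft-commit interval from 10000 ms to 60 seconds -->
        <maxTime>60000</maxTime>
    </autoSoftCommit>
</updateHandler>
```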

Tuning RT indexing

Live indexing uses more memory but reduces the time for documents to become searchable. Enable live indexing on only one Solr core per cluster.
  1. To enable live indexing (also known as RT), add <rt>true</rt> to the <indexConfig> element of the solrconfig.xml file.
  2. To configure live indexing, edit the solrconfig.xml file and change these settings:
    • Set the autoSoftCommit/maxTime to 1000 ms (1 second).

      The autoSoftCommit.maxTime for live indexing is not a true soft commit; it controls the refresh interval. For live indexing (RT), this refresh interval saturates at 1000 ms: any higher value is overridden to cap the maximum time at 1000 ms.

    • When live indexing is on, the RAM buffer uses more memory than with near-real-time indexing. Set the RAM buffer size; 2048 MB is a good starting point for RT:
    • For faster live indexing, configure the postings portion of the RAM buffer to be allocated off-heap.
      Postings allocated off-heap improve garbage collection (GC) times and prevent out-of-memory errors due to fluctuating live indexing memory usage.
      <updateHandler class="solr.DirectUpdateHandler2">
          . . .
      </updateHandler>
  3. Increase the heap size to at least 14 GB.
  4. Restart DataStax Enterprise to use live indexing with the increased heap size.
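The live-indexing steps above can be sketched as a single solrconfig.xml fragment (the off-heap postings setting is omitted because its exact syntax is not shown here):

```xml
<indexConfig>
    <!-- Enable live (RT) indexing -->
    <rt>true</rt>
    <!-- Larger RAM buffer; 2048 MB is the suggested RT starting point -->
    <ramBufferSizeMB>2048</ramBufferSizeMB>
</indexConfig>

<updateHandler class="solr.DirectUpdateHandler2">
    <autoSoftCommit>
        <!-- Refresh interval; values above 1000 ms are overridden for RT -->
        <maxTime>1000</maxTime>
    </autoSoftCommit>
</updateHandler>
```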
The location of the dse.yaml file depends on the type of installation:
  • Installer-Services and package installations: /etc/dse/dse.yaml
  • Installer-No Services and tarball installations: install_location/resources/dse/conf/dse.yaml