Shard transport options for DSE Search communications

A custom, TCP-based communications layer for Solr is the default type in DataStax Enterprise. The TCP-based type, netty, is an alternative to the HTTP-based, Tomcat-backed interface, which is slower and more resource intensive. The netty communications layer improves Solr inter-node communications in several ways:

  • Lowers latency
  • Reduces resource consumption
  • Increases throughput even while handling thousands of concurrent requests
  • Provides nonblocking I/O processing

To avoid distributed deadlock during queries, switch from the HTTP-based communications to the netty non-blocking communications layer.

The TCP-based communications layer for Solr supports client-to-node and node-to-node encryption using SSL, but does not support Kerberos.

Configure the shard transport options in the dse.yaml file to select HTTP- or TCP-based communication.

Note: Shard transport options work only in data centers where the replication factor is not equal to the number of nodes. If necessary, you can change the replication factor of the keyspace.
The shard_transport_options in the dse.yaml file for managing inter-node communication between DSE Search nodes are listed below; a sample configuration follows the list.
  • type

    netty is used for TCP-based communication. It provides lower latency, higher throughput, and lower resource consumption than the http transport, which uses a standard HTTP-based interface for communication. Default: netty

  • netty_server_port

    The TCP listen port. This setting is mandatory whether you use the netty transport now or migrate to it later. To use the http transport, comment out this setting or change it to -1. Default: 8984

  • netty_server_acceptor_threads

    The number of server acceptor threads. Default: number of available processors

  • netty_server_worker_threads

    The number of server worker threads. Default: number of available processors * 8

  • netty_client_worker_threads

    The number of client worker threads. Default: number of available processors * 8

  • netty_client_max_connections

    The maximum number of client connections. Default: 100

  • netty_client_request_timeout

    The client request timeout, in milliseconds: the maximum cumulative time that a distributed Solr request waits idly for shard responses. Default: 60000
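
The following sketch shows how these settings might look in the shard_transport_options section of dse.yaml, assuming the options nest under that key as in the stock file. The thread counts are illustrative values for a four-core node; leaving an option commented out keeps its default.

    shard_transport_options:
        type: netty                            # TCP-based transport; use http for the HTTP-based interface
        netty_server_port: 8984                # mandatory for netty; set to -1 or comment out to use http
        # netty_server_acceptor_threads: 4     # default: number of available processors
        # netty_server_worker_threads: 32      # default: number of available processors * 8
        # netty_client_worker_threads: 32      # default: number of available processors * 8
        # netty_client_max_connections: 100    # maximum number of client connections
        # netty_client_request_timeout: 60000  # milliseconds to wait idly for shard responses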

Upgrading to use the netty type 

If you upgrade to DataStax Enterprise 4.0 or later, perform the upgrade procedure using the shard transport type of your old installation. After the upgrade, change the shard transport type to netty and start the cluster using a rolling restart.
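
For example, after completing the upgrade with your existing transport type, the post-upgrade change to dse.yaml before the rolling restart might look like this (a sketch, assuming the options nest under shard_transport_options as shown above):

    shard_transport_options:
        type: netty              # switched from http once the upgrade is complete
        netty_server_port: 8984  # required when using the netty transport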

The location of the dse.yaml file depends on the type of installation:
  • Installer-Services and package installations: /etc/dse/dse.yaml
  • Installer-No Services and tarball installations: install_location/resources/dse/conf/dse.yaml