Shard transport options for DSE Search/Solr communications

A custom, TCP-based communications layer for Solr is the default type in DataStax Enterprise. To improve Solr inter-node communications and avoid distributed deadlock during queries, switch from HTTP-based communications to the netty non-blocking communications layer.

The default shard transport type in DataStax Enterprise is netty, a custom TCP-based communications layer for Solr. It is an alternative to the HTTP-based, Tomcat-backed interface, which is slower and more resource intensive. The netty communications layer improves Solr inter-node communications in several ways:

  • Lowers latency
  • Reduces resource consumption
  • Increases throughput even while handling thousands of concurrent requests
  • Provides nonblocking I/O processing

To avoid distributed deadlock during queries, switch from HTTP-based communications to the netty non-blocking communications layer.

The TCP-based communications layer for Solr supports client-to-node and node-to-node encryption using SSL, but does not support Kerberos.

Configure the shard transport options in the dse.yaml file to select HTTP- or TCP-based communication.
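For example, if you need to fall back to the HTTP-based transport, a minimal dse.yaml fragment might look like this (a sketch based on the options described below):

```yaml
shard_transport_options:
  type: http             # use the legacy HTTP-based, Tomcat-backed transport
  netty_server_port: -1  # disable the netty TCP listen port when staying on http
```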

Note: Shard transport options work only in data centers where the replication factor is not equal to the number of nodes. You can verify or change the replication factor of the keyspace.
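For example, in cqlsh you can check the keyspace definition and adjust its replication factor; the keyspace name mykeyspace and the data center name DC1 below are placeholders:

```
-- Show the current replication settings (cqlsh command)
DESCRIBE KEYSPACE mykeyspace;

-- Change the replication factor (example values; adjust for your topology)
ALTER KEYSPACE mykeyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};
```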

The shard_transport_options in the dse.yaml file for managing inter-node communication between Solr nodes are:

  • type: netty or http

    The default type, netty, configures TCP-based Solr communications. Choosing http configures plain Solr communication over the standard HTTP-based communications interface. If you accept the netty default, the following netty options apply.

  • netty_server_port: 8984

    The TCP listen port, which is mandatory if you use the netty type, or if you want to migrate to the netty type from the http type later. If you plan to use the http type indefinitely, either comment out netty_server_port or set it to -1.

  • netty_server_acceptor_threads

    The number of server acceptor threads. The default is the number of available processors.

  • netty_server_worker_threads

    The number of server worker threads. The default is the number of available processors times 8.

  • netty_client_worker_threads

    The number of client worker threads. The default is the number of available processors times 8.

  • netty_client_max_connections

    The maximum number of client connections. The default is 100.

  • netty_client_request_timeout

    The client request timeout in milliseconds. The default is 60000.
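Putting these options together, a shard_transport_options section in dse.yaml that explicitly sets the netty defaults might look like the following sketch (thread counts shown for an 8-core node; tune them for your hardware):

```yaml
shard_transport_options:
  type: netty                          # TCP-based transport (the default); use http for the legacy interface
  netty_server_port: 8984              # TCP listen port, mandatory for the netty type
  netty_server_acceptor_threads: 8     # defaults to the number of available processors
  netty_server_worker_threads: 64      # defaults to the number of available processors times 8
  netty_client_worker_threads: 64      # defaults to the number of available processors times 8
  netty_client_max_connections: 100    # maximum number of client connections
  netty_client_request_timeout: 60000  # client request timeout in milliseconds
```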

Upgrading to use the netty type

If you upgrade to DataStax Enterprise 4.6, perform the upgrade procedure using the shard transport type of your old installation. After the upgrade completes, change the shard transport type to netty and start the cluster using a rolling restart.
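After the upgrade, the change amounts to editing the shard_transport_options section in dse.yaml on each node before its rolling restart, for example:

```yaml
shard_transport_options:
  type: netty              # switched from http after the upgrade completes
  netty_server_port: 8984  # TCP listen port required by the netty type
```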

The location of the dse.yaml file depends on the type of installation:

  Installer-Services and package installations     /etc/dse/dse.yaml
  Installer-No Services and tarball installations  install_location/resources/dse/conf/dse.yaml