Securing DataStax Enterprise ports
Lock down all unnecessary ports, and create IP security rules that allow internode and client communications.
All network security starts with strict and proper firewall rules on interfaces that are exposed to the internet, allowing only the absolute minimum traffic in or out of the internal network. Firewall security is especially important when running your infrastructure in a public cloud. Wherever you host your clusters, DataStax strongly recommends using a firewall on all nodes in your cluster.
Begin with a restrictive configuration that blocks all traffic except SSH. Then open the following ports, in compliance with your security requirements, to allow communication between the nodes. If these ports are not open, a node acts as a standalone database server rather than joining the cluster when you start DataStax Enterprise (DSE) on it.
If the cluster uses SSL only, close any non-SSL ports that have dedicated SSL counterparts. To ensure that communication with non-SSL clients is not disrupted, DataStax recommends testing the configuration in a staging environment before enabling the firewall in production.
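As a starting point, the following is a minimal iptables sketch for a single node. The internal subnet 10.0.0.0/24 is a placeholder, and only a handful of ports from the table below are shown; add a rule for each service you actually run, and adapt the same policy to firewalld, ufw, or cloud security groups if you use those instead.

```
# Drop all inbound traffic by default; allow loopback and established sessions.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow SSH so the node remains reachable for administration.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow internode and client traffic only from the internal subnet.
# 10.0.0.0/24 is a placeholder -- substitute your cluster's network.
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 7000 -j ACCEPT   # internode
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 7001 -j ACCEPT   # internode (SSL)
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 9042 -j ACCEPT   # native clients
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 9142 -j ACCEPT   # native clients (SSL)
```

Rules added this way are not persistent across reboots; save them with your distribution's usual mechanism (for example, iptables-save).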
cassandra.yaml
The location of the cassandra.yaml file depends on the type of installation:
Package installations | /etc/dse/cassandra/cassandra.yaml
Tarball installations | installation_location/resources/cassandra/conf/cassandra.yaml
cassandra-env.sh
The location of the cassandra-env.sh file depends on the type of installation:
Package installations | /etc/dse/cassandra/cassandra-env.sh
Tarball installations | installation_location/resources/cassandra/conf/cassandra-env.sh
spark-defaults.conf
The default location of the spark-defaults.conf file depends on the type of installation:
Package installations | /etc/dse/spark/spark-defaults.conf
Tarball installations | installation_location/resources/spark/conf/spark-defaults.conf
spark-env.sh
The default location of the spark-env.sh file depends on the type of installation:
Package installations | /etc/dse/spark/spark-env.sh
Tarball installations | installation_location/resources/spark/conf/spark-env.sh
dse.yaml
The location of the dse.yaml file depends on the type of installation:
Package installations | /etc/dse/dse.yaml
Tarball installations | installation_location/resources/dse/conf/dse.yaml
Procedure
Default port | Service | Configurable in |
---|---|---|
Public facing ports | ||
22 | SSH (default) | See your OS documentation on sshd. |
DataStax Enterprise public ports | ||
(random) | Spark port for the driver to listen on. Used for communicating with the executors and the standalone Master. In client mode, this port is opened on the local node where the Spark application was started. In cluster mode, this port is opened on one of the Analytics nodes, selected randomly, and only on the network interface used for internode communication. To explicitly set the port, set the spark.driver.port property in the Spark driver. If an application is already using the designated port, the port number is incremented up to the setting of the spark.port.maxRetries property. For example, if spark.driver.port is set to 11000 and spark.port.maxRetries is set to 10, the driver attempts to bind to port 11000; if that fails, it increments the port number and retries, stopping at port 11010. | spark-defaults.conf, or the --conf option on the command line (see the Spark configuration sketch after this table). |
(random) | Spark port for all block managers to listen on. These ports exist on both the driver and the executors. To explicitly set the port, set the spark.blockManager.port property. If an application is already using the designated port, the port number is incremented up to the setting of the spark.port.maxRetries property. For example, if spark.blockManager.port is set to 11000 and spark.port.maxRetries is set to 10, the block manager attempts to bind to port 11000; if that fails, it increments the port number and retries, stopping at port 11010. | spark-defaults.conf, or the --conf option on the command line. |
(random) | spark.executor.port: Spark port for the executor to listen on. Used for communicating with the driver. | spark-defaults.conf, or the --conf option on the command line. |
(random) | spark.shuffle.service.port: Spark port on which the external shuffle service runs. | The SPARK_SHUFFLE_OPTS variable in spark-env.sh. |
4040 | Spark application web UI port. If an application is already using the designated port, the port number is incremented up to the setting of the spark.port.maxRetries property. For example, if spark.port.maxRetries is set to 10, the application attempts to bind to port 4041 and repeats until it reaches port 4050. | spark-defaults.conf, or the --conf option on the command line. |
5598, 5599 | Public/internode ports for DSE File System (DSEFS) clients. | dse.yaml |
7080 | Spark Master web UI port. | spark-env.sh |
7081 | Spark Worker web UI port. | spark-env.sh |
8182 | The Gremlin Server port for DSE Graph. | See Graph configuration. |
8983 | DSE Search (Solr) port and Demo applications web site port (Portfolio, Search, Search log, Weather Sensors) | |
8090 | Spark Jobserver REST API port. | See Spark Jobserver. |
9042 | DSE database native clients port. Enabling native transport encryption in client_encryption_options provides the option to use encryption for the standard port, or to use a dedicated port in addition to the unencrypted native_transport_port. When SSL is enabled, port 9142 is used by native clients instead. | cassandra.yaml |
9091 | The DataStax Studio server port. | See DataStax Studio documentation. Configure in dse_studio_install_dir/configuration.yaml. |
9077 | AlwaysOn SQL web UI port. | See the AlwaysOn SQL documentation. |
9142 | DSE client port when SSL is enabled. Enabling client encryption and keeping native_transport_port_ssl disabled uses encryption for native_transport_port. Setting native_transport_port_ssl to a different value from native_transport_port uses encryption for native_transport_port_ssl while keeping native_transport_port unencrypted. | See Configuring SSL for client-to-node connections, and the cassandra.yaml sketch after this table. |
9999 | Spark Jobserver JMX port. Required only if Spark Jobserver is running and remote access to JMX is required. | |
18080 | Spark application history server web site port. Only required if Spark application history server is running. Can be changed with the spark.history.ui.port setting. | See Spark history server. |
OpsCenter public ports | ||
8888 | OpsCenter web site port. The opscenterd daemon listens on this port for HTTP requests coming directly from the browser. | opscenterd.conf |
Internode ports | ||
DSE database internode communication ports | ||
5599 | Private port for DSEFS internode communication. Must not be visible outside the cluster. | dse.yaml (see the DSEFS sketch after this table). |
7000 | DSE internode cluster communication port. | cassandra.yaml |
7001 | DSE SSL internode cluster communication port. | cassandra.yaml |
7199 | DSE JMX metrics monitoring port. DataStax recommends allowing connections only from the local node. Configure SSL and JMX authentication when allowing connections from other nodes. | cassandra-env.sh (see the JMX sketch after this table). |
DataStax Enterprise internode ports | ||
7077 | Spark Master internode communication port. | dse.yaml |
8609 | Port for internode messaging service. | dse.yaml |
Spark SQL Thrift server | ||
10000 | Spark SQL Thrift server port. Only required if Spark SQL Thrift server is running. | Set with the -p option with the Spark SQL Thrift server. |
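The Spark ports listed as (random) in the table can be pinned so that firewall rules stay predictable. The fragments below are a sketch only: the port numbers are placeholders, not recommendations, and myapp.jar is a hypothetical application.

```
# spark-defaults.conf -- example values only
spark.driver.port        11000
spark.blockManager.port  11100
spark.port.maxRetries    10
```

The same properties can also be passed per application with --conf, for example:

```
dse spark-submit --conf spark.driver.port=11000 --conf spark.port.maxRetries=10 myapp.jar
```

If the external shuffle service is used, its port is set through the SPARK_SHUFFLE_OPTS variable in spark-env.sh, for example export SPARK_SHUFFLE_OPTS="-Dspark.shuffle.service.port=7337" (again, an example value).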
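The DSEFS public and private ports (5598 and 5599) are set in dse.yaml. The fragment below is a sketch only; the option names under dsefs_options are an assumption, so verify them against the dse.yaml shipped with your DSE version.

```
# dse.yaml -- DSEFS ports (sketch; verify option names against your dse.yaml)
dsefs_options:
    enabled: true
    public_port: 5598    # DSEFS clients
    private_port: 5599   # internode only; keep firewalled from outside the cluster
```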
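For ports 9042 and 9142, the dual-port arrangement described in the table corresponds to the following cassandra.yaml fragment. This is a minimal sketch; keystore, truststore, and cipher settings are omitted (see Configuring SSL for client-to-node connections).

```
# cassandra.yaml -- unencrypted and dedicated SSL native transport ports
native_transport_port: 9042
native_transport_port_ssl: 9142
client_encryption_options:
    enabled: true
    # keystore, truststore, and cipher settings omitted from this sketch
```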
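For port 7199, keeping JMX local to the node is controlled in cassandra-env.sh. The fragment below sketches the relevant settings; allow remote connections only with JMX authentication and SSL configured.

```
# cassandra-env.sh -- JMX listens on port 7199
JMX_PORT="7199"

# LOCAL_JMX=yes binds JMX to the local node only. Set it to "no" only when
# remote monitoring is required, and then enable JMX authentication and SSL.
LOCAL_JMX=yes
```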