
DataStax Enterprise 6.8 Security Guide


Securing DataStax Enterprise Ports

All network security starts with strict and proper firewall rules on interfaces that are exposed to the internet, allowing only the absolute minimum traffic in or out of the internal network. Firewall security is especially important when running your infrastructure in a public cloud. Wherever you host your clusters, DataStax strongly recommends using a firewall on all nodes in your cluster.

Begin with a restrictive configuration that blocks all traffic except SSH. Then open the following ports, in compliance with your security requirements, so that the nodes can communicate. If these ports are not open, a node acts as a standalone database server instead of joining the cluster when you start DataStax Enterprise (DSE) on it.
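The restrictive starting point described above can be sketched as follows. This is an illustrative sketch, not an official DSE script: it prints iptables rules for a default-deny inbound policy, SSH, and a sample of the DSE ports from the tables below, rather than applying them, so it is safe to run. Review and adapt the port list and firewall tool to your deployment before applying anything.

```shell
# Sketch only: emit (do not apply) a default-deny firewall with SSH plus
# a sample of DSE ports from the tables on this page.
emit_rules() {
  echo "iptables -P INPUT DROP"                           # block all inbound by default
  echo "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT"
  echo "iptables -A INPUT -p tcp --dport 22 -j ACCEPT"    # SSH
  # Internode cluster ports (7000/7001) and native client ports (9042/9142):
  for port in 7000 7001 9042 9142; do
    echo "iptables -A INPUT -p tcp --dport $port -j ACCEPT"
  done
}

emit_rules
```

Pipe the output to `sudo sh` only after reviewing it against your own port requirements.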

If the cluster uses SSL exclusively, close any non-SSL port that has a dedicated SSL counterpart. To avoid unintentionally cutting off non-SSL clients, DataStax recommends testing the configuration in a staging environment before enabling the firewall in production.

Make sure your firewall rules do not restrict traffic between DSE Analytics nodes; the DSE Spark Master and Worker processes require unrestricted communication with each other.

Table 1. Configuration Files
Filename and location (the location depends on the type of installation)

cassandra-env.sh

Package installations: /etc/dse/cassandra/cassandra-env.sh

Tarball installations: <installation_location>/resources/cassandra/conf/cassandra-env.sh

cassandra.yaml

Package installations: /etc/dse/cassandra/cassandra.yaml

Tarball installations: <installation_location>/resources/cassandra/conf/cassandra.yaml

dse.yaml

Package installations: /etc/dse/dse.yaml

Tarball installations: <installation_location>/resources/dse/conf/dse.yaml

spark-defaults.conf

Package installations: /etc/dse/spark/spark-defaults.conf

Tarball installations: <installation_location>/resources/spark/conf/spark-defaults.conf

spark-env.sh

Package installations: /etc/dse/spark/spark-env.sh

Tarball installations: <installation_location>/resources/spark/conf/spark-env.sh

opscenterd.conf

Package installations: /etc/opscenter/opscenterd.conf

Tarball installations: <installation_location>/conf/opscenterd.conf

Procedure

Open the following ports:

Table 2. Ports to Open for Configurable Services
Default port Service Configurable in

Public-facing ports

22

SSH (default)

See your OS documentation on sshd.

DataStax Enterprise public ports

(random)

Spark port for the driver to listen on. Used for communicating with the executors and the standalone Master. In client mode, this port is opened on the local node where the Spark application was started. In cluster mode, it is opened on a randomly selected Analytics node, and only on the network interface used for internode communication. To set the port explicitly, set the spark.driver.port property in the Spark driver. If an application is already using the designated port, Spark increments the port number, up to the setting of the spark.port.maxRetries property. For example, if spark.driver.port is set to 11000 and spark.port.maxRetries is set to 10, Spark attempts to bind to port 11000; if that fails, it increments the port number and retries, stopping at port 11010.

spark-defaults.conf and using the --conf option on the command line.
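For example, both properties can be pinned in spark-defaults.conf; the values shown are illustrative, not defaults:

```
# spark-defaults.conf -- illustrative values
spark.driver.port       11000
spark.port.maxRetries   10
```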

(random)

Spark port for all block managers to listen on. These ports exist on both the driver and the executors. To explicitly set the port, set the spark.blockManager.port property. If an application is already using the designated port, it increments the port number up to the setting of the spark.port.maxRetries property. For example, if spark.blockManager.port is set to 11000 and spark.port.maxRetries is set to 10, it attempts to bind to port 11000. If that fails it increments the port number and retries, stopping at port 11010.

spark-defaults.conf and using the --conf option on the command line.

(random)

Spark port for the executor to listen on. Used for communicating with the driver. Set with the spark.executor.port property.

spark-defaults.conf and using the --conf option on the command line.

(random)

Spark port on which the external shuffle service runs. Set with the spark.shuffle.service.port property.

The SPARK_SHUFFLE_OPTS variable in spark-env.sh.

4040

Spark application web UI port. If an application is already using the designated port, Spark increments the port number, up to the setting of the spark.port.maxRetries property. For example, if spark.port.maxRetries is set to 10 and port 4040 is in use, Spark attempts to bind to port 4041, and repeats until it reaches port 4050.

spark-defaults.conf and using the --conf option on the command line.
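The increment-until-maxRetries behavior described for the Spark ports above can be sketched in plain shell. This is an illustration of the documented logic, not actual Spark code:

```shell
# Sketch: which ports Spark will try, given a base port and
# spark.port.maxRetries (base, then base+1 ... base+maxRetries).
candidate_ports() {
  base=$1
  max_retries=$2
  i=0
  while [ "$i" -le "$max_retries" ]; do
    echo $((base + i))
    i=$((i + 1))
  done
}

# With spark.port.maxRetries=10, the web UI tries 4040 through 4050:
candidate_ports 4040 10
```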

5598, 5599

Public/internode ports for DSE File System (DSEFS) clients.

dse.yaml

7080

Spark Master web UI port.

spark-env.sh

7081

Spark Worker web UI port.

spark-env.sh

8182

The gremlin server port for DSE Graph.

See Graph configuration.

8983

DSE Search (Solr) port; also the web site port for the demo applications (Portfolio, Search, Search log, Weather Sensors).

8090

Spark Jobserver REST API port.

See Spark Jobserver.

9042

DSE database native clients port. Enabling native transport encryption in client_encryption_options provides the option to use encryption for the standard port, or to use a dedicated port in addition to the unencrypted native_transport_port. When SSL is enabled, port 9142 is used by native clients instead.

cassandra.yaml
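The dual-port behavior is controlled in cassandra.yaml. A minimal sketch of an SSL configuration with a dedicated encrypted port; the keystore path and password are placeholders:

```
# cassandra.yaml -- sketch; keystore path and password are placeholders
native_transport_port: 9042
native_transport_port_ssl: 9142   # dedicated SSL port; omit to encrypt 9042 instead
client_encryption_options:
  enabled: true
  keystore: conf/.keystore
  keystore_password: myKeyPass
```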

9091

The DataStax Studio server port.

See DataStax Studio documentation. Configure in <dse_studio_install_dir>/configuration.yaml.

9077

AlwaysOn SQL WebUI port.

See Configuring AlwaysOn SQL.

9142

DSE client port when SSL is enabled. Enabling client encryption and keeping native_transport_port_ssl disabled uses encryption for native_transport_port. Setting native_transport_port_ssl to a different value from native_transport_port uses encryption for native_transport_port_ssl while keeping native_transport_port unencrypted.

See Configuring SSL for client-to-node connections.

9999

Spark Jobserver JMX port. Required only if Spark Jobserver is running and remote access to JMX is required.

18080

Spark application history server web site port. Only required if Spark application history server is running. Can be changed with the spark.history.ui.port setting.

See Spark history server.

OpsCenter public ports

8888

OpsCenter web site port. The opscenterd daemon listens on this port for HTTP requests coming directly from the browser. See OpsCenter ports reference.

opscenterd.conf

Internode ports

DSE database internode communication ports

5599

Private port for DSEFS internode communication. Must not be visible outside the cluster.

dse.yaml

7000

DSE internode cluster communication port.

cassandra.yaml

7001

DSE SSL internode cluster communication port.

cassandra.yaml

7199

DSE JMX metrics monitoring port. DataStax recommends allowing connections only from the local node. Configure SSL and JMX authentication when allowing connections from other nodes.

cassandra-env.sh (see JMX options in Tuning Java Virtual Machine)
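The local-only recommendation above maps to settings in cassandra-env.sh. A sketch with variable names as found in the stock file; treat the commented remote-access flags as an assumption to verify against your DSE version:

```shell
# cassandra-env.sh fragment (sketch) -- keep JMX local unless you add auth/SSL
JMX_PORT="7199"
LOCAL_JMX=yes   # "yes": JMX accepts connections from the local node only

# To allow remote JMX instead (verify these flags for your JVM/DSE version),
# set LOCAL_JMX=no and require authentication and SSL, for example:
# JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
# JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=true"
```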

DataStax Enterprise internode ports

7077

Spark Master internode communication port.

dse.yaml

8609

Port for internode messaging service.

dse.yaml


Spark SQL Thrift server

10000

Spark SQL Thrift server port. Only required if Spark SQL Thrift server is running.

