Configuring Spark nodes
Modify the settings for Spark node security, performance, and logging.
spark-defaults.conf
The default location of the spark-defaults.conf file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/spark/spark-defaults.conf |
Tarball installations | installation_location/resources/spark/conf/spark-defaults.conf |
hive-site.xml
For use with Spark, the default location of the hive-site.xml file is:
Installation type | Location |
---|---|
Package installations | /etc/dse/spark/hive-site.xml |
Tarball installations | installation_location/resources/spark/conf/hive-site.xml |
dse.yaml
The location of the dse.yaml file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/dse.yaml |
Tarball installations | installation_location/resources/dse/conf/dse.yaml |
cassandra.yaml
The location of the cassandra.yaml file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/cassandra/cassandra.yaml |
Tarball installations | installation_location/resources/cassandra/conf/cassandra.yaml |
spark-env.sh
The default location of the spark-env.sh file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/spark/spark-env.sh |
Tarball installations | installation_location/resources/spark/conf/spark-env.sh |
cassandra-rackdc.properties
The location of the cassandra-rackdc.properties file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/cassandra/cassandra-rackdc.properties |
Tarball installations | installation_location/resources/cassandra/conf/cassandra-rackdc.properties |
Set environment variables
DataStax recommends using the default values of Spark environment variables unless you need to increase the memory settings due to an OutOfMemoryError condition or garbage collection taking too long. Use the Spark memory configuration options in the dse.yaml and spark-env.sh files.
You can set a user-specific SPARK_HOME directory if you also set ALLOW_SPARK_HOME=true in your environment before starting DSE.
For example, on Debian or Ubuntu using a package installation:
export SPARK_HOME=$HOME/spark && export ALLOW_SPARK_HOME=true && sudo service dse start
The temporary directory for shuffle data, RDDs, and other ephemeral Spark data can be configured for both the locally running driver and for the Spark server processes managed by DSE (Spark Master, Workers, shuffle service, executor and driver running in cluster mode).
For the locally running Spark driver, the SPARK_LOCAL_DIRS environment variable can be customized in the user environment or in spark-env.sh. By default, it is set to the system temporary directory (for example, /tmp/ on Ubuntu). If there is no system temporary directory, SPARK_LOCAL_DIRS is set to a .spark directory in the user's home directory.
For all other Spark server processes, the SPARK_EXECUTOR_DIRS environment variable can be customized in the user environment or in spark-env.sh. By default, it is set to /var/lib/spark/rdd.
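For example, a minimal spark-env.sh sketch that moves both directories to a faster disk; the /mnt/spark paths are hypothetical placeholders:
# Hypothetical mount points; substitute paths that exist on your nodes.
export SPARK_LOCAL_DIRS=/mnt/spark/local
export SPARK_EXECUTOR_DIRS=/mnt/spark/rdd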
The SPARK_LOCAL_DIRS and SPARK_EXECUTOR_DIRS environment variable values differ from non-DSE Spark.
To configure worker cleanup, modify the SPARK_WORKER_OPTS environment variable and add the cleanup properties. The SPARK_WORKER_OPTS environment variable can be set in the user environment or in spark-env.sh. For example, the following enables worker cleanup, sets the cleanup interval to 30 minutes (1800 seconds), and sets the length of time that application worker directories are retained to 7 days (604800 seconds):
export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS \
  -Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=604800"
Protect Spark directories
After you start up a Spark cluster, DataStax Enterprise creates a Spark work directory for each Spark Worker on worker nodes. A worker node can have more than one worker, configured by the SPARK_WORKER_INSTANCES option in spark-env.sh. If SPARK_WORKER_INSTANCES is undefined, a single worker is started. The work directory contains the standard output and standard error of executors, as well as other application-specific data stored by the Spark Worker and executors; the directory is writable only by the DSE user.
By default, the Spark parent work directory is located in /var/lib/spark/work, with each worker in a subdirectory named worker-number, where the number starts at 0. To change the parent worker directory, configure SPARK_WORKER_DIR in the spark-env.sh file.
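For example, a minimal spark-env.sh sketch that runs two workers per node and relocates the parent work directory; the instance count and path are illustrative:
# Illustrative values; the default parent work directory is /var/lib/spark/work.
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_DIR=/mnt/spark/work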
The Spark RDD directory is the directory where RDDs are placed when executors decide to spill them to disk. This directory might contain data from the database or the results of running Spark applications. If the data in the directory is confidential, prevent access by unauthorized users. The RDD directory can hold a significant amount of data, so locate it on a fast disk. The directory is writable only by the cassandra user. The default location of the Spark RDD directory is /var/lib/spark/rdd.
To change the RDD directory, configure SPARK_EXECUTOR_DIRS in the spark-env.sh file.
Grant access to default Spark directories
sudo mkdir -p /var/lib/spark/rdd; sudo chmod a+w /var/lib/spark/rdd; sudo chown -R $USER:$GROUP /var/lib/spark/rdd
sudo mkdir -p /var/log/spark; sudo chown -R $USER:$GROUP /var/log/spark
In clusters with multiple datacenters, use a virtual datacenter to isolate Spark jobs. Running Spark jobs consumes resources that can affect latency and throughput.
DataStax Enterprise supports the use of virtual nodes (vnodes) with Spark.
Secure Spark nodes
- Client-to-node SSL
- Ensure that the truststore entries in cassandra.yaml are present as described in Client-to-node encryption, even when client authentication is not enabled.
- Enabling security and authentication
- Security is enabled using the spark_security_enabled option in dse.yaml. Setting it to enabled turns on authentication between the Spark Master and Worker nodes, and allows you to enable encryption. To encrypt Spark connections for all components except the web UI, enable spark_security_encryption_enabled. The length of the shared secret used to secure Spark components is set using the spark_shared_secret_bit_length option, with a default value of 256 bits. These options are described in DSE Analytics. For production clusters, enable authentication and encryption; doing so does not significantly affect performance. A dse.yaml sketch of these options follows this list.
- Authentication and Spark applications
- If authentication is enabled, users need to be authenticated in order to submit an application.
- Authorization and Spark applications
- If DSE authorization is enabled, users need permission to submit an application. Additionally, the user submitting the application automatically receives permission to manage the application, which can optionally be extended to other users.
- Database credentials for the Spark SQL Thrift server
- In the hive-site.xml file, configure authentication
credentials for the Spark SQL Thrift server. Ensure that you use the
hive-site.xml file in the Spark directory:
- Package installations: /etc/dse/spark/hive-site.xml
- Tarball installations: installation_location/resources/spark/conf/hive-site.xml
- Kerberos with Spark
- With Kerberos authentication, the Spark launcher connects to DSE with Kerberos
credentials and requests DSE to generate a delegation token. The Spark driver and
executors use the delegation token to connect to the cluster. For valid authentication,
the delegation token must be renewed periodically. For security reasons, the user who is
authenticated with the token should not be able to renew it. Therefore, delegation
tokens have two associated users: token owner and token renewer.
The token renewer is none so that only a DSE internal process can renew it. When the application is submitted, DSE automatically renews the delegation tokens that are associated with the Spark application. When the application is unregistered (finished), delegation token renewal stops and the token is cancelled.
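As a reference for the Enabling security and authentication item above, a minimal dse.yaml sketch of those options (boolean values for the two enable flags are assumed here; spark_shared_secret_bit_length defaults to 256):
spark_security_enabled: true
spark_security_encryption_enabled: true
spark_shared_secret_bit_length: 256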
Configure Spark memory and cores
Spark memory options affect different components of the Spark ecosystem:
- Spark History server and the Spark Thrift server memory
- The SPARK_DAEMON_MEMORY option configures the memory that is used by the Spark SQL Thrift server and the Spark History server. Add or change this setting in the spark-env.sh file on nodes that run these server applications.
- Spark Worker memory
- The memory_total option in the resource_manager_options.worker_options section of dse.yaml configures the total system memory that you can assign to all executors that are run by the work pools on the particular node. The default work pool uses all of this memory if no other work pools are defined. If you define additional work pools, you can set the total amount of memory by setting the memory option in the work pool definition.
- Application executor memory
- You can configure the amount of memory that each executor can consume for the
application. Spark uses a 512MB default. Use either the
spark.executor.memory option, described in "Spark Available Properties", or the
--executor-memory mem
argument to the dse spark command.
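For example, a sketch that requests 2 GB per executor (the value is illustrative) on the command line:
dse spark --executor-memory 2g
or the equivalent property in spark-defaults.conf:
spark.executor.memory 2g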
Application memory
You can configure additional Java options that are applied by the worker when spawning an
executor for the application. Use the spark.executor.extraJavaOptions
property, described in Spark 1.6.2 Available Properties. For example:
spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Core management
- Spark Worker cores
The cores_total option in the resource_manager_options.worker_options section of dse.yaml configures the total number of system cores available to Spark Workers for executors. If no work pools are defined in the resource_manager_options.workpools section of dse.yaml, the default work pool uses all the cores defined by cores_total. If additional work pools are defined, the default work pool uses the cores that remain after allocating the cores defined by the work pools.
A single executor can borrow more than one core from the worker. The number of cores used by the executor relates to the number of parallel tasks the executor might perform. The number of cores offered by the cluster is the sum of the cores offered by all the workers in the cluster.
- Application cores
In the Spark configuration object of your application, you configure the number of application cores that the application requests from the cluster using either the spark.cores.max configuration property or the
--total-executor-cores cores
argument to the dse spark command.
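For example, a sketch that caps an application at four cores (the value is illustrative) on the command line:
dse spark --total-executor-cores 4
or the equivalent property in the application's Spark configuration or spark-defaults.conf:
spark.cores.max 4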
The resource_manager_options.worker_options section in the dse.yaml file has options to configure the proportion of system resources that are made available to Spark Workers and any defined work pools, or explicit resource settings. When you specify decimal values of system resources, the available resources are calculated in the following way:
Spark Worker memory = memory_total * (total system memory - memory assigned to DSE)
Spark Worker cores = cores_total * total system cores
This calculation is used for any decimal value. If the setting is not specified, the default value of 0.7 is used. If the value does not contain a decimal point, the setting is the explicit number of cores or amount of memory reserved for Spark.
Setting cores_total or a work pool's cores to 1.0 is a decimal value, meaning 100% of the available cores are reserved. Setting cores_total or cores to 1 (no decimal point) is an explicit value, and one core is reserved.
The lowest values you can assign to a named work pool's memory and cores are 64 MB and 1 core, respectively. If the results are lower, no exception is thrown and the values are automatically limited.
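As an illustrative calculation with hypothetical node sizes: on a node with 64 GB of system memory, 8 GB of it assigned to DSE, and 16 cores, setting memory_total: 0.7 and cores_total: 0.7 yields 0.7 * (64 GB - 8 GB) = 39.2 GB of Spark Worker memory and 0.7 * 16 = 11.2, or roughly 11 whole cores for Spark Workers.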
The following example shows a work pool named workpool1 with 1 core and 512 MB of RAM assigned to it. The remaining resources calculated from the values in worker_options are assigned to the default work pool.
resource_manager_options:
    worker_options:
        cores_total: 0.7
        memory_total: 0.7
        workpools:
            - name: workpool1
              cores: 1
              memory: 512M
Running Spark clusters in cloud environments
If you are using a cloud infrastructure provider like Amazon EC2, you must explicitly open the ports for publicly routable IP addresses in your cluster. If you do not, the Spark workers will not be able to find the Spark Master.
One workaround is to set the prefer_local setting in your cassandra-rackdc.properties snitch setup file to true:
# Uncomment the following line to make this snitch prefer the internal ip when possible, as the Ec2MultiRegionSnitch does.
prefer_local=true
This tells the cluster to communicate only on private IP addresses within the datacenter rather than the public routable IP addresses.
Configuring the number of retries to retrieve Spark configuration
When Spark fetches configuration settings from DSE, it does not fail immediately if it cannot retrieve the configuration data, but retries 5 times by default, with an increasing delay between retries. Set the number of retries with the spark.dse.configuration.fetch.retries configuration property, either when calling the dse spark command or in spark-defaults.conf.
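For example, a minimal sketch that raises the retry count to 10 (the value is illustrative) by passing the property when launching the shell:
dse spark --conf spark.dse.configuration.fetch.retries=10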
Disabling continuous paging
Continuous paging streams bulk amounts of records from DSE to the DataStax Java Driver used
by DSE Spark. By default, continuous paging in queries is enabled. To disable it, set the
spark.dse.continuous_paging_enabled
setting to false when starting the
Spark SQL shell or in spark-defaults.conf. For example:
dse spark-sql --conf spark.dse.continuous_paging_enabled=false
Configuring the Spark web interface ports
By default the Spark web UI runs on port 7080. To change the port number, do the following:
- Open the spark-env.sh file in a text editor.
- Set the SPARK_MASTER_WEBUI_PORT variable to the new port number. For example, to set it to port 7082:
export SPARK_MASTER_WEBUI_PORT=7082
- Repeat these steps for each Analytics node in your cluster.
- Restart the nodes in the cluster.
Enabling Graphite Metrics in DSE Spark
You can add third-party JARs to Spark nodes by adding them to the Spark lib directory on each node and restarting the cluster. To enable Graphite metrics in DSE Spark, add the Graphite Metrics JARs to this directory.
- Package installations: /usr/share/dse/spark/lib
- Tarball installations: /var/lib/spark
cp metrics-graphite-3.1.2.jar /usr/share/dse/spark/lib/
cp metrics-json-3.1.2.jar /usr/share/dse/spark/lib/
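After the JARs are in place, the Graphite sink can be pointed at your Graphite server through Spark's standard metrics system. A minimal sketch of a metrics.properties entry, assuming Spark's built-in GraphiteSink; the host and port are hypothetical placeholders:
# Hypothetical Graphite endpoint; adjust for your environment.
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds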
Setting Spark properties for the driver and executor
Additional Spark properties for the Spark driver and executors
are set in spark-defaults.conf. For example, to enable
Spark's commons-crypto
encryption library:
spark.network.crypto.enabled true
Using authorization with Spark
Set permissions on roles to allow Spark applications to be started, stopped, managed, and viewed.
Spark server configuration
The spark-daemon-defaults.conf file configures DSE Spark Masters and Workers.
cassandra.yaml
The location of the cassandra.yaml file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/cassandra/cassandra.yaml |
Tarball installations | installation_location/resources/cassandra/conf/cassandra.yaml |
spark-daemon-defaults.conf
The default location of the spark-daemon-defaults.conf file depends on the type of installation:
Installation type | Location |
---|---|
Package installations | /etc/dse/spark/spark-daemon-defaults.conf |
Tarball installations | installation_location/resources/spark/conf/spark-daemon-defaults.conf |
Option | Default value | Description |
---|---|---|
dse.spark.application.timeout | 30 | The duration in seconds after which the application is considered dead if no heartbeat is received. |
spark.dseShuffle.sasl.port | 7447 | The port number on which a shuffle service for SASL secured applications is started. Bound to the listen_address in cassandra.yaml. |
spark.dseShuffle.noSasl.port | 7437 | The port number on which a shuffle service for unsecured applications is started. Bound to the listen_address in cassandra.yaml. |
By default, Spark executor logs, which log the majority of your Spark application output, are redirected to standard output. The output is managed by Spark Workers. Configure logging by adding spark.executor.logs.rolling.* properties to the spark-daemon-defaults.conf file. For example:
spark.executor.logs.rolling.maxRetainedFiles 3
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 50000
Additional Spark properties that affect the master and driver can be added to
spark-daemon-defaults.conf. For example, to enable Spark's
commons-crypto
encryption library:
spark.network.crypto.enabled true