Configuring Spark nodes
Modify settings for Spark node security, performance, and logging.
To manage Spark performance and operations, follow the steps in this topic.
Set environment variables
DataStax recommends using the default values of Spark environment variables unless you need to increase the memory settings due to an OutOfMemoryError condition or garbage collection taking too long.
Use the Spark memory configuration options in the dse.yaml
and spark-env.sh
files.
You can set a user-specific SPARK_HOME
directory if you also set ALLOW_SPARK_HOME=true
in your environment before starting DSE.
For example, on Debian or Ubuntu using a package installation:
export SPARK_HOME=$HOME/spark &&
export ALLOW_SPARK_HOME=true &&
sudo service dse start
The temporary directory for shuffle data, RDDs, and other ephemeral Spark data can be configured for both the locally running driver and for the Spark server processes managed by DSE (Spark Master, Workers, shuffle service, executor and driver running in cluster mode).
For the locally running Spark driver, the SPARK_LOCAL_DIRS
environment variable can be customized in the user environment or in spark-env.sh
.
By default, it is set to the system temporary directory.
For example, on Ubuntu it is /tmp/
.
If there’s no system temporary directory, then SPARK_LOCAL_DIRS
is set to a .spark
directory in the user’s home directory.
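For example, to point the locally running driver's temporary directory at a hypothetical scratch disk, set the variable before running a Spark command; the path is an assumption:
export SPARK_LOCAL_DIRS=/mnt/scratch/spark-local
dse spark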
For all other Spark server processes, the SPARK_EXECUTOR_DIRS
environment variable can be customized in the user environment or in spark-env.sh
.
By default it is set to /var/lib/spark/rdd
.
To configure worker cleanup, modify the SPARK_WORKER_OPTS
environment variable and add the cleanup properties.
The SPARK_WORKER_OPTS
environment variable can be set in the user environment or in spark-env.sh
.
The following example enables worker cleanup, sets the cleanup interval to 30 minutes (1800 seconds), and sets the retention time for application worker directories to 7 days (604800 seconds):
export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS \
-Dspark.worker.cleanup.enabled=true \
-Dspark.worker.cleanup.interval=1800 \
-Dspark.worker.cleanup.appDataTtl=604800"
Protect Spark directories
After you start up a Spark cluster, DataStax Enterprise creates a Spark work directory for each Spark Worker on worker nodes.
In DSE releases prior to version 5.0, configure the SPARK_WORKER_INSTANCES
option in spark-env.sh
to allow a worker node to contain more than one worker.
DSE starts a single worker when SPARK_WORKER_INSTANCES
is undefined.
Starting with DSE 5.0, this option is ignored because DSE uses a single worker that can run multiple executors for a single application.
The Spark Worker and executors store their standard output, standard error, and other application-specific data in the work directory. Only DSE users can write to the directory.
By default, the location of the Spark work directory is /var/lib/spark/worker
.
To change the parent worker directory, configure SPARK_WORKER_DIR in the spark-env.sh
file.
The Spark resilient distributed datasets (RDD) directory is the repository for RDDs when executors decide to spill them to disk.
This directory might contain the data from the database or the results of running Spark applications.
If the data in the directory is confidential, prevent access by unauthorized users.
As the RDD directory might contain a significant amount of data, configure its location on a fast disk.
The cassandra
user is the only user able to write to the directory.
The default location of the Spark RDD directory is /var/lib/spark/rdd
.
To change the RDD directory, configure SPARK_EXECUTOR_DIRS
in the spark-env.sh
file.
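For example, a spark-env.sh sketch that moves both the worker directory and the RDD directory onto a hypothetical dedicated disk mounted at /data/spark (the mount point is an assumption):
# Illustrative paths; choose a fast disk with sufficient space
export SPARK_WORKER_DIR=/data/spark/worker
export SPARK_EXECUTOR_DIRS=/data/spark/rdd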
Grant access to default Spark directories
Before starting up nodes on a tarball installation, you need permission to access the default Spark directory locations: /var/lib/spark
and /var/log/spark
.
Change ownership of these directories as follows:
sudo mkdir -p /var/lib/spark/rdd
sudo chmod a+w /var/lib/spark/rdd
sudo chown -R $USER:$GROUP /var/lib/spark/rdd
sudo mkdir -p /var/log/spark
sudo chown -R $USER:$GROUP /var/log/spark
In clusters with multiple datacenters, use a virtual datacenter to isolate Spark jobs. Running Spark jobs consumes resources that can affect latency and throughput.
DataStax Enterprise supports the use of virtual nodes (vnodes) with Spark.
Secure Spark nodes
- Client-to-node SSL
Read the description in Client-to-node encryption and make sure the truststore entries in cassandra.yaml are present, even if client authentication is not enabled.
- Enabling security and authentication
Enable security using the spark_security_enabled option in dse.yaml. Setting it to enabled turns on authentication between the Spark Master and Worker nodes, and allows you to enable encryption. To encrypt Spark connections for all components except the web UI, enable spark_security_encryption_enabled. Set the length of the shared secret used to secure Spark components using the spark_shared_secret_bit_length option, which defaults to 256 bits. These option descriptions are in DSE Analytics. For production clusters, enable these authentication and encryption options; doing so does not significantly affect performance.
- Authentication and Spark applications
If authentication is enabled, users must be authenticated in order to submit an application. See the example after this list.
- Authorization and Spark applications
When DSE authorization is enabled, users need permission to submit an application. The user who submits the application automatically receives permission to manage it. Optionally extend these permissions to other users.
- Database credentials for the Spark SQL Thrift server
In the hive-site.xml file, configure authentication credentials for the Spark SQL Thrift server. Ensure that you use the hive-site.xml file in the Spark directory:
  - Package installations: /etc/dse/spark/hive-site.xml
  - Tarball installations: <installation_location>/resources/spark/conf/hive-site.xml
- Kerberos with Spark
With Kerberos authentication, the Spark launcher connects to DSE with Kerberos credentials and requests that DSE generate a delegation token. The Spark driver and executors use the delegation token to connect to the cluster. For valid authentication, the delegation token must be renewed periodically. For security reasons, the user who is authenticated with the token must not be able to renew it. Therefore, delegation tokens have two associated users: the token owner and the token renewer.
The token renewer is set to none so that only a DSE internal process can renew it. DSE automatically renews delegation tokens associated with the Spark application upon submission. When the application is unregistered (finished), DSE stops renewing the delegation token and cancels it. To set Kerberos options, see Defining a Kerberos scheme.
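If password authentication is enabled, one way to supply credentials is on the dse command line when submitting an application. The following is a sketch of the command form; the user name, password, class name, and JAR are placeholders:
dse -u <username> -p <password> spark-submit --class com.example.MyApp myapp.jar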
Configure Spark memory and cores
Spark memory options affect different components of the Spark ecosystem:
- Spark History server and the Spark Thrift server memory
The SPARK_DAEMON_MEMORY option configures the memory that the Spark SQL Thrift server and the history server use. Add or change this setting in the spark-env.sh file on nodes that run these server applications.
- Spark Worker memory
The memory_total option in the resource_manager_options.worker_options section of dse.yaml configures the total system memory that you can assign to all executors run by the work pools on a worker node. In the absence of defined work pools, the default work pool uses all of this memory. You can set the total amount of memory for additional work pools by configuring the memory option in the work pool definition.
- Application executor memory
You can configure the amount of memory that each executor can consume for the application. Spark uses a 512MB default. Read the descriptions and use either the spark.executor.memory option in "Spark Available Properties", or the --executor-memory <mem> argument to the dse spark command. See the example after this list.
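For example, the following starts an interactive Spark shell with 4 GB per executor; the figure is illustrative only:
dse spark --executor-memory 4G
Equivalently, set spark.executor.memory 4g in spark-defaults.conf or in the application's Spark configuration.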
Application memory
You can configure additional Java options, and the worker applies them when spawning an executor for the application.
Review the description and use the spark.executor.extraJavaOptions
property in Spark 1.6.2 Available Properties.
For example: spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Core management
You can manage the number of cores by configuring these options.
- Spark Worker cores
The cores_total option in the resource_manager_options.worker_options section of dse.yaml configures the total number of system cores available to Spark Workers for executors. In the absence of work pool definitions in the resource_manager_options.workpools section of dse.yaml, the default work pool uses all of the cores defined by cores_total. If you define additional work pools, the default work pool uses the cores that remain after the additional work pools receive their allocations.
A single executor can borrow more than one core from the worker. The number of cores determines how many tasks the executor can run in parallel. The number of cores available in the cluster is the total of all cores that each worker in the cluster provides.
- Application cores
In the Spark configuration object of your application, configure the number of cores that the application requests from the cluster using either the spark.cores.max configuration property or the --total-executor-cores <cores> argument to the dse spark command. See the example after this list.
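For example, to cap an application at 8 cores across the cluster (the number is illustrative only):
dse spark --total-executor-cores 8
Equivalently, set spark.cores.max 8 in spark-defaults.conf or in the application's Spark configuration object.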
See the Spark documentation for details about memory and core allocation.
DataStax Enterprise can semi-automatically control the allocation of memory and cores to specific Spark Workers.
In the resource_manager_options.worker_options
section of the dse.yaml
file, you can configure the proportion of system resources allocated to Spark Workers and any defined work pools, or specify explicit resource settings.
When you specify decimal values for system resources, DSE calculates the available resources as follows:
-
Spark Worker memory = memory_total * (total system memory - memory assigned to DSE)
-
Spark Worker cores = cores_total * total system cores
DSE uses this calculation for any decimal values.
If the setting is not specified, DSE uses the default value of 0.7. If the value does not contain a decimal point, DSE treats it as the explicit number of cores or amount of memory to reserve for Spark.
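For example, on a hypothetical node with 16 cores and 64 GB of RAM, 16 GB of which is assigned to DSE, the default value of 0.7 gives Spark Workers approximately 0.7 * 16 ≈ 11 cores and 0.7 * (64 GB - 16 GB) = 33.6 GB of memory; the figures are illustrative only.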
The lowest values that you can assign to a named work pool are 64 MB of memory and 1 core. Lower values do not throw an exception; DSE automatically adjusts them to these minimums.
The following example shows a work pool named workpool1
with assignments of 1 core and 512 MB of RAM.
DSE assigns the remaining resources calculated from the values in worker_options
to the default
work pool.
resource_manager_options:
worker_options:
cores_total: 0.7
memory_total: 0.7
workpools:
- name: workpool1
cores: 1
memory: 512M
Running Spark clusters in cloud environments
If you are using a cloud infrastructure provider like Amazon EC2, you must explicitly open the ports for publicly routable IP addresses in your cluster. Without open ports, the Spark Workers cannot find the Spark Master.
One workaround is to set the prefer_local
setting in your cassandra-rackdc.properties
snitch setup file to true
:
# Uncomment the following line to make this snitch prefer the internal IP when possible, as the `Ec2MultiRegionSnitch` does.
prefer_local=true
This setting tells the cluster to communicate only over private IP addresses within the datacenter rather than over publicly routable IP addresses.
Configuring the number of retries to retrieve Spark configuration
When Spark fetches configuration settings from DSE, it does not fail immediately if it cannot retrieve the configuration data.
It retries 5 times by default, with increasing delay between retries.
Set the number of retries in the Spark configuration by modifying the spark.dse.configuration.fetch.retries
configuration property when calling the dse spark
command, or in spark-defaults.conf
.
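For example, to allow 10 retries (an arbitrary value), pass the property with --conf when launching the Spark shell:
dse spark --conf spark.dse.configuration.fetch.retries=10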
Disabling continuous paging
Continuous paging streams bulk amounts of records from DSE to the DataStax Java Driver that DSE Spark uses.
By default, DSE enables continuous paging in queries.
To disable it, set the spark.dse.continuous_paging_enabled
option to false
when starting the Spark SQL shell or set it in spark-defaults.conf
.
For example:
dse spark-sql --conf spark.dse.continuous_paging_enabled=false
Using continuous paging can improve performance by up to three times, though the improvement depends on the data and the queries. Factors that affect the gain include the number of executor JVMs per node and the number of columns that the query selects. Fewer executor JVMs per node and selecting more columns generally yield greater performance gains.
Configuring the Spark web interface ports
By default the Spark web UI runs on port 7080. To change the port number, do the following:
-
Open the spark-env.sh file in a text editor.
-
Set the SPARK_MASTER_WEBUI_PORT variable to the new port number. For example, to set it to port 7082:
export SPARK_MASTER_WEBUI_PORT=7082
-
Repeat these steps for each Analytics node in your cluster.
-
Restart the nodes in the cluster.
Enabling Graphite Metrics in DSE Spark
To add third-party JARs to Spark nodes, add them to the Spark lib
directory on each node and then restart the cluster.
Add the Graphite Metrics JARs to this directory to enable metrics in DSE Spark.
The default location of the Spark lib directory depends on the type of installation:
-
Package installations:
/usr/share/dse/spark/lib
-
Tarball installations:
/var/lib/spark
To add the Graphite JARs to Spark in a package installation, copy them to the Spark lib directory:
cp metrics-graphite-3.1.2.jar /usr/share/dse/spark/lib/ &&
cp metrics-json-3.1.2.jar /usr/share/dse/spark/lib/
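With the JARs in place, Spark's standard Graphite sink can be configured through its metrics properties. The following is a minimal sketch, assuming a Graphite server at graphite.example.com:2003; the host, port, and reporting period are assumptions:
# Spark metrics configuration (values are illustrative)
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds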
Setting Spark properties for the driver and executor
Set additional Spark properties for the Spark driver and executors in spark-defaults.conf
.
For example, to enable Spark’s commons-crypto
encryption library:
spark.network.crypto.enabled true
Spark server configuration
The spark-daemon-defaults.conf
file configures DSE Spark Masters and Workers.
Option | Default value | Description |
---|---|---|
 | 30 | The duration in seconds after which the application will be considered dead if no heartbeat is received. |
 | 7447 | The port number on which a shuffle service for SASL-secured applications is started. Bound to the |
 | 7437 | The port number on which a shuffle service for unsecured applications is started. Bound to the |
By default, Spark executor logs, which contain the majority of your Spark application output, are redirected to standard output.
The output is managed by Spark Workers.
Configure logging by adding spark.executor.logs.rolling.* properties to the spark-daemon-defaults.conf file.
For example:
spark.executor.logs.rolling.maxRetainedFiles 3
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 50000
Additional Spark properties that affect the master and driver can be added to spark-daemon-defaults.conf
.
For example, to enable Spark’s commons-crypto
encryption library:
spark.network.crypto.enabled true