Configuring Apache Spark™ nodes
Modify the settings for Spark node security, performance, and logging. The following sections describe how to manage Spark performance and operations.
Set environment variables
DataStax recommends using the default values of Spark environment variables unless you need to increase the memory settings due to an OutOfMemoryError
condition or garbage collection taking too long.
Use the Spark memory configuration options in the dse.yaml
and spark-env.sh
files.
Where is the dse.yaml file?
The location of the dse.yaml file depends on the type of installation (package or tarball).
You can set a user-specific SPARK_HOME
directory if you also set ALLOW_SPARK_HOME=true
in your environment before starting DSE.
For example, on Debian or Ubuntu using a package installation:
export SPARK_HOME=$HOME/spark && export ALLOW_SPARK_HOME=true && sudo service dse start
To configure worker cleanup, modify the SPARK_WORKER_OPTS
environment variable and add the cleanup properties.
The SPARK_WORKER_OPTS
environment variable can be set in the user environment or in spark-env.sh
.
Where is the spark-env.sh file?
The default location of the spark-env.sh file depends on the type of installation (package or tarball).
For example, the following setting enables worker cleanup, sets the cleanup interval to 30 minutes (1800 seconds), and retains application worker directories for 7 days (604800 seconds):
export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS \
-Dspark.worker.cleanup.enabled=true \
-Dspark.worker.cleanup.interval=1800 \
-Dspark.worker.cleanup.appDataTtl=604800"
Protect Spark directories
After you start up a Spark cluster, DataStax Enterprise creates a Spark work directory for each Spark Worker on worker nodes.
A worker node can have more than one worker, configured by the SPARK_WORKER_INSTANCES
option in spark-env.sh
.
If SPARK_WORKER_INSTANCES
is undefined, a single worker is started.
The work directory contains the standard output and standard error of executors, and other application-specific data stored by the Spark Worker and executors; the directory is writable only by the DSE user.
By default, the Spark parent work directory is located in /var/lib/spark/work
, with each worker in a subdirectory named worker-number
, where the number starts at 0.
To change the parent worker directory, configure SPARK_WORKER_DIR
in the spark-env.sh
file.
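For example, a hypothetical spark-env.sh entry that moves the parent work directory to /data/spark/work (an example path; it must be writable by the DSE service user):
export SPARK_WORKER_DIR=/data/spark/work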
The Spark RDD directory is the directory where RDDs are placed when executors decide to spill them to disk.
This directory might contain the data from the database or the results of running Spark applications.
If the data in the directory is confidential, prevent access by unauthorized users.
The RDD directory might contain a significant amount of data, so configure its location on a fast disk.
The directory is writable only by the cassandra
user.
The default location of the Spark RDD directory is /var/lib/spark/rdd.
To change the RDD directory, configure SPARK_LOCAL_DIRS
in the spark-env.sh
file.
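For example, a sketch that points RDD spill storage at a hypothetical fast local disk mounted at /mnt/fast-disk, set in spark-env.sh:
export SPARK_LOCAL_DIRS=/mnt/fast-disk/spark/rdd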
Grant access to default Spark directories
Before starting up nodes on a tarball installation, you need permission to access the default Spark directory locations: /var/lib/spark
and /var/log/spark
.
Change ownership of these directories as follows:
sudo mkdir -p /var/lib/spark/rdd
sudo chmod a+w /var/lib/spark/rdd
sudo chown -R $USER:$GROUP /var/lib/spark/rdd
sudo mkdir -p /var/log/spark
sudo chown -R $USER:$GROUP /var/log/spark
In clusters with multiple datacenters, use a virtual datacenter to isolate Spark jobs. Running Spark jobs consumes resources that can affect latency and throughput.
DataStax Enterprise supports the use of virtual nodes (vnodes) with Spark.
Secure Spark nodes
- Client-to-node SSL
-
Ensure that the truststore entries in cassandra.yaml are present as described in Client-to-node encryption, even when client authentication is not enabled.
Where is the cassandra.yaml file?
The location of the cassandra.yaml file depends on the type of installation:
Installation Type | Location |
---|---|
Package installations + Installer-Services installations | /etc/dse/cassandra/cassandra.yaml |
Tarball installations + Installer-No Services installations | <installation_location>/resources/cassandra/conf/cassandra.yaml |
- Enabling security and authentication
-
Security is enabled using the spark_security_enabled option in dse.yaml. Enabling it turns on authentication between the Spark Master and Worker nodes, and allows you to enable encryption. To encrypt Spark connections for all components except the web UI, enable spark_security_encryption_enabled. The length of the shared secret used to secure Spark components is set using the spark_shared_secret_bit_length option, with a default value of 256 bits. These options are described in DSE Analytics options. For production clusters, enable both authentication and encryption; doing so does not significantly affect performance.
- Authentication and Spark applications
-
If authentication is enabled, users need to be authenticated in order to submit an application.
- Authorization and Spark applications
-
If DSE authorization is enabled, users need permission to submit an application. Additionally, the user submitting the application automatically receives permission to manage the application, which can optionally be extended to other users.
- Database credentials for the Spark SQL Thrift server
-
In the hive-site.xml file, configure authentication credentials for the Spark SQL Thrift server. Ensure that you use the hive-site.xml file in the Spark directory.
Where is the hive-site.xml file?
The location of the hive-site.xml file depends on the type of installation (package or tarball).
- Kerberos with Spark
-
With Kerberos authentication, the Spark launcher connects to DSE with Kerberos credentials and requests that DSE generate a delegation token. The Spark driver and executors use the delegation token to connect to the cluster. For valid authentication, the delegation token must be renewed periodically. For security reasons, the user who is authenticated with the token should not be able to renew it. Therefore, delegation tokens have two associated users: the token owner and the token renewer. The token renewer is none, so that only a DSE internal process can renew the token. When the application is submitted, DSE automatically renews the delegation tokens that are associated with the Spark application. When the application is unregistered (finished), delegation token renewal is stopped and the token is cancelled. To set Kerberos options, see Defining a Kerberos scheme.
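A minimal sketch of launching the Spark shell with Kerberos in play, assuming a ticket is obtained with kinit first; the principal analytics_user@EXAMPLE.COM is a placeholder:
# Obtain a Kerberos ticket, then start the Spark shell through DSE.
kinit analytics_user@EXAMPLE.COM
dse spark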
Using authorization with Spark
There are two kinds of authorization permissions which apply to Spark. Work pool permissions control the ability to submit a Spark application to DSE. Submission permissions control the ability to manage a particular application. All the following instructions assume you are issuing the CQL commands as a database superuser.
Use GRANT CREATE ON ANY WORKPOOL TO role
to grant permission to submit a Spark application to any Analytics datacenter.
Use GRANT CREATE ON WORKPOOL datacenter_name TO role
to grant permission to submit a Spark application to a particular Analytics datacenter.
There are similar revoke commands:
REVOKE CREATE ON ANY WORKPOOL FROM role
REVOKE CREATE ON WORKPOOL datacenter_name FROM role
When an application is submitted, the user who submits that application is automatically granted permission to manage and remove the application. You may also grant the ability to manage the application to another user or role.
Use GRANT MODIFY ON ANY SUBMISSION TO role
to grant permission to manage any submission in any work pool to the specified role.
Use GRANT MODIFY ON ANY SUBMISSION IN WORKPOOL datacenter_name TO role
to grant permission to manage any submission in a specified datacenter.
Use GRANT MODIFY ON SUBMISSION id IN WORKPOOL datacenter_name TO role
to grant permission to manage a submission identified by the provided id
in a given datacenter.
There are similar revoke commands:
REVOKE MODIFY ON ANY SUBMISSION FROM role
REVOKE MODIFY ON ANY SUBMISSION IN WORKPOOL datacenter_name FROM role
REVOKE MODIFY ON SUBMISSION id IN WORKPOOL datacenter_name FROM role
In order to issue these commands as a regular database user, the user needs to have permission to use the DSE resource manager RPC:
GRANT ALL ON REMOTE OBJECT DseResourceManager TO role
Each DSE Analytics user needs to have permission to use the client tools RPC:
GRANT ALL ON REMOTE OBJECT DseClientTool TO role
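As a sketch, the following cqlsh commands (run as a superuser) grant a hypothetical role analytics_dev permission to submit applications to a datacenter named Analytics and to use the required RPCs; the role, datacenter, and superuser names are placeholders:
cqlsh -u admin_role -e "GRANT CREATE ON WORKPOOL Analytics TO analytics_dev;"
cqlsh -u admin_role -e "GRANT ALL ON REMOTE OBJECT DseResourceManager TO analytics_dev;"
cqlsh -u admin_role -e "GRANT ALL ON REMOTE OBJECT DseClientTool TO analytics_dev;"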
Configure Spark memory and cores
Spark memory options affect different components of the Spark ecosystem:
- Spark History server and the Spark Thrift server memory
-
The
SPARK_DAEMON_MEMORY
option configures the memory that is used by the Spark SQL Thrift server and the history server. Add or change this setting in the spark-env.sh file on nodes that run these server applications.
- Spark Worker memory
-
The
SPARK_WORKER_MEMORY
option configures the total amount of memory that you can assign to all executors that are run by a single Spark Worker on that node.
- Application executor memory
-
You can configure the amount of memory that each executor can consume for the application. Spark uses a 512MB default. Use either the
spark.executor.memory
option, described in "Spark 1.6.2 Available Properties", or the--executor-memory mem
argument to thedse spark command
.
Application memory
You can configure additional Java options that are applied by the worker when spawning an executor for the application.
Use the spark.executor.extraJavaOptions
property, described in Spark 2.0.2 Available Properties.
For example: spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Core management
You can manage the number of cores by configuring these options.
-
Spark Worker cores
The
SPARK_WORKER_CORES
option configures the number of cores offered by the Spark Worker for executors. A single executor can borrow more than one core from the worker. The number of cores used by the executor relates to the number of parallel tasks the executor might perform. The number of cores offered by the cluster is the sum of the cores offered by all the workers in the cluster.
-
Application cores
In the Spark configuration object of your application, you configure the number of application cores that the application requests from the cluster using either the spark.cores.max configuration property or the
--total-executor-cores cores
argument to the dse spark command.
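For example, a sketch of a dse spark invocation that requests both limits when starting the Spark shell; the values 2g and 4 are arbitrary:
dse spark --executor-memory 2g --total-executor-cores 4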
See the Spark documentation for details about memory and core allocation.
DataStax Enterprise can control the memory and cores offered by particular Spark Workers in a semi-automatic fashion.
The initial_spark_worker_resources
parameter in the dse.yaml
file specifies the fraction of system resources that are made available to the Spark Worker.
The available resources are calculated in the following way:
-
Spark Worker memory = initial_spark_worker_resources * (total system memory - memory assigned to DSE)
-
Spark Worker cores = initial_spark_worker_resources * total system cores
The lowest values you can assign to Spark Worker memory and cores are 64 MB and 1 core, respectively.
If the results are lower, no exception is thrown and the values are automatically limited.
The range of the initial_spark_worker_resources
value is 0.01 to 1.
If the value is not specified, the default of 0.7 is used.
This mechanism is used by default to set the Spark Worker memory and cores.
To override the default, uncomment and edit one or both SPARK_WORKER_MEMORY
and SPARK_WORKER_CORES
options in the spark-env.sh
file.
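For example, a minimal spark-env.sh override that pins the worker to hypothetical values of 10 GB and 6 cores instead of relying on the automatic calculation:
# Manual override in spark-env.sh; values are examples only.
export SPARK_WORKER_MEMORY=10g
export SPARK_WORKER_CORES=6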
Running Spark clusters in cloud environments
If you are using a cloud infrastructure provider like Amazon EC2, you must explicitly open the ports for publicly routable IP addresses in your cluster. If you do not, the Spark workers will not be able to find the Spark Master.
One work-around is to set the prefer_local
setting in your cassandra-rackdc.properties
snitch setup file to true:
Where is the cassandra-rackdc.properties file?
The location of the cassandra-rackdc.properties file depends on the type of installation (package or tarball).
# Uncomment the following line to make this snitch prefer the internal ip when possible, as the Ec2MultiRegionSnitch does.
prefer_local=true
This tells the cluster to communicate only on private IP addresses within the datacenter rather than the public routable IP addresses.
Configuring the number of retries to retrieve Spark configuration
When Spark fetches configuration settings from DSE, it does not fail immediately if it cannot retrieve the configuration data. Spark retries 5 times by default with increasing delay between retries.
The number of retries can be set in the Spark configuration by modifying the spark.dse.configuration.fetch.retries configuration property when calling the dse spark command, or by setting it in spark-defaults.conf.
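For example, a sketch that raises the retry count for a single invocation; the value 10 is an arbitrary example:
dse spark --conf spark.dse.configuration.fetch.retries=10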
Where is the spark-defaults.conf file?
The location of the spark-defaults.conf file depends on the type of installation (package or tarball).
Enabling continuous paging
Continuous paging streams bulk amounts of records from DSE to the DataStax Java Driver used by DSE Spark.
By default, continuous paging in queries is disabled.
To enable it, set the spark.dse.continuous_paging_enabled
setting to true when starting the Spark SQL shell or in spark-defaults.conf
.
For example:
dse spark-sql --conf spark.dse.continuous_paging_enabled=true
Using continuous paging can improve performance up to 3 times, though the improvement depends on the data and the queries. Some factors that impact the performance improvement are the number of executor JVMs per node and the number of columns included in the query. Greater performance gains were observed with fewer executor JVMs per node and more columns selected.
Configuring the Spark web interface ports
By default the Spark web UI runs on port 7080. To change the port number, do the following:
-
Open the
spark-env.sh
file in a text editor.
-
Set the
SPARK_MASTER_WEBUI_PORT
variable to the new port number. For example, to set it to port 7082:
export SPARK_MASTER_WEBUI_PORT=7082
-
Repeat these steps for each Analytics node in your cluster.
-
Restart the nodes in the cluster.
Enabling Graphite Metrics in DSE Spark
Users can add third-party JARs to Spark nodes by copying them to the Spark lib directory on each node and restarting the cluster. Add the Graphite Metrics JARs to this directory to enable metrics in DSE Spark.
Where is the Spark lib directory?
The location of the Spark lib directory depends on the type of installation (package or tarball).
To add the Graphite JARs to Spark in a package installation, copy them to the Spark lib directory:
cp metrics-graphite-3.1.2.jar /usr/share/dse/spark/lib/ &&
cp metrics-json-3.1.2.jar /usr/share/dse/spark/lib/
Spark server configuration
Where is the spark-daemon-defaults.conf file?
The location of the spark-daemon-defaults.conf file depends on the type of installation (package or tarball).
The spark-daemon-defaults.conf
file configures DSE Spark Masters and Workers.
Option | Default value | Description |
---|---|---|
 | 30 | The duration in seconds after which the application is considered dead if no heartbeat is received. |
 | 7447 | The port number on which a shuffle service for SASL secured applications is started. |
 | 7437 | The port number on which a shuffle service for unsecured applications is started. |
By default, Spark executor logs, which capture the majority of your Spark application output, are redirected to standard output. The output is managed by Spark Workers. Configure log rolling by adding spark.executor.logs.rolling.* properties to the spark-daemon-defaults.conf file.
spark.executor.logs.rolling.maxRetainedFiles 3
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 50000