Configuring Spark nodes
Configure Spark nodes in datacenters that are separate from nodes running other types of workloads, such as Cassandra real-time and DSE Search.
Spark nodes must be configured in their own datacenters. Do not run Spark node types in the same datacenter with nodes running Cassandra node types. DSE SearchAnalytics clusters can use DSE Search queries within DSE Analytics jobs. You can run Spark alongside integrated Hadoop or BYOH, but not on the same node. Be sure to follow the workload isolation guidelines.
For use with Spark, the default location of the hive-site.xml file is:
Installer-Services and Package installations | /etc/dse/spark/hive-site.xml |
Installer-No Services and Tarball installations | install_location/resources/spark/conf/hive-site.xml |
For use with Hive, the default location of the hive-site.xml file is:
Installer-Services and Package installations | /etc/dse/hive/hive-site.xml |
Installer-No Services and Tarball installations | install_location/resources/hive/conf/hive-site.xml |
Set environment variables
DataStax recommends using the default values of Spark environment variables unless you need to increase the memory settings due to an OutOfMemoryError condition or garbage collection taking too long. Use the Spark memory configuration options in the dse.yaml and spark-env.sh files.
You can set a user-specific SPARK_HOME directory if you also set ALLOW_SPARK_HOME=true in your environment before starting DSE.
For example, on Debian or Ubuntu using a package installation:
$ export SPARK_HOME=$HOME/spark
$ export ALLOW_SPARK_HOME=true
$ sudo service dse start
The default location of the dse.yaml file depends on the type of installation:
Installer-Services and Package installations | /etc/dse/dse.yaml |
Installer-No Services and Tarball installations | install_location/resources/dse/conf/dse.yaml |
The default location of the spark-env.sh file depends on the type of installation:
Installer-Services and Package installations | /etc/dse/spark/spark-env.sh |
Installer-No Services and Tarball installations | install_location/resources/spark/conf/spark-env.sh |
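For example, if the Spark History server or Spark SQL Thrift server hits an OutOfMemoryError, you might raise SPARK_DAEMON_MEMORY in spark-env.sh. This is a minimal sketch; the value is illustrative and should be sized for your hardware:
# spark-env.sh -- illustrative value only
# Increases the heap used by the Spark History server and Spark SQL Thrift server
export SPARK_DAEMON_MEMORY="2g"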
Protect Spark directories
After you start up a Spark cluster, DataStax Enterprise creates a Spark work directory for each Spark Worker on worker nodes. A worker node can have more than one worker, configured by the SPARK_WORKER_INSTANCES option in spark-env.sh. If SPARK_WORKER_INSTANCES is undefined, a single worker is started. The work directory contains the standard output and standard error of executors and other application-specific data stored by the Spark Worker and executors; the directory is writable only by the Cassandra user.
By default, the Spark parent work directory is located in /var/lib/spark/work, with each worker in a subdirectory named worker-number, where the number starts at 0. To change the parent worker directory, configure SPARK_WORKER_DIR in the spark-env.sh file.
The Spark RDD directory is where RDDs are written when executors decide to spill them to disk. This directory might contain data from the database or the results of running Spark applications. If the data in the directory is confidential, prevent access by unauthorized users. The RDD directory can contain a significant amount of data, so locate it on a fast disk. The directory is writable only by the Cassandra user. The default location of the Spark RDD directory is /var/lib/spark/rdd. To change the RDD directory, configure SPARK_LOCAL_DIRS in the spark-env.sh file.
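For example, a spark-env.sh fragment that moves both directories onto a dedicated fast disk. The mount point shown is hypothetical; use a path that exists on your nodes:
# spark-env.sh -- illustrative paths only
export SPARK_WORKER_DIR="/mnt/fast-disk/spark/work"   # parent directory for the per-worker subdirectories
export SPARK_LOCAL_DIRS="/mnt/fast-disk/spark/rdd"    # where executors spill RDDs to disk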
Grant access to default Spark directories
$ sudo mkdir -p /var/lib/spark/rdd; sudo chmod a+w /var/lib/spark/rdd; sudo chown -R $USER:$GROUP /var/lib/spark/rdd
$ sudo mkdir -p /var/log/spark; sudo chown -R $USER:$GROUP /var/log/spark
In clusters with multiple datacenters, use a virtual datacenter to isolate Spark jobs. Running Spark jobs consumes resources that can affect latency and throughput.
DataStax Enterprise supports the use of Cassandra virtual nodes (vnodes) with Spark.
Secure Spark nodes
- Client-to-node SSL
- Ensure that the truststore entries in cassandra.yaml are present as described in Client-to-node encryption, even when client authentication is not enabled.
- Enabling security and authentication
- Shared secret authentication is enabled using the spark_security_enabled option. To encrypt Spark connections for all components except the web UI, enable spark_security_encryption_enabled. Both options are described in DSE Analytics options (see the dse.yaml sketch after this list).
- JAR files on CFS
- When JAR files are on the Cassandra file system (CFS) and authentication is enabled, enable Spark applications in cluster mode.
- Cassandra credentials for the Spark SQL Thrift server
- In the hive-site.xml file, configure Cassandra authentication credentials for the Spark SQL Thrift server. Ensure that you use the hive-site.xml file in the Spark directory:
Installer-Services and Package installations | /etc/dse/spark/hive-site.xml |
Installer-No Services and Tarball installations | install_location/resources/spark/conf/hive-site.xml |
- Kerberos with Spark
- With Kerberos authentication, the Spark launcher connects to DSE with Kerberos credentials and requests DSE to generate a delegation token. The Spark driver and executors use the delegation token to connect to the cluster. For valid authentication, the delegation token must be renewed periodically. For security reasons, the user who is authenticated with the token should not be able to renew it. Therefore, delegation tokens have two associated users: the token owner and the token renewer.
The token renewer is set to none so that only a DSE internal process can renew it. When the application is submitted, DSE automatically renews the delegation tokens that are associated with the Spark application. When the application is unregistered (finished), delegation token renewal is stopped and the token is cancelled.
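The following dse.yaml fragment is a sketch of enabling both shared secret authentication and Spark connection encryption; verify the option placement against the dse.yaml shipped with your DSE version:
# dse.yaml -- Spark security options (sketch only)
spark_security_enabled: true
spark_security_encryption_enabled: true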
Configure Spark memory and cores
Spark memory options affect different components of the Spark ecosystem:
- Spark History server and the Spark Thrift server memory
- The SPARK_DAEMON_MEMORY option configures the memory that is used by the spark-sql-thriftserver and history-server. Add or change this setting in the spark-env.sh file on nodes that run these server applications.
- Spark Worker memory
- The SPARK_WORKER_MEMORY option configures the total amount of memory that you can assign to all executors that are run by a single Spark Worker on the particular node.
- Application executor memory
- You can configure the amount of memory that each executor can consume for the application. Spark uses a 512 MB default. Use either the spark.executor.memory option, described in "Spark 1.6.2 Available Properties", or the --executor-memory mem argument to the dse spark command (see the command sketch after this list).
- Application memory
- You can configure additional Java options that are applied by the worker when spawning an executor for the application. Use the spark.executor.extraJavaOptions property, described in Spark 1.6.2 Available Properties. For example:
spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
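For example, a sketch of requesting 2 GB per executor when launching the Spark shell; the amount is illustrative:
$ dse spark --executor-memory 2g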
Core management
- Spark Worker cores
The SPARK_WORKER_CORES option configures the number of cores offered by the Spark Worker for use by executors. A single executor can borrow more than one core from the worker. The number of cores used by an executor relates to the number of parallel tasks the executor might perform. The number of cores offered by the cluster is the sum of the cores offered by all the workers in the cluster.
- Application cores
In the Spark configuration object of your application, you configure the number of application cores that the application requests from the cluster using either the spark.cores.max configuration property or the --total-executor-cores cores argument to the dse spark command (an example follows this list).
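As a sketch, the same limit can be set declaratively in spark-defaults.conf so that it applies to every application launched from that node; the value of four cores is illustrative:
spark.cores.max    4
The equivalent command-line form is --total-executor-cores 4.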
By default, DataStax Enterprise calculates the Spark Worker memory and cores from the initial_spark_worker_resources setting in dse.yaml:
Spark Worker memory = initial_spark_worker_resources * (total system memory - memory assigned to Cassandra)
Spark Worker cores = initial_spark_worker_resources * total system cores
To override the calculated defaults, uncomment and edit one or both of the SPARK_WORKER_MEMORY and SPARK_WORKER_CORES options in the spark-env.sh file.
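For example, a spark-env.sh fragment that pins the Worker resources explicitly; the values are illustrative and should be sized for your nodes:
# spark-env.sh -- illustrative values only
export SPARK_WORKER_MEMORY="10g"   # total memory available to all executors on this node
export SPARK_WORKER_CORES="4"      # total cores offered by this Worker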
Running Spark clusters in cloud environments
If you are using a cloud infrastructure provider like Amazon EC2, you must explicitly open the ports for publicly routable IP addresses in your cluster. If you do not, the Spark workers will not be able to find the Spark Master.
One workaround is to set the prefer_local setting in your cassandra-rackdc.properties snitch setup file to true:
# Uncomment the following line to make this snitch prefer the internal ip when possible, as the Ec2MultiRegionSnitch does.
prefer_local=true
This setting tells the cluster to communicate only on private IP addresses within the datacenter rather than on the publicly routable IP addresses.
The default location of the cassandra-rackdc.properties file depends on the type of installation:
Installer-Services and Package installations | /etc/dse/cassandra/cassandra-rackdc.properties |
Installer-No Services and Tarball installations | install_location/resources/cassandra/conf/cassandra-rackdc.properties |
Configuring the number of retries to retrieve Spark configuration
When Spark fetches configuration settings from DSE, it does not fail immediately if it cannot retrieve the configuration data, but retries 5 times by default, with an increasing delay between retries. The number of retries can be set with the spark.dse.configuration.fetch.retries configuration property when calling the dse spark command, or in spark-defaults.conf.
The default location of the spark-defaults.conf file depends on the type of installation:
Package installations | /etc/dse/spark/spark-defaults.conf |
Tarball installations | installation_location/resources/spark/conf/spark-defaults.conf |
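For example, to allow up to 10 retries (an illustrative value), add the following line to spark-defaults.conf, or pass the same property when calling dse spark:
spark.dse.configuration.fetch.retries    10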
Adding additional JARs to Spark nodes
Users can add third-party JARs to Spark nodes by adding them to the Spark lib directory on each node and restarting the cluster.
The default location of the Spark lib directory depends on the type of installation:
Installer-Services and Package installations | /usr/share/dse/spark/lib |
Installer-No Services and Tarball installations | install_location/resources/spark/lib/ |
For example, to copy the Graphite metrics JARs into the Spark lib directory on a package installation:
$ cp metrics-graphite-3.1.2.jar /usr/share/dse/spark/lib/
$ cp metrics-json-3.1.2.jar /usr/share/dse/spark/lib/