Configuring Apache Spark™ nodes

Modify the settings for Spark node security, performance, and logging.

To manage Spark performance and operations:

  • Set the replication factor for DSE Analytics keyspaces

  • Set environment variables

  • Protect Spark directories

  • Grant access to default Spark directories

  • Secure Spark nodes

  • Configure Spark memory and cores

  • Configure Spark logging options

Set environment variables

DataStax recommends using the default values of Spark environment variables unless you need to increase the memory settings due to an OutOfMemoryError condition or garbage collection taking too long. Use the Spark memory configuration options in the dse.yaml and spark-env.sh files.

Where is the dse.yaml file?

The location of the dse.yaml file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/dse.yaml

  • Tarball and Installer-No Services installations: <installation_location>/resources/dse/conf/dse.yaml

You can set a user-specific SPARK_HOME directory if you also set ALLOW_SPARK_HOME=true in your environment before starting DSE.

For example, on Debian or Ubuntu using a package installation:

export SPARK_HOME=$HOME/spark
export ALLOW_SPARK_HOME=true
sudo service dse start

To configure worker cleanup, modify the SPARK_WORKER_OPTS environment variable and add the cleanup properties. The SPARK_WORKER_OPTS environment variable can be set in the user environment or in spark-env.sh.

Where is the spark-env.sh file?

The default location of the spark-env.sh file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/spark/spark-env.sh

  • Tarball and Installer-No Services installations: <installation_location>/resources/spark/conf/spark-env.sh

For example, the following enables worker cleanup, sets the cleanup interval to 30 minutes (1800 seconds), and retains application worker directories for 7 days (604800 seconds):

export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS \
  -Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=604800"

Protect Spark directories

After you start up a Spark cluster, DataStax Enterprise creates a Spark work directory for each Spark Worker on worker nodes. A worker node can have more than one worker, configured by the SPARK_WORKER_INSTANCES option in spark-env.sh. If SPARK_WORKER_INSTANCES is undefined, a single worker is started. The work directory contains the standard output and standard error of executors, as well as other application-specific data stored by the Spark Worker and executors; the directory is writable only by the DSE user.

By default, the Spark parent work directory is located in /var/lib/spark/work, with each worker in a subdirectory named worker-number, where the number starts at 0. To change the parent worker directory, configure SPARK_WORKER_DIR in the spark-env.sh file.

The Spark RDD directory is the directory where RDDs are placed when executors decide to spill them to disk. This directory might contain the data from the database or the results of running Spark applications. If the data in the directory is confidential, prevent access by unauthorized users. Because the RDD directory can hold a significant amount of data, locate it on a fast disk. The directory is writable only by the cassandra user. The default location of the Spark RDD directory is /var/lib/spark/rdd. To change the RDD directory, configure SPARK_LOCAL_DIRS in the spark-env.sh file.

Grant access to default Spark directories

Before starting up nodes on a tarball installation, you need permission to access the default Spark directory locations: /var/lib/spark and /var/log/spark. Change ownership of these directories as follows:

sudo mkdir -p /var/lib/spark/rdd
sudo chmod a+w /var/lib/spark/rdd
sudo chown -R $USER:$GROUP /var/lib/spark/rdd
sudo mkdir -p /var/log/spark
sudo chown -R $USER:$GROUP /var/log/spark

In multiple datacenter clusters, use a virtual datacenter to isolate Spark jobs. Running Spark jobs consumes resources that can affect latency and throughput.

DataStax Enterprise supports the use of virtual nodes (vnodes) with Spark.

Secure Spark nodes

Client-to-node SSL

Ensure that the truststore entries in cassandra.yaml are present as described in Client-to-node encryption, even when client authentication is not enabled.

Where is the cassandra.yaml file?

The location of the cassandra.yaml file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/cassandra/cassandra.yaml

  • Tarball and Installer-No Services installations: <installation_location>/resources/cassandra/conf/cassandra.yaml
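For example, a cassandra.yaml sketch of the relevant client encryption entries (the paths and passwords are placeholders):

client_encryption_options:
    enabled: true
    # Truststore entries must be present even when client
    # certificate authentication is not enabled.
    truststore: /path/to/.truststore
    truststore_password: truststore_password
    keystore: /path/to/.keystore
    keystore_password: keystore_password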

Enabling security and authentication

Security is enabled using the spark_security_enabled option in dse.yaml. Setting it to true turns on authentication between the Spark Master and Worker nodes, and allows you to enable encryption. To encrypt Spark connections for all components except the web UI, enable spark_security_encryption_enabled. The length of the shared secret used to secure Spark components is set with the spark_shared_secret_bit_length option; the default is 256 bits. These options are described in DSE Analytics options. For production clusters, enable both authentication and encryption. Doing so does not significantly affect performance.
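For example, a minimal dse.yaml sketch that enables both authentication and encryption (the secret length shown is the default):

# Authenticate Spark Master and Worker nodes
spark_security_enabled: true
# Encrypt Spark connections for all components except the web UI
spark_security_encryption_enabled: true
# Length of the shared secret in bits (256 is the default)
spark_shared_secret_bit_length: 256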

Authentication and Spark applications

If authentication is enabled, users need to be authenticated in order to submit an application.

Users of DSE 5.1.4, 5.1.5, and 5.1.6 should refer to the release notes for information on using Spark SQL applications and DSE authentication.

Authorization and Spark applications

If DSE authorization is enabled, users need permission to submit an application. Additionally, the user submitting the application automatically receives permission to manage the application, which can optionally be extended to other users.

Database credentials for the Spark SQL Thrift server

In the hive-site.xml file, configure authentication credentials for the Spark SQL Thrift server. Ensure that you use the hive-site.xml file in the Spark directory.

Where is the hive-site.xml file?

The location of the hive-site.xml file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/spark/hive-site.xml

  • Tarball and Installer-No Services installations: <installation_location>/resources/spark/conf/hive-site.xml

Kerberos with Spark

With Kerberos authentication, the Spark launcher connects to DSE with Kerberos credentials and requests that DSE generate a delegation token. The Spark driver and executors use the delegation token to connect to the cluster. For valid authentication, the delegation token must be renewed periodically. For security reasons, the user who is authenticated with the token should not be able to renew it. Therefore, delegation tokens have two associated users: the token owner and the token renewer. The token renewer is set to none, so that only a DSE internal process can renew the token. When the application is submitted, DSE automatically renews the delegation tokens that are associated with the Spark application. When the application is unregistered (finished), delegation token renewal stops and the token is cancelled. To set Kerberos options, see Defining a Kerberos scheme.

Using authorization with Spark

There are two kinds of authorization permissions that apply to Spark. Work pool permissions control the ability to submit a Spark application to DSE. Submission permissions control the ability to manage a particular application. All the following instructions assume you are issuing the CQL commands as a database superuser.

Use GRANT CREATE ON ANY WORKPOOL TO role to grant permission to submit a Spark application to any Analytics datacenter.

Use GRANT CREATE ON WORKPOOL datacenter_name TO role to grant permission to submit a Spark application to a particular Analytics datacenter.

There are similar revoke commands:

REVOKE CREATE ON ANY WORKPOOL FROM role
REVOKE CREATE ON WORKPOOL datacenter_name FROM role

When an application is submitted, the user who submits that application is automatically granted permission to manage and remove the application. You may also grant the ability to manage the application to another user or role.

Use GRANT MODIFY ON ANY SUBMISSION TO role to grant permission to manage any submission in any work pool to the specified role.

Use GRANT MODIFY ON ANY SUBMISSION IN WORKPOOL datacenter_name TO role to grant permission to manage any submission in a specified datacenter.

Use GRANT MODIFY ON SUBMISSION id IN WORKPOOL datacenter_name TO role to grant permission to manage a submission identified by the provided id in a given datacenter.

There are similar revoke commands:

REVOKE MODIFY ON ANY SUBMISSION FROM role
REVOKE MODIFY ON ANY SUBMISSION IN WORKPOOL datacenter_name FROM role
REVOKE MODIFY ON SUBMISSION id IN WORKPOOL datacenter_name FROM role

To issue these commands as a regular database user, the user must have permission to use the DSE resource manager RPC:

GRANT ALL ON REMOTE OBJECT DseResourceManager TO role

Each DSE Analytics user needs to have permission to use the client tools RPC:

GRANT ALL ON REMOTE OBJECT DseClientTool TO role
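For example, the following sketch grants a hypothetical role analytics_user everything needed to submit and manage applications in a datacenter named dc1 (both names are placeholders):

GRANT CREATE ON WORKPOOL dc1 TO analytics_user;
GRANT MODIFY ON ANY SUBMISSION IN WORKPOOL dc1 TO analytics_user;
GRANT ALL ON REMOTE OBJECT DseResourceManager TO analytics_user;
GRANT ALL ON REMOTE OBJECT DseClientTool TO analytics_user;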

Configure Spark memory and cores

Spark memory options affect different components of the Spark ecosystem:

Spark history server and Spark SQL Thrift server memory

The SPARK_DAEMON_MEMORY option configures the memory used by the Spark SQL Thrift server and the Spark history server. Add or change this setting in the spark-env.sh file on nodes that run these server applications.

Spark Worker memory

The SPARK_WORKER_MEMORY option configures the total amount of memory that can be assigned to all executors run by a single Spark Worker on that node.

Application executor memory

You can configure the amount of memory that each executor can consume for the application. Spark uses a 512 MB default. Use either the spark.executor.memory option, described in Spark 2.0.2 Available Properties, or the --executor-memory mem argument to the dse spark command.

Application memory

You can configure additional Java options that the worker applies when spawning an executor for the application. Use the spark.executor.extraJavaOptions property, described in Spark 2.0.2 Available Properties. For example:

spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

Core management

You can manage the number of cores by configuring these options.

  • Spark Worker cores

    The SPARK_WORKER_CORES option configures the number of cores offered by Spark Worker for executors. A single executor can borrow more than one core from the worker. The number of cores used by the executor relates to the number of parallel tasks the executor might perform. The number of cores offered by the cluster is the sum of cores offered by all the workers in the cluster.

  • Application cores

    In your application's Spark configuration object, configure the number of cores that the application requests from the cluster using either the spark.cores.max configuration property or the --total-executor-cores cores argument to the dse spark command.
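For example, to request at most 4 cores and 2 GB per executor for an application at submit time (the figures are arbitrary):

dse spark --total-executor-cores 4 --executor-memory 2G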

See the Spark documentation for details about memory and core allocation.

DataStax Enterprise can control the memory and cores offered by particular Spark Workers in a semi-automatic fashion. The initial_spark_worker_resources parameter in the dse.yaml file specifies the fraction of system resources that are made available to the Spark Worker. The available resources are calculated in the following way:

  • Spark Worker memory = initial_spark_worker_resources * (total system memory - memory assigned to DSE)

  • Spark Worker cores = initial_spark_worker_resources * total system cores

The lowest values you can assign to Spark Worker memory and cores are 64 MB and 1 core, respectively. If the calculated results are lower, no exception is thrown and the values are automatically clamped to these minimums. The valid range of the initial_spark_worker_resources value is 0.01 to 1. If no value is specified, the default of 0.7 is used.
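For example, on a hypothetical node with 64 GB of RAM, 8 GB of it assigned to DSE, and 16 cores, the default value of 0.7 yields approximately:

Spark Worker memory = 0.7 * (64 GB - 8 GB) ≈ 39 GB
Spark Worker cores  = 0.7 * 16 ≈ 11 cores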

This mechanism is used by default to set the Spark Worker memory and cores. To override the default, uncomment and edit one or both of the SPARK_WORKER_MEMORY and SPARK_WORKER_CORES options in the spark-env.sh file.
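For example, a spark-env.sh sketch that overrides both values (the figures are placeholders):

export SPARK_WORKER_MEMORY=32g
export SPARK_WORKER_CORES=10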

Running Spark clusters in cloud environments

If you are using a cloud infrastructure provider like Amazon EC2, you must explicitly open the ports for publicly routable IP addresses in your cluster. If you do not, the Spark workers will not be able to find the Spark Master.

One workaround is to set prefer_local to true in the cassandra-rackdc.properties snitch configuration file:

Where is the cassandra-rackdc.properties file?

The location of the cassandra-rackdc.properties file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/cassandra/cassandra-rackdc.properties

  • Tarball and Installer-No Services installations: <installation_location>/resources/cassandra/conf/cassandra-rackdc.properties

# Uncomment the following line to make this snitch prefer the internal ip when possible, as the Ec2MultiRegionSnitch does.
prefer_local=true

This setting tells the cluster to communicate only over private IP addresses within the datacenter, rather than over publicly routable IP addresses.

Configuring the number of retries to retrieve Spark configuration

When Spark fetches configuration settings from DSE, it does not fail immediately if it cannot retrieve the configuration data. Spark retries 5 times by default, with an increasing delay between retries. Set the number of retries with the spark.dse.configuration.fetch.retries configuration property, either when calling the dse spark command or in spark-defaults.conf.

Where is the spark-defaults.conf file?

The location of the spark-defaults.conf file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/spark/spark-defaults.conf

  • Tarball and Installer-No Services installations: <installation_location>/resources/spark/conf/spark-defaults.conf
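For example, to raise the retry count to 10 (an arbitrary value) when launching a job:

dse spark --conf spark.dse.configuration.fetch.retries=10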

Enabling continuous paging

Continuous paging streams bulk amounts of records from DSE to the DataStax Java Driver used by DSE Spark. By default, continuous paging in queries is disabled. To enable it, set the spark.dse.continuous_paging_enabled setting to true when starting the Spark SQL shell or in spark-defaults.conf. For example:

dse spark-sql --conf spark.dse.continuous_paging_enabled=true
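To persist the setting instead, add it to the spark-defaults.conf file:

spark.dse.continuous_paging_enabled true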

Using continuous paging can improve performance by up to a factor of 3, though the improvement depends on the data and the queries. Factors that affect the gain include the number of executor JVMs per node and the number of columns included in the query; greater gains were observed with fewer executor JVMs per node and more columns selected.

Configuring the Spark web interface ports

By default, the Spark web UI runs on port 7080. To change the port number, do the following:

  1. Open the spark-env.sh file in a text editor.

  2. Set the SPARK_MASTER_WEBUI_PORT variable to the new port number. For example, to set it to port 7082:

    export SPARK_MASTER_WEBUI_PORT=7082
  3. Repeat these steps for each Analytics node in your cluster.

  4. Restart the nodes in the cluster.

Enabling Graphite Metrics in DSE Spark

Add third-party JARs to Spark nodes by placing them in the Spark lib directory on each node, then restarting the cluster. To enable Graphite metrics in DSE Spark, add the Graphite Metrics JARs to this directory.

Where is the Spark lib directory?

The location of the Spark lib directory depends on the type of installation:

  • Package and Installer-Services installations: /usr/share/dse/spark/lib

  • Tarball and Installer-No Services installations: /var/lib/spark

To add the Graphite JARs to Spark in a package installation, copy them to the Spark lib directory:

cp metrics-graphite-3.1.2.jar /usr/share/dse/spark/lib/
cp metrics-json-3.1.2.jar /usr/share/dse/spark/lib/

Spark server configuration

Where is the spark-daemon-defaults.conf file?

The location of the spark-daemon-defaults.conf file depends on the type of installation:

  • Package and Installer-Services installations: /etc/dse/spark/spark-daemon-defaults.conf

  • Tarball and Installer-No Services installations: <installation_location>/resources/spark/conf/spark-daemon-defaults.conf

The spark-daemon-defaults.conf file configures DSE Spark Masters and Workers.

Table 1. Spark server configuration properties

  • dse.spark.application.timeout (default: 30)

    The duration in seconds after which the application is considered dead if no heartbeat is received.

  • spark.dseShuffle.sasl.port (default: 7447)

    The port number on which a shuffle service for SASL secured applications is started. Bound to the listen_address in cassandra.yaml.

  • spark.dseShuffle.noSasl.port (default: 7437)

    The port number on which a shuffle service for unsecured applications is started. Bound to the listen_address in cassandra.yaml.

By default, Spark executor logs, which capture the majority of your Spark application's output, are redirected to standard output and managed by Spark Workers. Configure log rolling by adding spark.executor.logs.rolling.* properties to the spark-daemon-defaults.conf file. For example:

spark.executor.logs.rolling.maxRetainedFiles 3
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 50000