Configuring the Spark history server

Load the event logs from Spark jobs that were run with event logging enabled.

The Spark history server provides a way to load the event logs from Spark jobs that were run with event logging enabled. It works only when the event log files were not flushed before the Spark Master attempted to build a history user interface.

Procedure

To enable the Spark history server:

  1. Create a directory for event logs in the Cassandra file system (CFS):
    dse hadoop fs -mkdir /spark/events 
  2. On each node in the cluster, edit the spark-defaults.conf file to enable event logging and specify the directory for event logs (a complete example file appears after this procedure):
    # Turns on event logging for applications submitted from this machine
    spark.eventLog.enabled true
    # Sets the directory where applications write their event logs
    spark.eventLog.dir cfs:/spark/events
    # Sets the directory the history server reads event logs from
    spark.history.fs.logDirectory cfs:/spark/events
  3. Start the Spark history server on one of the nodes in the cluster:

    The Spark history server is a front-end application that displays logging data from all nodes in the Spark cluster, and it can be started from any node.

    dse spark-history-server start 
    Note: The Spark Master web UI does not show the historical logs. To work around this known issue, access the history server web UI directly on port 18080 (a verification sketch follows this procedure).
  4. When event logging is enabled, all logs are saved by default, which causes storage use to grow over time. To enable automated cleanup, edit spark-defaults.conf and set the following options:
    spark.history.fs.cleaner.enabled true 
    spark.history.fs.cleaner.interval 1d
    spark.history.fs.cleaner.maxAge 7d

    With these settings, automated cleanup is enabled, the cleanup runs daily, and event logs older than seven days are deleted.
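
For reference, the event logging settings from step 2 and the cleanup settings from step 4 can be combined into a single spark-defaults.conf fragment like the one below. This is an example only: the cfs:/spark/events path matches the directory created in step 1, and the retention values are the sample values used above; adjust both to suit your cluster.

    # spark-defaults.conf -- event logging and history server settings (example)
    # Turns on event logging for applications submitted from this machine
    spark.eventLog.enabled true
    # Directory where applications write their event logs
    spark.eventLog.dir cfs:/spark/events
    # Directory the history server reads event logs from
    spark.history.fs.logDirectory cfs:/spark/events
    # Automated cleanup of old event logs: check daily, delete logs older than 7 days
    spark.history.fs.cleaner.enabled true
    spark.history.fs.cleaner.interval 1d
    spark.history.fs.cleaner.maxAge 7d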
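
After completing the procedure, a quick sanity check can confirm that event logs are being written and that the history server is reachable. The following shell sketch assumes the dse command is on the PATH and that the history server was started on the local node; localhost and port 18080 are the defaults described in step 3.

    # Confirm that the event log directory exists in CFS
    dse hadoop fs -ls /spark

    # Run a Spark application with event logging enabled, then confirm
    # that an event log file was written for it
    dse hadoop fs -ls /spark/events

    # Confirm that the history server web UI is reachable on port 18080
    curl -s http://localhost:18080/ | head -n 20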