Configuring logging

Logging functionality uses Simple Logging Facade for Java (SLF4J) with a logback backend.

logback.xml

The location of the logback.xml file depends on the type of installation:
  • Package installations: /etc/dse/cassandra/logback.xml
  • Tarball installations: installation_location/resources/cassandra/conf/logback.xml

Logs are written to the system.log and debug.log files in the logging directory. You can configure logging programmatically or manually; manual options include editing the logback.xml (or logback-test.xml) file and running the nodetool setlogginglevel command.

Logback looks for the logback-test.xml file first, and then for the logback.xml file.
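
Logback also honors the standard logback.configurationFile system property. If your startup scripts do not already set this property, one way to point a node at an alternate configuration file is to pass the property through the JVM options, for example by appending it to JVM_OPTS in cassandra-env.sh. The file name below is hypothetical; adjust it to your environment.

# Hypothetical example: load an alternate logback configuration file.
# Add to cassandra-env.sh (or wherever your installation defines JVM options).
JVM_OPTS="$JVM_OPTS -Dlogback.configurationFile=/etc/dse/cassandra/logback-custom.xml"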

The following example details the XML configuration of the logback.xml file:
<configuration scan="true">
  <jmxConfigurator />

<!-- SYSTEMLOG rolling file appender to system.log (INFO level) -->

 <appender name="SYSTEMLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
     <level>INFO</level>
  </filter>
     <file>${cassandra.logdir}/system.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/system.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>20</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>20MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
  </encoder>
 </appender>

<!-- DEBUGLOG rolling file appender to debug.log (all levels) -->

 <appender name="DEBUGLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
     <file>${cassandra.logdir}/debug.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/debug.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>20</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>20MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
  </encoder>
 </appender>

<!-- ASYNCDEBUGLOG asynchronous appender to debug.log (all levels) -->

 <appender name="ASYNCDEBUGLOG" class="ch.qos.logback.classic.AsyncAppender">
     <queueSize>1024</queueSize>
     <discardingThreshold>0</discardingThreshold>
     <includeCallerData>true</includeCallerData>
     <appender-ref ref="DEBUGLOG" />
 </appender>

<!-- STDOUT console appender to stdout (INFO level) -->

 <if condition='isDefined("dse.console.useColors")'>
   <then>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <withJansi>true</withJansi>
      <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>INFO</level>
      </filter>
      <encoder>
        <pattern>%highlight(%-5level) [%thread] %green(%date{ISO8601}) %yellow(%X{service}) %F:%L - %msg%n</pattern>
      </encoder>
    </appender>
   </then>
  </if>
 <if condition='isNull("dse.console.useColors")'>
  <then>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>INFO</level>
     </filter>
     <encoder>
        <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
     </encoder>
    </appender>
  </then>
 </if>

 <include file="${SPARK_SERVER_LOGBACK_CONF_FILE}"/>
 <include file="${GREMLIN_SERVER_LOGBACK_CONF_FILE}"/>

  <!-- Uncomment the LogbackMetrics appender and the corresponding appender-ref in the root to activate
  <appender name="LogbackMetrics" class="com.codahale.metrics.logback.InstrumentedAppender" />
  -->

 <root level="${logback.root.level:-INFO}">
   <appender-ref ref="SYSTEMLOG" />
   <appender-ref ref="STDOUT" />
 <!-- Comment out the ASYNCDEBUGLOG appender to disable debug.log -->
   <appender-ref ref="ASYNCDEBUGLOG" />
 <!-- Uncomment LogbackMetrics and its associated appender to enable metric collecting for logs. -->
 <!-- <appender-ref ref="LogbackMetrics" /> -->
   <appender-ref ref="SparkMasterFileAppender" />
   <appender-ref ref="SparkWorkerFileAppender" />
   <appender-ref ref="GremlinServerFileAppender" />
 </root>

<!--audit log-->
 <appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
   <file>${cassandra.logdir}/audit/audit.log</file>
   <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
     <immediateFlush>true</immediateFlush>
     </encoder>
   <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>5</maxIndex>
   </rollingPolicy>
   <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>200MB</maxFileSize>
   </triggeringPolicy>
 </appender>

 <logger name="SLF4JAuditWriter" level="INFO" additivity="false">
   <appender-ref ref="SLF4JAuditWriterAppender"/>
 </logger>

 <appender name="DroppedAuditEventAppender" class="ch.qos.logback.core.rolling.RollingFileAppender" prudent=$
   <file>${cassandra.logdir}/audit/dropped-events.log</file>
   <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
     <immediateFlush>true</immediateFlush>
   </encoder>
   <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/audit/dropped-events.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>5</maxIndex>
   </rollingPolicy>
   <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>200MB</maxFileSize>
   </triggeringPolicy>
 </appender>

 <logger name="DroppedAuditEventLogger" level="INFO" additivity="false">
   <appender-ref ref="DroppedAuditEventAppender"/>
 </logger>

 <logger name="org.apache.cassandra" level="DEBUG"/>
 <logger name="com.datastax.bdp.db" level="DEBUG"/>
 <logger name="com.datastax.driver.core.NettyUtil" level="ERROR"/>
 <logger name="com.datastax.bdp.search.solr.metrics.SolrMetricsEventListener" level="DEBUG"/>
 <logger name="org.apache.solr.core.CassandraSolrConfig" level="WARN"/>
 <logger name="org.apache.solr.core.SolrCore" level="WARN"/>
 <logger name="org.apache.solr.core.RequestHandlers" level="WARN"/>
 <logger name="org.apache.solr.handler.component" level="WARN"/>
 <logger name="org.apache.solr.search.SolrIndexSearcher" level="WARN"/>
 <logger name="org.apache.solr.update" level="WARN"/>
 <logger name="org.apache.lucene.index" level="INFO"/>
 <logger name="com.cryptsoft" level="OFF"/>
 <logger name="org.apache.spark.rpc" level="ERROR"/>
</configuration>
The appender configurations specify where each log is written and how it is formatted and rolled. Each appender is declared with <appender name="appender_name" class="...">; the appenders in the default file are described as follows.
SYSTEMLOG
Writes messages at INFO level and above synchronously to the /var/log/cassandra/system.log file (unlike debug.log, which is written asynchronously).
DEBUGLOG | ASYNCDEBUGLOG
Generates the /var/log/cassandra/debug.log file, which contains an asynchronous log of events written to the system.log file, plus production logging information useful for debugging issues.
STDOUT
Directs logs to the console in a human-readable format.
LogbackMetrics
Records the rate of logged events by their logging level.
SLF4JAuditWriterAppender | DroppedAuditEventAppender
Used by the audit logging functionality.
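
As a sketch of how an appender and a logger fit together, the same pattern used by the audit appenders above can route one package's output to its own rolling file. The appender name, file name, and package below are hypothetical.

<!-- Hypothetical example: route com.example.myapp messages to their own rolling file -->
<appender name="MYAPPLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${cassandra.logdir}/myapp.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <fileNamePattern>${cassandra.logdir}/myapp.log.%i.zip</fileNamePattern>
    <minIndex>1</minIndex>
    <maxIndex>5</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>20MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
    <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
  </encoder>
</appender>

<!-- additivity="false" keeps these messages out of the root appenders (system.log and debug.log) -->
<logger name="com.example.myapp" level="INFO" additivity="false">
  <appender-ref ref="MYAPPLOG"/>
</logger>
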
The following logging functionality is configurable; an illustrative snippet follows this list:
  • Rolling policy
    • The policy for rolling logs over to an archive
    • Location and name of the log file
    • Location and name of the archive
    • Minimum and maximum file size to trigger rolling
  • Format of the message
  • The log level
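
For example, a minimal sketch of adjusting the message format and the threshold level, shown here for the console appender (the WARN level and the pattern are illustrative choices, not defaults):

<!-- Illustrative sketch: timestamp-first pattern and a WARN threshold for console output -->
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>WARN</level>  <!-- only WARN and ERROR reach the console -->
  </filter>
  <encoder>
    <pattern>%date{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
  </encoder>
</appender>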

Log levels

The valid values for setting the log level include ALL for logging information at all levels, TRACE through ERROR, and OFF for no logging. TRACE creates the most verbose log, and ERROR, the least.
  • ALL
  • TRACE
  • DEBUG
  • INFO (Default)
  • WARN
  • ERROR
  • OFF
Note: When the level is set to TRACE or DEBUG, the output appears only in debug.log. When set to INFO, debug.log is disabled.
Note: Increasing logging verbosity, for example to DEBUG or TRACE, can generate heavy logging output on even a moderately trafficked cluster.
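
Because the root element in the default file reads its level from ${logback.root.level:-INFO}, the root level can also be changed without editing the XML by setting that system property at startup. A sketch, assuming JVM options are added through cassandra-env.sh (adjust to wherever your installation defines them):

# Sketch: raise the root logger to DEBUG via the property referenced in the default logback.xml
JVM_OPTS="$JVM_OPTS -Dlogback.root.level=DEBUG"
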
Use the nodetool getlogginglevels command to see the current logging configuration.
bin/nodetool getlogginglevels
Logger Name                                        Log Level
ROOT                                               INFO
com.thinkaurelius.thrift                           ERROR
To permanently add debug logging to a class using the logback framework, use the nodetool setlogginglevel command to verify the component or class name before setting it in the logback.xml file (see the file locations listed above). Add the following line, or one like it, near the end of the file, inside the <configuration> element:
<logger name="org.apache.cassandra.gms.FailureDetector" level="DEBUG"/>
Restart the node for the change to take effect.
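
For example, to try a level on a running node before making it permanent in logback.xml (a change made with nodetool lasts only until the node restarts; the class below is the same one used in the example above):

nodetool setlogginglevel org.apache.cassandra.gms.FailureDetector DEBUG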

Migrating to logback from log4j

If you upgrade from an earlier version that used log4j, you can convert log4j.properties files to logback.xml using the logback PropertiesTranslator web application.

Using log file rotation

The default policy rolls the system.log file after its size exceeds 20MB. Archives are compressed in zip format. Logback names the archived files system.log.1.zip, system.log.2.zip, and so on. For more information, see the logback documentation.
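
To keep fewer, larger archives (or more, smaller ones), adjust the rolling and triggering policies for the appender in question. A sketch with illustrative values, based on the SYSTEMLOG appender above:

<!-- Illustrative values only: keep up to 10 archives and roll once system.log exceeds 50MB -->
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
  <fileNamePattern>${cassandra.logdir}/system.log.%i.zip</fileNamePattern>
  <minIndex>1</minIndex>
  <maxIndex>10</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
  <maxFileSize>50MB</maxFileSize>
</triggeringPolicy>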

Enabling extended compaction logging

To collect in-depth information about compaction activity on a node and write it to a dedicated log file, see the log_all property for compaction.