Configuring audit logging to a logback log file

Steps to configure audit logging in DataStax Enterprise.

If you've enabled audit logging and set the logger to output to the SLF4JAuditWriter as described in Enabling data auditing in DataStax Enterprise, you can configure the logger by setting options in logback.xml or running the nodetool setlogginglevel command.
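For example, a runtime adjustment with nodetool might look like the following. This is a sketch that assumes the SLF4JAuditWriter logger name used later on this page; levels set this way take effect immediately but do not persist across restarts:
    # Set the audit logger to INFO at runtime (not persisted across restarts)
    nodetool setlogginglevel SLF4JAuditWriter INFO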

DataStax Enterprise writes the audit log to the directory defined in the logback.xml configuration file. When the log file reaches the configured size threshold, it rolls over and is renamed with a numerical suffix; the maximum number of archived files to keep is set by the maxIndex property. For example, with maxIndex set to 20, the archives are named audit.log.1.zip through audit.log.20.zip.

The location of the logback.xml file depends on the type of installation:
  • Installer-Services and Package installations: /etc/dse/cassandra/logback.xml
  • Installer-No Services and Tarball installations: install_location/resources/cassandra/conf/logback.xml

Sensitive data in log files 

Because auditing is configured through a text file in the file system, the log file is vulnerable to OS-level security breaches. You can address this issue by changing the DataStax Enterprise umask setting so that audit files are created with 600 permissions by default. Be aware that if other tools read the log files, restricting permissions this way can cause read problems for them. Alternatively, you can store the audit file on an OS-level encrypted file system such as Vormetric.
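As a quick illustration, the following commands restrict an existing audit log to its owner; the paths assume the default package-installation log location, so adjust them to match the <file> element in your logback.xml:
    # Owner-only access to the audit directory and any current or archived logs
    # (assumes the default log location; check logback.xml if yours differs)
    chmod 700 /var/log/cassandra/audit
    chmod 600 /var/log/cassandra/audit/audit.log*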

Tip: Redact sensitive data before you share log files for troubleshooting purposes.
For example, suppose that:
  • A password is inserted into a table in a column named password.
  • The audit logging options in dse.yaml are set to included_categories: DML, ... so that DML events (insert, update, delete, and other data manipulation language events) are included in the audit log.
In this case, you can redact the values of that column so that passwords do not appear in the log. Use the following to replace that string in the <encoder> section of the logback.xml file:
    <encoder>
        <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %replace(%msg){"password='.*'", "password='xxxxx'"}%n</pattern>
        ...
    </encoder>
The replace conversion word uses regular expressions to modify the message before it is written to the log file. For more information on using replace and other advanced features, see the logback documentation.
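As a hypothetical illustration (the exact entry layout depends on your pattern and DSE version), an UPDATE that sets a password would be rewritten as shown below. Note that the greedy .* in the regular expression matches through the last single quote on the line, so any text following password=' is masked as well:
    Before: ... UPDATE users SET password='s3cr3t' WHERE name='alice'
    After:  ... UPDATE users SET password='xxxxx'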

Configuring data auditing 

You can configure which categories of audit events (administration, authentication, DML, DDL, DCL, and query operations) to log, and whether to omit operations against specific keyspaces from audit logging.
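For example, a dse.yaml fragment like the following is one way to log only DML and authentication events while skipping a development keyspace. This is a sketch; the keyspace name dev_ks is hypothetical, and you should verify the option names against the audit logging options documented for your DSE version:
    audit_logging_options:
        enabled: true
        logger: SLF4JAuditWriter
        included_categories: DML, AUTH
        excluded_keyspaces: dev_ks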

The audit logger logs at INFO level, so the SLF4JAuditWriter logger must be configured at INFO (or lower) level in logback.xml. Setting the logger to a higher level, such as WARN, prevents any audit events from being recorded, but it does not completely disable data auditing: events are still generated and processed, so some overhead remains.

Procedure

  1. Open the logback.xml file in a text editor.
  2. To configure data auditing, accept the default settings or change the following properties. See the logback.xml configuration file example for additional logging properties.
    <!--audit log-->  
    <appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${cassandra.logdir}/audit/audit.log</file> <!-- logfile location -->
        <encoder>
          <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern> <!-- the layout pattern used to format log entries -->
          <immediateFlush>true</immediateFlush> 
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
          <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
          <minIndex>1</minIndex>
          <maxIndex>20</maxIndex> <!-- max number of archived logs that are kept -->
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
          <maxFileSize>200MB</maxFileSize> <!-- log file size that triggers rollover; the current log is archived -->
        </triggeringPolicy>
    </appender>
    <logger name="SLF4JAuditWriter" level="INFO" additivity="false">
        <appender-ref ref="SLF4JAuditWriterAppender"/>
    </logger>
  3. Optional: Generate Kerberos debug output:
    ...
    <logger name="com.datastax.bdp.transport.server" level="TRACE"/>
    <logger name="com.datastax.bdp.cassandra.auth" level="TRACE"/>
    ...
  4. Optional: Generate LDAP debug output:
    ...
    <logger name="com.datastax.bdp.cassandra.auth" level="TRACE"/>
    ...
  5. Restart the node to see changes in the log.
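After the node restarts, you can confirm that audit events are being written by tailing the log; the path below assumes the default location configured in logback.xml:
    # Watch audit events arrive in real time (default log location assumed)
    tail -f /var/log/cassandra/audit/audit.log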

Logback.xml configuration file 

The XML configuration file looks like this:
<configuration scan="true">
  <jmxConfigurator />

<!-- SYSTEMLOG rolling file appender to system.log (INFO level) -->

 <appender name="SYSTEMLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
     <level>INFO</level>
  </filter>
     <file>${cassandra.logdir}/system.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/system.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>20</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>20MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
  </encoder>
 </appender>

<!-- DEBUGLOG rolling file appender to debug.log (all levels) -->

 <appender name="DEBUGLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
     <file>${cassandra.logdir}/debug.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/debug.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>20</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>20MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
  </encoder>
 </appender>

<!-- ASYNCDEBUGLOG asynchronous appender to debug.log (all levels) -->

 <appender name="ASYNCDEBUGLOG" class="ch.qos.logback.classic.AsyncAppender">
     <queueSize>1024</queueSize>
     <discardingThreshold>0</discardingThreshold>
     <includeCallerData>true</includeCallerData>
     <appender-ref ref="DEBUGLOG" />
 </appender>

<!-- STDOUT console appender to stdout (INFO level) -->

 <if condition='isDefined("dse.console.useColors")'>
   <then>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <withJansi>true</withJansi>
      <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>INFO</level>
      </filter>
      <encoder>
        <pattern>%highlight(%-5level) [%thread] %green(%date{ISO8601}) %yellow(%X{service}) %F:%L - %msg%n</pattern>
      </encoder>
    </appender>
   </then>
  </if>
 <if condition='isNull("dse.console.useColors")'>
  <then>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>INFO</level>
     </filter>
     <encoder>
        <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
     </encoder>
    </appender>
  </then>
 </if>

 <include file="${SPARK_SERVER_LOGBACK_CONF_FILE}"/>
 <include file="${GREMLIN_SERVER_LOGBACK_CONF_FILE}"/>

  <!-- Uncomment the LogbackMetrics appender and the corresponding appender-ref in the root to activate
  <appender name="LogbackMetrics" class="com.codahale.metrics.logback.InstrumentedAppender" />
  -->

 <root level="${logback.root.level:-INFO}">
   <appender-ref ref="SYSTEMLOG" />
   <appender-ref ref="STDOUT" />
 <!-- Comment out the ASYNCDEBUGLOG appender to disable debug.log -->
   <appender-ref ref="ASYNCDEBUGLOG" />
 <!-- Uncomment LogbackMetrics and its associated appender to enable metric collecting for logs. -->
 <!-- <appender-ref ref="LogbackMetrics" /> -->
   <appender-ref ref="SparkMasterFileAppender" />
   <appender-ref ref="SparkWorkerFileAppender" />
   <appender-ref ref="GremlinServerFileAppender" />
 </root>

<!--audit log-->
 <appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
   <file>${cassandra.logdir}/audit/audit.log</file>
   <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
     <immediateFlush>true</immediateFlush>
     </encoder>
   <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>5</maxIndex>
   </rollingPolicy>
   <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>200MB</maxFileSize>
   </triggeringPolicy>
 </appender>

 <logger name="SLF4JAuditWriter" level="INFO" additivity="false">
   <appender-ref ref="SLF4JAuditWriterAppender"/>
 </logger>

 <appender name="DroppedAuditEventAppender" class="ch.qos.logback.core.rolling.RollingFileAppender" prudent=$
   <file>${cassandra.logdir}/audit/dropped-events.log</file>
   <encoder>
     <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %msg%n</pattern>
     <immediateFlush>true</immediateFlush>
   </encoder>
   <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
     <fileNamePattern>${cassandra.logdir}/audit/dropped-events.log.%i.zip</fileNamePattern>
     <minIndex>1</minIndex>
     <maxIndex>5</maxIndex>
   </rollingPolicy>
   <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
     <maxFileSize>200MB</maxFileSize>
   </triggeringPolicy>
 </appender>

 <logger name="DroppedAuditEventLogger" level="INFO" additivity="false">
   <appender-ref ref="DroppedAuditEventAppender"/>
 </logger>

 <logger name="org.apache.cassandra" level="DEBUG"/>
 <logger name="com.datastax.bdp.db" level="DEBUG"/>
 <logger name="com.datastax.driver.core.NettyUtil" level="ERROR"/>
 <logger name="com.datastax.bdp.search.solr.metrics.SolrMetricsEventListener" level="DEBUG"/>
 <logger name="org.apache.solr.core.CassandraSolrConfig" level="WARN"/>
 <logger name="org.apache.solr.core.SolrCore" level="WARN"/>
 <logger name="org.apache.solr.core.RequestHandlers" level="WARN"/>
 <logger name="org.apache.solr.handler.component" level="WARN"/>
 <logger name="org.apache.solr.search.SolrIndexSearcher" level="WARN"/>
 <logger name="org.apache.solr.update" level="WARN"/>
 <logger name="org.apache.lucene.index" level="INFO"/>
 <logger name="com.cryptsoft" level="OFF"/>
 <logger name="org.apache.spark.rpc" level="ERROR"/>
</configuration>
The appender configurations specify where log messages are written and how. Each appender is declared as <appender name="appender_name" class="...">; the appenders in this file are described as follows.
SYSTEMLOG
Directs log messages to the /var/log/cassandra/system.log file; WARN and ERROR messages are written synchronously.
DEBUGLOG | ASYNCDEBUGLOG
Generates the /var/log/cassandra/debug.log file, which contains an asynchronous log of events written to the system.log file, plus production logging information useful for debugging issues.
STDOUT
Directs logs to the console in a human-readable format.
LogbackMetrics
Records the rate of logged events by their logging level.
SLF4JAuditWriterAppender | DroppedAuditEventAppender
Used by the audit logging functionality. See Enabling data auditing in DataStax Enterprise for more information.