Configuring audit logging to a logback log file

Configuring audit logging in DataStax Enterprise.

If you've enabled audit logging and set the logger to output to the SLF4JAuditWriter as described in Configuring and using data auditing, you can configure the logger by setting options in logback.xml. DataStax Enterprise writes the audit log to the directory defined in the logback.xml configuration file. When the log file reaches the configured size threshold, it rolls over and is renamed; the archived file names include a numerical suffix whose range is determined by the minIndex and maxIndex properties.
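With the default rolling policy shown in the procedure below (a fileNamePattern ending in audit.log.%i.zip, minIndex of 1, and maxIndex of 5), the audit directory contains the active log plus up to five compressed archives, with index 1 being the most recent. For example:

    audit.log          current log file
    audit.log.1.zip    most recently archived log
    audit.log.2.zip
    ...
    audit.log.5.zip    oldest archive, deleted at the next rollover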

The location of the logback.xml file depends on the type of installation:
  • Installer-Services and Package installations: /etc/dse/cassandra/logback.xml
  • Installer-No Services and Tarball installations: install_location/resources/cassandra/logback.xml

Sensitive data in log files 

Because auditing is configured through a text file in the file system, the file is vulnerable to OS-level security breaches. You can mitigate this by changing the umask setting for DataStax Enterprise so that audit files are created with 600 permissions by default. Be aware that changing this setting can cause read problems for other tools that examine the data. Alternatively, store the audit file on an OS-level encrypted file system such as Vormetric.

Tip: Redact sensitive data before you share log files for troubleshooting purposes.
For example, suppose that:
  • A password is inserted into a table in a column named password.
  • The audit logging options in dse.yaml are set to included_categories: DML, ... so that DML events (insert, update, delete, and other data manipulation language operations) are included in the audit log.
You can redact the values for that column so that passwords do not appear in the log. Use the following pattern in the <encoder> section of the logback.xml file:
    <encoder>
        <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %replace(%msg){"password='.*'", "password='xxxxx'"}%n</pattern>
        ...
    </encoder>
The %replace conversion word uses regular expressions to modify the message before it is written to the log file. For more information on %replace and other advanced features, see the logback documentation.
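To see the effect of the %replace expression above, consider a hypothetical audit entry for an UPDATE statement (the exact field layout of your audit entries may differ; the regular expression matches any text of the form password='...'):

    Before: ... operation:UPDATE users SET password='s3cr3t' WHERE id=1;
    After:  ... operation:UPDATE users SET password='xxxxx' WHERE id=1;

Note that the expression matches only statements that contain the literal text password='; a statement that supplies the password positionally, such as INSERT ... VALUES (1, 'secret'), is not redacted by this pattern.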

Configuring data auditing 

You can configure which categories of audit events to log, and whether to omit operations against specific keyspaces from audit logging.

Procedure

  1. Open the logback.xml file in a text editor.
  2. Accept the default settings or change the properties in the logback.xml file to configure data auditing:
    <!-- Audit log -->
    <appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
      <file>${cassandra.logdir}/audit/audit.log</file> <!-- Log file location -->
      <encoder>
        <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern> <!-- Layout pattern used to format log entries -->
        <immediateFlush>true</immediateFlush>
      </encoder>
      <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
        <minIndex>1</minIndex>
        <maxIndex>5</maxIndex> <!-- Maximum number of archived logs that are kept -->
      </rollingPolicy>
      <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>200MB</maxFileSize> <!-- Size at which the current log file is archived and a new one started -->
      </triggeringPolicy>
    </appender>
    <logger name="SLF4JAuditWriter" level="INFO" additivity="false">
      <appender-ref ref="SLF4JAuditWriterAppender"/>
    </logger>
The audit logger logs at INFO level, so the SLF4JAuditWriter logger must be configured at INFO level (or lower) in logback.xml. Setting the logger to a higher level, such as WARN, prevents any audit events from being recorded, but it does not completely disable data auditing: some processing overhead remains beyond that of regular request handling.
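If you want to suppress audit output without editing dse.yaml, one option is to raise the logger level; OFF is a standard logback level that discards all events. As noted above, this stops events from reaching the file but does not disable auditing itself, so disable auditing in dse.yaml if you want to avoid the remaining overhead. A minimal sketch:

    <logger name="SLF4JAuditWriter" level="OFF" additivity="false">
        <appender-ref ref="SLF4JAuditWriterAppender"/>
    </logger>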
  3. Restart the node to see changes in the log.