Configuring audit logging
If you’ve enabled audit logging and set the logger to output to the SLF4JAuditWriter as described in Enabling data auditing in DataStax Enterprise, you can configure the logger by setting options in logback.xml.
DataStax Enterprise places the audit log in the directory defined in the logback.xml configuration file.
After the log file reaches the configured size threshold, it rolls over and the log file is renamed. The file names include a numerical suffix that is determined by the maxIndex property of the rolling policy.
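For reference, the rollover behavior is controlled by the rolling and triggering policies of the audit appender in logback.xml (the full appender is shown in the procedure below):

<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
  <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
  <minIndex>1</minIndex>
  <maxIndex>20</maxIndex>           <!-- maximum number of archived logs that are kept -->
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
  <maxFileSize>200MB</maxFileSize>  <!-- file size that triggers rollover -->
</triggeringPolicy>

With these settings, the current log is ${cassandra.logdir}/audit/audit.log, and archived logs are named audit.log.1.zip through audit.log.20.zip.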
Sensitive data in log files
Because auditing is configured through a text file in the file system, the file is vulnerable to OS-level security breaches. You can address this issue by changing DataStax Enterprise's umask setting, which is 0700, so that permissions on the audit files are set to 0600. Be aware that changing this setting can cause read problems for other tools that need to access the data. Alternatively, you can store the audit file on an OS-level encrypted file system such as Vormetric.
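For example, to tighten permissions on existing audit files, assuming the default audit log location under the Cassandra log directory (the path below is an assumption; use the location defined in your logback.xml):

# Restrict read/write access on the audit log files to the file owner.
# /var/log/cassandra is an assumed value of ${cassandra.logdir}; adjust as needed.
chmod 600 /var/log/cassandra/audit/audit.log*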
Redact sensitive data before you share log files for troubleshooting purposes.
For example, suppose that:
- A password is inserted into a table in a column named password.
- The audit logging options in dse.yaml are set to
  included_categories: DML, ...
  so that DML events (insert, update, delete, and other data manipulation language operations) are included in the audit log.

You can redact the values for that column so that passwords do not show in the log. Use the following pattern in the logback.xml file to replace that string:
%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %replace(%msg){"password='.*'", "password='xxxxx'"}%n
The %replace conversion word uses regular expressions to modify the data before it is written to the log file. For more information about using %replace, see the logback documentation.
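For example, after this change the encoder section of the SLF4JAuditWriterAppender (shown in full in the procedure below) would read:

<encoder>
  <pattern>%-5level [%thread] %date{ISO8601} %X{service} %F:%L - %replace(%msg){"password='.*'", "password='xxxxx'"}%n</pattern>
  <immediateFlush>true</immediateFlush>
</encoder>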
Configuring data auditing
You can configure which categories of audit events (administration, authentication, DML, DDL, DCL, and query operations) to log, and whether to omit operations against specific keyspaces from audit logging.
The audit logger logs at INFO level, so the SLF4JAuditWriter logger must be configured at INFO (or lower) level in logback.xml. Setting the logger to a higher level, such as WARN, prevents audit events from being recorded, but it does not completely disable data auditing; some overhead from regular processing still occurs.
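As a point of reference, a minimal sketch of the corresponding audit settings in dse.yaml might look like the following (option names and available categories can vary by DSE version; check the comments in your dse.yaml):

# dse.yaml -- illustrative audit logging options; verify keys against your DSE version
audit_logging_options:
    enabled: true
    logger: SLF4JAuditWriter
    included_categories: DML, DDL, AUTH
    excluded_keyspaces: system, system_schema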
Procedure
- Open the logback.xml file in a text editor.
- To configure data auditing, accept the default settings or change the properties:
<!--audit log-->
<appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${cassandra.logdir}/audit/audit.log</file>  <!-- logfile location -->
  <encoder>
    <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>  <!-- the layout pattern used to format log entries -->
    <immediateFlush>true</immediateFlush>
  </encoder>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
    <minIndex>1</minIndex>
    <maxIndex>20</maxIndex>  <!-- max number of archived logs that are kept -->
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>200MB</maxFileSize>  <!-- the size of the logfile that triggers a switch to a new logfile, and the current one is archived -->
  </triggeringPolicy>
</appender>

<logger name="SLF4JAuditWriter" level="INFO" additivity="false">
  <appender-ref ref="SLF4JAuditWriterAppender"/>
</logger>
- Generate Kerberos debug output:
...
<logger name="com.datastax.bdp.transport.server" level="TRACE"/>
<logger name="com.datastax.bdp.cassandra.auth" level="TRACE"/>
...
- Generate LDAP debug output:
...
<logger name="com.datastax.bdp.cassandra.auth" level="TRACE"/>
...
- Restart the node to see changes in the log.
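The restart command depends on the installation type; for example, on a package installation you might run something like the following (adjust for your environment):

# Restart DSE on a package installation (command varies by installation type).
sudo service dse restart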
Formats of DataStax Enterprise logs
The log format is a simple set of pipe-delimited name/value pairs. A name/value pair, or field, is included in the log line only if a value exists for that particular event.
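For illustration only, an entry might resemble the line below; the field names and values here are representative assumptions, not output copied from an actual log:

host:10.1.1.1|source:/127.0.0.2|user:jdoe|timestamp:1388433859201|category:DML|type:CQL_UPDATE|ks:cycling|cf:cyclist_name|operation:INSERT INTO cycling.cyclist_name (id, lastname) VALUES (1, 'FRAME');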