Enabling data auditing in DataStax Enterprise
Steps to enable data auditing in DataStax Enterprise.
The audit logger records information only on nodes where audit logging is enabled. For example, if node 0 has auditing turned on and node 1 does not, updates and other commands issued on node 1 do not appear in the node 0 audit log. To get complete information from data auditing, turn on data auditing on every node.
System log files | Database tables |
---|---|
When you turn on audit logging, the default is to write to logback file system log files. Cassandra provides logging through the Simple Logging Facade for Java (SLF4J) with a logback backend. | As audit logs grow in size, logging audit data to a Cassandra table becomes more practical. |
File-based logs are stored per node and are secured with standard Linux file system permissions. | Audit events stored in database tables can be secured like any other table using role-based access control (RBAC). For example, store table-based logs in encrypted SSTables and control access to the tables with object permissions. |
Log files can be read from a terminal for troubleshooting queries or managing security. | Larger clusters use Cassandra tables because file-based logback audit logs become cumbersome. The data can be queried like any other table, making analysis easier and custom audit reports possible, as shown in the query example after this table. |
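The following CQL sketch shows how table-based audit data can be queried. It assumes the CassandraAuditWriter writes events to the dse_audit.audit_log table, the default location in recent DSE releases; the keyspace, table, and column names are assumptions to verify against your version.

-- Read a sample of audit events written by CassandraAuditWriter.
-- Keyspace, table, and column names are assumptions; adjust to your DSE version.
SELECT date, node, username, category, type, operation
FROM dse_audit.audit_log
LIMIT 100;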
Audit logging is supported for queries and prepared statements submitted through the DataStax drivers, which use the CQL binary protocol.
When audit logging is used with Kerberos authentication, login events take place in Kerberos and are not logged by DataStax Enterprise; authentication history is available only in Kerberos. However, when DataStax Enterprise is unable to authenticate a client with Kerberos, it logs a LOGIN_ERROR event.
Procedure
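The exact option names can vary between DSE versions. In general, open the dse.yaml configuration file on each node, set enabled to true in the audit_logging_options section, choose a logger (file-based or table-based), and restart the node for the change to take effect.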
Example
The following example configures the audit logger to write to a Cassandra table.
# Audit logging options
audit_logging_options:
    enabled: true
    logger: CassandraAuditWriter
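For comparison, a minimal sketch of the file-based default, assuming the SLF4JAuditWriter logger name from dse.yaml (verify the name against your DSE version):

# Audit logging options
audit_logging_options:
    enabled: true
    logger: SLF4JAuditWriter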