Setting up Database Auditing

Capture DataStax Enterprise (DSE) database activity to a log file or table. Each node records only the events that occur locally. Use configuration options to refine which events are captured. DSE provides the following customizations:

  • Keyspace filtering - Capture activity in every keyspace or only targeted keyspaces. Filter keyspace names using regex.

  • Category filtering - Identify event categories to limit the number of events captured.

  • Role filtering - Track the activity of particular users or groups by their login role name.

  • Node-specific - Enable auditing on one or more nodes, so you can audit only specific nodes, an entire datacenter, or the whole cluster.

In the logback.xml file, configure logging levels, mask sensitive data, and, for the log file, set the file name, location, size threshold, and maximum number of log files.

The location of the logback.xml file depends on the type of installation:

  • Package installations: /etc/dse/cassandra/logback.xml

  • Tarball installations: <installation_location>/resources/cassandra/conf/logback.xml

Choosing a File versus Table

Audit logging options are configured per node and therefore can differ on each node. DataStax Enterprise (DSE) supports the following methods to record database activity:

  • Log file (per node) - The SLF4JAuditWriter [Simple Logging Facade for Java (SLF4J) Audit Writer] logger records all database activity that occurs on the local node to the audit.log file. When tracing a multi-node request, collect and parse log files from all the nodes that participated.

  • Unified table - The CassandraAuditWriter logger records all database activity that happens on the local node in the dse_audit.audit_log table. Events from all nodes with the logger enabled are captured in the same table. This allows you to easily create reports that include multiple nodes.

Enable audit logging to a file

Use the Simple Logging Facade for Java (SLF4J) audit writer (SLF4JAuditWriter) logger to record all database activity that occurs on the local node to the audit.log file. Secure the log file by controlling access with standard Linux file system permissions.

DSE does not support data encryption for the audit.log file. Encryption is only available for the dse_audit.audit_log table.

audit.log

The location of the audit.log file is:

  • /var/log/cassandra/audit/audit.log

To capture events to the audit.log file:

  1. Locate the dse.yaml configuration file. The location of this file depends on the type of installation:

    • Package installations: /etc/dse/dse.yaml

    • Tarball installations: <installation_location>/resources/dse/conf/dse.yaml

  2. Set the audit_logging_options in the dse.yaml file:

    audit_logging_options:
        enabled: true
        logger: SLF4JAuditWriter
    • enabled: true - Turns on logging after the next startup.

    • logger: SLF4JAuditWriter - Logger name.

In the Cassandra log directory, DataStax Enterprise creates audit/audit.log. After the log file reaches the configured size threshold, it rolls over and the file is renamed. The archived file names include a numerical suffix whose range is set by the minIndex and maxIndex properties in logback.xml.
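Entries in audit.log follow the layout pattern configured in logback.xml (by default `%-5level [%thread] %date{ISO8601} %F:%L - %msg%n`). As an illustrative sketch, not part of DSE, a small Python parser for lines in that shape (the file name in the sample is hypothetical):

```python
import re

# Layout from the default logback.xml pattern:
#   %-5level [%thread] %date{ISO8601} %F:%L - %msg%n
# Illustrative sketch only, not DSE code.
LINE_RE = re.compile(
    r"^(?P<level>\S+)\s+"                # %-5level, e.g. INFO
    r"\[(?P<thread>[^\]]+)\]\s+"         # [%thread]
    r"(?P<date>\S+ \S+)\s+"              # ISO8601 date and time
    r"(?P<file>[^:]+):(?P<line>\d+) - "  # %F:%L
    r"(?P<msg>.*)$"                      # %msg
)

def parse_audit_line(line):
    """Return the fields of one audit.log entry, or None if the line does not match."""
    m = LINE_RE.match(line.rstrip("\n"))
    return m.groupdict() if m else None

# Hypothetical sample line in the default layout.
sample = ("INFO  [IOThread-0] 2018-04-24 21:07:26,565 "
          "AuditLogger.java:91 - example audit message")
print(parse_audit_line(sample)["level"])  # INFO
```

A parser like this is useful when collecting and correlating the per-node log files that a multi-node request produces.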

Enabling Audit Logging to a Table

Use the CassandraAuditWriter logger to record all database activity that happens on the local node to the dse_audit.audit_log table. Events from all nodes are captured in the same table, allowing you to easily create reports that include multiple nodes.

Using the table option provides a centralized location for all events across the cluster. Secure the table with DataStax role-based access control (RBAC), see Assigning permissions and Transparent Data Encryption (TDE), see Encrypting tables.

Procedure

To capture events to the dse_audit.audit_log table:

  1. Locate the dse.yaml configuration file. The location of this file depends on the type of installation:

    • Package installations: /etc/dse/dse.yaml

    • Tarball installations: <installation_location>/resources/dse/conf/dse.yaml

  2. Set the audit_logging_options in the dse.yaml file:

    audit_logging_options:
        enabled: true
        logger: CassandraAuditWriter
    #    included_categories:
    #    excluded_categories:
    #    included_keyspaces:
    #    excluded_keyspaces:
    #    included_roles:
    #    excluded_roles:
        retention_time: 12
    
        cassandra_audit_writer_options:
            mode: sync
            batch_size: 50
            flush_time: 250
            queue_size: 30000
            write_consistency: QUORUM
    #         dropped_event_log: /var/log/cassandra/dropped_audit_events.log
    #         day_partition_millis: 3600000
    • enabled: true - Turns on logging after the next startup.

    • logger: CassandraAuditWriter - Logger name.

    • retention_time: 12 - Number of hours to set as the TTL (time-to-live) on the dse_audit.audit_log table. Use this setting to automatically expire data. The default is 0 (disabled).

    • Customize the cassandra_audit_writer_options parameters as required (the defaults are shown above).

      The audit_log table has the following settings:

      DESC TABLE dse_audit.audit_log
      
      CREATE TABLE dse_audit.audit_log (
          date timestamp,
          node inet,
          day_partition int,
          event_time timeuuid,
          authenticated text,
          batch_id uuid,
          category text,
          consistency text,
          keyspace_name text,
          operation text,
          source text,
          table_name text,
          type text,
          username text,
          PRIMARY KEY ((date, node, day_partition), event_time)
      ) WITH CLUSTERING ORDER BY (event_time ASC)
          AND bloom_filter_fp_chance = 0.01
          AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
          AND comment = ''
          AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
          AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
          AND crc_check_chance = 1.0
          AND default_time_to_live = 0
          AND gc_grace_seconds = 7776000
          AND max_index_interval = 2048
          AND memtable_flush_period_in_ms = 3600000
          AND min_index_interval = 128
          AND nodesync = {'enabled': 'true'}
          AND speculative_retry = '99PERCENTILE';
  3. Optional: By default, NodeSync is disabled for the audit_log table. To maintain data consistency with NodeSync, enable it:

    ALTER TABLE dse_audit.audit_log WITH
    nodesync = { 'enabled' : true };
  4. Set the keyspace replication strategy to NetworkTopologyStrategy and set a replication factor for each datacenter in the cluster where auditing is enabled:

    ALTER KEYSPACE dse_audit WITH
    replication = {
                   'class': 'NetworkTopologyStrategy',
                   'datacenter1' : 2,
                   'datacenter2' : 1   };
  5. Force replication of the existing data to the datacenters using one of the following methods:

    • If NodeSync is enabled:

      nodesync -cu <user_name> -cp <password> \
      validation submit dse_audit.audit_log

      where <user_name> is an internal login role name, or an LDAP or Kerberos role, with permission to access the CQL table.

    • If NodeSync is disabled:

      nodetool repair dse_audit audit_log
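The commented day_partition_millis default shown in step 2 (3600000 ms, one hour) suggests that each day's events for a node are bucketed into hour-wide day_partition values, which bounds partition size in the (date, node, day_partition) partition key. A quick arithmetic check of that interpretation (illustrative only; the exact bucketing is internal to DSE):

```python
# day_partition_millis default from dse.yaml (commented out above): 1 hour in ms.
day_partition_millis = 3_600_000
ms_per_day = 24 * 60 * 60 * 1000

# Number of day_partition buckets per (date, node) under the default setting.
buckets_per_day = ms_per_day // day_partition_millis
print(buckets_per_day)  # 24
```

Shrinking day_partition_millis would spread a busy node's events across more, smaller partitions.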

Filtering event categories

Configure which categories to capture in the audit_logging_options section of the dse.yaml file.

The location of the dse.yaml configuration file depends on the type of installation:

  • Package installations: /etc/dse/dse.yaml

  • Tarball installations: <installation_location>/resources/dse/conf/dse.yaml

audit_logging_options:
    enabled: true
    logger: <logger_name>
#    included_categories:
#    excluded_categories:

where <logger_name> is SLF4JAuditWriter or CassandraAuditWriter.

Category types

By default, DataStax Enterprise captures all event categories when auditing is enabled, and the following parameters are commented out. Use either the included_categories or excluded_categories parameter to limit which categories are captured.

  • included_categories - Includes only listed categories; excludes all others.

  • excluded_categories - Excludes listed categories; includes all others.
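The include/exclude semantics above can be sketched as a small filter (illustrative Python, not DSE code; DSE applies this logic internally from dse.yaml, and you normally set only one of the two parameters):

```python
def is_captured(category, included=None, excluded=None):
    """Mimic the documented category filtering:
    an include list captures only its entries, an exclude list drops its
    entries, and with neither set every category is captured."""
    if included:               # included_categories set: capture only these
        return category in included
    if excluded:               # excluded_categories set: capture all but these
        return category not in excluded
    return True                # both commented out: capture everything

print(is_captured("DML", included={"QUERY", "DML"}))  # True
print(is_captured("AUTH", excluded={"AUTH"}))         # False
```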

Audit logging event categories and types

All events have both a category and a type. A type usually maps directly to a CQL command. The following list shows all the types in each category.

Data Definition Language (DDL)

DDL (Data Definition Language) logs the following database schema changes:

Event category - CQL or dsetool command

  • ADD_KS - CREATE KEYSPACE
  • DROP_KS - DROP KEYSPACE
  • UPDATE_KS - ALTER KEYSPACE
  • ADD_CF - CREATE TABLE
  • DROP_CF - DROP TABLE
  • UPDATE_CF - ALTER TABLE
  • CREATE_INDEX - CREATE INDEX
  • DROP_INDEX - DROP INDEX
  • CREATE_TYPE - CREATE TYPE
  • DROP_TYPE - DROP TYPE
  • UPDATE_TYPE - ALTER TYPE
  • CREATE_FUNCTION - CREATE FUNCTION
  • DROP_FUNCTION - DROP FUNCTION
  • CREATE_AGGREGATE - CREATE AGGREGATE
  • DROP_AGGREGATE - DROP AGGREGATE
  • CREATE_VIEW - CREATE VIEW
  • DROP_VIEW - DROP VIEW
  • ALTER_VIEW - ALTER VIEW
  • CREATE_TRIGGER - CREATE TRIGGER
  • DROP_TRIGGER - DROP TRIGGER
  • SOLR_CREATE_SEARCH_INDEX_STATEMENT - CREATE SEARCH INDEX
  • SOLR_ALTER_SEARCH_INDEX_STATEMENT - ALTER SEARCH INDEX CONFIG or ALTER SEARCH INDEX SCHEMA
  • SOLR_DROP_SEARCH_INDEX_STATEMENT - DROP SEARCH INDEX
  • SOLR_RELOAD_SEARCH_INDEX - RELOAD SEARCH INDEX
  • SOLR_REBUILD_SEARCH_INDEX - REBUILD SEARCH INDEX
  • SOLR_GET_RESOURCE - dsetool create_core
  • SOLR_UPDATE_RESOURCE - dsetool reload_core

Data Manipulation Language (DML)

DML (Data Manipulation Language) logs the following database data changes:

Event category - CQL command

  • SET_KS - USE
  • INSERT - INSERT
  • BATCH - BATCH
  • TRUNCATE - TRUNCATE
  • CQL_UPDATE - UPDATE
  • CQL_DELETE - DELETE
  • CQL_PREPARE_STATEMENT - DataStax driver prepared statement, such as a Java prepared statement
  • SOLR_UPDATE
  • SOLR_COMMIT_SEARCH_INDEX_STATEMENT - COMMIT SEARCH INDEX
  • MANAGEMENT_API_OP

Data Control Language (DCL)

DCL (Data Control Language) logs the following role and permission changes:

Event category - CQL command

  • CREATE_ROLE - CREATE ROLE
  • ALTER_ROLE - ALTER ROLE
  • DROP_ROLE - DROP ROLE
  • LIST_ROLES - LIST ROLES
  • LIST_USERS - LIST USERS
  • LIST_PERMISSIONS - LIST PERMISSIONS
  • GRANT - GRANT
  • REVOKE - REVOKE
  • RESTRICT - RESTRICT
  • UNRESTRICT - UNRESTRICT
  • RESTRICT_ROWS_STATEMENT - RESTRICT ROWS
  • UNRESTRICT_ROWS_STATEMENT - UNRESTRICT ROWS

QUERY

QUERY logs the following data retrieval events:

Event type - CQL or other command

  • CQL_SELECT - SELECT
  • SOLR_QUERY - SELECT statement filtered by the search index
  • GRAPH_TINKERPOP_TRAVERSAL - Calls to a table using the graph traversal instance (g). See Discovering properties about graphs and traversals.
  • RPC_CALL_STATEMENT - Remote Procedure Call (RPC) statement

ADMIN

ADMIN logs the following backup and restore events:

Event type - CQL command or other information

  • CREATE_BACKUP_CONFIG - CREATE BACKUP CONFIGURATION
  • DROP_BACKUP_CONFIG - DROP BACKUP CONFIGURATION
  • ALTER_BACKUP_CONFIG - ALTER BACKUP CONFIGURATION
  • LIST_BACKUP_CONFIGS - LIST BACKUP CONFIGURATIONS
  • ENABLE_BACKUP_CONFIG - Enabling the backup and restore service
  • DISABLE_BACKUP_CONFIG - Disabling the backup and restore service
  • CREATE_BACKUP_STORE - CREATE BACKUP STORE
  • DROP_BACKUP_STORE - DROP BACKUP STORE
  • ALTER_BACKUP_STORE - ALTER BACKUP STORE
  • LIST_BACKUP_STORES - LIST BACKUP STORES
  • VALIDATE_BACKUP_STORE - VALIDATE BACKUP STORE
  • RUN_BACKUP - RUN BACKUP
  • CANCEL_BACKUP - CANCEL BACKUP
  • RUN_RESTORE - RESTORE
  • CANCEL_RESTORE - CANCEL RESTORE
  • FORCE_RESTORE - FORCE RESTORE
  • LIST_BACKUPS - LIST BACKUPS
  • CLEAN_BACKUPS - CLEAN BACKUPS

AUTH

AUTH logs the following authentication and authorization events:

Event type - CQLSH command

  • LOGIN_SUCCESS - Successful login attempt; LOGIN and login requests sent from DataStax drivers.
  • LOGIN_ERROR - Failed login attempt.
  • UNAUTHORIZED_ATTEMPT - Unauthorized access attempt.

Messages not captured by DataStax Enterprise (DSE) include:

  • Successful Kerberos authentication events. You can audit Kerberos authentication from the Kerberos Key Distribution Center (KDC). DSE logs a failed Kerberos authentication request as type LOGIN_ERROR.

  • AUTH messages when keyspace filtering is enabled with included_keyspaces.

ERROR

ERROR logs the following error events:

Event type - Information

  • ERROR - Logs CQL statement failures.
  • REQUEST_FAILURE - Logs failed requests.

UNKNOWN

UNKNOWN logs the following unknown events:

Event type - Information

  • UNKNOWN - Logs unknown events.

Example

Include only query, schema change, and authentication events:

audit_logging_options:
    enabled: true
    logger: SLF4JAuditWriter
    included_categories: QUERY, DDL, AUTH
#    excluded_categories:

Filtering keyspaces

Locate the dse.yaml configuration file. The location of this file depends on the type of installation:

  • Package installations: /etc/dse/dse.yaml

  • Tarball installations: <installation_location>/resources/dse/conf/dse.yaml

Configure which keyspaces to capture in the audit_logging_options section of the dse.yaml.

audit_logging_options:
    enabled: true
    logger: <logger_name>
#    included_categories:
#    excluded_categories:
#    included_keyspaces:
#    excluded_keyspaces:

where <logger_name> is SLF4JAuditWriter or CassandraAuditWriter.

By default, both keyspace parameters are commented out and therefore all events are captured. Use only one of the following parameters to limit which events are captured:

  • included_keyspaces - Includes only matching keyspaces; excludes all others.

    When included_keyspaces is enabled to allow specific keyspaces, AUTH messages are not captured.

  • excluded_keyspaces - Excludes matching keyspaces; includes all others.

Match keyspaces using a comma-separated list of names or a single regular expression.
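The two matching styles can be sketched like this (illustrative Python; DSE performs the matching internally, and the exact precedence is an assumption here):

```python
import re

def keyspace_matches(keyspace, filter_value):
    """Treat filter_value either as a comma-separated list of keyspace
    names or, when it is not a list, as one regular expression that must
    match the whole name. Illustrative sketch of the documented options."""
    names = [n.strip() for n in filter_value.split(",")]
    if keyspace in names:
        return True
    # A value without commas may be a single regular expression.
    return "," not in filter_value and re.fullmatch(filter_value, keyspace) is not None

print(keyspace_matches("system_local", "system_local, system_schema"))  # True
print(keyspace_matches("ks_prod_7", "ks_prod_.*"))                      # True
```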

Example

The system_local keyspace is queried on every login. The following exclusion records login events without the additional queries to the system_local keyspace:

audit_logging_options:
    enabled: true
    logger: <logger_name>
#    included_categories:
#    excluded_categories:
#    included_keyspaces:
    excluded_keyspaces: system_local

Filtering roles

Locate the dse.yaml configuration file. The location of this file depends on the type of installation:

  • Package installations: /etc/dse/dse.yaml

  • Tarball installations: <installation_location>/resources/dse/conf/dse.yaml

Set up activity tracking of specific login roles in the audit_logging_options section of the dse.yaml.

audit_logging_options:
    enabled: true
    logger: <logger_name>
#    included_categories:
#    excluded_categories:
#    included_keyspaces:
#    excluded_keyspaces:
#    included_roles:
#    excluded_roles:

where <logger_name> is SLF4JAuditWriter or CassandraAuditWriter.

By default, both roles parameters are commented out and therefore events are captured for all roles. Use only one of the following parameters to limit the roles whose activity is tracked:

  • included_roles - Includes only matching roles; excludes all others.

  • excluded_roles - Excludes matching roles; includes all others.

Match roles using a comma-separated list of names.

Example

Events of all roles except for dse_admin and jim are captured.

audit_logging_options:
    enabled: true
    logger: CassandraAuditWriter
#    included_categories:
#    excluded_categories:
#    included_keyspaces:
#    excluded_keyspaces:
#    included_roles:
    excluded_roles: dse_admin, jim

Replacing Sensitive Data in the Audit Log

When audit logging is enabled and includes DML category events, sensitive data, such as password columns, is exposed in the message portion of the audit log. You can configure the audit log appender (SLF4JAuditWriterAppender) to replace strings in log messages and mask the sensitive data. The replace function uses a regular expression to modify the data. For more information on the replace filter, see the logback documentation.

Additionally, because the auditing is configured through a text file in the file system, the file is vulnerable to OS-level security breaches. You can secure the file by setting the permissions to 0600 on the audit files. Be aware that if other tools look at the data, changing this setting can cause read problems. Alternately, you can store the audit file on an OS-level encrypted file system such as Vormetric.
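Restricting the audit files to mode 0600 can be done with chmod, or, as a sketch, from Python (the temporary file here stands in for the real /var/log/cassandra/audit/audit.log):

```python
import os
import stat
import tempfile

# Stand-in for an audit log file; the real path is
# /var/log/cassandra/audit/audit.log on package installations.
path = os.path.join(tempfile.mkdtemp(), "audit.log")
open(path, "w").close()

# Restrict to owner read/write only, i.e. mode 0600.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

Remember to apply the same mode to the rolled archives (audit.log.1.zip and so on) if other tools do not need to read them.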

Before sharing log files, redact sensitive data.

Procedure

  1. Locate the logback.xml file. The location of the logback.xml file depends on the type of installation:

    • Package installations: /etc/dse/cassandra/logback.xml

    • Tarball installations: <installation_location>/resources/cassandra/conf/logback.xml

  2. Open the logback.xml file in a text editor.

  3. Locate the pattern element.

    <!--audit log-->
    <appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${cassandra.logdir}/audit/audit.log</file> <!-- logfile location -->
        <encoder>
          <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern> (1)
          <!-- the layout pattern used to format log entries -->
          <immediateFlush>true</immediateFlush>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
          <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
          <minIndex>1</minIndex>
          <maxIndex>20</maxIndex> <!-- max number of archived logs that are kept -->
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
          <maxFileSize>200MB</maxFileSize> <!-- The size of the logfile that triggers a switch to a new logfile, and the current one archived -->
        </triggeringPolicy>
      </appender>
    <logger name="SLF4JAuditWriter" level="INFO" additivity="false">
        <appender-ref ref="SLF4JAuditWriterAppender"/>
      </logger>
    (1) Pattern element
  4. Use the replace function on the message text to replace the first term with the second term. For example, to find a password set to any characters and replace the password value with xxxxx:

    %replace(%msg){"password\s*=\s*'.*'", "password='xxxxx'"}

    DataStax automatically masks the password specified in the CREATE ROLE command.

  5. Restart the node or wait for the configuration to automatically reload.
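The replace pattern in step 4 behaves like Python's re.sub with the same expression, which makes it easy to test a candidate regex before editing logback.xml:

```python
import re

# Hypothetical audit message containing a password.
msg = "CREATE ROLE jim WITH password = 'SuperSecret' AND LOGIN = true"

# Same regex and replacement as the logback replace filter above.
masked = re.sub(r"password\s*=\s*'.*'", "password='xxxxx'", msg)
print(masked)
```

Note that the greedy `.*'` extends to the last quote on the line, so if the message contains more quoted text after the password, that text is masked as well; a lazy `.*?'` keeps the match to the first closing quote.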

Managing the Audit Logger Settings

The logback.xml file sets the SLF4JAuditWriter log file configuration including location, name, roll frequency and retention, and level.

<!--audit log-->
<appender name="SLF4JAuditWriterAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${cassandra.logdir}/audit/audit.log</file> <!-- logfile location -->
    <encoder>
      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern> <!-- the layout pattern used to format log entries -->
      <immediateFlush>true</immediateFlush>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
      <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
      <minIndex>1</minIndex>
      <maxIndex>20</maxIndex> <!-- max number of archived logs that are kept -->
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
      <maxFileSize>200MB</maxFileSize> <!-- The size of the logfile that triggers a switch to a new logfile, and the current one archived -->
    </triggeringPolicy>
  </appender>
<logger name="SLF4JAuditWriter" level="INFO" additivity="false">
    <appender-ref ref="SLF4JAuditWriterAppender"/>
  </logger>

Most database activity is logged at the INFO level.

Procedure

  1. Locate the logback.xml file. The location of this file depends on the type of installation:

    • Package installations: /etc/dse/cassandra/logback.xml

    • Tarball installations: <installation_location>/resources/cassandra/conf/logback.xml

  2. Open the logback.xml file in a text editor.

  3. To configure data auditing, change the properties.

    • file - Path to the log file on the local system.

    • pattern - Layout variables used to format the log message. To hide sensitive data, see Replacing sensitive data in the audit log.

    • immediateFlush - Whether to write each event message to the log immediately.

    • fileNamePattern - Naming convention for log files when they roll.

    • maxFileSize - Size threshold that triggers a log roll (maximum log size).

  4. Restart the node or wait for the configuration to automatically reload.

Enable Authentication Debugging

When initially setting up external authentication such as Kerberos or LDAP, use server and authentication loggers with TRACE level to capture internal, LDAP, and Kerberos debugging messages in the debug.log.

Procedure

  1. Locate the logback.xml file. The location of this file depends on the type of installation:

    • Package installations: /etc/dse/cassandra/logback.xml

    • Tarball installations: <installation_location>/resources/cassandra/conf/logback.xml

  2. Add the following lines before the end of the file:

       <logger name="com.datastax.bdp.transport.server" level="TRACE"/>
       <logger name="com.datastax.bdp.cassandra.auth" level="TRACE"/>

    Only activity from the local node is recorded. Enable authentication tracing on the node where you want to troubleshoot authentication.

  3. Restart the node or wait for the configuration to automatically reload.

  4. Tail the debug.log:

    tail -f /var/log/cassandra/debug.log
  5. Log in using an account from the external server.

Example

Output from a successful LDAP login:

TRACE [IOThread-0] 2018-04-24 21:07:26,565  LdapUtils.java:484 - [ldap-fetch-user] username: dantest1 connection: org.apache.directory.ldap.client.api.LdapNetworkConnection@430680df
TRACE [IOThread-0] 2018-04-24 21:07:26,566  LdapUtils.java:499 - [ldap-fetch-user] bind to connection
TRACE [IOThread-0] 2018-04-24 21:07:26,568  LdapUtils.java:508 - [ldap-fetch-user] user_search_base: ou=users,dc=qaldap,dc=datastax,dc=lan, user_search_filter: (uid=dantest1)
TRACE [IOThread-0] 2018-04-24 21:07:26,608  LdapUtils.java:517 - [ldap-fetch-user] found entry for username: dantest1
TRACE [IOThread-0] 2018-04-24 21:07:26,610  LdapUtils.java:433 - [ldap-bind] userDN: cn=dantest1,ou=users,dc=qaldap,dc=datastax,dc=lan connection: org.apache.directory.ldap.client.api.LdapNetworkConnection@430680df
TRACE [IOThread-0] 2018-04-24 21:07:26,611  LdapUtils.java:438 - [ldap-bind] SUCCESS - bind succcessful for userDN: cn=dantest1,ou=users,dc=qaldap,dc=datastax,dc=lan
TRACE [IOThread-0] 2018-04-24 21:07:26,613  LdapUtils.java:325 - [ldap-authenticate] SUCCESS - username: dantest1, userDN: cn=dantest1,ou=users,dc=qaldap,dc=datastax,dc=lan

Example

Output from a failed LDAP login:

TRACE [IOThread-0] 2018-04-24 21:18:23,434  LdapUtils.java:303 - [ldap-authenticate] username: dantest1 not found in cache
TRACE [IOThread-0] 2018-04-24 21:18:23,434  LdapUtils.java:484 - [ldap-fetch-user] username: dantest1 connection: org.apache.directory.ldap.client.api.LdapNetworkConnection@430680df
TRACE [IOThread-0] 2018-04-24 21:18:23,434  LdapUtils.java:499 - [ldap-fetch-user] bind to connection
TRACE [IOThread-0] 2018-04-24 21:18:23,437  LdapUtils.java:508 - [ldap-fetch-user] user_search_base: ou=users,dc=qaldap,dc=datastax,dc=lan, user_search_filter: (uid=dantest1)
TRACE [IOThread-0] 2018-04-24 21:18:23,440  LdapUtils.java:517 - [ldap-fetch-user] found entry for username: dantest1
TRACE [IOThread-0] 2018-04-24 21:18:23,441  LdapUtils.java:433 - [ldap-bind] userDN: cn=dantest1,ou=users,dc=qaldap,dc=datastax,dc=lan connection: org.apache.directory.ldap.client.api.LdapNetworkConnection@430680df
TRACE [IOThread-0] 2018-04-24 21:18:23,451  LdapUtils.java:447 - [ldap-bind] ERROR - bind failed for userDN: cn=dantest1,ou=users,dc=qaldap,dc=datastax,dc=lan
...
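When scanning such traces, the stage tag in brackets (for example, [ldap-bind]) and the SUCCESS/ERROR marker identify where a login failed. A small classifier sketch (illustrative Python, not a DSE tool; the truncated sample lines are shortened from the output above):

```python
import re

# Stage tag like [ldap-bind] optionally followed by SUCCESS or ERROR.
TRACE_RE = re.compile(r"\[(?P<stage>ldap-[\w-]+)\]\s*(?P<outcome>SUCCESS|ERROR)?")

def ldap_outcome(line):
    """Return (stage, outcome) for an LdapUtils TRACE line;
    outcome is None for informational lines, and the whole
    result is None if no ldap stage tag is present."""
    m = TRACE_RE.search(line)
    return (m.group("stage"), m.group("outcome")) if m else None

ok = "TRACE [IOThread-0] ... LdapUtils.java:438 - [ldap-bind] SUCCESS - bind succcessful for userDN: cn=dantest1"
bad = "TRACE [IOThread-0] ... LdapUtils.java:447 - [ldap-bind] ERROR - bind failed for userDN: cn=dantest1"
print(ldap_outcome(ok))   # ('ldap-bind', 'SUCCESS')
print(ldap_outcome(bad))  # ('ldap-bind', 'ERROR')
```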
