Formats of DataStax Enterprise logs

The log format is a simple set of pipe-delimited name/value pairs. Pairs are separated by the pipe symbol (|), and the name and value within each pair are separated by a colon. A name/value pair, or field, is included in a log line only when a value exists for that particular event. Some fields always have a value and are always present; other fields might not be relevant for a given operation. To make parsing with automated tools easier, the order in which fields appear (when present) in the log line is predictable. For example, the text of CQL statements is unquoted but, when present, is always the last field in the log line.

Field Label   Field Value                     Optional
host          DSE node address                no
source        client address                  no
user          authenticated user              no
timestamp     system time of log event        no
category      DML, DDL, or QUERY, for example no
type          API level operation             no
batch         batch id                        yes
ks            keyspace                        yes
cf            column family                   yes
operation     textual description             yes

The textual description value for the operation field is currently present only for CQL.
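
Because the field order is predictable and the unquoted CQL text, when present, is always the last field, a log line can be parsed with simple string handling. The following is a minimal sketch, assuming the format described above; the sample line is taken from the CQL query example later in this section, and parse_audit_line is a hypothetical helper name, not part of DSE.

# Minimal sketch: parse one DSE audit log line into a dict of fields.
# Assumes pipe-delimited name:value pairs, with the unquoted CQL text
# (the "operation" field) always appearing last when present.
def parse_audit_line(line):
    fields = {}
    rest = line.strip()
    # The CQL text in "operation" may itself contain '|' or ':',
    # so split it off first if it is present.
    marker = "|operation:"
    if marker in rest:
        rest, operation = rest.split(marker, 1)
        fields["operation"] = operation
    for pair in rest.split("|"):
        name, _, value = pair.partition(":")
        fields[name] = value
    return fields

example = ("host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]"
           "|timestamp:1351003741953|category:QUERY|type:CQL_SELECT"
           "|ks:dsp904|cf:t0|operation:select * from t0;")
print(parse_audit_line(example)["type"])   # prints: CQL_SELECT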

Auditing is completely separate from authorization, although the data points logged include the client address and the authenticated user, which may be a generic user if the default authenticator is not overridden. Logging of requests can be activated for any or all of the categories described in Enabling data auditing in DataStax Enterprise.

CQL logging examples

Generally, SELECT queries are placed into the QUERY category. The INSERT, UPDATE, and DELETE statements are categorized as DML. CQL statements that affect schema, such as CREATE KEYSPACE and DROP KEYSPACE, are categorized as DDL.

CQL USE

USE dsp904;

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351003707937|category:DML|type:SET_KS|ks:dsp904|operation:use dsp904;

CLI USE

USE dsp904;

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351004648848|category:DML|type:SET_KS|ks:dsp904

CQL query

SELECT * FROM t0;

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351003741953|category:QUERY|type:CQL_SELECT|ks:dsp904|cf:t0|operation:select * from t0;

CQL BATCH

BEGIN BATCH
  INSERT INTO t0(id, field0) VALUES (0, 'foo');
  INSERT INTO t0(id, field0) VALUES (1, 'bar');
  DELETE FROM t1 WHERE id = 2;
APPLY BATCH;

host:192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351005482412|category:DML|type:CQL_UPDATE
  |batch:fc386364-245a-44c0-a5ab-12f165374a89|ks:dsp904|cf:t0
  |operation:INSERT INTO t0 ( id , field0 ) VALUES ( 0 , 'foo' )

host:192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351005482413|category:DML|type:CQL_UPDATE
  |batch:fc386364-245a-44c0-a5ab-12f165374a89|ks:dsp904|cf:t0
  |operation:INSERT INTO t0 ( id , field0 ) VALUES ( 1 , 'bar' )

host:192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351005482413|category:DML|type:CQL_DELETE
  |batch:fc386364-245a-44c0-a5ab-12f165374a89|ks:dsp904|cf:t1
  |operation:DELETE FROM t1 WHERE id = 2

CQL DROP KEYSPACE

DROP KEYSPACE dsp904;

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351004777354|category:DDL|type:DROP_KS
  |ks:dsp904|operation:drop keyspace dsp904;

CQL prepared statement

host:/10.112.75.154|source:/127.0.0.1|user:allow_all
  |timestamp:1356046999323|category:DML|type:CQL_UPDATE
  |ks:ks|cf:cf|operation:INSERT INTO cf (id, name) VALUES (?, ?) [id=1,name=vic]

Thrift batch_mutate

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351005073561|category:DML|type:INSERT
  |batch:7d13a423-4c68-4238-af06-a779697088a9|ks:Keyspace1|cf:Standard1

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351005073562|category:DML|type:INSERT
  |batch:7d13a423-4c68-4238-af06-a779697088a9|ks:Keyspace1|cf:Standard1

host:/192.168.56.1|source:/192.168.56.101|user:#User allow_all groups=[]
  |timestamp:1351005073562|category:DML|type:INSERT
  |batch:7d13a423-4c68-4238-af06-a779697088a9|ks:Keyspace1|cf:Standard1

DataStax Java Driver queries

host:ip-10-85-22-245.ec2.internal/10.85.22.245|source:/127.0.0.1|user:anonymous
  |timestamp:1370537557052|category:DDL|type:ADD_KS
  |ks:test|operation:create keyspace test with replication = {'class':'NetworkTopologyStrategy', 'Analytics': 1};

host:ip-10-85-22-245.ec2.internal/10.85.22.245|source:/127.0.0.1|user:anonymous
  |timestamp:1370537557208|category:DDL|type:ADD_CF
  |ks:test|cf:new_cf|operation:create COLUMNFAMILY test.new_cf ( id text PRIMARY KEY , col1 int, col2 ascii, col3 int);

host:ip-10-85-22-245.ec2.internal/10.85.22.245|source:/127.0.0.1|user:anonymous
  |timestamp:1370537557236|category:DML|type:CQL_UPDATE
  |ks:test|cf:new_cf|operation:insert into test.new_cf (id, col1, col2, col3) values ('test1', 42, 'blah', 3);

host:ip-10-85-22-245.ec2.internal/10.85.22.245|source:/127.0.0.1|user:anonymous
  |timestamp:1370537704885|category:QUERY|type:CQL_SELECT
  |ks:test|cf:new_cf|operation:select * from test.new_cf;

Batch updates

Batch updates, whether received via a Thrift batch_mutate call or in a CQL BEGIN BATCH ... APPLY BATCH block, are logged as follows: a UUID is generated for the batch, and each individual operation is then reported separately, with an extra field containing the batch id.
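
Because every entry belonging to a batch carries the same batch id, the individual operations of one batch can be reassembled from the log. A minimal sketch, reusing the hypothetical parse_audit_line helper from the earlier sketch and assuming one audit entry per line of input:

from collections import defaultdict

# Sketch: group audit log entries by batch id so that all operations
# from one batch (CQL BATCH or Thrift batch_mutate) can be inspected
# together. Assumes parse_audit_line() from the earlier example and an
# iterable of raw audit log lines.
def group_by_batch(lines):
    batches = defaultdict(list)
    for line in lines:
        entry = parse_audit_line(line)
        if "batch" in entry:          # only batched operations carry a batch id
            batches[entry["batch"]].append(entry)
    return batches

# Example usage, assuming the audit log was written to "audit.log":
# for batch_id, entries in group_by_batch(open("audit.log")).items():
#     print(batch_id, [e["type"] for e in entries])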
