How data is written to the DataStax platform

Data from the Kafka topic is written to the mapped DataStax platform database table using a batch request containing multiple write statements.

The Kafka Connect workers run one or more instances of the DataStax Apache Kafka™ Connector. Each connector instance creates a single session with the DataStax cluster, pulls records from the Kafka topic, and writes them to the mapped table using a CQL batch that contains multiple write statements.

The Connector supports DataStax Enterprise (DSE) or DataStax Distribution of Apache Cassandra™ (DDAC) nodes.
Note: All nodes in each cluster must be uniformly licensed to use the same subscription. For example, if a cluster contains 5 nodes, all 5 must be DDAC nodes, or all 5 must be DSE nodes.

The DataStax database session is created using the DSE or DDAC Java driver. A single connector instance can process records from multiple Kafka topics and write to several database tables.
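As a sketch of this, a single connector instance might consume two topics and write each to its own table over one cluster session. All names below (the `orders` and `shipments` topics, the `store_ks` keyspace, the tables, and the contact point) are hypothetical; the property keys follow the DataStax Apache Kafka Connector configuration format, but verify the exact keys and connector class against the documentation for your installed version.

```properties
# Standalone-mode properties file for one connector instance (all names hypothetical)
name=dse-sink
connector.class=com.datastax.kafkaconnector.DseSinkConnector
tasks.max=1

# One instance consumes both topics through a single session with the cluster
topics=orders,shipments
contactPoints=10.0.0.1
loadBalancing.localDc=dc1

# Each topic is mapped to its own table in the store_ks keyspace
topic.orders.store_ks.orders_by_id.mapping=id=key, amount=value.amount
topic.shipments.store_ks.shipments_by_id.mapping=id=key, status=value.status
```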

A mapping specification binds Kafka topic fields to table columns. Fields omitted from the specification are not included in the write request. Fields with null values are written to the database as UNSET (see nullToUnset). To ensure proper ordering, all records are written with the Kafka record timestamp as the database write time.
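As a sketch of these rules, suppose records on a hypothetical `orders` topic carry `amount` and `note` fields in their value. The mapping below omits `note`, so that field is never included in the write request; with `nullToUnset` left at its default of true, a null `amount` is sent as UNSET and does not overwrite the existing column value or create a tombstone. Topic, keyspace, and table names are assumptions for illustration.

```properties
# Mapping for a hypothetical orders topic (keyspace store_ks, table orders_by_id)
# value.note is not mapped, so it is excluded from the write request
topic.orders.store_ks.orders_by_id.mapping=id=key, amount=value.amount

# Default behavior shown explicitly: null fields are written as UNSET
topic.orders.store_ks.orders_by_id.nullToUnset=true
```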

Use multiple connectors when different global connect settings are required for different scenarios, such as writing to different clusters or datacenters.
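For instance, writing to two different clusters requires two connector instances, each with its own global connection settings. The sketch below shows two standalone-mode properties files; the topic, keyspace, contact point, and datacenter names are hypothetical, and the property keys should be checked against the connector version in use.

```properties
# connector-a.properties: writes orders to cluster A, datacenter dc1
name=sink-cluster-a
connector.class=com.datastax.kafkaconnector.DseSinkConnector
topics=orders
contactPoints=10.0.1.10
loadBalancing.localDc=dc1
topic.orders.store_ks.orders_by_id.mapping=id=key, amount=value.amount

# connector-b.properties: writes audit events to cluster B, datacenter dc2
name=sink-cluster-b
connector.class=com.datastax.kafkaconnector.DseSinkConnector
topics=audit
contactPoints=10.0.2.10
loadBalancing.localDc=dc2
topic.audit.audit_ks.events_by_id.mapping=id=key, detail=value.detail
```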