DataStax Apache Kafka Connector release notes
Release notes for the open source DataStax Apache Kafka™ Connector, which works with:
- DataStax Astra cloud databases
- DataStax Enterprise (DSE) 4.7 and later databases
- Open source Apache Cassandra® 2.1 and later databases
DataStax Apache Kafka Connector 1.4.0 release notes
22 July 2020
1.4.0 Changes and enhancements
- Starting with version 1.4.0, DataStax Apache Kafka™ Connector is available under the Apache-2.0 license as open-source software (OSS). This change makes it possible for the open-source community of developers to contribute features that enable streaming Kafka data into Apache Cassandra, DataStax Enterprise (DSE), and DataStax Astra databases.
- The public GitHub repo is https://github.com/datastax/kafka-sink.
- Sample resource files provided by the installation, which were formerly named with a dse prefix, have new names:
  - conf/cassandra-sink-distributed.json.sample
  - conf/cassandra-sink-standalone.properties.sample
- Updated the public kafka-examples GitHub repo with new class names, as part of the open-source DataStax Apache Kafka Connector 1.4.0 release. (KAF-196)
- Improved documentation about logging configuration. See Configure logging for Kafka Connector. (KAF-203)
1.4.0 Resolved issues
- In prior releases, the ignoreErrors setting only ignored driver errors. For example, errors that occurred during the conversion phase were treated as fatal, and usually caused the Kafka connector task to crash. Now you can use ignoreErrors to ignore specific types of runtime errors, or all of them, depending on the settings you choose. See Configure error handling. (KAF-200)
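As a rough sketch, the broadened setting might appear in a sink connector properties file like this; the connector name, topic, and the specific allowed values are illustrative here, so check the Configure error handling topic for the exact accepted values:

```properties
# Hypothetical sink connector config fragment; name and topic are illustrative.
name=cassandra-sink-example
topics=orders
# Ignore all runtime errors, including record-conversion errors;
# before 1.4.0, only driver errors could be ignored.
ignoreErrors=all
```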
DataStax Apache Kafka Connector 1.3.1 release notes
12 March 2020
1.3.1 Resolved issue
DataStax Apache Kafka Connector 1.3.1 removed an unnecessary TinkerPop dependency in the DataStax Java driver (KAF-189).
DataStax Apache Kafka Connector 1.3.0 release notes
02 March 2020
1.3.0 Changes and enhancements
- A new histogram metric, batchSizeInBytes, shows the calculated size of batch statements (KAF-174). See DataStax Apache Kafka Connector - Batch size metrics.
- Enhanced rate-based metrics for failed Kafka topic records (KAF-72, KAF-175). See Failed Kafka topic record metrics.
- You can selectively update maps and User Defined Types (UDTs) based on present Kafka fields (KAF-182). See Selectively update maps and UDTs based on Kafka fields.
- DataStax Apache Kafka Connector 1.3.0 adds support for the DataStax Unified Driver 4.4.0. For background information, read the Better drivers for Cassandra blog post. Then refer to this start page for the DataStax drivers documentation.
- The writetime timestamp mapping documentation topic has been improved by emphasizing that the provided column must be a number (KAF-185). See Specify writetime timestamp column.
- You can use the now() function in mappings (KAF-173). For examples, see The now() function in mappings.
- Starting in this 1.3.0 release, you can optionally provide a CQL query that runs when each new record arrives on the mapped Kafka topic (KAF-180). See Provide CQL queries in mappings.
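To illustrate both features together, here is a hypothetical config fragment; the topic, keyspace, table, and column names are invented for this sketch, and the exact supported syntax is described in The now() function in mappings and Provide CQL queries in mappings:

```properties
# Map a column to the time the record is processed, using now()
topic.stocks.ks1.ticks.mapping=symbol=key.symbol, price=value.price, updated=now()
# Optionally run a custom CQL query for each arriving record (KAF-180)
topic.stocks.ks1.ticks.query=INSERT INTO ks1.ticks (symbol, price, updated) VALUES (:symbol, :price, now())
```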
- The Kafka connect-api 2.4 introduced a new way of handling BigDecimal in JsonConverter, with a new decimal.format config setting. This new option defaults to BASE64 to maintain the previous behavior. However, you have the option of changing decimal.format to NUMERIC to serialize decimal values as normal JSON numbers. The decimal.format setting works only with Kafka connect-api 2.4 and later, DataStax Apache Kafka Connector 1.3.0 (this release, with KAF-181), and Confluent 5.4.0 and later. Here's the behavior:
  - If the client sets decimal.format to BASE64 (or leaves it unset), the resulting deserialized BigDecimal is of type String, which needs to be decoded.
  - If the client sets decimal.format to NUMERIC, the resulting deserialized BigDecimal is of type DoubleNode.
  - The decimal.format setting is per key or value. You must set BASE64 or NUMERIC for value.converter.decimal.format:
    value.converter.decimal.format={BASE64 | NUMERIC}
    For key.converter.decimal.format, the default of BASE64 is backward compatible with Confluent 5.4.0:
    key.converter.decimal.format={BASE64 | NUMERIC}
  Attention: The JSON converter automatically deserializes using either format. Be sure to upgrade your consumer applications and sink connectors before changing source connector converters to use the NUMERIC format, should you choose to do so. Refer to https://cwiki.apache.org/confluence/display/KAFKA/KIP-481%3A+SerDe+Improvements+for+Connect+Decimal+type+in+JSON
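For example, a Connect worker (connect-api 2.4 or later) using JsonConverter could opt into plain JSON numbers for record values while keeping the backward-compatible default for keys; this is a minimal sketch of the worker config, not a complete configuration:

```properties
key.converter=org.apache.kafka.connect.json.JsonConverter
# Default: Base64-encoded decimals, backward compatible with Confluent 5.4.0
key.converter.decimal.format=BASE64
value.converter=org.apache.kafka.connect.json.JsonConverter
# Serialize decimal values as plain JSON numbers (requires connect-api 2.4+)
value.converter.decimal.format=NUMERIC
```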
- You can use the datastax-java-driver.basic.contact-point setting with the DataStax Java driver (KAF-155), instead of the combination of the two deprecated configuration parameters, contactPoints and port. See Mapping Kafka Connector settings to Java driver properties.

DataStax Apache Kafka Connector 1.2.1 release notes
16 December 2019
1.2.1 Changes and enhancements
DataStax Apache Kafka Connector 1.2.1 added support for topic-to-table mappings with table rows of open source Apache Cassandra® databases (KAF-165), in addition to the existing support for:
- DataStax Astra cloud databases (DataStax Astra on AWS | DataStax Astra on GCP)
- DataStax Enterprise (DSE) 4.7 and later databases
DataStax Apache Kafka Connector 1.2.0 release notes
12 November 2019
1.2.0 Changes and enhancements
- You can deploy DataStax Apache Kafka Connector to stream records from an Apache Kafka topic to a cloud-based DataStax Astra database by using the cloud.secureConnectBundle setting. See Streaming data with the DataStax Apache Kafka Connector. (KAF-143) Note: DataStax Astra Open Beta participants can download the secure connect bundle from the DataStax Cloud console after creating an Astra database.
- You can pass all DataStax Apache Kafka Connector settings to the DataStax Java driver directly by using the datastax-java-driver prefix. See Pass Kafka Connector settings directly to the DataStax Java driver. (KAF-79)
- In the DataStax Apache Kafka Connector topic mapping, you can optionally extract a value from the record's header by using header.header-field-name. See Extract Kafka record header values. (KAF-142)
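Taken together, the 1.2.0 additions might appear in a connector config like the following sketch; the bundle path, driver setting, topic, keyspace, table, and header field name are all illustrative:

```properties
# Connect to an Astra database via its secure connect bundle (KAF-143)
cloud.secureConnectBundle=/path/to/secure-connect-db.zip
# Pass a setting directly through to the DataStax Java driver (KAF-79)
datastax-java-driver.basic.request.timeout=5 seconds
# Map a value extracted from the Kafka record header (KAF-142)
topic.orders.ks1.orders.mapping=id=key, region=header.origin
```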
DataStax Apache Kafka Connector 1.1.1 release notes
23 September 2019
1.1.1 Changes and enhancements
- This release adds a new parameter, ignoreErrors. When set to true, it allows the Kafka Connector to continue processing records even after an error occurred on the prior record. Refer to Configure error handling. (KAF-132)
- DataStax Apache Kafka Connector has been upgraded to use the latest:
- DataStax Enterprise Java Driver 2.2.0
- Apache Cassandra OSS Driver 4.2.0
1.1.1 Resolved issues
- Bootstrapping node results in task failure and NPE. (KAF-126)
- The dse-reference.conf file was not loaded because the wrong config loader was used. (KAF-135)
DataStax Apache Kafka Connector 1.1.0 release notes
20 May 2019
1.1.0 Changes and enhancements
- New mapping properties:
  - __timestamp. By setting this property you can specify which column should be used as the write-time timestamp when doing an insert to DSE. Refer to Specify writetime timestamp column. (KAF-46)
  - __ttl. By setting this property you can specify which column is used as the TTL (Time-to-Live) when doing an insert to DSE. Refer to Setting row-level TTL values from Kafka fields. (KAF-107)
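A hypothetical mapping using both properties together (the topic, keyspace, table, and record field names below are invented for this sketch):

```properties
# Take the write-time timestamp from value.ts (KAF-46)
# and the row TTL from value.expire_after (KAF-107)
topic.events.ks1.events.mapping=id=key.id, data=value.data, __timestamp=value.ts, __ttl=value.expire_after
```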
- Counters and histograms are now applicable per topic, keyspace, and table. Previously, counters and histograms were implemented at a global level. The change allows for finer granularity in reporting operations. As a result, the name parameter value in objectName has changed. (KAF-72)
  For recordCount:
  - From: objectName='com.datastax.kafkaconnector:connector=*,name=recordCount'
  - To: objectName='com.datastax.kafkaconnector:connector=*,name=topic.keyspace.table.recordCount'
  For failedRecordCount:
  - From: objectName='com.datastax.kafkaconnector:connector=*,name=failedRecordCount'
  - To: objectName='com.datastax.kafkaconnector:connector=*,name=topic.keyspace.table.failedRecordCount'
  Refer to Metrics for the processed Kafka topic records and Failed Kafka topic record metrics.
- Support for the Confluent Kafka Connect 2.1.0 and 2.2.0 API. (KAF-81, KAF-109).
- Kafka topic names may now include one or more period (.) characters. Example: org.datastax.init.event.history (KAF-104)
- In cases where a Kafka producer writes a record's entire value as null, DataStax Apache Kafka Connector now allows the DSE record to be deleted, provided:
  - topic.<topic>.<keyspace>.<table>.deletesEnabled is set to "true" in the configuration
  - The record's primary key and clustering key columns are present for the event: {"key":{"symbol":"DEC", "industry": "tech"}, "value": null}
  Note: The connector continues to support the scenario where the DSE DELETE can be performed when all field values (other than the primary key and clustering keys) are null. (KAF-113)
- To allow for the recording of latency records, DataStax Apache Kafka Connector dynamically sets the default value of advanced.metrics.session.cql-requests.highest-latency so that it exceeds the configured request-timeout value. (KAF-115)
- DataStax Apache Kafka Connector has been upgraded to use the externally released version of the DSE Java Driver 2.x. (KAF-112)
1.1.0 Resolved issues
- java.lang.ArrayIndexOutOfBoundsException while inserting. (KAF-114)
DataStax Apache Kafka Connector 1.0 release notes
5 December 2018
- JSON, Avro, Struct, Primitive Types mapping support. (KAF-1, KAF-3, KAF-7)
- DSE data types support. (KAF-4)
- Configurable row TTL. (KAF-6)
- Configurable consistency level. (KAF-9)
- Treat nulls as unset. (KAF-11)
- Support multiple topics for single connector instance. (KAF-14)
- Report metrics via JMX. (KAF-15)
- Configurable max request rate with maxConcurrentRequests. (KAF-16)
- Support for connections to DataStax Enterprise (DSE) 5.0 and later databases.
- Connector to DSE SSL. (KAF-18)
- Connector to DSE username/password authentication. (KAF-19)
- Connector to DSE Kerberos authentication. (KAF-20)
- Configurable deletes. (KAF-21)
- Configurable date/time formats. (KAF-26)
- Mapping single topic to multiple DSE tables. (KAF-43)
- Connector to DSE compression. (KAF-45)
- Configurable connector to DSE execution timeout with queryExecutionTimeout. (KAF-49)
- Configurable max statements per batch with maxNumberOfRecordsInBatch. (KAF-60)
- Configurable connections per DSE host with connectionPoolLocalSize. (KAF-95)