CDC for Astra DB

CDC for Astra DB automatically captures changes in real time, de-duplicates the changes, and streams the clean set of changed data into Astra Streaming where it can be processed by client applications or sent to downstream systems.

Astra Streaming processes data changes via a Pulsar topic. By design, the Astra Streaming Change Data Capture (CDC) component is simple: there is a 1:1 correspondence between a table and a single Pulsar topic.

This doc shows you how to create a CDC connector for your Astra DB deployment and send change data to an Elasticsearch sink.

Enabling CDC for Astra DB will result in increased costs based on your Astra Streaming usage. See Pricing.

Supported data structures

The following Cassandra CQL 3.x data types (with the associated AVRO type or logical type) are supported for CDC for Astra DB:

  • ascii (string)

  • bigint (long)

  • blob (bytes)

  • boolean (boolean)

  • counter (counter)

  • date (date)

  • decimal (cql_decimal)

  • double (double)

  • float (float)

  • inet (string)

  • int (int)

  • smallint (int)

  • text (string)

  • timeuuid (uuid)

  • tinyint (int)

  • uuid (uuid)

  • varchar (string)

  • varint (cql_varint)
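If you are validating table schemas in client code, the mapping above can be restated as a small lookup table. A minimal sketch that simply restates the list above; the function name is illustrative and not part of any Astra API:

```python
# CQL 3.x type -> AVRO type (or logical type), per the supported-types list above.
CQL_TO_AVRO = {
    "ascii": "string",
    "bigint": "long",
    "blob": "bytes",
    "boolean": "boolean",
    "counter": "counter",
    "date": "date",
    "decimal": "cql_decimal",
    "double": "double",
    "float": "float",
    "inet": "string",
    "int": "int",
    "smallint": "int",
    "text": "string",
    "timeuuid": "uuid",
    "tinyint": "int",
    "uuid": "uuid",
    "varchar": "string",
    "varint": "cql_varint",
}

def avro_type_for(cql_type):
    """Return the AVRO (logical) type for a CQL type, or None if CDC does not support it."""
    return CQL_TO_AVRO.get(cql_type.lower())
```

A None result means the column would be omitted from events sent to the data topic, as described below.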

Cassandra static columns are supported:

  • On row-level updates, static columns are included in the message value.

  • On partition-level updates, the clustering keys are null in the message key. The message value only has static columns on INSERT/UPDATE operations.

Columns with unsupported data types are omitted from the events sent to the data topic. If a row update contains both supported and unsupported data types, the event includes only the columns with supported data types.

AVRO interpretation

Astra DB keys are strings, while CDC produces AVRO messages, which are structures. Converting some AVRO structures requires additional tooling, which can result in unexpected output.

The table below describes the conversion of AVRO logical types. The record type is a schema containing the listed fields.

Table 1. AVRO complex types
  • collections (lists, sets): AVRO type array. Sets and lists are treated as the AVRO array type, with the attribute items containing the schema of the array's items.

  • decimal: AVRO type record, with fields BIG_INT and DECIMAL_SCALE. The Cassandra DECIMAL type is converted to a record with the cql_decimal logical type.

  • duration: AVRO type record, with fields CQL_DURATION_MONTHS, CQL_DURATION_DAYS, and CQL_DURATION_NANOSECONDS. The Cassandra DURATION type is converted to a record with the cql_duration logical type.

  • maps: AVRO type map. The Cassandra MAP type is converted to the AVRO map type, but the keys are converted to strings. For complex key types, the key is represented in JSON.
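As an example of the extra tooling such conversions can require, here is a sketch of rebuilding a Cassandra DECIMAL from a cql_decimal record. It assumes the consumer sees the BIG_INT field as the unscaled value in signed big-endian bytes and DECIMAL_SCALE as an int; verify this against the records your client actually deserializes:

```python
from decimal import Decimal

def decode_cql_decimal(record):
    """Rebuild a Cassandra DECIMAL from a cql_decimal AVRO record.

    Assumes BIG_INT holds the unscaled value as signed big-endian bytes
    and DECIMAL_SCALE holds the scale (digits right of the decimal point).
    """
    unscaled = int.from_bytes(record["BIG_INT"], byteorder="big", signed=True)
    return Decimal(unscaled).scaleb(-record["DECIMAL_SCALE"])

# A value of 12.34 would arrive as unscaled 1234 with scale 2:
value = decode_cql_decimal({"BIG_INT": (1234).to_bytes(2, "big", signed=True),
                            "DECIMAL_SCALE": 2})
```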

Limitations

CDC for Astra DB has the following limitations:

  • Does not manage table truncates.

  • Does not sync data that existed before the CDC agent started.

  • Does not replay logged batches.

  • Does not manage time-to-live (TTL).

  • Does not support range deletes.

  • CQL column names must not match a Pulsar primitive type name (for example, INT32).

  • Does not support multi-region.

Creating a tenant and a topic

  1. In astra.datastax.com, select Create Streaming.

  2. Enter the name for your new streaming tenant and select a provider.

  3. Select Create Tenant.

Use the default persistent and non-partitioned topic.

Astra DB CDC can only be used in a region that supports both Astra Streaming and Astra DB. See Regions for more information.

Creating a table

  1. In your database, create a table with a primary key column:

    CREATE TABLE IF NOT EXISTS <keyspacename>.tbl1 (key text PRIMARY KEY, c1 text);
  2. Confirm you created your table:

    SELECT * FROM <keyspacename>.tbl1;


Connecting to CDC for Astra DB

  1. Select the CDC tab in your database dashboard.

  2. Select Enable CDC.

  3. Complete the fields to connect CDC.

  4. Select Enable CDC. Once created, your CDC connector appears in the CDC list.

Connecting Elasticsearch sink

Once you have created your CDC connector, connect an Elasticsearch sink to it. DataStax recommends using the default Astra DB settings.

  1. Select Add Elastic Search Sink from the database CDC console to apply the default settings.

  2. Use your Elasticsearch deployment to complete the fields.

    To find your Elasticsearch URL, navigate to your deployment in the Elastic Cloud console. Copy the Elasticsearch endpoint into the Elastic Search URL field.
  3. Complete the remaining fields.

    Most values will auto-populate. These values are recommended:

    • ignoreKey as false

    • nullValueAction as DELETE

    • schemaEnable as true

  4. When the fields are completed, select Create.

If creation is successful, <sink-name> created successfully appears at the top of the screen. You can confirm your new sink was created in the Sinks tab.
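If you prefer scripting sink creation over using the console, the same settings can be captured in a config file. A hedged sketch: the field names below follow the Pulsar Elasticsearch sink's configuration conventions, but the exact names your tooling expects may differ, and the URL and credentials are placeholders:

```json
{
  "elasticSearchUrl": "https://<your-deployment>.es.<region>.<cloud>.elastic-cloud.com:9243",
  "indexName": "<index_name>",
  "username": "elastic",
  "password": "<password>",
  "ignoreKey": false,
  "nullValueAction": "DELETE",
  "schemaEnable": true
}
```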


Sending messages

Let’s process some changes with CDC.

  1. Go to the CQL console.

  2. Modify the table you created.

    INSERT INTO <keyspacename>.tbl1 (key,c1) VALUES ('32a','bob3123');
    INSERT INTO <keyspacename>.tbl1 (key,c1) VALUES ('32b','bob3123b');
  3. Confirm the changes you’ve made:

    SELECT * FROM <keyspacename>.tbl1;


Confirming ECS is receiving data

To confirm ECS is receiving your CDC changes, use a curl request to your ECS deployment.

  1. Get your index name from your ECS sink tab.
  2. Issue your curl request with your Elastic username, password, and index name:

    curl -u <username>:<password> \
      -XGET "https://asdev.es.westus2.azure.elastic-cloud.com:9243/<index_name>.tbl1/_search?pretty=true" \
      -H 'Content-Type: application/json'

    If you have a trial account, the username is elastic.

You will receive a JSON response with your changes to the index, which confirms Astra Streaming is sending your CDC changes to your ECS sink. The relevant hits in the response look like this:

{
    "_index" : "index.tbl1",
    "_type" : "_doc",
    "_id" : "32a",
    "_score" : 1.0,
    "_source" : {
        "c1" : "bob3123"
    }
}
{
    "_index" : "index.tbl1",
    "_type" : "_doc",
    "_id" : "32b",
    "_score" : 1.0,
    "_source" : {
        "c1" : "bob3123b"
    }
}
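To check the response programmatically, you can pull each hit's _id (the row's primary key) and _source out of the search result. A minimal sketch using the sample documents above; in a real response, the hits sit inside the API's hits.hits array as shown here:

```python
import json

# The two sample documents above, wrapped in the search API's
# "hits" -> "hits" array structure.
response = json.loads("""
{
  "hits": {
    "hits": [
      {"_index": "index.tbl1", "_id": "32a", "_source": {"c1": "bob3123"}},
      {"_index": "index.tbl1", "_id": "32b", "_source": {"c1": "bob3123b"}}
    ]
  }
}
""")

# Map each document id (the Cassandra primary key) to its synced columns.
rows = {hit["_id"]: hit["_source"] for hit in response["hits"]["hits"]}
```

Each key in rows matches a primary key you inserted, and each value holds the synced columns.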