DataStax Apache Kafka Connector

Deploy the DataStax Apache Kafka™ Connector to stream records from an Apache Kafka topic to your DataStax Astra DB database.

The DataStax Apache Kafka Connector distribution package includes a sample JSON properties file, dse-sink-distributed.json.sample, in its conf directory. Use this sample file as a reference when configuring your deployment.

Prerequisites

  • Download and install the DataStax Apache Kafka Connector.

  • Configure the distributed worker configuration file connect-distributed.properties to fit your needs. Use this example from DataStax as a starting point. Specify converters for the key.converter and value.converter properties that match the format of your Kafka data; see Configuring converters in the Confluent documentation for details on these properties. A minimal sketch follows this list.
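
For example, a distributed worker that consumes JSON-encoded records without embedded schemas might set its converter properties as shown below. This is a minimal, hypothetical excerpt: the broker address and group ID are placeholders, and a real distributed worker file also needs the internal offset, config, and status storage topic settings.

# Kafka brokers the worker connects to (placeholder address)
bootstrap.servers=localhost:9092

# Unique ID shared by all workers in this Connect cluster
group.id=connect-cluster

# Converters matching JSON-encoded keys and values
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Disable the embedded-schema envelope for plain JSON records
key.converter.schemas.enable=false
value.converter.schemas.enable=false

# REST port used later when registering connectors
rest.port=8083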

Procedure

  1. From the directory where you installed Apache Kafka, start the distributed worker:

bin/connect-distributed.sh config/connect-distributed.properties

The worker startup process outputs a large number of informational messages. The following message displays after the process completes:

[2019-10-13 19:49:25,385] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:852)

  2. Configure the JSON configuration file (such as dse-sink.json) to use the Astra DB secure connect bundle:

{ "name": "dse-sink",     "config":
  { "connector.class": "com.datastax.kafkaconnector.DseSinkConnector",
    "cloud.secureConnectBundle": "/path/to/secure-connect-database-name.zip",
    "auth.username": "clientId",
    "auth.password": "clientSecret" ...
  }
}
  • name: Unique name for the connector. Default: dse-sink

  • connector.class: DataStax connector Java class provided in the kafka-connect-dse-N.N.N.jar. Default: com.datastax.kafkaconnector.DseSinkConnector

  • cloud.secureConnectBundle: The full path to the secure connect bundle for your Astra DB database (secure-connect-database_name.zip).

Download the secure connect bundle from Astra Portal. If this option is specified, you must also include the auth.username and auth.password for the database user.

  • auth.username: Astra DB database username

When authorization is enabled, the DataStax connector login role must have at least modify privileges on the tables receiving data from the DataStax Apache Kafka™ Connector.

  • auth.password: Astra DB database password for the specified username
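
In addition to the connection settings above, a working sink configuration also names the Kafka topics to consume and maps record fields to table columns. The following fragment is a hypothetical illustration: the topic name (orders), keyspace, table, and column names are all placeholders, and the mapping follows the connector's topic.topic_name.keyspace.table.mapping pattern. See the DataStax Apache Kafka Connector documentation for the full set of options.

"topics": "orders",
"topic.orders.my_keyspace.my_table.mapping": "id=value.order_id, amount=value.amount"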

  3. Register the connector configuration with the distributed worker:

curl -X POST -H "Content-Type: application/json" -d @dse-sink.json "http://ip:port/connectors"

ip and port are the IP address and port number of the Kafka worker. Use the same port as the rest.port parameter set in connect-distributed.properties. The default port is 8083.

You configured the dse-sink.json or dse-sink.properties file when installing the DataStax Apache Kafka Connector.
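
After registration, one way to confirm that the connector is running is to query the same Kafka Connect REST interface. This sketch assumes the same placeholder ip and port as the registration step:

curl "http://ip:port/connectors/dse-sink/status"

A healthy deployment reports the connector and its tasks in the RUNNING state.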
