Starlight for Kafka

Starlight for Kafka brings native Apache Kafka® protocol support to Apache Pulsar®, enabling migration of existing Kafka applications and services to Pulsar without modifying the code. Kafka applications can now leverage Pulsar’s powerful features, such as:

  • Streamlined operations with enterprise-grade multi-tenancy

  • Simplified operations with a rebalance-free architecture

  • Infinite event stream retention with Apache BookKeeper™ and tiered storage

  • Serverless event processing with Pulsar Functions

By integrating two popular event streaming ecosystems, Starlight for Kafka unlocks new use cases and reduces barriers for users adopting Pulsar. Leverage advantages from each ecosystem and build a truly unified event streaming platform with Starlight for Kafka to accelerate the development of real-time applications and services.

This document will help you get started producing and consuming Kafka messages on a Pulsar cluster.

Starlight for Kafka Quickstart

  1. To enable Starlight for Kafka, select Kafka in the Astra Streaming Connect tab.

  2. When the popup appears, confirm you want to enable Kafka on your tenant.

    You will not be able to remove the Kafka namespaces created in this step individually. To remove them, you must delete the tenant itself.

  3. Select Enable Kafka.

    Three new namespaces are created in your Astra Streaming tenant:

    • kafka for producing and consuming messages

    • __kafka for functionality

    • __kafka_unlimited for storing metadata

      A new configuration file will be generated in the Connect tab that looks like this:

      username: <tenant-name>
      password: token:******
      security.protocol: SASL_SSL
      sasl.mechanism: PLAIN
  4. Copy and paste the code, or download it as a config file.

You’re now ready to connect Kafka and Pulsar.
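
Together with your tenant's bootstrap server URL, the generated values map onto a standard Kafka client properties file. A minimal sketch is shown below; the angle-bracketed names are placeholders, not generated output:

```properties
# Broker endpoint for your Astra Streaming tenant (placeholder value)
bootstrap.servers=<bootstrap-server>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# SASL/PLAIN credentials: tenant name as username, Pulsar token as password (placeholders)
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username='<tenant-name>' \
  password='token:<pulsar-token>';
```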

Example: Hello Pulsar

  1. Create a new topic in the newly created kafka namespace. For this example, we created test-topic within the kafka namespace on the tenant-1 tenant.


    This example uses tools included with the Apache Kafka tarball.

  2. Move the file you downloaded to your kafka_2.13-3.1.0/config folder. These values are required for SASL authentication over SSL. For this example, the values are:

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='tenant-1' password='token:{pulsar tenant token}';
  3. Change directory to your Kafka installation folder (for example, kafka_2.13-3.1.0).

  4. Create a Kafka producer to produce messages on tenant-1/kafka/test-topic.

    Once the producer is ready, it accepts standard input from the user.

    $ bin/kafka-console-producer.sh --broker-list <bootstrap-server> --topic tenant-1/kafka/test-topic --producer.config config/<your-config-file>.properties
    >hello pulsar
  5. In a new terminal window, create a Kafka consumer to consume messages from the beginning of tenant-1/kafka/test-topic:

    $ bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-server> --topic tenant-1/kafka/test-topic --consumer.config config/<your-config-file>.properties --from-beginning
    hello pulsar
  6. Send as many messages as you’d like, then return to your kafka namespace dashboard in Astra Streaming and monitor your activity.
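
The console commands above address the topic by its fully qualified Pulsar-style name, tenant/namespace/topic. A minimal Python sketch (a hypothetical helper for illustration, not part of Starlight for Kafka) of composing and splitting such names:

```python
def full_topic_name(tenant: str, namespace: str, topic: str) -> str:
    """Compose a fully qualified topic name, e.g. 'tenant-1/kafka/test-topic'."""
    for part in (tenant, namespace, topic):
        if not part or "/" in part:
            raise ValueError(f"invalid name component: {part!r}")
    return f"{tenant}/{namespace}/{topic}"

def split_topic_name(name: str) -> tuple[str, str, str]:
    """Split 'tenant/namespace/topic' back into its three components."""
    tenant, namespace, topic = name.split("/")
    return tenant, namespace, topic

print(full_topic_name("tenant-1", "kafka", "test-topic"))  # tenant-1/kafka/test-topic
```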

Your Kafka messages are being produced and consumed in a Pulsar cluster!


Starlight for Kafka video

Follow along with this video from our Five Minutes About Pulsar series to migrate from Kafka to Pulsar.



© 2024 DataStax | Privacy policy | Terms of use

Apache, Apache Cassandra, Cassandra, Apache Tomcat, Tomcat, Apache Lucene, Apache Solr, Apache Hadoop, Hadoop, Apache Pulsar, Pulsar, Apache Spark, Spark, Apache TinkerPop, TinkerPop, Apache Kafka and Kafka are either registered trademarks or trademarks of the Apache Software Foundation or its subsidiaries in Canada, the United States and/or other countries. Kubernetes is the registered trademark of the Linux Foundation.
