Starlight for Kafka
Starlight for Kafka brings native Apache Kafka® protocol support to Apache Pulsar®, enabling migration of existing Kafka applications and services to Pulsar without modifying the code. Kafka applications can now leverage Pulsar’s powerful features, such as:
- Streamlined operations with enterprise-grade multi-tenancy
- Simplified operations with a rebalance-free architecture
- Infinite event stream retention with Apache BookKeeper™ and tiered storage
- Serverless event processing with Pulsar Functions
By integrating two popular event streaming ecosystems, Starlight for Kafka unlocks new use cases and reduces barriers for users adopting Pulsar. Leverage advantages from each ecosystem and build a truly unified event streaming platform with Starlight for Kafka to accelerate the development of real-time applications and services.
This document will help you get started producing and consuming Kafka messages on a Pulsar cluster.
Starlight for Kafka Quickstart
- To start connecting Starlight for Kafka, select Kafka in the Astra Streaming Connect tab.
- When the popup appears, confirm you want to enable Kafka on your tenant.
  You will not be able to remove the Kafka namespaces created on your tenant with this step.
- Select Enable Kafka.
  Three new namespaces are created in your Astra Streaming tenant:
  - kafka for producing and consuming messages
  - __kafka for functionality
  - __kafka_unlimited for storing metadata
  A new configuration file will be generated in the Connect tab that looks like this:
  username: <tenant-name>
  password: <token:your-token>
  bootstrap.servers: kafka-aws-useast2.dev.streaming.datastax.com:9093
  schema.registry.url: https://kafka-aws-useast2.dev.streaming.datastax.com:8081
  security.protocol: SASL_SSL
  sasl.mechanism: PLAIN
- Copy and paste the code or download it as a config file (it will be called ssl.properties).
You’re now ready to connect Kafka and Pulsar.
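Because Starlight for Kafka speaks the native Kafka protocol, an existing Kafka application needs only these connection properties; the application code itself does not change. The sketch below shows roughly what that looks like from a Java producer. It is illustrative only: the class name is arbitrary, it assumes the downloaded ssl.properties file is in the working directory, and it uses the tenant-1/kafka/test-topic topic created in the example that follows, so substitute your own tenant and topic names.

import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AstraKafkaProducerSketch {
    public static void main(String[] args) throws Exception {
        // Load the connection settings generated in the Astra Streaming Connect tab
        // (assumes ssl.properties is in the working directory).
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("ssl.properties")) {
            props.load(in);
        }
        // Serializers are the only client settings the generated file does not include.
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Topic name matches the one created in the example below; adjust for your tenant.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("tenant-1/kafka/test-topic", "boo")).get();
        }
    }
}

Nothing in this snippet is Pulsar-specific; the same code runs unchanged against an Apache Kafka cluster.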
Example: Boo!
- Create a new partitioned topic in the newly created kafka namespace. For this example, we created test-topic within the kafka namespace on the tenant-1 tenant.
  This example uses tools included with the Apache Kafka tarball.
- Move the ssl.properties file you downloaded to your kafka_2.13-3.1.0/config folder. These values are required for SSL encryption. For this example, the values are:
  bootstrap.servers=kafka-aws-useast2.dev.streaming.datastax.com:9093
  security.protocol=SASL_SSL
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='tenant-1' password='token:{pulsar tenant token}';
  sasl.mechanism=PLAIN
  session.timeout.ms=45000
- Change directory to your kafka_2.13-3.1.0 folder.
- Create a Kafka producer to produce messages on tenant-1/kafka/test-topic. Once the producer is ready, it accepts standard input from the user.
  $ bin/kafka-console-producer.sh --broker-list kafka-aws-useast2.dev.streaming.datastax.com:9093 --topic tenant-1/kafka/test-topic --producer.config config/ssl.properties
  >boo
- In a new terminal window, create a Kafka consumer to consume messages from the beginning of tenant-1/kafka/test-topic (a Java version of this consumer is sketched after these steps):
  $ bin/kafka-console-consumer.sh --bootstrap-server kafka-aws-useast2.dev.streaming.datastax.com:9093 --topic tenant-1/kafka/test-topic --consumer.config config/ssl.properties --from-beginning
  boo
- Frighten yourself with boo as many times as you’d like, then return to your kafka namespace dashboard in Astra Streaming and monitor your activity.
Your Kafka messages are being produced and consumed in a Pulsar cluster!
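The console tools above map directly onto the standard Kafka client APIs. The following is a minimal sketch of the consumer side in Java, run from the Kafka folder so it can reuse config/ssl.properties; the class name and the boo-group consumer group are placeholders, and auto.offset.reset=earliest plays the role of --from-beginning for a new group.

import java.io.FileInputStream;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AstraKafkaConsumerSketch {
    public static void main(String[] args) throws Exception {
        // Reuse the connection settings from the quickstart (run from the Kafka folder).
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("config/ssl.properties")) {
            props.load(in);
        }
        props.put("group.id", "boo-group");          // placeholder consumer group name
        props.put("auto.offset.reset", "earliest");  // start from the earliest offset for a new group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("tenant-1/kafka/test-topic"));
            // Poll once and print whatever has been produced so far.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}

Run it after producing a few boo messages and it should print them, just as the console consumer did.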

What’s next?
Astra Streaming’s Kafka protocol support is based on the open-source DataStax Starlight for Kafka project.
For more on Astra Streaming, see Astra Streaming Quickstart.