Installing the Starlight for Kafka extension
This document will help you get started producing and consuming Kafka messages on a Pulsar cluster.
This guide focuses on installing the extension into an existing Pulsar cluster. The "Getting started with the Starlight for Kafka extension" guide provides more options.
Prerequisites
Starlight for Kafka requires the following Pulsar versions and Kafka clients.
Starlight for Kafka Version | Compatible Pulsar version | Supported Kafka client
---|---|---
2.10 | 2.10.x | 0.10, 1.x, 2.x, 3.x
2.8 | 2.8.x | 0.10, 1.x, 2.x, 3.x
Starlight for Kafka supports all major Kafka clients (0.10, 1.x, 2.x, 3.x).
Starlight for Kafka supports Kafka Streams and the Kafka CLI.
Starlight for Kafka requires JDK 11.
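If you are not sure which JDK is active on the machine where you will run the broker, you can check before installing; the exact output depends on your JDK distribution.
# Look for a version string that starts with 11
java -version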
Download
- If this is the first time you’ve used protocol handlers with your Pulsar deployment, create a protocols folder in the root of your Pulsar directory.
- Download pulsar-protocol-handler-kafka-2.10.3.4.nar here.
- Copy the .nar file to your Pulsar protocols directory.
- Proceed to Configure protocol handler.
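As a concrete sketch, assuming the .nar was downloaded to your current directory and that $PULSAR_HOME points at the root of your Pulsar installation (a placeholder for your environment), the create-and-copy steps could look like this:
# Create the protocols folder if it does not exist yet, then copy the protocol handler
mkdir -p $PULSAR_HOME/protocols
cp pulsar-protocol-handler-kafka-2.10.3.4.nar $PULSAR_HOME/protocols/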
Build Starlight for Kafka protocol handler from source (optional)
If you prefer, build the protocol handler .nar from source.
- Clone the Starlight for Kafka GitHub repo to your local machine and change directory into the repo.
git clone https://github.com/datastax/starlight-for-kafka.git
cd starlight-for-kafka
Starlight for Kafka requires JDK 11. Set jenv global 11 to use JDK 11.
- Build the Maven project.
mvn clean install -DskipTests
You will get output like this:
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:51 min
[INFO] Finished at: 2022-06-24T12:04:14-04:00
- If this is the first time you’ve used protocol handlers with your Pulsar deployment, create a protocols folder in the root of your Pulsar directory.
- The protocol handler .nar is now available at starlight-for-kafka/kafka-impl/target/pulsar-protocol-handler-kafka-2.10.3.4.nar.
- Copy the .nar file to your Pulsar protocols directory and proceed to Configure protocol handler.
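As a sketch, assuming you are still in the starlight-for-kafka repo root and that $PULSAR_HOME points at your Pulsar installation directory (a placeholder for your environment), the copy step could look like this:
# Create the protocols folder if it does not exist yet, then copy the built .nar
mkdir -p $PULSAR_HOME/protocols
cp kafka-impl/target/pulsar-protocol-handler-kafka-2.10.3.4.nar $PULSAR_HOME/protocols/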
Configure protocol handler
Configure the Pulsar broker to run the Starlight for Kafka protocol handler as a plugin by adding the following settings to the Pulsar configuration file broker.conf (a consolidated example follows these steps).
- Add these configuration values to broker.conf:
messagingProtocols=kafka
protocolHandlerDirectory=./protocols
allowAutoTopicCreationType=partitioned

Property | Default Value | Suggested Value
---|---|---
messagingProtocols | | kafka
protocolHandlerDirectory | ./protocols | Location of Starlight for Kafka NAR file
allowAutoTopicCreationType | non-partitioned | partitioned

allowAutoTopicCreationType is set to non-partitioned by default. Since topics are partitioned by default in Kafka, it’s better to avoid creating non-partitioned topics unless Kafka clients need to interact with existing non-partitioned topics.
- Set Kafka listeners. kafkaListeners is a comma-separated list of listeners and the host/IP and port to which Kafka binds for listening.
kafkaListeners=PLAINTEXT://127.0.0.1:9092
- Set Kafka advertised listeners. kafkaAdvertisedListeners is a comma-separated list of listeners with their host/IP and port. kafkaAdvertisedListeners is not required unless you want to expose another address to the Kafka client; it defaults to the same address as kafkaListeners.
kafkaAdvertisedListeners=PLAINTEXT://127.0.0.1:9092
- Set offset management. Offset management is required because Starlight for Kafka depends upon Pulsar broker-level entry metadata.
brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
- Disable the deletion of inactive topics. This is not required, but it is very important for Starlight for Kafka. By default, Pulsar deletes inactive partitions of a partitioned topic without deleting the metadata of the partitioned topic, and Starlight for Kafka cannot recreate the missing partitions if brokerDeleteInactiveTopicsEnabled is set to true.
brokerDeleteInactiveTopicsEnabled=false
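Putting the steps above together, the Starlight for Kafka portion of broker.conf ends up looking like the sketch below; the 127.0.0.1:9092 addresses are placeholders for your own host/IP and port.
# Load and locate the Starlight for Kafka protocol handler
messagingProtocols=kafka
protocolHandlerDirectory=./protocols
# Kafka topics are partitioned by default
allowAutoTopicCreationType=partitioned
# Bind address for Kafka traffic (placeholder)
kafkaListeners=PLAINTEXT://127.0.0.1:9092
# Optional; only needed to expose a different address to Kafka clients
kafkaAdvertisedListeners=PLAINTEXT://127.0.0.1:9092
# Required for offset management
brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
# Keep partitioned-topic partitions and metadata consistent
brokerDeleteInactiveTopicsEnabled=false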
Additional configurations
- Set both retention and time-to-live (TTL) policies for Starlight for Kafka namespaces (see the pulsar-admin sketch after this list). If you only configure retention without configuring TTL, messages on Starlight for Kafka topics can never be deleted, because Starlight for Kafka does not update a durable cursor.
- If a Pulsar consumer and a Kafka consumer both subscribe to the same topic with the same subscription (or group) name, the two consumers consume messages independently and do not share the same subscription, even though the subscription name of the Pulsar client is the same as the group name of the Kafka client.
- Starlight for Kafka supports interaction between Pulsar clients and Kafka clients by default. If your topic is used only by Pulsar clients or only by Kafka clients, you can set entryFormat=kafka in broker.conf for better performance.
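For the retention and TTL policies mentioned above, a minimal pulsar-admin sketch might look like the following; the namespace public/default and the specific limits (10 GB, 3 days) are placeholder values to adapt to your requirements.
# Retention: keep up to 10 GB or 3 days of acknowledged data (placeholder values)
bin/pulsar-admin namespaces set-retention public/default --size 10G --time 3d
# TTL: automatically acknowledge messages older than 3 days (259200 seconds)
bin/pulsar-admin namespaces set-message-ttl public/default --messageTTL 259200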
Test
After you have installed the Starlight for Kafka protocol handler and modified the Pulsar broker configuration, verify that your Starlight for Kafka deployment works by producing messages with a Kafka client and consuming them on Pulsar.
- Restart your Pulsar brokers.
- Download Kafka 3.0.0 and untar the release package.
tar -xzf kafka-3.0.0.tgz
cd kafka-3.0.0
- Run the Kafka command-line consumer to listen for messages from the server. Here we’re using localhost and Pulsar standalone.
bin/kafka-console-consumer.sh --bootstrap-server PLAINTEXT://127.0.0.1:9092 --topic test --from-beginning
- Create a consumer on Pulsar to consume messages. Here we’re adding -n 0 to tell Pulsar to continue running instead of closing the connection after consuming a message.
pulsar-client consume test -s "my-subscription" -n 0
- Run the Kafka command-line producer to send messages to the server, and send a message.
bin/kafka-console-producer.sh --bootstrap-server PLAINTEXT://127.0.0.1:9092 --topic test
>hi pulsar, it's me, kafka!
- If Starlight for Kafka is working, your message will appear on your Pulsar consumer.
----- got message -----
key:[null], properties:[], content:hi pulsar, it's me, kafka!
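If you also want to sanity-check the opposite direction (Pulsar producer to Kafka consumer, which works with the default entryFormat), a quick sketch assuming the same standalone setup is to send a message from the Pulsar CLI and watch it arrive on the Kafka console consumer you started earlier:
pulsar-client produce test -m "hi kafka, it's me, pulsar!"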
What’s next?
You can configure and manage Starlight for Kafka based on your requirements. Check the following guides for more details.