Kafka
- Supported Kafka versions: 3.x
Kafka can be used as an intermediary buffer between the collector and the actual storage backend. Jaeger can be configured to act both as the collector that exports trace data to a Kafka topic and as the ingester that reads data from Kafka and writes it to a storage backend.
Writing to Kafka is particularly useful for building post-processing data pipelines.
Kafka also has the following officially supported resources available from the community:
- Docker container for getting a single node up quickly
- Helm chart by Bitnami
- Strimzi Kubernetes Operator
Configuration
Please refer to these sample configuration files:
- collector: config-kafka-collector.yaml
- ingester: config-kafka-ingester.yaml
Jaeger uses the Kafka exporter and receiver from the opentelemetry-collector-contrib
repository. Please refer to their respective READMEs for configuration details.
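For orientation, here is a minimal sketch of the Kafka-specific fragments you would expect to see in those files: a `kafka` exporter on the collector side and a `kafka` receiver on the ingester side. The broker address, topic name, and encoding below are illustrative assumptions; the sample files above and the component READMEs are the authoritative reference for the full set of options.

```yaml
# Collector side (fragment): export received trace data to a Kafka topic.
exporters:
  kafka:
    brokers:
      - localhost:9092      # placeholder broker address
    topic: jaeger-spans     # placeholder topic name
    encoding: otlp_proto    # check the exporter README for supported encodings

# Ingester side (fragment): consume the same topic and pass spans on to the
# exporter that writes to the actual storage backend (omitted here).
receivers:
  kafka:
    brokers:
      - localhost:9092
    topic: jaeger-spans
    encoding: otlp_proto
```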
Topic & partitions
Unless your Kafka cluster is configured to create topics automatically, you will need to create the topic ahead of time. You can refer to the Kafka quickstart documentation to learn how.
You can find more information about topics and partitions in general in the official documentation. This article provides more details about how to choose the number of partitions.
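If you run Kafka on Kubernetes with the Strimzi operator listed above, the topic can also be declared as a Kubernetes resource. The sketch below assumes a Strimzi-managed cluster named `my-cluster`; the topic name, partition count, and replication factor are placeholders to adjust for your own throughput and durability requirements.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: jaeger-spans               # placeholder topic name
  labels:
    strimzi.io/cluster: my-cluster # must match the name of your Kafka cluster resource
spec:
  partitions: 6                    # tune using the partition-sizing guidance above
  replicas: 3                      # replication factor
```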