What, why, when to use Apache Kafka, with an example (2023)

· 10 minute read

I have seen, heard and been asked questions and comments such as

What is Kafka and when should I use it?

I don't understand why we have to use Kafka

The purpose of this post is to introduce you to what Apache Kafka is, when to use it, and the basic concepts of Apache Kafka, with a simple example.

First, let's understand what Apache Kafka is. According to the official definition, it is a distributed streaming platform. This means you have a cluster of connected machines (Kafka Cluster) that can

  1. Receive data from multiple applications; the applications generating the data (or messages) are called producers.

  2. Reliably store received data (aka message).

  3. Allow apps to read the stored data; these apps are called consumers because they consume the data (aka messages). Consumers usually read one message at a time.

  4. Guarantee message ordering. This means that if the cluster receives a message, say m1, at time t1, and another message, say m2, arrives at a later time t1 + 5 seconds, then a consumer reading the messages will read m1 before m2.

  5. Deliver messages at least once. This means that every message sent to the Apache Kafka cluster is guaranteed to be received by the consumer at least once, so the consumer may see duplicate data. The most common cause is that the producer does not receive an acknowledgment (for example, due to a network failure) and re-sends a message that was actually delivered. There are techniques to deal with this, which we will see in a future post.

  6. Support running applications on the Kafka cluster using connectors (Kafka Connect) and a message processing framework, the Streams API.

We will continue to use the term message for the data that producers send to the Apache Kafka cluster and that consumers read from it.
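To make the producer, consumer, and ordering vocabulary above concrete, here is a toy, single-process sketch of a topic as an append-only log. This is illustrative only; a real Kafka cluster distributes and replicates this log across brokers, and the class and names here are made up for the example:

```python
class ToyTopic:
    """A toy 'topic': an append-only log read by position (offset)."""

    def __init__(self):
        self.log = []                 # messages kept in arrival order

    def produce(self, message):
        self.log.append(message)      # producers append to the end

    def read(self, offset):
        return self.log[offset]       # consumers read one message at a time


topic = ToyTopic()
topic.produce("m1")                   # received at time t1
topic.produce("m2")                   # received at t1 + 5 seconds
# a consumer starting at offset 0 is guaranteed to see m1 before m2
print([topic.read(i) for i in range(2)])  # ['m1', 'm2']
```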


To understand why Apache Kafka is popular in streaming applications, let's find out what is commonly used and why Apache Kafka is a better choice for certain use cases.

The status quo: RESTful systems over HTTP(S)

Modern web applications are RESTful services. What does that mean?


  1. The server or client (usually a browser) sends an HTTP(S) request (GET, PUT, POST, or DELETE; there are more, but these are the popular ones) to another server (the backend server).

  2. The server that receives this HTTP(S) request authenticates the request, performs custom processing according to its logic, and in most cases responds with a status code and data.

  3. The requesting server or client receives the response and follows the defined logic.

In most cases, the request is routed from the client (your browser) to the server (aka backend) running in the cloud, which performs the required processing and responds with the appropriate data.
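The request/response cycle above can be sketched end to end with only the Python standard library. The server here is a stand-in for a real backend, not this site's actual server:

```python
import http.server
import threading
import urllib.request


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)                        # status code
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # response data

    def log_message(self, *args):                      # silence request logging
        pass


# run a toy backend on a random free port, in a background thread
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# the client sends an HTTP GET and receives a status code plus data
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status = resp.status
    html = resp.read().decode()

server.shutdown()
print(status)  # 200
```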


For example, on this site, if you right click on this page -> Inspect -> Network tab and refresh the page, you can look for the request that loads it. Click on it and select Headers to see the HTTPS request your client (aka browser) sent to the server. Under Response you will see the response, which in this case is the HTML to be displayed.


The RESTful service model works well in most cases. Let's take a look at the following use cases and try to come up with effective ways to solve them.

  1. Suppose our application has 10 million users and we want to record user actions (hover, movement, idle etc.) every 5 seconds. This will create 120 million user activity events per minute. In this case, we do not need to inform the user that we have successfully processed information about his activity. To respond to 120 million requests per minute, we will need multiple servers running copies of your app. How will you solve it?

  2. Suppose one of our applications needs to send a message to 3 other applications. In this case, suppose that the application sending the message does not need to know whether the message has been processed. How will you solve it?

Before you read on, take some time to brainstorm the use cases above.

One of the key things that sets the above use cases apart from a normal web request is that we don't need to process the data and respond immediately. In case 1, we don't need to process the data right away; we can store the data somewhere and process it later, depending on the constraints of the project. In case 2, we could send HTTPS requests to the 3 other apps and collect their responses, but since the sender app doesn't need to know the state of the processing, we can instead write the data to a location and have the other 3 apps read from it.

This is basically what Apache Kafka does. In an Apache Kafka cluster you have topics, which are ordered queues of messages.

Solution for case 1

We will send the 120 million messages per minute from the user agent (web browser) to a topic, say user-action-events, and the consumer applications can read from it at their own processing speed.

Solution for case 2

Our producer app will send messages to a topic, say multi-app-events, and all 3 apps can read from that topic. This reduces the burden on the producer, since it only cares about sending messages.
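A toy, in-memory sketch of that fan-out: each consuming app keeps its own read position in the shared log, so the producer only appends and never waits on the readers. The topic and app names are invented for the example:

```python
log = []                                  # the "multi-app-events" topic
offsets = {"app_a": 0, "app_b": 0, "app_c": 0}   # each app's read position


def produce(message):
    log.append(message)                   # the producer's only responsibility


def poll(app):
    """Return unread messages for one app and advance its offset."""
    unread = log[offsets[app]:]
    offsets[app] = len(log)
    return unread


produce("order_created")
produce("order_paid")
print(poll("app_a"))   # ['order_created', 'order_paid']
produce("order_shipped")
print(poll("app_a"))   # ['order_shipped'] -- app_a reads at its own pace
print(poll("app_b"))   # ['order_created', 'order_paid', 'order_shipped']
```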

You can configure the producer for a fire-and-forget model, where the producer sends a message to the Apache Kafka cluster and moves on, or a message acknowledgment model, where the producer sends a message to the Apache Kafka cluster and waits for an acknowledgment from the cluster. Use fire-and-forget if losing a few messages is acceptable and you want higher producer throughput. Use the acknowledgment model when you need to be sure messages are not lost due to a network outage or similar failure.
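In kafka-python, `producer.send()` returns a future, and the two models differ only in whether you wait on it. A sketch of both, duck-typed so any producer object exposing that `send`/`get` interface works (this is a simplification, not production-grade error handling):

```python
def send_fire_and_forget(producer, topic, value):
    """Fire-and-forget: send and move on. We never inspect the result,
    so a lost message goes unnoticed, but the producer never blocks."""
    producer.send(topic, value=value)        # returns a future we ignore


def send_with_ack(producer, topic, value, timeout=10):
    """Acknowledgment model: block until the cluster confirms the write.
    Slower, but a network outage surfaces as an exception instead of
    silent message loss."""
    future = producer.send(topic, value=value)
    return future.get(timeout=timeout)       # raises on failure
```

With a real broker you would pass `KafkaProducer(bootstrap_servers=["localhost:9092"])` as `producer`.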


(diagram ref: https://kafka.apache.org/documentation/)

Let's understand how Apache Kafka works with a simple example.


Prerequisites

  1. Docker (also make sure you have docker-compose)
  2. Python 3

Setup

First, let's set up the Apache Kafka docker container. In this example, we'll use the popular wurstmeister image. We could use the Confluent docker image instead, but it requires 8 GB of docker memory.

Let's clone the kafka-docker repo first.

git clone https://github.com/wurstmeister/kafka-docker.git
cd kafka-docker

Then we modify the kafka service section in the docker-compose.yml file to have the following configuration:

version: "2"
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:9093,OUTSIDE://:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

The KAFKA_* environment variables are settings that allow connections between Apache Kafka, Apache Zookeeper (a cluster management service), and producers and consumers outside the docker container.

Now run the Apache Kafka and Apache Zookeeper docker containers as shown below

docker-compose up -d

The -d flag runs the docker containers in detached mode. Each node in the Apache Kafka cluster is called a broker. Now you can check the list of running containers with

docker-compose ps

You will see

kafka-docker_kafka_1       start-kafka.sh                 Up   0.0.0.0:9092->9092/tcp, 9093/tcp
kafka-docker_zookeeper_1   /bin/sh -c /usr/sbin/sshd …    Up   0.0.0.0:2181->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp

You can see that kafka-docker_kafka_1 (the Apache Kafka container) and kafka-docker_zookeeper_1 have started.

For this simple example, we'll use the Kafka Python library. You can install it using

pip install kafka-python

In Apache Kafka, as we saw earlier, messages are stored in queues called topics. By default, a topic is created automatically when a producer first publishes a message to it. This is controlled with the auto.create.topics.enable broker configuration variable.

Producer and Consumers

Suppose we have a simple event listener on the client that sends a message every 3 seconds to the backend server. We can write a naive producer in python and call it user_event_producer.py

from datetime import datetime
import json
import random
import time
import uuid

from kafka import KafkaProducer

EVENT_TYPE_LIST = ['buy', 'sell', 'click', 'hover', 'idle_5']

producer = KafkaProducer(
    # we serialize our data to json format for efficient transfer
    value_serializer=lambda msg: json.dumps(msg).encode('utf-8'),
    bootstrap_servers=['localhost:9092'])

TOPIC_NAME = 'events_topic'


def _produce_event():
    """Function to produce an event."""
    # uuid4 creates a universally unique identifier
    return {
        'event_id': str(uuid.uuid4()),
        'event_datetime': datetime.now().strftime('%Y-%m-%d-%H-%M-%S'),
        'event_type': random.choice(EVENT_TYPE_LIST)
    }


def send_events():
    while True:
        data = _produce_event()
        time.sleep(3)  # simulate processing logic
        producer.send(TOPIC_NAME, value=data)


if __name__ == '__main__':
    send_events()

Now the consumer script. Let's call it user_event_consumer.py

import json

from kafka import KafkaConsumer

TOPIC_NAME = 'events_topic'

consumer = KafkaConsumer(
    TOPIC_NAME,
    auto_offset_reset='earliest',  # where to start reading messages
    group_id='event-collector-group-1',  # consumer group id
    bootstrap_servers=['localhost:9092'],
    # we deserialize our data from json
    value_deserializer=lambda m: json.loads(m.decode('utf-8')))


def consume_events():
    for message in consumer:
        # any custom logic you need
        print(message.value)


if __name__ == '__main__':
    consume_events()

Some key points from the python script above

  1. offset: denotes the position of a message within a topic. This helps a consumer decide from which message to start reading.

  2. auto_offset_reset: possible values are earliest and latest, which tell the consumer to start reading from the earliest available message or only from the latest messages in the topic, respectively.

  3. group_id: denotes the consumer group that the consumer application is part of. Usually multiple consumers run in a group, and the group id lets the group track which messages have already been read and which have not.
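The interplay between a group's committed offset and auto_offset_reset can be sketched in a few lines of toy bookkeeping (this is an illustration, not the Kafka API; group names are invented):

```python
log = ["e1", "e2", "e3"]   # messages already in the topic
committed = {}             # group_id -> next offset to read


def start_position(group_id, auto_offset_reset):
    """Where a (re)starting consumer in this group begins reading."""
    if group_id in committed:              # group has committed before: resume
        return committed[group_id]
    # no committed offset yet: fall back to the reset policy
    return 0 if auto_offset_reset == "earliest" else len(log)


print(start_position("g1", "earliest"))  # 0 -> reads e1, e2, e3
print(start_position("g2", "latest"))    # 3 -> only new messages
committed["g1"] = 2                      # g1 committed after reading e2
print(start_position("g1", "latest"))    # 2 -> resumes at the committed offset
```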

(diagram ref: https://kafka.apache.org/documentation/)

Let's run the python scripts

python user_event_producer.py &  # the & runs the python script in the background
python user_event_consumer.py

You will start to see output like this:

{"event_id": "3f847c7b-e015-4f01-9f5c-81c536b9d89b", "event_datetime": "2020-06-12-21-12-27", "event_type": "buy"}
{"event_id": "e9b51a88-0e86-47cc-b412-8159bfda3128", "event_datetime": "2020-06-12-21-12-30", "event_type": "sell"}
{"event_id": "55c115f1-0a23-4c89-97b2-fc388bee28f5", "event_datetime": "2020-06-12-21-12-33", "event_type": "click"}
{"event_id": "28c01bae-5b5b-421b-bc8b-f3bed0e1d77f", "event_datetime": "2020-06-12-21-12-36", "event_type": "sell"}
{"event_id": "8d6b1cbe-304f-4dec-8389-883d77f99084", "event_datetime": "2020-06-12-21-12-39", "event_type": "hover"}
{"event_id": "50ffcd7c-5d40-412e-9223-cc7a26948fa9", "event_datetime": "2020-06-12-21-12-42", "event_type": "hover"}
{"event_id": "6dbb5438-482f-4f77-952e-aaa54f11320b", "event_datetime": "2020-06-12-21-12-45", "event_type": "click"}

You can stop the python consumer script with ctrl + c. Remember that your producer is still running in the background. You can stop it with

pkill -f user_event_producer.py

The above kills the process based on its name. Now let's check the Apache Kafka cluster to see the list of available topics; we should see the topic events_topic, which was created by our producer.

docker exec -it $(docker ps -aqf "name=kafka-docker_kafka") bash  # get into the Kafka container
# list all topics in this Kafka cluster
$KAFKA_HOME/bin/kafka-topics.sh --list --bootstrap-server kafka:9092
# you will see the topics
# __consumer_offsets
# events_topic
# display the messages stored in topic `events_topic`
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic events_topic --from-beginning
exit  # exit the container

Auto commit

Now suppose our consumer does some processing and saves the processed data to our database. In this case, if our consumer dies and we restart it, based on our current configuration settings, we will reprocess already processed data. Take some time to think about why this is happening and what configuration settings we can use to prevent it from happening.

We can avoid this problem by setting auto_offset_reset to latest instead of the earliest we have in our KafkaConsumer code sample. Each consumer commits the offset of its latest read message to an internal metadata topic in Kafka every 5 seconds (the default, set by auto.commit.interval.ms). This behavior is enabled by default via enable.auto.commit. The topic that stores the offset information is called __consumer_offsets.
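Why the restart reprocesses data can be seen in a toy sketch: the consumer commits its offset only periodically, so a crash between processing a message and the next commit replays the uncommitted messages. This is Kafka's at-least-once behavior in miniature (in-memory illustration, not the Kafka API):

```python
log = ["e1", "e2", "e3", "e4"]
committed = 0            # last committed offset for our consumer group
processed = []           # e.g. rows written to the database


def run(start, crash_after=None):
    """Consume from `start`; optionally crash before committing."""
    global committed
    for offset in range(start, len(log)):
        processed.append(log[offset])      # process (write to the database)
        if crash_after is not None and len(processed) == crash_after:
            return                          # simulated crash before the commit
        committed = offset + 1              # commit (normally every 5 seconds)


run(committed, crash_after=3)   # dies after processing e3; only offset 2 committed
run(committed)                  # restart resumes at offset 2 -> e3 handled twice
print(processed)                # ['e1', 'e2', 'e3', 'e3', 'e4']
```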


Partitions

As with most distributed systems, Apache Kafka distributes its data across multiple cluster nodes. A topic in Apache Kafka is broken into chunks called partitions, which are replicated (3 copies by default) and stored on multiple nodes in the cluster. This prevents data loss in the event of a node failure.
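Which partition a message lands on is typically derived from the message key (the real Kafka client hashes the key with murmur2; the crc32 below is a stand-in purely for illustration, and the keys are invented):

```python
import zlib

NUM_PARTITIONS = 3


def partition_for(key: bytes) -> int:
    """Deterministic key -> partition mapping: the same key always lands
    on the same partition, which is what preserves per-key ordering."""
    return zlib.crc32(key) % NUM_PARTITIONS


p = partition_for(b"user-42")
print(p == partition_for(b"user-42"))  # True: stable assignment
```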

(diagram ref: https://kafka.apache.org/documentation/)

In the diagram above, you can see how a topic is split into partitions and how incoming messages are replicated among the partition copies.

You can stop all running docker containers with

docker-compose stop

In this post we saw:

  1. What is Apache Kafka

  2. When to use Apache Kafka with some common use cases

  3. Apache Kafka concepts - producer, topic, broker, consumer, offset and auto commit

There are more advanced concepts such as partition size, partition function, Apache Kafka connectors, Streams API etc which we will cover in future posts.

I hope this article gives you a good introduction to Apache Kafka and insight into when to use it and when not to. Let me know if you have any comments or questions in the comments section below.




FAQs

Why would you use Apache Kafka? ›

Why would you use Kafka? Kafka is used to build real-time streaming data pipelines and real-time streaming applications. A data pipeline reliably processes and moves data from one system to another, and a streaming application is an application that consumes streams of data.

What is Apache Kafka with example? ›

Apache Kafka is a publish-subscribe based durable messaging system. A messaging system sends messages between processes, applications, and servers. Apache Kafka is a software where topics can be defined (think of a topic as a category), applications can add, process and reprocess records.

In which scenario is it most appropriate to use Kafka? ›

Kafka is particularly valuable in scenarios requiring real-time data processing and application activity tracking, as well as for monitoring purposes. It's less appropriate for data transformations on-the-fly, data storing, or when all you need is a simple task queue.

What problems does Apache Kafka solve? ›

Kafka takes on the burden of handling all the problems related to distributed computing: node failures, replication or ensuring data integrity. It makes Kafka a great candidate for the fundamental piece of architecture, a central log that can serve as a source of truth for other services.

What is Kafka in simple words? ›

In a nutshell, Kafka Streams lets you read data in real time from a topic, process that data (such as by filtering, grouping, or aggregating it) and then write the resulting data into another topic or to other systems of record.

What is point to point with Apache Kafka? ›

In a point-to-point system, a specific message can be consumed by only one consumer, and it instantly disappears from the queue once it is read. In a publish-subscribe system, by contrast, one or more consumers can consume a message, as messages remain in the queue for some time.

How do I use Kafka for messaging? ›

Creating Apache Kafka Queue
  1. Read a message from the queue topic to process.
  2. Send a start marker with the message offset to the marker's topic and wait for Apache Kafka to acknowledge the transmission.
  3. Commit the offset of the message read from the queue to Apache Kafka.
Jan 31, 2022

What are the major components of Kafka? ›

Overview of Kafka Architecture

The compute layer consists of four core components—the producer, consumer, streams, and connector APIs, which allow Kafka to scale applications across distributed systems.

What are the advantages of Kafka streams? ›

Kafka Streams greatly simplifies the stream processing from topics. Built on top of Kafka client libraries, it provides data parallelism, distributed coordination, fault tolerance, and scalability.

What are the limitations of Kafka? ›

Disadvantages Of Apache Kafka

Incomplete monitoring tooling: Apache Kafka does not ship with a complete set of monitoring and management tools, which makes some startups and enterprises hesitant to adopt it. Message tweaking issues: the Kafka broker uses system calls to deliver messages to the consumer.

How does Kafka work in real-time? ›

What is Kafka? It is an open-source distributed data streaming platform that Linkedin developed. It can handle both batch data as well as real-time data with very low latency. Kafka lets you publish and subscribe to events and can store those events as long as you want.

Is Kafka used for real-time? ›

Apache Kafka is the de facto standard for real-time data streaming. Kafka is good enough for almost all real-time scenarios. But dedicated proprietary software is required for niche use cases. Kafka is NOT the right choice if you need microsecond latency!

Why Kafka is difficult? ›

Because Kafka works in a Java Virtual Machine (JVM) ecosystem, the main programming language of the client is Java. This could be a problem if your preferred language is Python or C, for example. While there are open source clients available in other languages, these don't come with Kafka itself.

Why we use Kafka in Microservices? ›

Using Kafka for asynchronous communication between microservices can help you avoid bottlenecks that monolithic architectures with relational databases would likely run into. Because Kafka is highly available, outages are less of a concern and failures are handled gracefully with minimal service interruption.

What is everything about Apache Kafka? ›

Apache Kafka is a distributed publish-subscribe messaging system and a robust queue that can handle a high volume of data and enables you to pass messages from one end-point to another.

Is Apache Kafka a database? ›

TL;DR: Kafka is a database and provides ACID properties. However, it works differently than other databases.

Is Kafka a language or tool? ›

Apache Kafka is written in Scala and Java. Scala is a general-purpose programming language that is designed to be concise and expressive. It is often used for building distributed systems and data-intensive applications.

What is the message key in Kafka? ›

Kafka message keys can be string values or Avro messages, depending on how your Kafka system is configured. The format of the message keys determines how message key values are stored in the record, and how you work with those values.

Why does Kafka use TCP? ›

Kafka uses a binary TCP-based protocol that is optimized for efficiency and relies on a "message set" abstraction that naturally groups messages together to reduce the overhead of the network roundtrip.

What is a Kafka topology? ›

A topology is an acyclic graph of sources, processors, and sinks. A source is a node in the graph that consumes one or more Kafka topics and forwards them to its successor nodes.

What is upstream and downstream in Kafka? ›

Source topics: Topics from which data is consumed. It's also called upstream for Kafka Streams applications. Sink: Downstream where data is published/stored after processing. It could be another Kafka topic, database, or calls to external services.

Why Kafka is better than other messaging systems? ›

Kafka replicates data and is able to support multiple subscribers. Additionally, it automatically balances consumers in the event of failure. That means that it's more reliable than similar messaging services available. Kafka Offers High Performance.

Is Kafka a topic or a queue? ›

What Is Apache Kafka? In short, Kafka is a message queuing system with a couple of twists. It offers low-latency message processing just like a great message queue, along with high availability and fault tolerance, but it brings additional possibilities that simple queuing can't offer.

Can Kafka handle large messages? ›

The Kafka max message size is 1MB. In this lesson we will look at two approaches for handling larger messages in Kafka. Kafka has a default limit of 1MB per message in the topic. This is because very large messages are considered inefficient and an anti-pattern in Apache Kafka.

What data structure is used in Kafka? ›

Kafka is essentially a commit log with a simplistic data structure. The Kafka Producer API, Consumer API, Streams API, and Connect API can be used to manage the platform, and the Kafka cluster architecture is made up of Brokers, Consumers, Producers, and ZooKeeper.

What is the basic architecture of Kafka? ›

Its core architectural concept is an immutable log of messages that can be organized into topics for consumption by multiple users or applications. A file system or database commit log keeps a permanent record of all messages so Kafka can replay them to maintain a consistent system state.

What are the types of messages in Kafka? ›

Messages can have any format, the most common are string, JSON, and Avro. The messages always have a key-value structure; a key or value can be null. If the producer does not indicate where to write the data, the broker uses the key to partition and replicate messages.

What is a ZooKeeper in Kafka? ›

At a detailed level, ZooKeeper handles the leadership election of Kafka brokers and manages service discovery as well as cluster topology so each broker knows when brokers have entered or exited the cluster, when a broker dies and who the preferred leader node is for a given topic/partition pair.

How long data can be stored in Kafka? ›

The short answer: Data can be stored in Kafka as long as you want. Kafka even provides the option to use a retention time of -1. This means “forever”.

Why not use Kafka as database? ›

Kafka's data storage is a little different from how a database functions. Because of its streaming services, users consume data on this platform using tail reads. It does not rely much on disk reads as a database would because it wants to leverage the page cache to serve the data.

Does Kafka use one or multiple topics? ›

Yes, a single consumer or consumer group can read from multiple Kafka topics. We recommend using different consumer IDs and consumer groups per topic.

Who uses Kafka? ›

Among these are Box, Goldman Sachs, Target, Cisco, Intuit, and more. As the trusted tool for empowering and innovating companies, Kafka allows organizations to modernize their data strategies with event streaming architecture.

How do I send events to Kafka? ›

What are the Steps to Set Up Kafka Event Streaming?
  1. Step 1: Set Up Kafka Environment.
  2. Step 2: Create a Kafka Topic to Store Kafka Events.
  3. Step 3: Write Kafka Events into the Topic.
  4. Step 4: Read Kafka Events.
  5. Step 5: Import/ Export Streams of Events Using Kafka Connect.
  6. Step 6: Process Kafka Events Using Kafka Streams.
Mar 23, 2022

What makes Kafka so fast? ›

Why is Kafka fast? Kafka achieves low latency message delivery through Sequential I/O and Zero Copy Principle. The same techniques are commonly used in many other messaging/streaming platforms. Zero copy is a shortcut to save the multiple data copies between application context and kernel context.

Does Uber use Kafka? ›

Uber has one of the largest deployments of Apache Kafka® in the world.

What is better than Apache Kafka? ›

RabbitMQ is backed not only by a robust support system but also offers a great developer community. Since it is open-source software it is one of the best Kafka Alternatives and RabbitMQ is free of cost.

Why do we use Kafka in data pipeline? ›

If the target can't keep up with the rate of data being sent to it, Kafka will take the backpressure. Pipelines built around Kafka can evolve gracefully. Because Kafka stores data, we can send the same data to multiple targets independently.

Is Kafka asynchronous? ›

Kafka is widely used for the asynchronous processing of events/messages. By default, the Kafka client uses a blocking call to push the messages to the Kafka broker. We can use the non-blocking call if application requirements permit.

What is the difference between Apache Kafka and Apache Pulsar? ›

Apache Kafka and Pulsar both support long-term storage, but Kafka allows a smart compaction strategy instead of creating snapshots and leaving the Topic as is. Apache Pulsar provides for the deletion of messages based on consumption.


Why not use Apache Kafka? ›

Apache Kafka is not suitable for streaming data that requires low latency. It is designed to handle large volumes of data, but it is not suitable for streaming data that requires real-time processing. For example, if you need to stream data from a sensor in real-time, Apache Kafka is not the best choice.

What is the benefits of Apache Kafka over the traditional message queues? ›

Message Queues ensure delivery and scaling, while Kafka focuses on high-throughput and low-latency. Kafka suits high data volumes and streaming, and Message Queues excel in decoupling services and workloads. Kafka's log-based storage ensures persistence; Message Queues rely on acknowledgements for delivery.

Why use Kafka over MQ? ›

Apache Kafka scales well and may track events but lacks some message simplification and granular security features. It is perhaps an excellent choice for teams that emphasize performance and efficiency. IBM MQ is a powerful conventional message queue system, but Apache Kafka is faster.

Why Kafka is better than REST API? ›

Kafka APIs store data in topics. With REST APIs, you can store data in the database on the server. With Kafka API, you often are not interested in a response. You are typically expecting a response back when using REST APIs.

What is the advantage of Kafka Connect? ›

Kafka Connect provides the following benefits: Data-centric pipeline: Connect uses meaningful data abstractions to pull or push data to Kafka. Flexibility and scalability: Connect runs with streaming and batch-oriented systems on a single node (standalone) or scaled to an organization-wide service (distributed).

How does Kafka handle messages? ›

The Kafka messaging protocol is a TCP based protocol that provides a fast, scalable, and durable method for exchanging data between applications. Messaging providers are typically used in IT architectures in order to decouple the processing of messages from the applications that produce them.

What are the disadvantages of Kafka streams? ›

It affects throughput and also performance. Sometimes, it starts behaving a bit clumsy and slow, when the number of queues in a Kafka cluster increases. Some of the messaging paradigms are missing in Kafka like request/reply, point-to-point queues and so on. Not always but for certain use cases, it sounds problematic.

What language is used in Apache Kafka? ›

It is an open-source system developed by the Apache Software Foundation written in Java and Scala.

Can Kafka replace MQ? ›

As a conventional Message Queue, IBM MQ has more features than Kafka. IBM MQ also supports JMS, making it a more convenient alternative to Kafka. Kafka, on the other side, is better suited to large data frameworks such as Lambda. Kafka also has connectors and provides stream processing.

Is Kafka a topic or queue? ›

Apache Kafka is not a traditional message queue. Kafka is a distributed messaging system that includes components of both a message queue and a publish-subscribe model. Kafka improves on the deficit of each of those traditional approaches allowing it to provide fault tolerant, high throughput stream processing.

What protocol uses Kafka? ›

Kafka uses a binary protocol over TCP.

