Event-Driven Microservices: How Kafka and RabbitMQ Power Scalable Systems

Event-driven microservices use Kafka and RabbitMQ for scalable, fault-tolerant data processing in modern application architectures.

By Bhanuprakash Madupati · May 23, 2025 · Tutorial

Event-driven microservices have revolutionized how modern applications handle data flow and communication. Using message brokers such as Apache Kafka and RabbitMQ, microservices can efficiently process and distribute events in a scalable, fault-tolerant manner. 

This tutorial will guide you through the fundamentals of event-driven microservices, focusing on how Kafka and RabbitMQ enable scalable architectures.

Prerequisites

Before we dive in, ensure you have the following installed:

  • Docker – for running Kafka and RabbitMQ locally
  • Kafka CLI – for interacting with Kafka topics (the commands below run it inside the Kafka container, so no separate install is required)
  • RabbitMQ CLI – for managing queues (likewise run inside the RabbitMQ container)
  • A programming language (such as Python or Java) to publish and consume messages
  • Basic knowledge of Docker and message queues

Step 1: Setting Up Kafka

Running Kafka in Docker

Kafka, as packaged in the images used here, relies on ZooKeeper for distributed coordination, so ZooKeeper must be running before the broker starts. (Newer Kafka releases can also run without ZooKeeper in KRaft mode.) Open a terminal and run the following commands to start ZooKeeper and then Kafka using Docker.

Shell
 
# Start ZooKeeper (Required for Kafka)
docker run -d --name zookeeper -p 2181:2181 zookeeper

# Start Kafka Broker
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  wurstmeister/kafka


Understanding the Commands

  • ZooKeeper: Kafka uses ZooKeeper to manage distributed brokers. The ZooKeeper service must be up before you start Kafka.
  • Kafka Broker: The Kafka broker runs as a service that handles message traffic. The KAFKA_ZOOKEEPER_CONNECT environment variable connects Kafka to the ZooKeeper instance.

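If you'd rather verify the setup from code, here is a quick connectivity check using the kafka-python client (an assumption, since the tutorial doesn't prescribe a client library; install it with pip install kafka-python):

Python
 
from kafka import KafkaConsumer

# Connect to the broker and list the topics it knows about.
# An empty set is fine at this point; an exception means the
# broker isn't reachable on localhost:9092.
consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
print(consumer.topics())
consumer.close()
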
Creating a Kafka Topic

Once Kafka is up and running, you need to create a topic. A topic is a named, partitioned log: producers append messages to it, and consumers read from it.

Shell
 
# Create a topic named 'events'
docker exec kafka kafka-topics.sh --create --topic events --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1


Producing Messages

To produce messages to the events topic, run the following command:

Shell
 
docker exec -it kafka kafka-console-producer.sh --topic events --bootstrap-server localhost:9092


Once you run this command, you can type messages into the console and press Enter to send them. For example: 

Shell
 
Hello, this is a Kafka event!


Consuming Messages

In another terminal window, you can consume the messages by running:

Shell
 
docker exec -it kafka kafka-console-consumer.sh --topic events --bootstrap-server localhost:9092 --from-beginning


This command reads the messages from the events topic starting from the beginning.
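
The console tools are great for smoke tests, but a real microservice would use a client library. Here is a minimal producer/consumer sketch with kafka-python (the same assumed client as above):

Python
 
from kafka import KafkaProducer, KafkaConsumer

# Produce one message to the 'events' topic
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('events', b'Hello, this is a Kafka event!')
producer.flush()  # block until the broker has acknowledged the message
producer.close()

# Consume everything in 'events' from the beginning
consumer = KafkaConsumer(
    'events',
    bootstrap_servers='localhost:9092',
    auto_offset_reset='earliest',  # equivalent of --from-beginning
    consumer_timeout_ms=5000,      # stop iterating after 5s of silence
)
for message in consumer:
    print(message.value.decode('utf-8'))
consumer.close()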

Step 2: Setting Up RabbitMQ

RabbitMQ is another message broker suited to event-driven architectures. Unlike Kafka, which stores events in a replayable log, RabbitMQ centers on message queuing; its main strengths are reliable delivery and flexible routing of messages.

Running RabbitMQ in Docker

Open a terminal and run the following command to start RabbitMQ using Docker:

Shell
 
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:management


This command will run RabbitMQ with its management plugin enabled, so you can monitor and manage queues and exchanges via the web UI.

Accessing RabbitMQ Management UI

Visit http://localhost:15672 in your browser. The default credentials are:

  • Username: guest
  • Password: guest

Here you can view, monitor, and manage queues, exchanges, and connections.
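
Everything visible in the UI is also exposed through the management plugin's HTTP API, which is handy for scripted health checks. A small sketch using the requests package (an assumption; install it with pip install requests):

Python
 
import requests

# List queues on the broker via the management HTTP API
resp = requests.get('http://localhost:15672/api/queues',
                    auth=('guest', 'guest'))
resp.raise_for_status()
for queue in resp.json():
    print(queue['name'], queue.get('messages', 0))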

Declaring a Queue

In RabbitMQ, queues store messages, and consumers retrieve messages from these queues. Note that rabbitmqctl has no command for declaring queues: a queue is normally declared by the client application (the Python scripts below call queue_declare) or created through the management UI. What rabbitmqctl does handle is broker-side configuration. For example, the following optional commands create a dedicated virtual host and apply a classic mirroring policy to any queue named events-queue:

Shell
 
# Optional: create a dedicated virtual host
docker exec -it rabbitmq rabbitmqctl add_vhost /events

# Optional: mirror queues named 'events-queue' across all cluster nodes
docker exec -it rabbitmq rabbitmqctl set_policy ha-all "^events-queue$" '{"ha-mode":"all"}'


Publishing Messages to RabbitMQ

You can use a programming language such as Python to publish messages to RabbitMQ. The examples below use the pika client library (install it with pip install pika).

Python Publisher example:

Python
 
import pika

# Connect to the local broker and open a channel
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declaring a queue is idempotent: it is created only if it doesn't exist
channel.queue_declare(queue='events-queue')

# The default exchange ('') routes messages to the queue named by routing_key
channel.basic_publish(exchange='', routing_key='events-queue', body='Hello RabbitMQ!')
print("Message Sent!")
connection.close()


To run the publisher: 

Shell
 
python publish.py


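By default this queue and its messages live only in memory. If you want messages to survive a broker restart, declare the queue as durable and mark messages persistent. A sketch (the queue name events-queue-durable is illustrative; redeclaring the existing events-queue with different flags would fail):

Python
 
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# durable=True: the queue definition survives a broker restart
channel.queue_declare(queue='events-queue-durable', durable=True)

# delivery_mode=2: the message itself is persisted to disk
channel.basic_publish(
    exchange='',
    routing_key='events-queue-durable',
    body='Hello RabbitMQ!',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
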
Consuming Messages

To consume messages from the events-queue, create a consumer script in Python:

Python Consumer example:

Python
 
import pika

# Connect and make sure the queue exists even if the consumer starts first
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='events-queue')

# Called once per delivered message
def callback(ch, method, properties, body):
    print(f"Received {body}")

# auto_ack=True acknowledges messages as soon as they are delivered
channel.basic_consume(queue='events-queue', on_message_callback=callback, auto_ack=True)

print("Waiting for messages...")
channel.start_consuming()


Run the consumer script: 

Shell
 
python consume.py


Now, the consumer will receive the messages sent by the publisher.
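
One caveat: auto_ack=True removes a message from the queue the moment it is delivered, so a consumer that crashes mid-processing silently loses it. For at-least-once processing, acknowledge manually. Here is a sketch of the same consumer with explicit acks:

Python
 
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='events-queue')

def callback(ch, method, properties, body):
    print(f"Received {body}")
    # Ack only after successful processing; if the consumer dies
    # before this line, RabbitMQ redelivers the message.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='events-queue', on_message_callback=callback, auto_ack=False)
print("Waiting for messages...")
channel.start_consuming()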

Step 3: Scaling and Monitoring

Scaling Kafka Consumers

To simulate a real-world event-driven system, scale out the Kafka consumers: run the command below in several terminals so that each consumer joins the same consumer group.

Shell
 
docker exec -it kafka kafka-console-consumer.sh --topic events --bootstrap-server localhost:9092 --group consumer-group-1


Consumers that share a group ID split the topic's partitions among themselves, so adding consumers improves throughput only up to the partition count. The events topic was created with a single partition in Step 1; give it more partitions first (for example, with kafka-topics.sh --alter --topic events --bootstrap-server localhost:9092 --partitions 3, or from code as shown below).
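
A sketch for growing the partition count from code, using kafka-python's admin client (the package and the target count of three are illustrative assumptions):

Python
 
from kafka.admin import KafkaAdminClient, NewPartitions

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
# Partition counts can only grow, never shrink
admin.create_partitions({'events': NewPartitions(total_count=3)})
admin.close()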

Scaling RabbitMQ Consumers

To scale RabbitMQ consumers, run multiple instances of the consumer script across different terminals.

Shell
 
python consume.py


This simulates horizontal scaling: RabbitMQ delivers each message on a queue to exactly one consumer, so multiple consumers divide the work between them.
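
When several consumers share a queue, it is usually worth adding a prefetch limit so a slow consumer isn't buried under messages it hasn't acknowledged yet. A sketch combining basic_qos with the manual-ack consumer from above:

Python
 
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='events-queue')

# Fair dispatch: hand each consumer at most one unacknowledged
# message at a time, so faster consumers pick up more work.
channel.basic_qos(prefetch_count=1)

def callback(ch, method, properties, body):
    print(f"Received {body}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='events-queue', on_message_callback=callback, auto_ack=False)
channel.start_consuming()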

Advanced Topics

Kafka Consumer Groups

In Kafka, a consumer group is a set of consumers that cooperate to consume messages from one or more topics. Kafka assigns each partition to exactly one consumer in the group, so each message is processed by only one group member while events are handled in parallel across partitions.

To place a consumer in a group, use the --group flag (the group is created automatically the first time a consumer joins it):

Shell
 
docker exec -it kafka kafka-console-consumer.sh --topic events --bootstrap-server localhost:9092 --group consumer-group-1

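In application code, joining a group is just a matter of setting the group ID. A kafka-python sketch (group_id matches the --group flag above):

Python
 
from kafka import KafkaConsumer

# Consumers sharing a group_id split the topic's partitions among
# themselves; run this script in several terminals to see it happen.
consumer = KafkaConsumer(
    'events',
    bootstrap_servers='localhost:9092',
    group_id='consumer-group-1',
)
for message in consumer:
    print(f"partition={message.partition} offset={message.offset} value={message.value}")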

RabbitMQ Exchanges and Routing

RabbitMQ uses exchanges to route messages to queues based on routing keys. The most common types of exchanges are:

  • Direct exchange: Routes messages to queues based on an exact match of the routing key.
  • Fanout exchange: Routes messages to all queues bound to the exchange.
  • Topic exchange: Routes messages to queues based on pattern matching in the routing key.

rabbitmqctl does not include a command for declaring exchanges. Like queues, exchanges are declared by client applications (or through the management UI and HTTP API), as in the sketch below.

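A minimal pika sketch for declaring a direct exchange (the exchange name direct_events and routing key events are illustrative):

Python
 
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare a direct exchange and bind the queue to it with a routing key
channel.exchange_declare(exchange='direct_events', exchange_type='direct')
channel.queue_declare(queue='events-queue')
channel.queue_bind(queue='events-queue', exchange='direct_events', routing_key='events')

# Messages published with routing_key='events' now reach events-queue
channel.basic_publish(exchange='direct_events', routing_key='events', body='Routed message')
connection.close()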

Kafka Retention and Compaction

Kafka lets you configure a topic's retention period, that is, how long messages are kept before they are deleted. In recent Kafka versions, topic configuration is changed with kafka-configs.sh (kafka-topics.sh no longer accepts --config together with --alter when using --bootstrap-server):

Shell
 
# Keep messages in 'events' for one day (retention.ms is in milliseconds)
docker exec -it kafka kafka-configs.sh --alter --entity-type topics --entity-name events --bootstrap-server localhost:9092 --add-config retention.ms=86400000


This keeps messages for one day (86,400,000 ms). For changelog-style topics, setting cleanup.policy=compact instead switches the topic from time-based deletion to log compaction, where Kafka retains only the most recent record for each message key.

Conclusion

Event-driven microservices can efficiently scale and handle high-throughput data processing by leveraging Kafka and RabbitMQ. In this tutorial, we have explored the basics of setting up, producing, and consuming events in both systems. We also discussed advanced configurations for scaling consumers and message routing.

Experiment with different configurations and integrations to optimize your microservices architecture further. As your systems grow, Kafka and RabbitMQ give you the building blocks to keep them resilient, scalable, and highly available.

Opinions expressed by DZone contributors are their own.
