Spring Boot 3 With Kafka

An example developed using Spring Boot 3. This article explains how to use Spring Boot 3 with Kafka in a multi-module project setup, split into three modules: common, producer, and consumer.

Eang Sopheaktra

April 06 2024 02:05 pm

Why should we use a message queue?

Message queues play a crucial role in modern distributed systems. Let’s explore their benefits:

  1. Better Performance:

    • Message queues enable asynchronous communication.
    • Producers add requests to the queue without waiting for processing.
    • Consumers process messages only when available, optimizing data flow.
  2. Increased Reliability:

    • Queues make data persistent, reducing errors when parts of the system go offline.
    • Separating components with message queues enhances fault tolerance.
    • Mirrored queues provide additional availability.
  3. Granular Scalability:

    • Kafka, for instance, scales precisely where needed.
    • Multiple instances add requests to the queue during peaks.
    • Consumers distribute workload across a fleet.
  4. Simplified Decoupling:

    • Message queues remove dependencies between components.
    • Components focus on discrete business functions.
    • Ideal for monolithic apps, microservices, or serverless architectures.

Why should we use Kafka?

Apache Kafka is a powerful distributed streaming platform that has gained popularity for various use cases. Let’s explore why you might consider using Kafka:

  1. Real-Time Data Streams:

    • Kafka excels at handling real-time streams of data. It allows you to collect and process large volumes of data in near real-time.
    • Use cases include ingesting event logs, sensor data, social media feeds, financial transactions, and more.
  2. Durability and Fault Tolerance:

    • Kafka provides durability by persistently storing data (messages) across a cluster of connected machines.
    • Even if a machine fails, the data remains intact, making it suitable for critical applications.
  3. Scalability and High Throughput:

    • Kafka scales horizontally by adding more brokers (nodes) to the cluster.
    • It can handle high message throughput, making it ideal for scenarios with heavy data traffic.
  4. Guaranteed Order and At-Least-Once Delivery:

    • Kafka preserves the order of messages within a partition, which is crucial for maintaining consistency.
    • It guarantees at-least-once delivery, meaning every message sent to Kafka is received by consumers at least once (possibly more than once, so consumers should be idempotent).
  5. Complex Event Processing (CEP):

    • Kafka feeds events to CEP systems, allowing real-time analysis and pattern detection.
    • Use cases include fraud detection, monitoring, and anomaly detection.
  6. Internet of Things (IoT) and Automation:

    • Kafka integrates well with IoT platforms, enabling efficient data ingestion from sensors and devices.
    • It can feed data to automation systems, triggering actions based on events.

Pros and Cons?

Pros of Apache Kafka:

  1. Scalability:

    • Kafka is highly scalable. It can be expanded quickly without downtime, making it suitable for handling large data volumes.
  2. Durability:

    • Kafka persists messages on disks, ensuring data durability.
    • Intra-cluster replication further enhances reliability.
  3. High Performance:

    • Kafka handles terabytes of data efficiently with minimal overhead.
    • Its log-based architecture allows for efficient data processing.
  4. Real-Time Data Streaming:

    • Kafka excels in real-time data streaming scenarios.
    • It’s used by companies like Walmart, Netflix, and Tesla for tracking user activity, notifications, and analytics.

Cons of Apache Kafka:

  1. Complexity:

    • Setting up and maintaining Kafka can be challenging.
    • It requires understanding various components like brokers, partitions, and Zookeeper.
  2. Operational Overhead:

    • Managing Kafka clusters involves operational tasks such as monitoring, scaling, and maintenance.
    • Organizations need skilled personnel to handle these responsibilities.
  3. Message Ordering:

    • While Kafka guarantees order within a partition, global ordering across all partitions can be complex.
    • Ensuring strict message order across topics can be tricky.
  4. Message Size Limitations:

    • Kafka doesn’t handle very large messages efficiently.
    • Breaking down large messages into smaller chunks is necessary.
  5. Dependency on Zookeeper:

    • Kafka has traditionally relied on ZooKeeper for coordination and management (newer releases can run in KRaft mode without it).
    • ZooKeeper introduces an additional layer of complexity and potential points of failure.

Requirements

The fully fledged server uses the following:

  • Spring Framework
  • Spring Boot
  • Log4j2
  • Lombok
  • Kafka

Dependencies

There are a number of third-party dependencies used in the project. Browse the Maven pom.xml file for details of libraries and versions used.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter</artifactId>
	</dependency>
	<dependency>
		<groupId>org.yaml</groupId>
		<artifactId>snakeyaml</artifactId>
		<version>2.0</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.kafka</groupId>
		<artifactId>spring-kafka</artifactId>
	</dependency>

	<dependency>
		<groupId>com.fasterxml.jackson.core</groupId>
		<artifactId>jackson-databind</artifactId>
	</dependency>

	<dependency>
		<groupId>org.projectlombok</groupId>
		<artifactId>lombok</artifactId>
		<optional>true</optional>
	</dependency>

	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-test</artifactId>
		<scope>test</scope>
	</dependency>
	<dependency>
		<groupId>org.springframework.kafka</groupId>
		<artifactId>spring-kafka-test</artifactId>
		<scope>test</scope>
	</dependency>
</dependencies>

Coding 

In this step I won't go into the specifics of creating the multi-module project itself, since that depends on your IDE (or you can do it manually by copying and pasting, lol). A rough sketch of the parent pom.xml is shown below.
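For reference, here is a minimal sketch of what the parent pom.xml of the multi-module build might look like. The module names match the three modules described above; the groupId, artifactId, and version values are illustrative placeholders, so adjust them to your project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<!-- inherit dependency management (library versions) from Spring Boot -->
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.2.4</version>
		<relativePath/>
	</parent>

	<groupId>com.tra22</groupId>
	<artifactId>kafka-multi-module</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>pom</packaging>

	<!-- the three sub-modules this article walks through -->
	<modules>
		<module>common</module>
		<module>producer</module>
		<module>consumer</module>
	</modules>
</project>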

  • Common project
    • Create OrderDto in the dto package (/dto); it is used as the payload (message) sent to the queue.
package com.tra22.kafka.dto;

import lombok.*;
import java.io.Serializable;

@Getter
@Setter
@Builder
@ToString
@NoArgsConstructor
@AllArgsConstructor
public class OrderDto implements Serializable {
    private int orderId;
    private double price;
}
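With the JsonSerializer configured later in the producer, an instance such as OrderDto.builder().orderId(1).price(100.23).build() is written to the topic as plain JSON (assuming default Jackson settings), roughly:

{"orderId":1,"price":100.23}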
    • Create KafkaProducerService in the producer package (/producer); it is used as a shared service for producing messages to a topic.
package com.tra22.kafka.producer;

import com.tra22.kafka.dto.OrderDto;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
@Slf4j
@RequiredArgsConstructor
public class KafkaProducerService {
    private final KafkaTemplate<String, OrderDto> kafkaTemplate;
    public void send(String topicName, String key, OrderDto value) {
        // In Spring Kafka 3, send() returns a CompletableFuture<SendResult<K, V>>.
        // The future is already completed by the framework when whenComplete fires,
        // so we only need to log the outcome here.
        kafkaTemplate.send(topicName, key, value).whenComplete((sendResult, exception) -> {
            if (exception != null) {
                log.error("error: ", exception);
            } else {
                log.info("result: {}", sendResult);
            }
            log.info("order producer : {} ", value);
        });
    }
}
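One thing to note: for these two classes to compile, the Common module itself must declare spring-kafka and Lombok in its own pom.xml (versions are managed by the Spring Boot parent). A minimal sketch of the Common module's dependencies:

<dependencies>
	<!-- provides KafkaTemplate used by KafkaProducerService -->
	<dependency>
		<groupId>org.springframework.kafka</groupId>
		<artifactId>spring-kafka</artifactId>
	</dependency>
	<!-- provides the @Getter/@Setter/@Builder annotations on OrderDto -->
	<dependency>
		<groupId>org.projectlombok</groupId>
		<artifactId>lombok</artifactId>
		<optional>true</optional>
	</dependency>
</dependencies>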
  • Producer project
    • Add the Kafka properties configuration as shown below:
server:
  port: 8089
spring:
  kafka:
    bootstrap-servers: localhost:9092
    # producer config
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    • For this code I'm just testing, so I call the producer straight from the main class (it's bad practice, don't follow me lol; please use controller and service layers for this). I also call KafkaProducerService from the Common project. Please take note: if both projects share the same base package, you don't need to add @ComponentScan (Spring Boot will scan it automatically), but if the packages differ you will get a "no bean found" error in the Producer project. To fix it, uncomment the @ComponentScan line below and change the base package to the one used by your Common project.
package com.tra22.kafka;

import com.tra22.kafka.dto.OrderDto;
import com.tra22.kafka.producer.KafkaProducerService;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
//@ComponentScan(basePackages = {"com.tra22.common"})
public class Main {
    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }
    @Bean
    public CommandLineRunner commandLineRunner(
            KafkaProducerService kafkaTemplate
    ) {
        return args -> {
            kafkaTemplate.send("order-topic", "orderId", OrderDto.builder().orderId(1).price(100.23).build());
        };
    }
}
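Optionally, the producer application can also create the topic on startup by exposing a NewTopic bean; Spring Boot's auto-configured KafkaAdmin registers any such beans with the broker. This is not part of the original sample, just a sketch, and the partition and replica counts below are illustrative:

package com.tra22.kafka;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {
    // creates "order-topic" on startup if it does not already exist
    @Bean
    public NewTopic orderTopic() {
        return TopicBuilder.name("order-topic")
                .partitions(3)
                .replicas(1)
                .build();
    }
}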
  • Consumer project
    • Add the Kafka properties configuration as shown below:
server:
  port: 8082
spring:
  kafka:
    bootstrap-servers: localhost:9092
    # consumer config
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: "*"
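Optionally, you can set a default consumer group id and a fallback payload type in the same file; both are standard Spring Kafka properties. The fallback type is only used when an incoming record carries no type headers (with the shared OrderDto from the Common module, the headers written by JsonSerializer already resolve correctly), so treat this as a defensive sketch:

spring:
  kafka:
    consumer:
      # default group id, used when the listener does not set one
      group-id: order-group
      properties:
        # fallback payload type when the record has no type headers
        spring.json.value.default.type: com.tra22.kafka.dto.OrderDto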
    • Create KafKaTopicListeners in the event package (/event). The @KafkaListener annotation designates a bean method as a listener for a listener container; the bean is wrapped in a MessagingMessageListenerAdapter configured with various features, such as converters to convert the data, if necessary, to match the method parameters.
package com.tra22.kafka.event;

import com.tra22.kafka.dto.OrderDto;
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class KafKaTopicListeners {
    @KafkaListener(topics = {"order-topic"}, groupId = "order-group")
    public void consume(OrderDto orderDto) {
        log.info("order consumer : {} ", orderDto);
    }
}
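If the consumer also needs the record key (the producer above sends "orderId" as the key) or other metadata, the listener method can take the whole ConsumerRecord instead of the bare payload. A minimal sketch of an alternative signature for the same listener:

import org.apache.kafka.clients.consumer.ConsumerRecord;

@KafkaListener(topics = {"order-topic"}, groupId = "order-group")
public void consume(ConsumerRecord<String, OrderDto> record) {
    // record.key() is the message key, record.value() the deserialized OrderDto
    log.info("key: {}, order consumer : {}", record.key(), record.value());
}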

Building the project

You will need:

  • Java JDK 17 or higher
  • Maven 3 or higher
  • Tomcat 10.1

Clone the project and use Maven to build the server

$ mvn clean install
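Assuming the spring-boot-maven-plugin is configured in the producer and consumer modules (and a Kafka broker is reachable at localhost:9092, the bootstrap-servers value in both configs), each application can then be started from the parent directory with Maven's -pl (project list) flag:

$ mvn -pl producer spring-boot:run
$ mvn -pl consumer spring-boot:run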


Summary

Download the source code for the sample application implementing a Producer and a Consumer. You will also learn how to build a multi-module Spring Boot 3 (Maven) project with the following modules:

  • Common: shared code (the DTO and producer service) used by the other modules.
  • Producer: sends messages to a topic; messages are distributed to partitions by a mechanism such as key hashing.
  • Consumer: reads messages from one or more Kafka topics.
