First of all, welcome to my new series: Coding the Standards. In this series, I will do my best to turn industry standards from fields like aviation, banking, and insurance into working software architectures. This first episode focuses on IATA R753, a civil aviation standard for baggage tracking. Before we start, here is a short summary of this article's sections:
- About the IATA R753
- Event-Driven Design
- Data Modelling
- System Scalability & Reliability
- Code Walkthrough
- Conclusion
1. About the IATA R753
When you drop your luggage on the conveyor belt at check-in, a complex digital journey begins behind the scenes of modern aviation. Your bag travels not only physically through airport systems but also through a network of software platforms, data streams, and international standards.
The IATA R753 standard was designed to make this process transparent, traceable, and reliable. It ensures that baggage location and status updates are recorded at every key stage, from the moment a bag is accepted at check-in to the moment it is returned to the passenger.
In this article, we'll explore how to build a real-time baggage tracking system using modern architectural components such as Spring Boot, Kafka, WebSocket, and event-driven architecture. Our goal isn't just to write code but also to create a scalable, fault-tolerant solution that can directly enhance the travel experience of our passengers.
If you are ready, please fasten your seatbelt, return your seat to the upright position and let's take a behind-the-scenes look at what really happens to your bag after you hand it over.

The IATA R753 standard defines how an airline must track a bag's journey. Its main purpose is to follow a bag from check-in until it arrives at its destination, and it makes these checks mandatory for airlines and ground handlers. To achieve that, a bag must be scanned at specific points, and each scan must be reflected in the tracking system. The fundamental tracking points are:
- Check-in,
- Loaded to Plane,
- Transferred (if it's a connection flight),
- Arrived
These controls have been mandatory since June 2018 for all IATA member airlines. The tracking standard aims to improve three things:
- Reduced baggage loss and delays,
- Better customer satisfaction,
- More efficient operations and data sharing across partners.
Standards like this matter: in the past year alone, more than 10 million bags in total were delayed, mislaid, misdirected, pilfered, or stolen.
My solution approach to the challenges of real-time baggage tracking under IATA R753 is an event-driven architecture that combines scalable backend services, real-time messaging, and modern UI components.
2. Event-Driven Design
Each significant baggage movement (check-in, loading, unloading, and delivery) is captured as an event.
- Producer: Systems like Check-in, Baggage Handling, and Ground Operations generate events.
- Messaging: Apache Kafka is used as a central event bus, ensuring durable, ordered, and scalable message delivery.
- Consumer: Downstream services, such as operational dashboards or passenger notifications, subscribe to the Kafka topics to react to events in real-time.
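To make that consumer side concrete, here is a minimal Spring Kafka sketch (the baggage.tracking topic name comes from the walkthrough later in this article; DashboardEventConsumer and its log-only body are hypothetical, the real listeners appear further down):
@Slf4j
@Service
@RequiredArgsConstructor
public class DashboardEventConsumer {
    private final ObjectMapper objectMapper; //Jackson mapper to deserialize the JSON payload

    @KafkaListener(topics = "baggage.tracking", groupId = "dashboard")
    public void onBaggageEvent(String message) throws IOException {
        //each record carries one baggage movement (check-in, loaded, unloaded, delivered)
        BaggageEvent event = objectMapper.readValue(message, BaggageEvent.class);
        log.info("Dashboard update: baggageTag={}, status={}", event.getTag(), event.getEventType());
    }
}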
3. Data Modelling
Kafka events contain key information:
{
"tag": "TK4J8B7XZ", //baggage tag
"flightNumber": "TK2020", //flight number
"timestamp": "2025-08-12T12:38:53Z", //time of event
"origin": "IST", //origin airport
"destination": "AYT", //destination airport
"passengerName": "Denizhan", //passenger name
"eventType": "CHECKED_IN" //IATA R753 status information
"handlingCarrier": "TK" //IATA R753 handling carrier information
}This structured format allows:
- Auditability: Each bag's full history is retained.
- Queryability: Operations staff can quickly see the current bag status.
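As a reference, the same payload can be expressed as a small Java type; a minimal sketch (the field names mirror the JSON above; the record form is my shorthand, and the repo's actual BaggageEvent class may differ):
public record BaggageEvent(
        String tag,             //baggage tag
        String flightNumber,    //flight number
        Instant timestamp,      //time of event (UTC)
        String origin,          //origin airport IATA code
        String destination,     //destination airport IATA code
        String passengerName,   //passenger name
        String eventType,       //IATA R753 status, e.g. CHECKED_IN (an enum in the repo)
        String handlingCarrier  //IATA R753 handling carrier code
) {}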
To maintain correct sequence per bag, the baggage tag is used as the partition key in Kafka. This ensures that all events for the same bag are processed in order, even under high load.
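In Spring Kafka terms, that boils down to passing the tag as the record key; a minimal sketch (assuming a kafkaTemplate bean and the baggage.tracking topic from the walkthrough):
//Kafka's default partitioner hashes the record key (murmur2), so all events
//with the same baggage tag land on the same partition and keep their order.
kafkaTemplate.send("baggage.tracking", baggageEvent.getTag(), payloadJson);
Ordering is guaranteed per partition only, which is exactly why the key choice matters.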
Passenger-facing dashboards and operational screens can subscribe to live updates, fed by the Kafka event stream, via WebSocket/STOMP. This enables:
- Immediate status updates when a bag is checked in, loaded, or delivered.
- Visual timelines showing the bag's journey through airports.
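A minimal sketch of the Spring configuration behind that (the /ws/luggage endpoint and /topic prefix match the UI code later in this article; the repo's actual config may differ):
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/topic"); //server pushes updates to /topic/luggage-updates
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws/luggage").setAllowedOriginPatterns("*").withSockJS(); //SockJS fallback for the UI client
    }
}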
4. System Scalability & Reliability
To ensure the baggage tracking system remains highly available, fault-tolerant, and scalable, the following strategies are implemented:
Spring Boot Microservices
- Each service handles a specific responsibility: event generation and storage, event processing, and email notifications.
- Services can be scaled horizontally to handle increased load during peak travel times.
Apache Kafka
- Acts as the central event bus. Guarantees durable, ordered, and partitioned message delivery.
- Using baggage tag as the partition key ensures event ordering per bag.
Outbox Pattern
- All changes to baggage state are first written to an Outbox table in the database.
- A scheduled job reliably publishes these events to Kafka, ensuring transactional consistency between the database and the message broker.
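A minimal sketch of what the Outbox entity could look like (the field names are assumptions inferred from the walkthrough below, not necessarily the repo's exact model):
@Entity
@Getter
@Setter
@Table(name = "outbox_event")
public class OutboxEvent {
    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private UUID id;

    @Column(nullable = false, columnDefinition = "text")
    private String payload;          //serialized BaggageEvent JSON

    private boolean processed;       //flipped to true once published to Kafka

    private LocalDateTime createdAt; //written together with the business transaction
    private LocalDateTime updatedAt; //when the scheduler last touched it
}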
Dead Letter Queue (DLQ)
- Failed events that cannot be processed (e.g., deserialization errors, validation failures) are sent to a DLQ topic in Kafka.
- This prevents message loss and allows manual or automated reprocessing.
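This article implements the DLQ flow by hand with retries and a recovery method (shown in the walkthrough below). For comparison, Spring Kafka also ships building blocks for the same idea; a minimal sketch of the container-level alternative:
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, Object> template) {
    //after 3 failed delivery attempts (1s apart), publish the failed record
    //to <topic>.DLT by default instead of blocking the partition.
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(template),
            new FixedBackOff(1000L, 2));
}
The hand-rolled variant used in this project trades that convenience for full control over what exactly is published to the DLQ.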
Persistent Storage
- PostgreSQL stores full baggage event histories for auditing, reporting, and operational dashboards.
Fault Tolerance
- Microservices and Kafka clusters can recover from node failures.
- Consumers can replay events from Kafka topics to rebuild state if needed.
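A minimal sketch of such a replay, assuming we rebuild state by re-reading baggage.tracking from the beginning with a fresh consumer group (applyEvent is a hypothetical state-rebuilding hook):
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "baggage-rebuild-" + UUID.randomUUID()); //fresh group => no committed offsets
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");                    //so consumption starts at the oldest record
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("baggage.tracking"));
    ConsumerRecords<String, String> records;
    while (!(records = consumer.poll(Duration.ofSeconds(2))).isEmpty()) {
        records.forEach(r -> applyEvent(r.key(), r.value())); //replay every historical event, in order per partition
    }
}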
Together, these strategies ensure that the baggage tracking system can handle millions of events per month, maintain correct event ordering, and provide real-time updates without losing critical information.
Now, let's dive deeper into the architecture by taking a look at the code.
Before the walkthrough, please remember that all the code in this article is available at: arasdenizhan/iata-r753-baggage-track: Real-Time Event-Driven Baggage Tracking System with IATA R753
5. Code Walkthrough
First of all, we will start with our base POM. This parent POM manages all our dependencies (acting as a BOM) and builds our modules.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<!-- PARENT AND BILLS OF MATERIAL FOR PROJECT -->
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.5.4</version>
<relativePath/>
</parent>
<groupId>io.github.arasdenizhan</groupId>
<artifactId>bts</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>bts</name>
<description>Baggage Tracking System with Event Driven Architecture</description>
<packaging>pom</packaging>
<properties>
<java.version>17</java.version>
<spring-parent.version>3.5.4</spring-parent.version>
<spring.kafka.version>3.3.4</spring.kafka.version>
<mapstruct.version>1.6.3</mapstruct.version>
<spring-doc.version>2.8.5</spring-doc.version>
<postgresql.version>42.7.7</postgresql.version>
<hibernate-type.version>2.21.1</hibernate-type.version>
</properties>
<dependencies>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
<version>${spring-parent.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
<version>${spring-parent.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-mail</artifactId>
<version>${spring-parent.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
<version>${spring-parent.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<version>${spring-parent.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-websocket</artifactId>
<version>${spring-parent.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>${spring.kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
<version>${spring-doc.version}</version>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>${postgresql.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jsr310</artifactId>
<version>${jackson-bom.version}</version>
</dependency>
<dependency>
<groupId>com.vladmihalcea</groupId>
<artifactId>hibernate-types-60</artifactId>
<version>${hibernate-type.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<version>${spring-parent.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
<modules>
<module>management-service</module>
<module>notification-core</module>
<module>notification-service</module>
<module>dlq-notification-service</module>
</modules>
</project>
Let's start with management-service.
This service is responsible for checking in a baggage and updating its status. At the same time, it writes update events to an outbox event table and tries to publish pending outbox events to our Kafka cluster every 30 seconds. (This interval is purely experimental; in a real-life scenario it would depend on system requirements.)
@RestController
@RequestMapping("/api/v1/check-in")
@RequiredArgsConstructor
public class CheckInController {
private final BaggageService service;
@PostMapping
ResponseEntity<BaggageInfoDto> checkIn(@RequestBody CheckInRequest request){ //check-in endpoint for a baggage
return ResponseEntity.ok(service.checkIn(request));
}
}
@CrossOrigin //for our local tests, we allow all cross origins. in a real-life scenario, a proper Spring Security/CORS configuration should be used.
@RestController
@RequestMapping("/api/v1/baggage")
@RequiredArgsConstructor
public class BaggageController {
private final BaggageService service;
@GetMapping
ResponseEntity<List<BaggageUpdate>> getAll(){ //get all endpoint for UI
return ResponseEntity.ok(service.getAll());
}
@PutMapping("{baggageTag}")
ResponseEntity<Void> updateStatus(@PathVariable("baggageTag") String baggageTag){ //update endpoint for baggage statuses
service.update(baggageTag);
return ResponseEntity.ok().build();
}
}
Let's look into our Baggage Service interface:
public interface BaggageService {
BaggageInfoDto checkIn(CheckInRequest request);
List<BaggageUpdate> getAll();
void update(String baggageTag);
}
@Slf4j
@Service
@RequiredArgsConstructor
public class BaggageServiceImpl implements BaggageService {
private final BaggageMapper mapper; //we used a separate mapper interface, powered by Mapstruct
private final BaggageRepository repository; //our JpaRepository instance
private final OutboxEventService outboxEventService; //outbox event service to handle outbox events
private final BaggageWebSocketService baggageWebSocketService; //baggage websocket service to handle real-time baggage updates for UI
private final Clock clock; //injected clock instance with specified zone, will make writing tests easy.
@Override
@Transactional //transactional method for checking in a baggage.
// it saves the baggage to our DB first, then writes the outbox event to the same internal DB,
// and afterwards pushes a live update to our tracking UI.
// the order here is important: we MUST update the baggage DB first, then send events to Kafka and the UI.
public BaggageInfoDto checkIn(CheckInRequest request) {
Baggage existingBaggage = repository.findByTag(request.getTag()).orElse(null);
if(existingBaggage != null){
throw new BaggageServiceException("Baggage with tag=" + request.getTag() + " already checked-in!");
}
LocalDateTime now = LocalDateTime.now(clock);
log.info("Check-in request, baggageTag={}, flightNumber={}, time={}", request.getTag(), request.getFlightNumber(), now);
Baggage baggage = mapper.fromRequest(request);
baggage.setCheckInTime(now);
baggage.setCurrentStatus(EventType.CHECKED_IN);
baggage.setCurrentLocation(baggage.getOrigin());
Baggage savedBaggage = repository.save(baggage);
log.info("Baggage with baggageTag={} successfully checked in.", request.getTag());
outboxEventService.saveOutboxEvent(baggage);
baggageWebSocketService.updateBaggageInfo(baggage);
return mapper.toDto(savedBaggage); //returning an info DTO to avoid sharing our DB model directly.
}
@Override
public List<BaggageUpdate> getAll() {
return mapper.toUpdate(repository.findAll());
}
@Override
@Transactional
public void update(String baggageTag) {
Baggage baggage = repository.findByTag(baggageTag)
.orElseThrow(() -> new BaggageServiceException("Baggage with tag=" + baggageTag + " not found!"));
baggage.setLastEventTime(LocalDateTime.now(clock)); //use the injected clock, consistent with checkIn
EventType nextStatus = EventType.getNextStatus(baggage.getCurrentStatus());
if(nextStatus == null){ //if the next status is null, there is no further status available for the baggage, so nothing should be done.
log.info("Baggage is on the final status, skipping update.");
return;
}
if(EventType.UNLOADED == nextStatus){ //if the next status is UNLOADED, set the current location to the destination, since the baggage is unloaded there.
baggage.setCurrentLocation(baggage.getDestination());
}
baggage.setCurrentStatus(nextStatus);
repository.save(baggage); //again first save to our db then outbox then UI update via websocket.
outboxEventService.saveOutboxEvent(baggage);
baggageWebSocketService.updateBaggageInfo(baggage);
}
}
Let's look at our Outbox Event Service:
public interface OutboxEventService {
void saveOutboxEvent(Baggage baggage);
}
@Slf4j
@Service
@RequiredArgsConstructor
public class OutboxEventServiceImpl implements OutboxEventService {
private final OutboxEventRepository repository;
private final OutboxEventMapper outboxEventMapper;
private final KafkaService kafkaService;
private final Clock clock;
@Override
public void saveOutboxEvent(Baggage baggage) { // outbox event will be mapped via a mapstruct mapper instance.
try {
log.info("OutboxEvent save request for baggageTag={}", baggage.getTag());
OutboxEvent outboxEvent = outboxEventMapper.fromBaggage(baggage);
repository.save(outboxEvent); //it will be saved to our db first.
log.info("OutboxEvent saved successfully for baggageTag={}", baggage.getTag());
} catch (JsonProcessingException e) {
log.error("Error occurred while mapping outbox event!", e);
throw new OutboxEventException(e.getMessage(), e);
}
}
@Scheduled(fixedRate = 30000)
@Transactional
public void processOutboxEvents(){ //every 30 seconds, we fetch unprocessed outbox events and try to send them to Kafka.
List<OutboxEvent> notProcessedEvents = repository.findNotProcessedEvents();
if(notProcessedEvents != null && !notProcessedEvents.isEmpty()){
log.info("Found {} amount of not processed outbox events.", notProcessedEvents.size());
for(OutboxEvent event : notProcessedEvents){
log.info("OutboxEvent with id={} will be processed.", event.getId());
boolean isSent = kafkaService.send(event.getId().toString(), event.getPayload()); //note: the outbox event id is used as the message key here; for strict per-bag ordering, the baggage tag could be used as the key instead.
if(isSent){
event.setProcessed(true); //if successful, set processed to true to mark the outbox event as "sent"
event.setUpdatedAt(LocalDateTime.now(clock));
repository.save(event);
log.info("OutboxEvent with id={} processed successfully.", event.getId());
} else { //if it failed, do nothing; the outbox event will be retried in a future run of this method.
log.warn("OutboxEvent with id={} not processed successfully!", event.getId());
}
}
}
}
}
You can see more (like Models, Enums, Repositories) in the GitHub repo I shared above. Let's continue with notification-core.
notification-core is our core module for common notification logic shared by notification-service and dlq-notification-service. It contains our BaggageEvent Kafka model, the MailType enum for template matching, and the MailMessage mail DTO.
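Since MailType drives the template matching, a sketch of what it could look like helps (the template names and titles here are my assumptions; the repo's enum may differ, but the getType, getTemplate, and getTitle usages below rely on this shape):
@Getter
@RequiredArgsConstructor
public enum MailType {
    CHECK_IN("check-in-mail", "Your baggage is checked in!"),
    DELIVERED("delivered-mail", "Your baggage has been delivered!");

    private final String template; //Thymeleaf template name processed by the TemplateEngine
    private final String title;    //mail subject line

    //only some IATA R753 statuses trigger a passenger mail; the rest map to null and the mail is skipped
    public static MailType getType(EventType eventType) {
        return switch (eventType) {
            case CHECKED_IN -> CHECK_IN;
            case DELIVERED -> DELIVERED;
            default -> null;
        };
    }
}
Below is the EmailService that consumes these values: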
public interface EmailService {
void send(MailMessage message);
}
@Slf4j
@Service
@RequiredArgsConstructor
public class EmailServiceImpl implements EmailService {
private final JavaMailSender mailSender;
private final TemplateEngine templateEngine;
public void send(MailMessage message) {
if(message.getMailType() == null){
log.info("Not required to send mail. Skipping mail send operation.");
return;
}
BaggageEvent baggageEvent = message.getBaggageEvent();
try {
log.info("Mail send operation started for baggageTag={}, mailType={}", baggageEvent.getTag(), message.getMailType());
MimeMessage mimeMessage = mailSender.createMimeMessage();
MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true, "UTF-8");
getMailContent(message.getMailType(), baggageEvent, helper);
log.info("Sending mail for baggageTag={}", baggageEvent.getTag());
mailSender.send(mimeMessage);
log.info("Mail Sending successfully completed for baggageTag={}", baggageEvent.getTag());
} catch (Exception e) {
log.error("Mail send operation failed for baggageTag={}!", baggageEvent.getTag(), e);
throw new MailServiceException("Mail send operation failed!", e);
}
}
private void getMailContent(MailType mailType, BaggageEvent baggageEvent, MimeMessageHelper helper) throws MessagingException {
log.info("Mail template generation started for baggageTag={}", baggageEvent.getTag());
Context context = new Context();
context.setVariable("passengerName", baggageEvent.getPassengerName());
context.setVariable("flightNumber", baggageEvent.getFlightNumber());
context.setVariable("bagTag", baggageEvent.getTag());
context.setVariable("checkedInAt", DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm")
.format(baggageEvent.getTimestamp()));
context.setVariable("helpUrl", "denizhanairlines.com");
context.setVariable("departureAirport", baggageEvent.getOrigin());
context.setVariable("arrivalAirport", baggageEvent.getDestination());
String htmlContent = templateEngine.process(mailType.getTemplate(), context);
helper.setFrom("info@denizhanairlines.com");
helper.setTo("test@mail.com");
helper.setSubject(mailType.getTitle());
helper.setText(htmlContent, true);
log.info("Mail template generation finished successfully for baggageTag={}", baggageEvent.getTag());
}
}
The Mail Service is responsible for sending mails to our clients. (For test purposes, we send our mails to MailHog and use a dummy recipient address.) For the mail bodies, we use Thymeleaf templates with placeholders: a Context object supplies the placeholder variables referenced in the template HTML files, and the TemplateEngine renders the given template into String content. Let's go through our lovely mail sender service, notification-service.
@Slf4j
@Service
@RequiredArgsConstructor
public class KafkaService {
private static final String BAGGAGE_TOPIC = "baggage.tracking"; //baggage topic to subscribe
private static final String MAIL_DLQ_TOPIC = "mail.send.failures.dlq"; //DLQ topic for failed messages.
private final KafkaTemplate<String, BaggageEvent> kafkaTemplate; //used to publish failed events to the DLQ topic
private final ObjectMapper objectMapper;
private final EmailService emailService;
@KafkaListener(topics = BAGGAGE_TOPIC, groupId = "notification")
public void consume(String message){
try {
log.info("Incoming kafka message, trying to parse message.");
BaggageEvent baggageEvent = objectMapper.reader().readValue(message, BaggageEvent.class);
log.info("BaggageEvent with baggageTag={} successfully parsed.", baggageEvent.getTag());
tryToSendMail(baggageEvent); //we will try to send mail 3 times. note: @Retryable relies on Spring's proxy, so a plain self-invocation bypasses the retry; moving the method to a separate bean (or self-injecting) avoids that.
} catch (IOException e) {
log.error("Error occurred while reading incoming kafka message!", e);
throw new KafkaServiceException("Incoming kafka message not parsable!", e);
}
}
@Retryable(
retryFor = {MailServiceException.class}, //retry for MailServiceException, custom exception that we threw in our notification-core MailService flow.
backoff = @Backoff(delay = 2000, multiplier = 2) //the first retry waits 2000 ms (2 seconds) and the second one 2000*2 (4 seconds); @Retryable defaults to 3 attempts in total.
)
public void tryToSendMail(BaggageEvent baggageEvent) {
MailMessage mailMessage = new MailMessage(MailType.getType(baggageEvent.getEventType()), baggageEvent);
emailService.send(mailMessage);
}
@Recover //if all 3 attempts fail, send the event to the DLQ topic and let the DLQ service handle it.
public void recover(MailServiceException e, BaggageEvent baggageEvent) { //@Recover arguments must match the @Retryable method's signature.
kafkaTemplate.send(MAIL_DLQ_TOPIC, baggageEvent);
log.error("Mail send failed after retries, baggage event sent to the DLQ: {}", baggageEvent, e);
}
}
And in our dlq-notification-service we have our Kafka Service again.
@Slf4j
@Service
@RequiredArgsConstructor
public class KafkaService {
private static final String MAIL_DLQ_TOPIC = "mail.send.failures.dlq";
private final ObjectMapper objectMapper;
private final EmailService emailService;
@KafkaListener(topics = MAIL_DLQ_TOPIC, groupId = "dlq.notification")
public void consume(String message){
try {
log.info("Incoming kafka message, trying to parse message.");
BaggageEvent baggageEvent = objectMapper.reader().readValue(message, BaggageEvent.class);
log.info("BaggageEvent with baggageTag={} successfully parsed.", baggageEvent.getTag());
tryToSendMail(baggageEvent);
} catch (IOException e) {
log.error("Error occurred while reading incoming kafka message!", e);
throw new KafkaServiceException("Incoming kafka message not parsable!", e);
}
}
@Retryable(
retryFor = {MailServiceException.class},
backoff = @Backoff(delay = 2000, multiplier = 2)
)
public void tryToSendMail(BaggageEvent baggageEvent) {
MailMessage mailMessage = new MailMessage(MailType.getType(baggageEvent.getEventType()), baggageEvent);
emailService.send(mailMessage);
}
}
This service consumes the DLQ topic messages and again tries to send the mail 3 times. Normally, if all 3 attempts fail, we MUST stop this consume flow with a fallback method that prevents endless re-consumption of DLQ topic messages. In our test scenario we don't expect the DLQ service itself to fail, but for a real-life scenario we can add another method, as seen below:
@Recover
public void recover(MailServiceException ex, BaggageEvent baggageEvent) {
log.error("Mail sending permanently failed after retries. baggageTag={}, error={}",
baggageEvent.getTag(), ex.getMessage(), ex);
// Alternatively, we can do the following:
// - write the message to a "permanent failure" topic.
// - save it to handle later (as we did with outbox events)
// - alarm/Slack notifications
// - mark for manual handling
}
And let's look into our Luggage Tracker UI.
import { useEffect, useState } from "react";
import { Modal, Table, Tag } from "antd";
import { Client, type Message } from "@stomp/stompjs";
import SockJS from "sockjs-client";
import { airportsMap, type Airport } from "~/constant";
import { lazy, Suspense } from "react";
interface Luggage { //Luggage object
tag: string;
status: string;
lastUpdate: string;
flightNumber: string;
location: string;
}
const statusColors: Record<string, string> = { //Status colors visible on table
CHECKED_IN: "blue",
CLAIMED: "green",
LOADED: "orange",
DELIVERED: "orange",
TRANSFERRED: "orange",
UNLOADED: "orange",
ARRIVED: "orange",
};
const MapItem = lazy(() => import("../map")); //we import MapItem lazily since it uses React Leaflet and map rendering happens on the client side.
const INFO_URL = import.meta.env.REACT_APP_API_INFO_URL
? import.meta.env.REACT_APP_API_INFO_URL
: "http://localhost:8090/api/v1/baggage";
const SOCKET_URL = import.meta.env.REACT_APP_API_SOCKET_URL
? import.meta.env.REACT_APP_API_SOCKET_URL
: "http://localhost:8090/ws/luggage";
//we get URL env values. If not available, put local urls for development.
//INFO_URL is for getting ALL Baggages.
//SOCKET_URL is for our live updates.
export default function LuggageTracker() {
const [isModalOpen, setIsModalOpen] = useState(false);
const [modalContent, setModalContent] = useState<React.ReactNode | null>(
null
);
const [selectedLocation, setSelectedLocation] = useState<
Airport | undefined
>();
const [luggageList, setLuggageList] = useState<Luggage[]>([]);
const [highlightedRow, setHighlightedRow] = useState<string | null>(null);
useEffect(() => { //with the useEffect hook, we fetch all baggage info first, then connect our stompClient to receive live updates.
const stompClient = new Client({ //STOMP client to connect to our websocket endpoint; created inside the effect so it isn't recreated on every render
webSocketFactory: () => new SockJS(SOCKET_URL),
reconnectDelay: 5000,
debug: (msg) => console.log(msg),
});
fetch(INFO_URL, {
method: "GET",
headers: {
Accept: "application/json",
"Content-Type": "application/json",
},
})
.then((response) => response.json())
.then((data: Luggage[]) => {
setLuggageList(data);
})
.catch((error) => {
Modal.error({
title: "Error while getting data!",
content:
"Error occurred while getting luggage data! Reason = " + error,
});
});
stompClient.onConnect = () => {
console.log("STOMP connected");
stompClient.subscribe("/topic/luggage-updates", (message: Message) => {
if (message.body) {
try {
const data: Luggage = JSON.parse(message.body);
setLuggageList((prevList) => { //find index of incoming change event.
const index = prevList.findIndex((item) => item.tag === data.tag);
if (index !== -1) { //if it exists in the list, update it in place; otherwise append to the end.
const newList = [...prevList];
newList[index] = data;
setHighlightedRow(prevList[index].tag); //set row highlighted.
setTimeout(() => setHighlightedRow(null), 2000); //after 2 seconds, remove highlight.
return newList;
} else {
return [...prevList, data];
}
});
} catch (err) {
console.error("Error parsing STOMP message:", err);
}
}
});
};
stompClient.activate();
return () => {
stompClient.deactivate();
};
}, []);
const columns = [
{
title: "Luggage Tag",
dataIndex: "tag",
key: "tag",
},
{
title: "Status",
dataIndex: "status",
key: "status",
render: (status: string) => (
<Tag color={statusColors[status] || "default"}>{status}</Tag>
),
},
{
title: "Last Update",
dataIndex: "lastUpdate",
key: "lastUpdate",
render: (date: string) =>
date === null ? "-" : new Date(date).toLocaleString(),
},
{
title: "Flight",
dataIndex: "flightNumber",
key: "flightNumber",
},
{
title: "Location",
dataIndex: "location",
key: "location",
render: (location: string) => (
<Tag
color="black"
onClick={() => {
setIsModalOpen(true);
const lctn = airportsMap.get(location);
const content = getMapContent(lctn);
setModalContent(content);
setSelectedLocation(lctn);
}}
>
{location}
</Tag>
),
},
];
//if location is not undefined, we will render our MapItem.
//since we load it lazily, we use a Suspense fallback to show a loading text.
function getMapContent(location: Airport | undefined) {
return (
<>
{location ? (
<Suspense fallback={<div>Loading Map...</div>}>
<MapItem
position={location ? location.position : [1, 1]}
title={location ? location.name : ""}
/>
</Suspense>
) : (
<></> //if location is not defined, return an empty block.
)}
</>
);
}
function handleCloseModal() {
setIsModalOpen(false);
setModalContent(null);
}
return (
<>
<div style={{ padding: 24, minHeight: "100vh" }}>
<div className="mb-4 flex items-center justify-center gap-5">
<h2 className="font-bold text-zinc-200">
🛄 Denizhan Airlines - Live Luggage Tracking System
</h2>
</div>
<Table
rowKey="tag"
columns={columns}
dataSource={luggageList}
rowClassName={(luggage) =>
luggage.tag === highlightedRow ? "highlight-row" : ""
}
pagination={false}
bordered
/>
</div>
{modalContent && (
<Modal
title={selectedLocation?.name}
open={isModalOpen}
onOk={handleCloseModal}
onCancel={handleCloseModal}
>
{modalContent}
</Modal>
)}
</>
);
}
And that's all for the code. You can explore more in the GitHub repo; there is a lot more to cover, like the individual Dockerfiles, configs, repository interfaces, mapper interfaces, and so on. But we've walked through the project's main points.
And before the end, let's look into our docker-compose file:
services:
  zookeeper: #zookeeper for kafka
    image: confluentinc/cp-zookeeper:7.5.0
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka: #our kafka cluster
    image: confluentinc/cp-kafka:7.5.0
    container_name: kafka
    ports:
      - "9092:9092"
    healthcheck: #we use a healthcheck to prevent our dependent services from starting before kafka is up and healthy.
      test: [ "CMD", "kafka-topics", "--bootstrap-server", "localhost:9092", "--list" ]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
  kafka-init: #kafka-init creates our two main topics after kafka is up.
    image: confluentinc/cp-kafka:7.5.0
    depends_on:
      kafka:
        condition: service_healthy
    entrypoint: ["sh", "-c", "kafka-topics --create --topic baggage.tracking --partitions 1 --replication-factor 1 --if-not-exists --bootstrap-server kafka:9092 && kafka-topics --create --topic mail.send.failures.dlq --partitions 1 --replication-factor 1 --if-not-exists --bootstrap-server kafka:9092"]
  kafka-ui: #kafka-ui to visualize kafka topics.
    image: provectuslabs/kafka-ui:latest
    ports:
      - "8080:8080"
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
    depends_on:
      - kafka
  baggage-db: #our db for management-service
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      - baggage-db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: baggage
      POSTGRES_USER: baggage
      POSTGRES_PASSWORD: baggage123
  mailhog: #mailhog to mock SMTP and inspect mail messages from a web UI
    image: mailhog/mailhog:latest
    container_name: mailhog
    ports:
      - "1025:1025" # SMTP
      - "8025:8025" # Web UI
    restart: unless-stopped
  management-service: #management-service
    build:
      context: ../management-service
      dockerfile: Dockerfile
    ports:
      - "8090:8090"
    environment:
      DB_USER: baggage
      DB_PASSWORD: baggage123
      DB_HOST: baggage-db
      DB_PORT: 5432
      DB_NAME: baggage
      DDL_AUTO: update
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
    depends_on:
      - baggage-db
      - kafka
      - kafka-init
  notification-service: #notification-service
    build:
      context: ../notification-service
      dockerfile: Dockerfile
    ports:
      - "8091:8091"
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
      MAIL_HOST: mailhog
      MAIL_PORT: 1025
    depends_on:
      - mailhog
      - kafka
      - kafka-init
  dlq-notification-service: #dlq-notification-service
    build:
      context: ../dlq-notification-service
      dockerfile: Dockerfile
    ports:
      - "8092:8092"
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
      MAIL_HOST: mailhog
      MAIL_PORT: 1025
    depends_on:
      - mailhog
      - kafka
      - kafka-init
  tracking-ui: #tracking-ui service
    build:
      context: ../tracking-ui
      dockerfile: Dockerfile
      args:
        REACT_APP_API_INFO_URL: "http://management-service:8090/api/v1/baggage"
        REACT_APP_API_SOCKET_URL: "http://management-service:8090/ws/luggage"
    ports:
      - "3000:3000"
volumes: #volume for our db
  baggage-db-data:
6. Conclusion
I have to say, it was fun to build such a system. I've enjoyed flying for as long as I can remember; who knows, maybe that's the reason. But coding the standards is more instructive than you might think.
Please comment on what you think about this architecture; there is always room for improvement. Of course, we could add Testcontainers integration tests, functional tests for our controllers, and unit tests for our services, but this is just a simple demonstration of an architecture.
Don't forget to follow me: in my next article, Coding the Standards II, I will try to architect a system around PCI DSS (Payment Card Industry Data Security Standard), a security standard from the payments world. Masking, encryption, and RBAC will be included.
Until my next article, take care and don't forget to read more!
Codes in this article: arasdenizhan/iata-r753-baggage-track: Real-Time Event-Driven Baggage Tracking System with IATA R753
(Screenshots are available in the GitHub repo.)