classic-editor-remember: classic-editor
date: "2021-02-14T18:10:38+00:00"
guid: http://juplo.de/?p=1209
title: 'Implementing The Outbox-Pattern With Kafka - Part 1: Writing In The Outbox-Table'
url: /implementing-the-outbox-pattern-with-kafka-part-1-the-outbox-table/
---

_This article is part of a Blog-Series_

Based on a [very simple example-project](/implementing-the-outbox-pattern-with-kafka-part-0-the-example/)
we will implement the [Outbox-Pattern](https://microservices.io/patterns/data/transactional-outbox.html) with [Kafka](https://kafka.apache.org/quickstart).

- [Part 0: The Example-Project](/implementing-the-outbox-pattern-with-kafka-part-0-the-example/ "Jump to the explanation of the example project")
- Part 1: Writing In The Outbox-Table

In this part, we will implement the outbox itself: the queueing of the messages in a database-table.

The outbox is represented by an additional table in the database.
This table acts as a queue for messages that should be sent as part of the transaction.
Instead of sending the messages, the application stores them in the outbox-table.
The actual sending of the messages occurs outside of the transaction.

Because the messages are read from the table outside of the transaction context, only entries related to successfully committed transactions are visible.
Hence, the sending of the message effectively becomes a part of the transaction.
It happens only if the transaction was successfully completed.
Messages associated with an aborted transaction will not be sent.
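
To illustrate the principle, here is a minimal sketch (all class- and method-names are hypothetical and not taken from the example project, and the decoupling layer introduced below is ignored for now): the business write and the outbox-insert share one transaction, so they commit or roll back together.

`import java.time.ZonedDateTime;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService
{
  private final UserRepository userRepository;     // business data
  private final OutboxRepository outboxRepository; // the outbox-table

  public UserService(UserRepository userRepository, OutboxRepository outboxRepository)
  {
    this.userRepository = userRepository;
    this.outboxRepository = outboxRepository;
  }

  @Transactional
  public void createUser(String username)
  {
    userRepository.save(new User(username));
    // Queueing the message is just another INSERT inside the same transaction:
    outboxRepository.save(username, "CREATED", ZonedDateTime.now());
  }
}`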

No special measures need to be taken when writing the messages to the table.
The only thing to be sure of is that the write participates in the transaction.

In our implementation, we simply store the **serialized message**, together with a **key** that is needed for the partitioning of the data in Kafka, in case the order of the messages is important.
We also store a timestamp that we plan to record as [Event Time](https://kafka.apache.org/0110/documentation/streams/core-concepts) later.

One more thing that is worth noting is that we utilize the database to create a unique record-ID.
The generated **unique and monotonically increasing id** is required later for the implementation of **Exactly-Once** semantics.

[The SQL for the table](https://github.com/juplo/demos-spring-data-jdbc/blob/part-1/src/main/resources/db/migration/h2/V2__Table_outbox.sql) looks like this:

`CREATE TABLE outbox (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  key VARCHAR(127) NOT NULL,
  value VARCHAR(1023) NOT NULL,
  issued TIMESTAMP NOT NULL
);`

## Decoupling The Business Logic

In order to decouple the business logic from the implementation of the messaging mechanism, I have implemented a thin layer that uses [Spring Application Events](https://docs.spring.io/spring-integration/docs/current/reference/html/event.html) to publish the messages.

Messages are sent as a [subclass of `ApplicationEvent`](https://github.com/juplo/demos-spring-data-jdbc/blob/part-1/src/main/java/de/juplo/kafka/outbox/OutboxEvent.java):

`publisher.publishEvent(
    new OutboxEvent(
        this,
        username,
        User.Event.CREATED,
        ZonedDateTime.now(clock)));`
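
The event class itself is only a small data-holder. A minimal sketch of what such a subclass of `ApplicationEvent` can look like (the field- and accessor-names are assumptions, derived from how the event is used below):

`import java.time.ZonedDateTime;
import org.springframework.context.ApplicationEvent;

public class OutboxEvent extends ApplicationEvent
{
  private final String key;
  private final Object value;
  private final ZonedDateTime time;

  public OutboxEvent(Object source, String key, Object value, ZonedDateTime time)
  {
    super(source); // ApplicationEvent requires the publishing component as source
    this.key = key;
    this.value = value;
    this.time = time;
  }

  public String getKey() { return key; }
  public Object getValue() { return value; }
  public ZonedDateTime getTime() { return time; }
}`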

The event takes a key (`username`) and an object as value (an instance of an enum in our case).
An `EventListener` receives the events and writes them into the outbox-table:

`@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
public void onUserEvent(OutboxEvent event)
{
  try
  {
    repository.save(
        event.getKey(),
        mapper.writeValueAsString(event.getValue()),
        event.getTime());
  }
  catch (JsonProcessingException e)
  {
    throw new RuntimeException(e);
  }
}`
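
The `repository` used above only has to issue a plain SQL-`INSERT`. A minimal sketch of what it can look like (class- and method-names are assumptions, not necessarily the actual code of the project):

`import java.sql.Timestamp;
import java.time.ZonedDateTime;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class OutboxRepository
{
  private final JdbcTemplate jdbc;

  public OutboxRepository(JdbcTemplate jdbc)
  {
    this.jdbc = jdbc;
  }

  public void save(String key, String value, ZonedDateTime issued)
  {
    // A plain INSERT - the id is generated by the database (AUTO_INCREMENT)
    jdbc.update(
        "INSERT INTO outbox (key, value, issued) VALUES (?, ?, ?)",
        key, value, Timestamp.from(issued.toInstant()));
  }
}`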

The `@TransactionalEventListener` is not really needed here.
A normal `EventListener` would also suffice, because Spring executes all registered normal event listeners immediately and synchronously.
Therefore, the registered listeners run in the same thread that published the event and participate in the existing transaction.

But if a `@TransactionalEventListener` is used, as in our example project, it is crucial that the phase is switched to `BEFORE_COMMIT` when the Outbox-Pattern is introduced.
This is because the listener has to be executed in the same transaction context in which the event was published.
Otherwise, the writing of the messages would not be coupled to the success or abortion of the transaction, thus violating the idea of the pattern.
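
To make the difference explicit, the following sketch (hypothetical listener-methods, not part of the example project) contrasts the default phase `AFTER_COMMIT` with `BEFORE_COMMIT`:

`import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
public class PhaseExamples
{
  // The default phase is AFTER_COMMIT: the listener runs after the
  // transaction has been committed, hence outside of the transaction context.
  // An INSERT into the outbox issued here would not be rolled back together
  // with the business data.
  @TransactionalEventListener
  public void afterCommit(OutboxEvent event)
  {
    // too late: the transaction is already committed
  }

  // BEFORE_COMMIT: the listener runs while the transaction is still open, so
  // the INSERT into the outbox commits or rolls back with the business data.
  @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
  public void beforeCommit(OutboxEvent event)
  {
    // safe: still inside the transaction context
  }
}`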

## May The Source Be With You!

Since this part of the implementation only stores the messages in a normal database-table, it can be published as an independent component that does not require any dependencies on Kafka.
To highlight this, the implementation of this step does not use Kafka at all.
In a later step, we will extract the layer that decouples the business code from our messaging logic into a separate package.

The complete source code of the example-project can be cloned here:

- `git clone -b part-1 /git/demos/spring/data-jdbc`
- `git clone -b part-1 https://github.com/juplo/demos-spring-data-jdbc.git`

This version only includes the logic that is needed to fill the outbox-table.
Reading the messages from this table and sending them through Kafka will be the topic of the next part of this blog-series.

The sources include a [setup for Docker Compose](https://github.com/juplo/demos-spring-data-jdbc/blob/master/docker-compose.yml) that can be run without compiling the project, and a runnable [README.sh](https://github.com/juplo/demos-spring-data-jdbc/blob/master/README.sh) that compiles and runs the application and illustrates the example.