Simplifying asynchronous service communication with Kafka, Debezium and the Outbox pattern | Ernesto Matos, Loggi

Hosted by Confluent · 19 slides · Sep 24, 2021

About This Presentation

Loggi is one of the Brazilian unicorns that is transforming logistics through technology to connect the entire country. At Loggi, we've widely used the CDC pattern with Debezium + Kafka to capture changes from our main database as a way of communicating between services, in an Event Notification style.


Slide Content

Simplifying asynchronous service
communication with Kafka, Debezium
and the Outbox pattern
2021

2
Ernesto Matos
●Engineering Manager for Loggi’s Platform team.
●Helping to develop tools that make our engineers
and data people more productive.
●In the past, I got a Ph.D. in computer science,
researching software testing and verification.

Loggi is a logistics company enabled by
technology to connect Brazil.
3

We’re a unicorn!
With SoftBank, Microsoft, GGV Capital and
others, the new investment round has brought
Loggi’s valuation up to more than US$1 billion!
●Angels: R$2.5m (August 2013)
●Seed: R$10m (August 2014)
●Series B: R$50m (2015)
●Series C: R$50m (2017)
●Series D: R$400m (March 2018)
●Series E: R$600m (May 2019)
●Series F: R$1.15bn (2021)

●Around 1000 cities are served by Loggi directly.
●We expect to reach 3000 cities in 2021.

We are connecting Brazil
60% Brazil coverage
5
[Map: pickup regions and delivery zones, with regional crossdocks (XDs) added from 2019 to 2021, across São Paulo, Rio de Janeiro, Belo Horizonte, Brasília, Recife, Salvador, Porto Alegre, Vitória, Campinas, Curitiba and Florianópolis.]

6
The problem we
wanted to solve
●We had a growing number of services.
●Many ways of exchanging information between
services; each team did it its own way.
●Kafka is very flexible, so we wanted to define
company-wide patterns.

7
Requirements
●It has to work on both of our stacks:
Python/Django/PostgreSQL and
Kotlin/Micronaut/MongoDB.
●It has to abstract the complexities that exist
when using Kafka.
●The events produced should be made available
for consumption in our data lake.
●It has to have strong transactional guarantees.

8
Architecture overview
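The architecture diagram itself is not captured in this transcript. The title names the building blocks (a relational database, an outbox table, Debezium, Kafka), which are typically wired together with Debezium's outbox EventRouter transform. A hedged sketch of a Kafka Connect connector config for the PostgreSQL stack; hostnames, credentials, and the `public.outbox` table name are placeholders, not Loggi's actual setup:

```json
{
  "name": "outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "change-me",
    "database.dbname": "loggi",
    "database.server.name": "loggi",
    "table.include.list": "public.outbox",
    "transforms": "outbox",
    "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
    "transforms.outbox.route.by.field": "aggregatetype"
  }
}
```

With this transform, Debezium captures inserts into the outbox table from the write-ahead log and routes each event to a Kafka topic derived from its `aggregatetype` column, so services never publish to Kafka directly.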

9
Producing events

10
Producing events
Show me the code
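The code from this slide is not in the transcript. A minimal sketch of the producing side of the Outbox pattern, using stdlib `sqlite3` as a stand-in for PostgreSQL; the table and column names (`package`, `outbox`, `aggregatetype`, ...) are illustrative assumptions, chosen to match Debezium's default outbox layout:

```python
import json
import sqlite3
import uuid
from datetime import datetime, timezone

# The business table and the outbox table live in the SAME database,
# so a single transaction covers both writes. Debezium later streams
# the committed outbox rows into Kafka.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE package (id TEXT PRIMARY KEY, status TEXT NOT NULL);
CREATE TABLE outbox (
    id TEXT PRIMARY KEY,         -- event id (used for consumer-side dedup)
    aggregatetype TEXT NOT NULL, -- routes the event to a Kafka topic
    aggregateid TEXT NOT NULL,   -- becomes the Kafka message key
    type TEXT NOT NULL,          -- event type name
    payload TEXT NOT NULL        -- JSON body of the event
);
""")

def update_package_status(conn, package_id, status):
    """Update state and record the event atomically: either both rows
    commit or neither does, which is the 'strong transactional
    guarantees' requirement from the previous slide."""
    with conn:  # sqlite3 connection as context manager = one transaction
        conn.execute(
            "INSERT INTO package (id, status) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET status = excluded.status",
            (package_id, status),
        )
        conn.execute(
            "INSERT INTO outbox VALUES (?, ?, ?, ?, ?)",
            (
                str(uuid.uuid4()),
                "package",
                package_id,
                "PackageStatusChanged",
                json.dumps({
                    "package_id": package_id,
                    "status": status,
                    "occurred_at": datetime.now(timezone.utc).isoformat(),
                }),
            ),
        )

update_package_status(conn, "pkg-1", "DELIVERED")
```

The key design point is that no Kafka producer call appears in the request path: the service only writes to its own database, and the CDC pipeline does the publishing.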

11
Producing events

12
Consuming events

13
Consuming events
Show me the code
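This slide's code is also missing from the transcript. Since CDC through Kafka gives at-least-once delivery, the consuming side has to tolerate duplicates; a sketch of an idempotent handler that dedups on the event id the producer stored in the outbox row (the function and variable names are assumptions, not Loggi's actual API):

```python
import json

# In production this would be a database table with a unique index on
# the event id; a set is enough to illustrate the dedup logic.
processed_ids = set()

def handle_event(raw_message):
    """Process an event exactly once from the business logic's point of
    view, even if Kafka redelivers the message."""
    event = json.loads(raw_message)
    if event["id"] in processed_ids:
        return "skipped"  # duplicate delivery, safe to ignore
    # ... apply the business side effect here ...
    processed_ids.add(event["id"])
    return "processed"

msg = json.dumps({"id": "evt-1", "type": "PackageStatusChanged",
                  "payload": {"status": "DELIVERED"}})
first = handle_event(msg)
second = handle_event(msg)  # simulated redelivery of the same message
```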

14
Consuming events

15
Consuming events
Show me the code
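The deck's "Future work" slide mentions delivering events to multiple endpoints of the same consumer, which suggests the Event Broker routes each event type to registered handlers. A hypothetical sketch of such a subscription registry; the decorator-based API is an assumption for illustration, not the deck's actual code:

```python
from collections import defaultdict

# Maps an event type to the list of handlers subscribed to it.
handlers = defaultdict(list)

def subscribe(event_type):
    """Decorator registering a handler for one event type, so one
    service can react to several event types (and, in principle,
    several handlers to one type)."""
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

@subscribe("PackageStatusChanged")
def update_tracking(event):
    return f"tracking updated for {event['aggregateid']}"

def dispatch(event):
    """Fan the event out to every handler subscribed to its type."""
    return [fn(event) for fn in handlers[event["type"]]]

results = dispatch({"type": "PackageStatusChanged", "aggregateid": "pkg-1"})
```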

16
Dealing with failures
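The failure-handling slide's details are not in the transcript. A common policy for this kind of pipeline, sketched here under assumed parameters (attempt count, backoff base, and the dead-letter list are illustrative, not Loggi's actual values): retry delivery with exponential backoff, then park the event for manual inspection.

```python
import time

# Events that exhausted their retries end up here for inspection/replay.
dead_letter_queue = []

def deliver_with_retry(handler, event, max_attempts=3, base_delay=0.01):
    """Try to deliver an event; back off exponentially between failures
    and dead-letter the event after the final attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(event)
        except Exception:
            if attempt == max_attempts:
                dead_letter_queue.append(event)  # give up, keep the event
                return None
            time.sleep(base_delay * 2 ** (attempt - 1))  # 10ms, 20ms, ...

def always_fails(event):
    raise RuntimeError("downstream unavailable")

deliver_with_retry(always_fails, {"id": "evt-9"})
```

Dead-lettering instead of dropping preserves the at-least-once guarantee end to end: a failing consumer delays an event but never silently loses it.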

17
Final remarks
●This design has been running in production for
more than half a year.
●We have dozens of types of events created
(package status updates, cargo transfers,
accounting events, etc.).
●The Event Broker is successfully delivering more
than a million events every day.

18
Future work
●Delayed events
●Deliver events to multiple endpoints of the same
consumer
●Scheduled events

Thank You
loggi.com
19