Moving from monolith to event-driven Microservices architecture

In this section, I am going to explain one of the most popular architectural transformations: moving from a monolith to an event-driven Microservices architecture. Many companies are making this transformation, and I would like to elaborate on why the IT world is heading in this direction and examine the pros and cons of the different approaches.

In general, a monolithic application consists of a single database with a multi-layered application on top of it. These layers can be differentiated based on the architectural design, such as the user interface, business layer and data layer.

Wikipedia also gives a clear definition of a monolithic application:

In software engineering, a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program from a single platform.

Even though companies are moving away from monolithic applications, this approach also has some advantages:

  • The data is in a single database, so you can access all your data in one place
  • Much easier in terms of development
  • You can use the same libraries, templates and code blocks
  • Easier to test and deploy

On the other hand, there are significant disadvantages:

  • SPOF (single point of failure): all the services connect to the same database, so if any issue happens at the database level, all the services will be down
  • Stuck with the same database: different services must share the same database model. If you select a relational database, all the services are supposed to use a relational database even if one of the services needs a NoSQL database
  • Not scalable
  • Not modular

Event-driven architecture

The event-driven pattern allows you to create scalable, real-time workflows to process your data. Every component is responsible for listening to input, processing it with the respective domain rules and publishing output to the event channel, where another component can consume it.

You can see a simple event flow below:

Steps in the workflow

  • Number 1: The Order Placement service receives the order, performs the required domain checks and publishes an event to be consumed by the Inventory service
  • Number 2: The Inventory service consumes the event, which includes the product details and the requested item count for the order, and publishes the result to the event channel
  • Number 3: The Order Placement service listens to the inventory confirmation event, makes the required changes in its database and sends an order confirmation to the event channel to be consumed by Customer Communications
  • Number 4: The Customer Communications service listens to the order confirmation events and sends an order confirmation to the customer

What happens if the Analytics team wants to use the order confirmation event to do some order-based consumer segmentation?
In that case, Analytics listens to the order confirmation event and streams it downstream to the Analytics system, where consumers can be segmented with the respective data. The point to note is that nothing needs to be done by the Order Placement service; the architecture is loosely coupled and scalable.
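The workflow above can be sketched with a minimal in-memory event channel. All the names here (`OrderPlaced`, `InventoryConfirmed`, the stock rule) are illustrative assumptions, not part of any specific framework; the point is that each service only knows the channel, never the other services.

```python
# Minimal in-memory sketch of the four-step order workflow described above.
from collections import defaultdict

class EventChannel:
    """Routes each published event to every handler subscribed to its type."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)

channel = EventChannel()
log = []

# Step 1: Order Placement validates the order and publishes it.
def place_order(order):
    log.append(f"order placed: {order['id']}")
    channel.publish("OrderPlaced", order)

# Step 2: Inventory checks stock and publishes the result (assumed stock rule).
def on_order_placed(order):
    order["in_stock"] = order["count"] <= 10
    channel.publish("InventoryConfirmed", order)

# Step 3: Order Placement records the confirmation and publishes it.
def on_inventory_confirmed(order):
    if order["in_stock"]:
        channel.publish("OrderConfirmed", order)

# Step 4: Customer Communications notifies the customer.
def on_order_confirmed(order):
    log.append(f"confirmation email sent for {order['id']}")

channel.subscribe("OrderPlaced", on_order_placed)
channel.subscribe("InventoryConfirmed", on_inventory_confirmed)
channel.subscribe("OrderConfirmed", on_order_confirmed)

place_order({"id": "A-1", "product": "book", "count": 2})
print(log)  # ['order placed: A-1', 'confirmation email sent for A-1']
```

Note how the Analytics scenario falls out for free: a new team calls `channel.subscribe("OrderConfirmed", ...)` and the Order Placement code does not change at all.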

Microservices architecture

A monolithic application works with the request-response model. Once the client makes a call, the system performs some actions and sends the response. The problem starts here: high coupling makes your system difficult to change and maintain.

This is where the event-driven approach comes into play. It enables the communication of data between different applications, creating a highly scalable and loosely coupled system.

The publisher writes data to the messaging layer, and the messaging layer holds the message for some time. On the other side, the consumer reads the message and processes it. This whole flow can happen in milliseconds, depending on your infrastructure's performance.
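A toy sketch of that messaging layer, assuming a Kafka-like retained log: the publisher appends messages, and each consumer reads from its own offset, so production and consumption are decoupled in time. Real brokers work on the same principle but add partitioning, retention policies and so on.

```python
# Toy messaging layer: an append-only log that retains messages, with
# consumers tracking their own read offsets (Kafka-style, greatly simplified).
class MessagingLayer:
    def __init__(self):
        self._log = []  # the broker retains published messages

    def publish(self, message):
        self._log.append(message)

    def read(self, offset):
        """Return (messages after `offset`, new offset)."""
        return self._log[offset:], len(self._log)

layer = MessagingLayer()

# Publisher side: write events to the messaging layer.
layer.publish({"order_id": 1, "status": "confirmed"})
layer.publish({"order_id": 2, "status": "confirmed"})

# Consumer side: read and process at its own pace.
offset = 0
messages, offset = layer.read(offset)
processed = [m["order_id"] for m in messages]
print(processed)  # [1, 2]

# A message published later is picked up on the next poll.
layer.publish({"order_id": 3, "status": "confirmed"})
messages, offset = layer.read(offset)
print([m["order_id"] for m in messages])  # [3]
```

Because the log retains messages and the offset belongs to the consumer, the publisher never waits for the consumer, which is exactly the decoupling described above.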

The diagram below shows how the publisher writes data to the messaging layer:

The following diagram also shows how the consumer reads the event from the messaging layer to integrate the data into a downstream system such as reporting, etc…

See the sample event-driven Microservices architecture below. There are multiple product teams, such as order management, inventory management, analytics and customer communications. Data comes from multiple sources, and every piece of data is treated as an event. Let's look at one example data flow:

  • The customer places an order, and the order event is created in Order Placement
  • The order placement event is published to the messaging hub (Kafka) by the Order Management team
  • The Customer Communications product listens to this event to send an order confirmation email

The advantages of this architecture:

  • Enables building a scalable system
  • There is no 1:1 integration between systems, which helps to create a loosely coupled system
  • The producer and consumer are largely independent, so teams are more flexible in terms of development and deployment
  • Real-time processing based on an event or a specific time window
  • Reduced interconnection between systems, which improves resiliency

And the disadvantages:

  • Data is distributed across different database systems, so it might be a challenge to aggregate data from different sources
  • Requires more effort when it comes to development
  • There is no classical transactional process, so inconsistency could be an issue
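That last point deserves a sketch. Without cross-service transactions, brokers typically give at-least-once delivery, so a consumer may see the same event twice; a common mitigation is to make the consumer idempotent. The event shape and id field below are illustrative assumptions.

```python
# Idempotent consumer sketch: track processed event ids and skip duplicates,
# so a redelivered message does not trigger a second side effect.
emails_sent = []
processed_ids = set()

def handle_order_confirmed(event):
    if event["event_id"] in processed_ids:
        return  # duplicate delivery: already handled, do nothing
    processed_ids.add(event["event_id"])
    emails_sent.append(f"email for order {event['order_id']}")

# The broker redelivers the same event twice; only one email goes out.
event = {"event_id": "evt-42", "order_id": 7}
handle_order_confirmed(event)
handle_order_confirmed(event)
print(emails_sent)  # ['email for order 7']
```

In a real system the set of processed ids would live in the consumer's own database, updated in the same local transaction as the side effect.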

Steps to start the transformation

To move to an event-driven architecture, you need to set up a streaming infrastructure. It is good to have a common streaming platform managed by a platform team, so product teams can use the streaming platform as a service, which increases the focus of the feature teams. There are multiple options to set up this infrastructure, such as Confluent, Kafka on-premises, Amazon Managed Streaming for Apache Kafka (Amazon MSK) etc…

As you see in the picture, it is better to create sub-product teams that focus on specific features. For example, the order management team will be responsible for processing order data and publishing the order transactions to the messaging hub. Each team can have full control over architectural decisions to choose the tech stack that best fits its requirements.

In general, producers publish and own the data in the streaming layer to be used by consumers. Since multiple consumers are going to use this data, it is better to publish as much valuable content on the topic as possible. For example, order transactions will be used by the customer communications team to send an email; at the same time, the same topic will be used by the Analytics team to segment consumers and identify who is buying what. Hence, the topic design should meet these requirements and include all the valuable data.
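As a sketch of such a "rich" topic design, consider an order event whose payload serves both known consumers at once. All field names here are assumptions for illustration, not a prescribed schema.

```python
# One rich event on the topic serves two different consumers.
import json

order_event = {
    "event_type": "OrderConfirmed",
    "order_id": "A-1001",
    "customer": {"id": "C-7", "email": "jane@example.com"},
    "items": [{"sku": "BOOK-1", "quantity": 2, "unit_price": 12.50}],
    "total": 25.00,
    "placed_at": "2023-05-01T10:15:00Z",
}

# Customer communications needs the email address and the order id...
def build_confirmation(event):
    return f"To {event['customer']['email']}: order {event['order_id']} confirmed"

# ...while analytics needs the customer id and items for segmentation.
def segment_key(event):
    return (event["customer"]["id"], [i["sku"] for i in event["items"]])

payload = json.dumps(order_event)  # what actually goes onto the topic
event = json.loads(payload)
print(build_confirmation(event))
print(segment_key(event))
```

Had the producer published only the order id and total, the Analytics team would need a second integration to fetch customer and item data, which is exactly what rich topic design avoids.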

Doing a transformation in a big organisation is a challenge. In a monolithic architecture, there are lots of unknowns that will surprise you during the transformation. What I suggest is to start with an MVP (minimum viable product) instead of attempting the whole transformation at once.

When moving to distributed and independent components, consistency can be a problem in general. To have a more reliable system, monitoring and alerting are must-haves. Whenever you deploy a component, monitoring and alerting should be part of the go-live.
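One concrete metric worth monitoring from day one in this setup is consumer lag: how far a consumer's committed offset trails the end of the topic. The threshold and function names below are illustrative assumptions; in practice you would wire this into a monitoring stack such as Prometheus/Grafana or use the broker's own metrics.

```python
# Sketch of a consumer-lag check, a basic alerting signal for streaming systems.
def consumer_lag(end_offset, committed_offset):
    """How many messages the consumer still has to catch up on."""
    return end_offset - committed_offset

def check_alert(lag, threshold=1000):
    """Fire an alert when the consumer falls too far behind (assumed threshold)."""
    if lag > threshold:
        return f"ALERT: consumer lag {lag} exceeds {threshold}"
    return "OK"

print(check_alert(consumer_lag(50_200, 50_150)))    # lag 50: OK
print(check_alert(consumer_lag(125_000, 120_000)))  # lag 5000: alert fires
```

A steadily growing lag often reveals exactly the kind of silent inconsistency mentioned above, long before customers notice missing emails or stale data.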

Using a containerisation platform is a very important point when using Microservices. It helps you package your application so that it can run in any environment and scale up whenever needed. Multiple tools can be used, like Kubernetes, Docker etc…


As you see, there are different architectural approaches to set up your project infrastructure, either monolith or event-driven Microservices. There is no universal recommendation: a monolith is quicker to set up and useful for small projects, but when it comes to big projects and better scalability, event-driven Microservices is the better approach.

Data & Cloud Architect and Trainer.