Monolithic vs microservices: How we’ve successfully migrated our app (2/2)


It’s time for the second part of the article about changing the application architecture from monolithic to microservices. In the first part, I presented the differences between monolithic and microservices architectures and explained how to prepare for the migration. In the second part, I’ll focus on the details of the migration itself. But first, let’s recap what happened in the previous part of this series.

In the first part of the article, “Monolithic vs microservices: How we’ve successfully migrated our app”, I mostly focused on comparing microservices-based solutions with monolithic applications in software development. I walked you through the preparation for the migration from a monolithic app to a microservices architecture, based on a simple scenario involving a mockup application. I showed you how to clean up the code before the migration and focused on the architecture transition phase. The second part is mostly about the application migration itself. We’ll take a closer look at the microservices architecture and try to answer the question of whether microservices have any advantages over monolithic applications.

Introduction

We began our work on a live text coverage platform (live football commentary) with code refactoring. Access to the football match coverage is restricted to registered and logged-in users.

Later, we introduced the CQRS (Command Query Responsibility Segregation) architectural pattern, which allowed us to split read and write operations into queries and commands. After that, the logic segregated according to that guideline was moved to the corresponding namespaces.
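
To make the split more tangible, here is a minimal sketch of what the command side can look like; the class and property names (UpdateMatchCommand, UpdateMatchHandler, MatchRepository) are illustrative assumptions, not the exact ones from the repository.

```php
<?php

// A command carries the intention to change state (the "write" side)...
final class UpdateMatchCommand
{
    public int $matchId;
    public int $hostScore;
    public int $guestScore;

    public function __construct(int $matchId, int $hostScore, int $guestScore)
    {
        $this->matchId = $matchId;
        $this->hostScore = $hostScore;
        $this->guestScore = $guestScore;
    }
}

// ...and a handler, resolved by the CommandBus, executes the business logic.
final class UpdateMatchHandler
{
    private MatchRepository $matches; // repository name is an assumption

    public function __construct(MatchRepository $matches)
    {
        $this->matches = $matches;
    }

    public function __invoke(UpdateMatchCommand $command): void
    {
        $match = $this->matches->get($command->matchId);
        $match->updateScore($command->hostScore, $command->guestScore);
        $this->matches->save($match);
    }
}
```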

An image with simple contexts CQRS

The latest version of the code that will be used in the examples is in this repository.

Data flow inside the current system

Before moving on with the changes to our architecture, I would like you to pay attention to the various actions performed in the code and how they are invoked. In this article, I will once again use the football match updating method as an example, to keep things consistent. Now, let’s take a closer look at how a typical request is handled inside the app.

An image with an example of simple data flow CQRS

Looking at the scheme above, we can quickly notice that between the request (wrapped in a command) and the action handler there is an additional data transport layer. In our case, this transport layer is handled by a CommandBus, which relays commands to the proper handlers.

So now, thanks to the CQRS pattern, we have separated methods into those that read data and those that write it. We may even feel tempted to disable the CommandBus (but we won’t!) and try to invoke command handlers manually. This way, we would gain better control over the transport layer mentioned above. To make it even more convenient, we could implement our own intermediary mechanism that matches intentions with their execution. We could also go a step further and incorporate whatever communication protocol we see fit. And actually, we will create a similar solution later in this article, but we won’t ditch the CommandBus. 😉

So what communication protocols do we have available? In the microservice world, the most popular ones are HTTP and AMQP. However, we are not limited to them. Nothing prevents us from using, for example, binary protocols like gRPC. For this article, I chose the HTTP protocol, and here is why:

  • HTTP is a text-based protocol that is easy to debug, which makes fixing potential problems easier,
  • it is human-readable, simple and widely known,
  • it does not require the installation of any additional dependencies to work.

With that being said, we will replace the CommandBus in the previous communication scheme with the HTTP protocol, and the logic handler with a microservice that will perform the same business logic.

An example of simple data flow HTTP

This way, we will slowly start extracting app logic from the old codebase into proper microservices. This cutoff will happen at the controller method level, where the definitions of each endpoint live. Thanks to the context segregation done previously, we can move all related logic out of the old architecture almost in one go.

New system architecture

Now that we know how data flows through the application to fulfill the business logic, it is a great opportunity to think about how to connect loosely coupled microservices together.

Depending on our needs and the size of our project, we can apply one of a few available patterns or their derivatives. Nothing stands in our way of mixing them together, either. Most of the patterns are derived from the way services are connected to each other. We can distinguish a few patterns, such as:

  • Aggregator/Proxy pattern – to fulfil a specific functionality, a dedicated service (proxy) or a client-side app (aggregator) calls all the individual services.

An example of microservice proxy pattern

  • Asynchronous message pattern — especially useful if we need asynchronous calls in our system, as HTTP communication is synchronous. Utilising an AMQP message queue allows easy, asynchronous interservice communication.

An example of microservice asynchronous pattern

  • Chain pattern — used when there is a need to generate one unified response from several dependent services. The response of one service becomes the input of another, subsidiary service.

An example of microservice chain pattern

As we want to keep the current project simple and easy to understand, we will use a proxy pattern. There will be one service created for that purpose. This service will communicate with all other services, which will give us two primary benefits:

  • the proxy will become a central place from which we call all the other services,
  • the user authentication and authorization process will be simplified.

Why did I mention security here? That’s because authentication and authorization profoundly affect our architecture choice. Let’s imagine a simplified request made through a web application. The user wants to fetch an entire football event (teams, commentary and scores), but access is restricted to registered and logged-in users.

A diagram with an example of simple authorization flow

After creating multiple services that are exposed to users, we would need to pass a token with every request. This way, services would authenticate us using another dedicated service and then authorize our request. As a result, this practice would unnecessarily duplicate the same operation several times, negatively affecting the time needed to get a response from the application. Therefore, in line with DDD methodology and bounded contexts, the logic that obtains data from microservices, along with the authorization process, can be included in the proxy. The proxy will perform the authorization once, at the beginning of the process, and fetch the data without having to repeat it.

A diagram with an example of advanced authorization flow

Additionally, knowing that each authentication always requires some user data, we can keep the user context as close to the proxy service as possible. The current diagram of our system looks as follows:

An example of microservice architecture with proxy

Service extraction

What does service extraction look like when we compare microservices vs monolithic architectures? Before we start working with the actual code, I would like to go back to the microservice definition for a second. As defined on the microservices website, services should be characterized by:

  • High testability and ease of maintenance.
  • Loose coupling.
  • Possibility of independent deployment.
  • Concentration on business capabilities.
  • Ownership by a single team.

We can, therefore, conclude that a service is a mostly autonomous entity. Following this thesis, we can treat it practically as a separate project with its own copy of the framework, settings, and logic related only to the given context. That assumption unveils one of the less desirable peculiarities of microservices — duplication.

From this moment on, we have to get used to the fact that projects in a microservice architecture will often face data duplication in the form of files (framework and vendors) or identifiers in the database, as they will be necessary for us to deliver all the functionalities.

Bearing in mind all the assumptions listed above, we can apply them to our well-known context of football matches. To transform the existing code and business logic into a microservice, we will create – in a simplified way – a separate project. The workflow will be repeatable for all contexts, and it looks like this:

  • We create a new folder in which we set up the framework and the necessary package configuration.
    • Alternatively, we can copy the current settings and trim them down a bit.
  • We transfer only the context-related code to the new project.
  • We prepare the Docker configuration to be able to run the services simultaneously.
  • We test the newly extracted part of the application and apply any final fixes or improvements.

Having a simplified checklist of the necessary steps, we can execute all the subsequent points. The first thing to handle is the folder structure. It practically does not differ from the structure of the current project, except that only elements related to football games will be transferred there. The directory with the service will look something like the attached picture. Inside, we can see folders related to the Symfony framework, the configuration of the given microservice, and namespaces consistent with the division imposed earlier by the CQRS pattern.

A screenshot of a structure

As the context of football games has become a separate project, we must be able to run it independently of the rest of the project. A new service definition is required in docker-compose.yml. It will be an entry referencing the folder that contains the current service code.
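
A minimal sketch of such an entry, assuming the service is called matches and lives in ./services/matches (both names are assumptions):

```yaml
# docker-compose.yml (fragment) -- a sketch; names, paths and ports are assumptions
services:
  matches:
    build: ./services/matches   # folder containing the extracted service code
    ports:
      - "8001:80"               # lets us reach the service beside the main project
```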

From now on, entering the command docker-compose up -d in the console will launch the football matches container beside the main project.

A screenshot of Docker compose CLI

💡 Read more: Information about docker-compose on the Docker website.

We’ve made quite a few changes to our code, so it’s time to check that everything still works. We run the automated tests of our service and, unfortunately, it turns out that all the scenarios have turned red.

A screenshot with failed scenarios

How is that possible, since the code was working fine before the extraction? It turns out that the service has dependencies on system elements whose definitions no longer exist in our context. These are mainly data models and ORM relationships in entities. Removing the former is not particularly difficult, because all dependent models are fetched via identifiers. So, we need to replace the objects with the associated IDs. As an example, we’ll use the entity describing the match. It holds information such as points, the date, and the two teams that compete against each other.

When a person from the administration crew creates a new match, they must pick the host and guest teams from the team list. The application later resolves the dependencies from these objects right before writing to the database and stores them in individual columns. In the current microservice context, we do not have teams, but we can simply provide the identifiers that previously appeared as foreign keys of the corresponding teams. With these identifiers, we can easily restore the previous structure of the object.

More problematic are the 1:M and M:N relationships between classes, and the foreign keys in the database. They introduce strong dependencies between objects, which causes problems when dividing the system into independent elements. Foreign keys are often generated automatically without our knowledge – primarily when we use ORM tools such as Doctrine.
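
For instance, a typical many-to-one mapping for the two teams, written in Doctrine’s annotation style, looks roughly like this (the property and column names are assumptions, as the original listing is not reproduced here):

```php
<?php

use Doctrine\ORM\Mapping as ORM;

class Game
{
    /**
     * @ORM\ManyToOne(targetEntity="Team")
     * @ORM\JoinColumn(name="host_team_id", referencedColumnName="id")
     */
    private $hostTeam;

    /**
     * @ORM\ManyToOne(targetEntity="Team")
     * @ORM\JoinColumn(name="guest_team_id", referencedColumnName="id")
     */
    private $guestTeam;
}
```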

Such foreign keys are stored in the database:

A screenshot with database foreign keys example

As we want to remove those keys, we will be facing significant alterations to the database structure. A special DROP FOREIGN KEY clause is used to delete foreign keys in MySQL. We will use it in conjunction with a database migration to ensure that the process is reversible.
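
A sketch of such a reversible migration, using Doctrine Migrations; the table, constraint and column names are assumptions based on a typical Doctrine naming scheme:

```php
<?php

use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20200101000000 extends AbstractMigration
{
    public function up(Schema $schema): void
    {
        // Drop the automatically generated foreign keys.
        $this->addSql('ALTER TABLE game DROP FOREIGN KEY FK_GAME_HOST_TEAM');
        $this->addSql('ALTER TABLE game DROP FOREIGN KEY FK_GAME_GUEST_TEAM');
    }

    public function down(Schema $schema): void
    {
        // Restore the constraints so the migration stays reversible.
        $this->addSql('ALTER TABLE game ADD CONSTRAINT FK_GAME_HOST_TEAM FOREIGN KEY (host_team_id) REFERENCES team (id)');
        $this->addSql('ALTER TABLE game ADD CONSTRAINT FK_GAME_GUEST_TEAM FOREIGN KEY (guest_team_id) REFERENCES team (id)');
    }
}
```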

💡 Read more: MySQL documentation – Foreign Key Constraints

Now, we can safely remove the many-to-one relationships from the ORM code and replace the field names with those corresponding to the columns in the database. It is worth remembering that indexes were also created on the foreign keys to ensure optimal query performance, so it is worth making up for those now-missing declarations on the entity.
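
After the change, the mapping could look roughly like this: plain identifier columns, with the indexes that previously backed the foreign keys re-declared on the entity (the names again being assumptions):

```php
<?php

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 * @ORM\Table(name="game", indexes={
 *     @ORM\Index(name="idx_host_team_id", columns={"host_team_id"}),
 *     @ORM\Index(name="idx_guest_team_id", columns={"guest_team_id"})
 * })
 */
class Game
{
    /** @ORM\Column(name="host_team_id", type="integer") */
    private $hostTeamId;

    /** @ORM\Column(name="guest_team_id", type="integer") */
    private $guestTeamId;
}
```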

An additional advantage of getting rid of relationships in the database is that we gain the ability to easily move tables into separate databases. If the need for a change ever arises, we can completely switch the technology or mechanism used to store the information.

After making another round of corrections and improvements in the code, the time has come for further tests. Fortunately, this time the test scenarios completed successfully, and no errors were reported.

A screenshot of a situation with no reported errors

I want to elaborate a bit on the topic of splitting tables between different databases, because it is worth considering the consequences of such a move. We will encounter another widespread problem in microservices – transactions. Achieving transactions that span several microservices is not easy, especially when errors arise in our application or some system element suddenly stops functioning. Fortunately, the first attempts to deal with this problem were made as early as 1987, in the paper “Sagas” by Hector Garcia-Molina and Kenneth Salem. Currently, two popular methods are used to deal with this problem:

  • Saga pattern — a saga is a series of consecutive transactions in which each transaction updates data within one service. In case of failure, compensating transactions are performed to undo the changes.
  • Eventual consistency — a model that, by definition, does not support cross-service ACID-style transactions (Atomicity, Consistency, Isolation, Durability). Instead, it relies on other mechanisms to ensure that the data becomes consistent at some point in the future.

However, I will not elaborate on these issues, because it is a topic so extensive that both these subjects deserve separate articles. Nevertheless, this is something that you need to keep in mind when designing a system in a microservice architecture.

See also: Microservices design patterns for CTOs

Creation of a Proxy microservice

Now that we have separated our microservice and all its logic from the existing code base, it turns out that we cannot use it. We must somehow be able to communicate with the newly created microservice to ensure the current functionality of the application.

A diagram with an example of microservice out

Therefore, we will create a simple service inside the application that will allow transparent control and transmission of data coming from outside. The service will be called ServiceEndpointResolver, and its main method will be callService, with parameters such as the HTTP method, the microservice address, and the data.

Another critical element of the proxy will be the ability to resolve path names to the relevant microservices. To do this, we need to create a simple map describing which endpoint name belongs to which microservice. This way, the EndpointToServiceMap class was formed.
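
A minimal sketch of such a class follows; the endpoint names and the internal service addresses are assumptions:

```php
<?php

final class EndpointToServiceMap
{
    // Maps an endpoint name to the base address of the owning microservice.
    private const MAP = [
        'matches' => 'http://matches.service.local',
        'teams'   => 'http://teams.service.local',
    ];

    public static function serviceFor(string $endpoint): string
    {
        if (!isset(self::MAP[$endpoint])) {
            throw new \InvalidArgumentException(sprintf('Unknown endpoint "%s"', $endpoint));
        }

        return self::MAP[$endpoint];
    }
}
```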

Now, to get the full address of the microservice to which we want to transfer data, another method in the ServiceEndpointResolver class was prepared.
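
Below is a sketch of the resolver with both that address-building method and callService; the use of Symfony’s HttpClient for the actual requests is an assumption about the implementation:

```php
<?php

use Symfony\Contracts\HttpClient\HttpClientInterface;
use Symfony\Contracts\HttpClient\ResponseInterface;

final class ServiceEndpointResolver
{
    private HttpClientInterface $client;

    public function __construct(HttpClientInterface $client)
    {
        $this->client = $client;
    }

    // Builds the full address of the microservice for a given endpoint path.
    public function resolveUrl(string $endpoint, string $path): string
    {
        return EndpointToServiceMap::serviceFor($endpoint) . $path;
    }

    // Relays the request (method, address, data) to the resolved microservice.
    public function callService(string $method, string $endpoint, string $path, array $data = []): ResponseInterface
    {
        return $this->client->request($method, $this->resolveUrl($endpoint, $path), [
            'json' => $data,
        ]);
    }
}
```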

Having all the necessary methods, we can combine and use them in the controller method related to football match updates.
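
A sketch of what this could look like; the route, the method name, and the validateServiceResponse helper (the base-controller method we will add in a moment) are assumptions:

```php
<?php

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

// Inside the football matches controller (a sketch, not the exact code).
public function updateMatch(int $id, Request $request, ServiceEndpointResolver $resolver): Response
{
    // Relay the update to the matches microservice over HTTP.
    $response = $resolver->callService(
        'PUT',
        'matches',
        sprintf('/matches/%d', $id),
        json_decode($request->getContent(), true) ?? []
    );

    return $this->validateServiceResponse($response);
}
```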

The last thing we have to do is handle any errors returned from the service, such as:

  • Invalid data and validation errors.
  • A misbehaving service.
  • Connection problems.

Finally, we can add a method to the application’s base controller class to validate the response returned from the microservice.
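
A sketch of such a method; the exception class comes from Symfony’s HttpClient component, while the method name and the exact error mapping are assumptions:

```php
<?php

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Contracts\HttpClient\Exception\TransportExceptionInterface;
use Symfony\Contracts\HttpClient\ResponseInterface;

// Inside the application base controller class (a sketch).
protected function validateServiceResponse(ResponseInterface $response): JsonResponse
{
    try {
        // Pass the microservice status code (including validation errors) through.
        $status = $response->getStatusCode();
        $content = $response->getContent(false); // false: do not throw on 4xx/5xx

        return new JsonResponse($content, $status, [], true); // content is already JSON
    } catch (TransportExceptionInterface $e) {
        // Connection problems or a misbehaving service.
        return new JsonResponse(['error' => 'Service temporarily unavailable'], 503);
    }
}
```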

After all these operations, our application should work again, just like before the microservice extraction. The structure diagram is presented in the graphic below.

A diagram showing an example of proxy connecting service

We proceed similarly with all the other microservices in the project. The only difference is that we don’t have to isolate the last one, because it won’t have code dependencies on the other microservices. In our case, we will leave the logic related to users, authentication and authorization in place. This way, our proxy microservice will emerge.

A diagram showing an example of finished proxy

Infrastructure and monitoring

The way we run microservices now is rather straightforward. Unfortunately, this solution is insufficient to take advantage of all the benefits that the microservice architecture gives us. We want to be able to monitor the status of microservices and scale them freely. A load balancer called HAProxy comes to our aid; it will provide basic monitoring functionality and support for multiple microservice instances.

💡 Read more: Official HAProxy Docker image

As we want to add the HAProxy container to the application, we yet again need to modify the docker-compose.yml file. Besides, for proper communication between containers, it is mandatory to set the names of the dependent containers. The full entry can be found in the listing below.
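
A sketch of such an entry; the HAProxy image version, the folder paths and the exposed ports are assumptions, while the “hidden” network and the internal domains are described in the next paragraph:

```yaml
# docker-compose.yml (fragment) -- a sketch; paths, ports and names are assumptions
services:
  haproxy:
    image: haproxy:2.1
    ports:
      - "80:80"       # all external traffic enters through HAProxy
      - "1936:1936"   # the HAProxy statistics page
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - proxy
    networks:
      - default       # reachable from the host
      - hidden

  proxy:
    build: ./services/proxy
    networks:
      hidden:
        aliases:
          - proxy.service.local   # internal microservice domain

  matches:
    build: ./services/matches
    networks:
      hidden:
        aliases:
          - matches.service.local

networks:
  hidden:
    internal: true    # not reachable from outside the Docker network
```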

Bearing in mind our initial assumption that all communication should take place only through our proxy, we are adding an internal Docker network called “hidden”. From now on, all traffic will be listened to on port 80 and redirected to the proxy service. To ensure convenient addressing, we’ve also added internal microservice domains. All that’s left to do is to create the HAProxy configuration. The file can be divided into two sections:

  • frontend — configures the listening ports and allows you to include the control logic deciding where the data should be forwarded,
  • backend — defines the available list of microservices to which traffic is redirected.
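
A sketch of such a haproxy.cfg, matching the compose file above; the timeouts, backend names and the stats port are assumptions:

```
# haproxy.cfg -- a sketch
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Statistics page, available at 127.0.0.1:1936/stats
listen stats
    bind *:1936
    stats enable
    stats uri /stats

# frontend: listening ports and forwarding logic
frontend http_in
    bind *:80
    default_backend proxy_service

# backend: the microservices traffic is redirected to; the server address
# matches the Docker network alias from docker-compose.yml
backend proxy_service
    balance roundrobin
    server proxy1 proxy.service.local:80 check
```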

In the configuration file, you can easily recognise how the Docker container names map to the domain names we imposed.

After restarting the application, we should be able to check the current service statuses at 127.0.0.1:1936/stats.

A screenshot of HAProxy stats

One of the essential features that microservices have is the ability to scale. This trait allows the application to optimize resource consumption and improve performance. 

In the first part of the article, our fictional customer raised the problem of performance — during the matches of well-known teams, the application encounters issues handling heavy traffic. Fortunately, now with the help of Docker’s scale command and HAProxy, we can quickly increase the number of our microservice instances. We just need to remember to load the appropriate configuration file.
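
For example, assuming the service is called matches in docker-compose.yml (an assumption carried over from the sketches above), scaling it to three instances is a one-liner:

```bash
# Run three instances of the matches service; HAProxy balances traffic between them.
docker-compose up -d --scale matches=3
```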

💡 Read more: Docker – Scale command

A screenshot of HAProxy stats – balancing

By scaling our application to three service instances, we can see that HAProxy has divided two hundred test queries equally among the microservice instances. The HAProxy configuration is so simple that everybody can quickly adapt the load balancer to their own needs. Besides, extensive documentation and an active community will help solve potential problems.

Another thing that we can see on the statistics page is the status of each microservice.

A screenshot of HAProxy stats with statuses

Additionally, on the statistics page, we can quickly check which microservices are unavailable or under heavy load. We will also see how requests are distributed and how many sessions are currently active. In the world of microservices, such information is vital, as it allows us to assess how efficiently the various components inside the application work.

The code after the changes can be found in the repository.

Summary

We went through all the stages of the transformation from a monolithic architecture to a microservices application. We have covered the monolithic vs microservices topic, addressed many issues, techniques and patterns, and described several solutions we can use during such a transition. We started with refactoring techniques and moved through DDD, bounded contexts and CQRS. It is worth noting here that the microservice architecture may not be a remedy for all the problems we can experience in other projects, and it brings with it some difficulties, such as:

  • Greater architectural complexity — new contexts or namespaces, read/write models, and many additional classes are created.
  • New security threats — more components require more work to keep the entire system secure.
  • Complicated logging and data flow tracking — information about events is scattered across various elements of the application.
  • More demanding system error handling — we cannot allow a failing service to block the entire system.
  • Difficulties in the deployment of services — this process must always be coordinated appropriately.

However, if we plan carefully and stick to good practices, the chances of experiencing problems are minimized. And that is why we increasingly see customers deciding to migrate to the aforementioned architecture. After all, most applications on the Internet were created based on a monolithic approach. Constantly changing business requirements force companies to employ new approaches to currently existing problems.

There are still plenty of techniques and issues related to the microservices architecture that were not described here. Unfortunately, I couldn’t fit all of them into this short series of articles, as that much material would be enough for a good book. However, I hope that the examples presented here will be helpful for people interested in the migration process to a new architecture, and that you are now able to say whether you see any advantages of microservices over a monolithic application. After all, this series can serve as simple guidelines, or at least as an incentive for such a change.
