Transition to microservices for a logistics company
Helping a logistics company boost software performance and scalability by shifting from a monolithic architecture to microservices
We helped our client upgrade a warehouse and transportation management logistics app.
Monolithic architecture challenges
The app’s monolithic architecture inhibited its development. The app was hard to scale as its modules were interconnected. Implementing new libraries, frameworks, and languages would have affected the entire app and significantly increased the cost of development.
The app was too large and complex to update quickly and correctly:
A growing number of vital changes and the system redeployment they required led to long system downtimes.
Changes in business logic led to changes in all parts of the system.
Even the smallest upgrades required testing the entire system.
To overcome these challenges, we decided to optimize the app by shifting to a microservice architecture. We also improved the continuous integration / continuous delivery (CI/CD) pipeline and set up automated testing.
Challenges that formed the product backlog
Huge amount of time spent making changes to the product
High software latency leading to loss of users
Need to manage an increasing number of microservices
Memory leaks that jeopardized the app’s stability
Minimizing time spent making changes to the product
Adding new features and making changes to the app required one to three months of testing and checks, even for hotfixes and bug fixes.
We solved this problem by:
identifying logical parts of the app to move each to its own microservice
separating microservices by technical characteristics
deciding on the order in which to move parts to microservices
adding WARs with microservices to Jetty, which connects the servers to each microservice
For example, we moved the business feature responsible for changing user parameters to a separate microservice. Now, each time the business logic changes, we don’t lose time reloading Jetty, and users don’t face system downtime.
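The extraction described above can be sketched as a small standalone service. This is a minimal illustration, not the project’s actual code: the service name, endpoint, and response format are assumptions, and the JDK’s built-in HTTP server stands in for the real Jetty deployment.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: the user-parameters feature extracted into its own
// microservice. Redeploying this service alone leaves the rest of the
// system untouched when this business logic changes.
public class UserParamsService {

    // Business logic isolated in a pure method so it can evolve independently.
    static String updateUserParams(String userId, String params) {
        return "{\"userId\":\"" + userId + "\",\"params\":\"" + params
                + "\",\"status\":\"updated\"}";
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/user-params", (HttpExchange exchange) -> {
            byte[] body = updateUserParams("42", "locale=en")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // restarting this one service causes no system-wide downtime
    }
}
```

Because only this service restarts on deployment, a change to the user-parameters logic no longer forces a full system redeployment.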
Reducing software latency
Parts of our MySQL database were initially scattered across different data centers. This caused high software latency and jeopardized user retention.
To reduce the delay between an input and the desired output, we replicated the entire database across different data centers. As a result, if one data center doesn’t work properly, we can switch to another and use its copy of the database.
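The failover rule above can be sketched as a simple replica selector. This is an illustrative sketch, not the project’s code: the class, record, and JDBC URLs are assumed names, and the health check is abstracted to a boolean flag.

```java
import java.util.List;

// Hypothetical sketch of the failover described above: every data center
// holds a full copy of the MySQL database, and reads go to the first
// healthy replica in proximity order.
public class ReplicaSelector {

    public record Replica(String jdbcUrl, boolean healthy) {}

    // Pick the first healthy replica; the list is ordered nearest-first,
    // so the local data center is preferred when it is up.
    public static String pickReplica(List<Replica> orderedByProximity) {
        return orderedByProximity.stream()
                .filter(Replica::healthy)
                .map(Replica::jdbcUrl)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no healthy replica"));
    }
}
```

If the local data center goes down, the selector simply returns the next replica’s URL, which is the switch-over behavior described above.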
Managing an increasing number of microservices
Once we had implemented more than 20 microservices, we faced problems with their standardization and versioning. We also experienced difficulties deploying many WAR files in Jetty: it took us up to two hours to update all microservices.
The following steps helped us solve these issues and fully shift to a microservices architecture:
To address the issue of microservice standardization, we created client libraries (module APIs) for service controllers. We used the same call interface for client libraries and controllers.
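The shared-interface idea can be sketched as follows. The interface and class names are illustrative assumptions; the point is that the controller and the client library implement one and the same call interface, so callers are unaffected by which side of the wire they hold.

```java
// Hypothetical sketch of a "module API": one interface is implemented both by
// the service-side controller and by the client library other services use.
public interface InventoryApi {
    int stockLevel(String sku);
}

// Server side: the controller implements the interface directly.
class InventoryController implements InventoryApi {
    @Override
    public int stockLevel(String sku) {
        return sku.isEmpty() ? 0 : 7; // stand-in for a real lookup
    }
}

// Client side: the client library implements the same interface and forwards
// calls to the remote service; the HTTP transport is stubbed out here.
class InventoryClient implements InventoryApi {
    private final InventoryApi transport; // would wrap an HTTP call in practice

    InventoryClient(InventoryApi transport) {
        this.transport = transport;
    }

    @Override
    public int stockLevel(String sku) {
        return transport.stockLevel(sku);
    }
}
```

Because every microservice exposes its controller behind such an interface, callers invoke local and remote implementations identically, which standardizes how services are typed and versioned.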
To keep core business features and their transactions from affecting several servers simultaneously, we decided to use Redis for events and user actions. We deployed Redis as an intermediary between the database and the servers. As a result, users can transfer smoothly between servers.
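The intermediary pattern can be sketched like this. The class and key names are assumptions, and an in-memory map stands in for Redis so the sketch stays self-contained; in production the same interface would be backed by a Redis client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the Redis intermediary: session state goes through a
// shared key-value store instead of server-local memory, so a user can be
// routed to any server without losing state.
public class SessionStore {

    interface KeyValueStore { // stands in for Redis SET/GET commands
        void set(String key, String value);
        String get(String key);
    }

    static class InMemoryStore implements KeyValueStore {
        private final Map<String, String> data = new ConcurrentHashMap<>();
        public void set(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
    }

    private final KeyValueStore store;

    public SessionStore(KeyValueStore store) {
        this.store = store;
    }

    // Any server can write the session state...
    public void saveSession(String sessionId, String state) {
        store.set("session:" + sessionId, state);
    }

    // ...and any other server can read it back, so transfers are seamless.
    public String loadSession(String sessionId) {
        return store.get("session:" + sessionId);
    }
}
```

Since no server holds the session exclusively, a transaction never has to span several servers: each one reads and writes through the shared store.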
Shifting to a horizontal architecture
We turned a vertical architecture into a horizontal architecture, divided it across several data centers, and added a separate server for Elastic Stack. The Kibana user interface collects data by indexes and doesn’t use the server’s system resources.
Wrapping Kubernetes around the app
To ensure easy scaling and shifts from one cloud service provider to another if needed, we wrapped our app in Kubernetes. We transformed the app to have a stateless architecture, which simplifies request handling and allows multiple instances to be created.
Detecting and handling memory leaks
We faced a memory leak problem when subsystems took over and kept resources no longer used by the app. This issue was critical, as the app was consuming an increasing amount of resources, which could result in a fatal OutOfMemoryError.
We solved the memory leak problem by:
using subsystems’ native mechanisms to deal with the out-of-memory condition
running the system using a custom JAR service with a scenario for detecting memory leaks
handling memory leaks by using an out-of-memory killer, an established mechanism for recovering system memory
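A leak-detection scenario of the kind run by the custom JAR service can be sketched with the JDK’s standard memory-management API. The detection rule and its threshold are illustrative assumptions, not the values used in the project.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Hypothetical sketch of a leak-detection scenario: sample heap usage after
// garbage collections and flag steady growth, the classic leak signature.
public class LeakDetector {

    // Report whether post-GC heap usage keeps rising across samples.
    public static boolean looksLikeLeak(long[] postGcHeapSamples) {
        int rising = 0;
        for (int i = 1; i < postGcHeapSamples.length; i++) {
            if (postGcHeapSamples[i] > postGcHeapSamples[i - 1]) rising++;
        }
        // Illustrative rule: suspect a leak if usage rose in >80% of intervals.
        return rising > 0.8 * (postGcHeapSamples.length - 1);
    }

    public static void main(String[] args) {
        // Standard JDK API for reading current heap usage.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.println("heap used: " + heap.getUsed()
                + " / max: " + heap.getMax());
    }
}
```

A sawtooth pattern (usage rising, then dropping after each GC) is healthy; a post-GC baseline that only climbs indicates that some subsystem is retaining resources the app no longer uses.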
We improved CI/CD and started to use GitFlow, which facilitated source code management.
Continuous Delivery schema
Continuous Integration schema
After switching to GitFlow, we ensured automated testing and implemented a new deployment pipeline for the app. With continuous integration, we started building each commit, running unit and integration tests, and collecting and storing artifacts.
We moved from a monolithic app to microservices. This helped our client get rid of standardization and versioning problems as well as issues with session handling.
How the app benefits from a microservice architecture:
Updating the system takes less than five minutes and adding new services doesn’t affect other system modules.
The system allows developers to scale the app horizontally using different technologies, programming languages, frameworks, and libraries.
The microservice architecture within Docker allows for creating distributed development teams that can work independently on different parts of the app.
Looking to improve your software performance?
We can help you transition from a monolithic architecture to microservices and enhance your system’s performance, maintainability, and scalability. Get in touch with us