Is the Monolith Dead?

Monolithic architecture has been very successful in shaping the software world as we see it today. However, the last few years have seen a sharp decline in its adoption, especially with the advent of microservices. The popularity of microservices is driven by the need for scalability and changeability, which in turn is driven by the penetration of IT into almost every entity, animate or inanimate. Modern applications see no boundary when it comes to scale, and they are fond of change; this is where the monolith does not fit at all.

Microservices, at least in theory, are “the silver bullet” that will solve all problems and serve humankind for eternity. In practice, that is not what happens: microservices bring plenty of challenges that were nonexistent earlier. Still, what the style does, it does beautifully and efficiently, and most importantly it serves its purpose.

The popular idea is to have very fine-grained services where each service is responsible for a single task. Practitioners look at coarse-grained services with contempt, as they are deemed to be against the philosophy of microservices, and this is where the monolith is left for a slow death. Is the monolith really that bad, and if so, how was it one of the most successful architectures for years?

Fine-grained microservices have their own challenges, e.g. distributed transactions or latency. To make matters worse, the management overhead is overwhelming, and agreeing on how fine is fine enough is no easy job. Fine-grained microservices are preferred because there is no single point of failure, services can be scaled independently, and they can be changed and deployed often; the list goes on. However, if you look at these goodies carefully, all of them can be achieved only through complex design patterns and development discipline. On the flip side, there are challenges such as unavoidable network latency, which results in degraded performance (unless it is compensated for by additional complex systems such as caching), huge management overhead, complex transactions and many more.

A monolith, by definition, is a single unit that contains every part of a system. In practice, however, organizations have long been building monoliths that at least run the UI and the backend as separate processes that integrate via interfaces. It is this segregated model that is referred to here.

Monolithic systems have the edge when it comes to simplicity. If the development process can avoid turning the system into a big ball of mud, if a monolith (as defined above) can be broken into sub-systems such that each sub-system is a complete unit in itself, and if these sub-systems can be developed in a microservices style, we can get the best of both worlds. Such a sub-system is nothing but a “coarse-grained service”, a self-contained unit of the system.

A coarse-grained service can be a single point of failure. By definition, it consists of significant sub-parts of a system, so its failure is highly undesirable. If a part of this coarse-grained service fails (a part which would otherwise have been a fine-grained service in its own right), the service should take the necessary steps to mask the failure, recover from it and report it. The trouble begins when the coarse-grained service fails as a whole. Still, this is not a deal breaker: if the right mechanisms are in place for high availability (containerized, multi-zone, multi-region, stateless), the chances of a total failure are slim. On the flip side, it takes away the complexity of failure management between sub-parts, such as the need to employ circuit breakers. There is a trade-off, but it is worth evaluating.
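To make the idea concrete, here is a minimal sketch of how a sub-part's failure can be masked locally inside a coarse-grained service. The InventoryComponent interface, the fallback value and the reporting are all hypothetical; in a fine-grained split, the same protection would typically require a network-level circuit breaker.

// Hypothetical sub-part of a coarse-grained service; in a fine-grained
// architecture this would have been a separate service across the network.
interface InventoryComponent {
    int availableStock(String sku);
}

// Wrapper that masks, recovers from and reports a sub-part failure locally,
// using a plain try/catch instead of a network-level circuit breaker.
class GuardedInventoryComponent implements InventoryComponent {
    private final InventoryComponent delegate;

    GuardedInventoryComponent(InventoryComponent delegate) {
        this.delegate = delegate;
    }

    @Override
    public int availableStock(String sku) {
        try {
            return delegate.availableStock(sku);
        } catch (RuntimeException e) {
            // report the failure ...
            System.err.println("inventory lookup failed for " + sku + ": " + e.getMessage());
            // ... and mask it with a safe, degraded answer
            return 0;
        }
    }
}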

Scaling a coarse-grained service is not very different from scaling a fine-grained one. If its boundaries are defined carefully and it is developed as a stateless service, scaling is trivial. It can run inside containers and can even be deployed in a serverless manner on the cloud (e.g. AWS Fargate).
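As an illustration of what “stateless” means here, the sketch below keeps no per-user state on the instance, so any number of identical copies can sit behind a load balancer or run on a serverless container platform. SessionStore is a hypothetical external store (a cache or database), not a real API.

// Hypothetical external store shared by every instance of the service.
interface SessionStore {
    String get(String sessionId);
    void put(String sessionId, String value);
}

// Stateless handler: nothing user-specific is kept in instance fields,
// so copies can be added or removed at any time without losing data.
class CartHandler {
    private final SessionStore sessions;

    CartHandler(SessionStore sessions) {
        this.sessions = sessions;
    }

    String addItem(String sessionId, String sku) {
        String cart = sessions.get(sessionId);
        String updated = (cart == null || cart.isEmpty()) ? sku : cart + "," + sku;
        sessions.put(sessionId, updated);
        return updated;
    }
}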

One of the challenges a coarse-grained service faces is reacting to change. In the past, during the monolith era, the journey from code to production was manual and tedious. With modern methodologies, however, it can be automated easily. A coarse-grained service is not very fine, but it is also not supposed to be of the scale of a monolith, so reacting to change is not as challenging as it seems. There can still be trade-offs, but they are worth considering.

A coarse-grained service is often a complete unit in itself, so it can take advantage of running in a single process. This means network calls can be replaced with method calls, which not only improves performance but also simplifies the management of components.
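A minimal sketch of that idea, with hypothetical names: the collaborator is expressed as a plain interface, and inside a coarse-grained service it is backed by an in-process implementation, so what would have been an HTTP call in a fine-grained split becomes a method call.

// Collaborator expressed as an interface; in a fine-grained architecture the
// implementation would be an HTTP client, here it is an in-process object.
interface PricingService {
    double priceFor(String sku);
}

class LocalPricingService implements PricingService {
    @Override
    public double priceFor(String sku) {
        return 9.99; // real pricing logic would live here
    }
}

class CheckoutComponent {
    private final PricingService pricing;

    CheckoutComponent(PricingService pricing) {
        this.pricing = pricing;
    }

    double total(String sku, int quantity) {
        // No serialization, no retries, no latency budget: just a method call.
        return pricing.priceFor(sku) * quantity;
    }
}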

Quite evidently, there is a need for an amalgam of microservices and the monolith. In fact, microservices are not really about building very small services; the style is about building software as a suite of services that can be built independently and that interact via interfaces. All of this can be woven into a monolith or a coarse-grained service, which allows us to reap the benefits of both architectural styles.

And so, the monolith is not dead; it has simply been reincarnated in a different form to serve for the time to come.


Architectural Decision Making: Core Architecture Principles

Decision making is one of the fundamental activities of an architectural practice, and it often puts an architect in two minds. Computer science has come a long way from the abacus and the Turing machine, and it is now mature and sophisticated in almost every aspect. It offers multiple solutions to the same problem in the same context. However, this causes a bigger trouble: the problem of plenty.

Selecting one option among many is a daily challenge for architects and system designers. Often, it even affects the pace of work, as a lot of time is spent reaching an agreement. It is common to find architects arguing toward a consensus, but in vain. Even when an agreement is reached, convincing other stakeholders remains an uphill task, as they have their own experiences and biases.

This choose-one-among-many problem is not unique to computer science. We face the same challenge in daily life. For example, when one walks into an electronics store to buy a laptop, one can easily be perplexed by the diverse range of laptops on sale. Yet, more often than not, this little venture ends with a laptop being bought without much confusion. Why does the problem of plenty have so little impact on one's decision making while shopping?

Something helps people make decisions in their routine life, and if we can apply that something to architectural decision making, it will take much of the pain out of an architect's life. In the example above, when one went to buy a laptop, one might already have had a few things sorted:

  • Ecosystem – Apple, Windows or Google
  • Preferred brand (if not in the Apple ecosystem) – Dell, Lenovo, HP or something else
  • Must-have feature – e.g. the capability to play graphics-heavy games

It is like having some ground rules before stepping out to buy a laptop, and if something similar is practiced while architecting and designing, it can help the process considerably. These ground rules are Core Architecture Principles.

Core Architecture Principles act as a guide for all the stakeholders and make the whole decision-making process much simpler. There are two simple rules for crafting these principles:

  • Define them at a very early stage
  • Define them at a very abstract level

Consider an example where an enterprise is looking to modernize its legacy system. Leadership specifically wants a digital platform that enables it to offer new capabilities on a regular basis without disrupting existing ones. The focus of the new system should be on achieving better time to market to gain an edge in a highly competitive environment. Cost is another factor and needs to be optimized. The organization has a bias towards public cloud.

Backed by the above vision statement, along with some tailor-made interviews with stakeholders, the following principles can be crafted:

  • Cloud First
  • A specific cloud provider First (AWS First, Azure First or GCP First)
  • Serverless First
  • PaaS First
  • SaaS First

The above principles are very abstract, but in the real world they act as guiding principles for all the stakeholders and ultimately help in decision making. Applied to the example above, they might result in a baseline architecture.

Such a baseline architecture can actually be defined in just one day of huddle. It might be just a jumpstart, and it is bound to see variations in due course. However, for every deviation from the outlined principles, the team will have a reason. For example, assume that after a few weeks the team finds that AWS API Gateway is not a good fit. By that time, the team has better clarity on what capabilities it wants from an API gateway and, among those capabilities, which ones are deal breakers. This information, along with the core principles, helps in reaching a common consensus and makes decision making far easier.

Core Architecture Principles are very abstract by design, as they should only guide the architecture and should never dictate it. Often, a few of these principles are defined not at the system level but at the enterprise level. Moreover, there is no standard process for defining them; one has to carefully get to the essence of the work, along with all the constraints and biases, to define them. Last but not least, these principles are dynamic in nature and are supposed to change with time, though the rate of change is very slow.

Infrastructure | A First-Class Citizen

Modern applications are generally distributed in nature; for example, in the last 4-5 years, all the solutions I architected were microservices based. Distributed applications have a different philosophy: many architectural characteristics and concerns that were afterthoughts in traditional applications are now mainstream. Infrastructure is one such concern, and it should occupy an architect, a developer or a product owner from a very early phase. Infrastructure is actually a first-class citizen.

Infrastructure needs in traditional software systems are usually static in nature. I seldom saw anyone paying much attention to them; they are mostly left to IT engineers. Architects and developers are only concerned with defining infrastructure specifications, e.g. RAM or CPU, and that too at a later stage. These infrastructure requirements have almost no effect on how the application is crafted or developed. The most obvious consideration during development is limited to cases where the system is stateful and sits behind a load balancer, or where it needs to keep a flag (e.g. in the case of a scheduler).

Things change dramatically in distributed applications, especially in microservices. These systems are dynamic; for example, one of the basic ingredients of microservices is elasticity, which means the network addresses of resources change frequently. The traditional method of defining a static list of IPs no longer works, and new patterns are needed (service discovery). Similarly, observability is another concern that is not so obvious in these systems: there is no single log file, so sophisticated infrastructure is needed for log aggregation, and tracing is no longer straightforward but distributed in nature, requiring yet another framework.
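As a rough sketch of the service-discovery idea, a caller resolves a logical service name at call time instead of hard-coding an address. The in-memory registry below is a stand-in for a real system such as Consul, Eureka or DNS-based discovery; all names and addresses are illustrative only.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// In-memory stand-in for a service registry; a real registry is kept fresh
// by the platform as instances scale in and out.
class ServiceRegistry {
    private final Map<String, List<String>> instances = Map.of(
            "orders", List.of("10.0.1.12:8080", "10.0.2.7:8080"),
            "billing", List.of("10.0.3.4:8443"));

    String resolve(String serviceName) {
        List<String> live = instances.get(serviceName);
        if (live == null || live.isEmpty()) {
            throw new IllegalStateException("no live instance for " + serviceName);
        }
        // naive client-side load balancing: pick any live instance
        return live.get(ThreadLocalRandom.current().nextInt(live.size()));
    }
}

class OrdersClient {
    private final ServiceRegistry registry = new ServiceRegistry();

    String ordersEndpoint() {
        // resolved per call, so the address can change between calls
        return "http://" + registry.resolve("orders") + "/api/orders";
    }
}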

All of these concerns need to be woven into the application development itself and, in fact, need attention at the solutioning phase. Infrastructure platforms, tools and frameworks influence development greatly. Clearly, infrastructure is a primary factor in architecture and development.