You Probably Shouldn't be Using Micro Services

Whoever coined the term Monolith to describe any application built on a single code base, no matter how it's structured, delivered a major coup for the Micro Services movement. According to Wikipedia, the term was initially used to describe enormous mainframe applications that became unmaintainable due to a lack of modularity, but the article later goes on to state this gem:

"The extent to which an application is described as monolithic is dependent upon perspective. Software that is not service-oriented may be described as monolithic even though it is object-based and can be distributed"

This is further backed up in blogs, podcasts, events and meetups around the world, where seasoned IT pros jump on the popularity bandwagon and describe how moving away from a monolithic architecture to a micro service based architecture made things so much simpler, saved the day and, looking back, how they wish they had started that way in the first place. What they fail to acknowledge is that they've been on a journey and are now in possession of a much more detailed understanding of the problem domain, how it translates into their system architecture and where the stress points are.

Armed with this knowledge, they have been able to easily carve their system into appropriate blocks, eliminating or at least minimising the costly process of further splitting or, worse, re-combining services. Being a wholesale restructure, they've had the opportunity to eliminate old frameworks and ways of doing things, substituting the latest and greatest. Their development team may also have grown in size, pushing past the 8 to 10 person barrier that exists around working on a single codebase. So while they hint that introducing the new architecture was the only change, in actual fact everything changed.

The above has led to micro service based architectures equalling good, and monoliths bad. This is an absolute travesty, for a few reasons. The imagery conjured by the word Monolith - slow, unmoving, resistant to change - is completely unrelated to a well designed, modular monolith. Monoliths are easier to understand, easier to develop and debug, easier to test, and easier to deploy and monitor. And finally, a monolithic architecture should be the de facto starting point for just about every new project where the team is small.

In my mind, we need a better word to describe systems with minimal dependencies - the word Monolith just doesn't do these systems justice and plays too easily into the hands of opponents keen to push their own agenda. Some people have tried this in the past, coining terms such as the Majestic Monolith. While this is better, it still doesn't do enough to distinguish itself from the mainframe applications that brought about the original definition. We also need more talks, more blog posts and more podcasts on the trade-offs between the two types of architectures and what you should take into consideration when choosing. My take is below:

Simpler is better

In a well designed, monolithic application, the concept of services is represented as logical units of code that communicate in process rather than over an external communication mechanism, such as HTTP. In effect, they sit at a much higher level of abstraction, the benefit being that if we get our service boundaries wrong, the effort required to change them is dramatically reduced. It also becomes trivial to run two similar services side by side, slowly phase a service out, or outright replace a service at the drop of a hat.
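As a minimal sketch of what an in-process "service" boundary can look like (all the names here - `PaymentsService`, `Checkout` and so on - are invented for illustration, not from any particular codebase):

```python
from typing import Protocol

class PaymentsService(Protocol):
    """The service boundary: an interface, not a network endpoint."""
    def charge(self, account_id: str, pence: int) -> str: ...

class LegacyPayments:
    def charge(self, account_id: str, pence: int) -> str:
        return f"legacy:{account_id}:{pence}"

class NewPayments:
    def charge(self, account_id: str, pence: int) -> str:
        return f"v2:{account_id}:{pence}"

class Checkout:
    # The dependency arrives via the constructor, so swapping, phasing out
    # or running two implementations side by side is a one-line change.
    def __init__(self, payments: PaymentsService) -> None:
        self.payments = payments

    def place_order(self, account_id: str, pence: int) -> str:
        return self.payments.charge(account_id, pence)
```

Replacing `Checkout(LegacyPayments())` with `Checkout(NewPayments())` is the whole migration - no new deployments, endpoints or monitoring to set up.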

Developing and debugging a monolith is as simple as writing some code in your favourite IDE and hitting the Run button. If you need to debug a problem, you're able to easily set breakpoints and step through your entire stack, again from your favourite IDE. There is no need to spin up an environment full of services needed to support the service (or services) you're working on, no need to mock out dependencies. Everything is right there in front of you.

Full stack integration testing, where your automated tests touch every layer from the UI through to the database, delivers value vastly superior to unit testing with mocked dependencies. A lot of frameworks now allow you to launch an in-memory version of your application, connect it to a real or in-memory data store and run automated tests against it using one of the numerous test frameworks available. I've blogged about such a setup in the past. With a little bit of effort, you can write tests that mimic the actions your users will take and have a high degree of certainty that features will actually work when released because, even though these are in-memory representations, they're pretty close to the real deal.
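The shape of such a test, boiled down to its essence (a hypothetical `App` and `AccountStore` stand in for a real framework's in-memory application and data store):

```python
class AccountStore:
    """An in-memory stand-in for the real data store."""
    def __init__(self):
        self._balances = {}

    def get(self, account_id):
        return self._balances.get(account_id, 0)

    def set(self, account_id, pence):
        self._balances[account_id] = pence

class App:
    """The whole application, wired up in process."""
    def __init__(self, store):
        self.store = store

    def deposit(self, account_id, pence):
        self.store.set(account_id, self.store.get(account_id) + pence)
        return self.store.get(account_id)

def test_user_can_deposit_twice():
    app = App(AccountStore())  # full stack, launched in memory
    app.deposit("a1", 300)
    # The test mimics a user's actions and asserts on real behaviour,
    # end to end, with no mocks in between.
    assert app.deposit("a1", 200) == 500
```

Because every layer is the real implementation, a passing test here tells you something a mocked unit test cannot: the pieces actually work together.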

Deployment becomes a breeze as database upgrades, web app(s) and background processing app(s) get deployed as one. There are a limited number of endpoints to monitor and a limited number of scaling points. With the rise of Platform as a Service (PaaS) and a couple of developers with an understanding of operational principles, you might even be able to get away without an Operations team.

More complex, not less

Contrary to what some people will lead you to believe, micro service based architectures actually increase system complexity rather than reduce it. Debugging, for example, has just gone from firing up your favourite IDE, setting the appropriate breakpoints and hitting Run, to having to set up an entire ecosystem of services - where you most probably can't set breakpoints and have to resort to littering your code with logging statements... and then sifting through the resulting log files. In short, it's just become much, much harder to identify where the problem is.

Things get harder for your Ops team (and you will need one) as well. Instead of having to worry about how to deploy, monitor and scale a database (or two) and a couple of web apps, they now have to worry about how to deploy, scale and monitor n databases and web apps, where n depends on your definition of how micro a micro service should actually be. If you then need multiple test environments, the number of individual dependencies to worry about can skyrocket and soon become a maintenance nightmare of its own.

Test automation gets just that much harder. Gone are the days of being able to spin up an instance in memory and run end to end integration tests on your CI server. You will either need to mock the dependencies of each service and run them in isolation, leading to less conclusive test results, or you will need to deploy all your services to a clean environment and run end to end tests there. This is vastly more complex and time consuming and, critically, it increases the time from writing new code to verifying that the change works.
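To see why mocked tests are less conclusive, consider this sketch of testing a service whose dependency now lives across the network (`fetch_fx_rate` and the URL are hypothetical, used only to illustrate the pattern):

```python
import json
import urllib.request
from unittest import mock

def fetch_fx_rate(base: str, quote: str) -> float:
    # The dependency is now a remote service, reachable only over HTTP.
    with urllib.request.urlopen(f"https://rates.internal/{base}/{quote}") as r:
        return json.load(r)["rate"]

def convert(pence: int, base: str, quote: str) -> float:
    return pence * fetch_fx_rate(base, quote)

def test_convert_with_mocked_rates_service():
    # We can't touch the real rates service in CI, so we stub the wire call.
    # The test passes even if the real service would behave differently:
    # it only proves that we agree with our own mock.
    with mock.patch(__name__ + ".fetch_fx_rate", return_value=1.25):
        assert convert(100, "GBP", "USD") == 125.0
```

In the monolith, that same test could call the real implementation in process; here, the contract between `convert` and the rates service goes unverified until the services meet in a deployed environment.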

The biggest cost, however, comes about if/when you get your service boundaries wrong. Rather than swapping around a few classes, re-compiling and running automated integration tests, it involves defining new services and their communication mechanisms, releasing them (taking monitoring, scaling and deployment into account along the way), cutting over, testing and then decommissioning the old services. Everything gets more complex, everything takes longer to get done, everything costs more.

An exception to the rule

A scenario where service based architectures do shine is when your development team crosses roughly the 8 person mark. 8 or more developers working on a single codebase with a single release strategy very quickly turns into an organisational nightmare. Developers will start working on the same areas at the same time, merge conflicts will ensue, dependency chains will leave some developers unable to work and your Ops team will hate you when it comes time to release.

If this describes your company, you have a valid reason to start moving to a (micro) service based architecture. You will also likely know where the pinch points are, what can easily be divided up and what can't. Once you have separate code bases with separate release cycles, you should start to notice some pretty big efficiency gains and less overall cognitive overhead. Your development team will be happier and more productive, your operations team will be happier and more productive, and this in turn will make your platform more stable and performant.

Just to drill the last point home: most of the time, it's not the reorganisation of systems that brings the biggest gains with micro service based architectures, but the reorganisation of people.

Walking the walk

At Moneybox, we're building an investment platform to help UK millennials make saving a part of their everyday life. Our small team has fully embraced this strategy: we've built a system on a single stack, comprising a couple of web apps and an asynchronous batch processing framework.

Our entire application can be deployed to Azure's PaaS offering in under 5 minutes, we can scale the front and back end independently, and our platform is covered by end to end integration tests. We can move quickly, radically change the way parts of the system work at the drop of a hat and easily deploy to production - all with zero downtime.

We don't pretend to be the cool kids on the block. Our focus is on finding out what works as quickly as possible, with the least amount of effort. If we get lucky and our team starts to grow significantly, we may start to look at how to segment our platform. This may mean dividing it into services, or it may mean other organisational structures. But it also may not. The point being, we will only start to look at dividing up our platform when our current strategy is no longer fit for purpose, not before.
