One thing I mention frequently in the daily emails is that microservices require far more operational and development support than a monolith does.
In a monolith, once you’ve got your CI/CD set up, it’s set up. Production-wise, you only have to worry about your application server and your database server (and any reverse proxy), and the most complex your architecture gets is a load balancer in front of a web farm.
All in all, not terribly complex.
Now, microservices are not synonymous with containerization, but microservices generally end up containerized for ease of deployment. You can also containerize your monolith (which I generally recommend as a forcing function to make it seamless for new developers to get started in your system), so containers aren’t just a microservices fad, though they do work nicely with microservices.
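As a hypothetical sketch of that forcing function: a docker-compose file (the service names and images below are invented for illustration) can make “get started in the system” a one-command affair for a new developer.

```yaml
# Hypothetical docker-compose.yml for a containerized monolith.
# A new developer runs `docker compose up` and gets the app,
# the database, and the reverse proxy with no manual setup.
services:
  app:
    build: .              # the monolith's own Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only-password
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
```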
If you think of microservices as self-contained applications that each need deployment, operational support, and scaffolding to make developing a new microservice easy, you start to realize there are lots of repeated problems you have to solve when developing microservices:
- How do we add new microservices in a uniform way, so that we don’t have twelve different ways to do logging, healthchecks, monitoring, authentication, or authorization?
- How do we make scaffolding that generates a container image for a new microservice with all the business- and industry-specific pieces we need? For instance, if you work in the US government space, you’re going to hear two phrases a lot: “STIGged” and “FIPS-140 compliant”. Your industry may have its own terms, but it’s the non-functional work that every application needs that you’d rather bake in than redo each time.
- How do we make tooling that makes it easy to generate contracts when new microservices are made?
- How do we provision new hardware (in a private data center) or new instances (in a public cloud)?
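One concrete way to attack the first of these problems is a shared platform library that every service imports. Here’s a minimal Python sketch of that idea — the names `configure_logging` and `healthcheck` are invented for illustration, not from any real library — giving every service identical structured logging and an identical healthcheck payload:

```python
# Sketch of a shared "platform" module (all names hypothetical) that every
# microservice imports, so logging and healthchecks look identical across
# the fleet instead of being reinvented twelve different ways.
import json
import logging
import time


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, tagged with the service name."""

    def __init__(self, service_name: str):
        super().__init__()
        self.service_name = service_name

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": time.time(),
            "service": self.service_name,
            "level": record.levelname,
            "msg": record.getMessage(),
        })


def configure_logging(service_name: str) -> logging.Logger:
    """Give every service the same structured-JSON log format."""
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter(service_name))
    logger = logging.getLogger(service_name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger


def healthcheck() -> dict:
    """Uniform healthcheck payload; a real one would probe dependencies."""
    return {"status": "ok", "checked_at": time.time()}
```

Because every service gets these from one place, a fleet-wide change (say, adding a trace ID to every log line) is one pull request, not twelve.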
Susan Fowler covers these four types of problems in her book Production-Ready Microservices, which describes how to take a microservice from development to production in a sustainable and scalable manner.
Susan identifies four ‘support’ layers you have with microservices that you don’t have in a monolith (or if you do have them, they were solved long ago and you don’t need to worry about them now).
Layer 4: Microservices
Layer 3: Application Platform
Layer 2: Communication
Layer 1: Hardware/Host*
*I’ve modified layer 1 to be Hardware/Host (because, like it or not, a Docker image is a host, and has its own patch cycle).
For a development organization, here are the sorts of things you typically need to worry about in those layers:
Layer 1: OS updates; OS library updates; the actual hardware (if in a private datacenter); Virtual Instances staying up to date; local Docker registry
Layer 2: Message contracts; event queue infrastructure; scaffolding for generated types; OpenAPI tooling; Thrift/gRPC tooling
Layer 3: (if in .NET) private NuGet (package) registry maintenance and tooling; keeping .NET up to date; CI/CD for each microservice; internal tooling to make development easier (the scaffolding for generated types above can also live in this layer); logging and monitoring for microservices; making the application systemd compatible
Layer 4: tools to generate microservice-specific configurations; SSL certificates for each service (if needed); environment files; and any tooling that we’d need to apply to a specific type of microservice.
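As one example of the per-service artifacts this tooling would produce, here’s the kind of systemd unit file it might generate — the service name, paths, and user below are all hypothetical:

```ini
# Hypothetical generated unit file for one microservice;
# every name and path here is invented for illustration.
[Unit]
Description=Orders microservice
After=network-online.target

[Service]
ExecStart=/opt/orders/orders-service
EnvironmentFile=/etc/orders/orders.env
Restart=on-failure
User=svc-orders

[Install]
WantedBy=multi-user.target
```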
If you have these four support layers in place, a developer simply has to create a new microservice and go; the tooling takes care of the rest. In practice, that can look like all of the scaffolding being generated for them by internal CLIs.
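The core of such a “create and go” internal CLI can be surprisingly small. Here’s a hedged Python sketch — the skeleton file names and contents below are invented, not a real tool — of a scaffolder that stamps out the uniform pieces for a new service:

```python
# Sketch of an internal scaffolding CLI (all file names and contents are
# invented): it stamps out a uniform skeleton so every new microservice
# starts with the same Dockerfile, CI config, logging wiring, and env file.
import sys
from pathlib import Path

SKELETON = {
    "Dockerfile": "FROM internal-registry/base:latest\nCOPY . /app\n",
    ".ci/pipeline.yml": "extends: shared/microservice-pipeline\n",
    "src/main.py": "from platform_lib import configure_logging, healthcheck\n",
    "service.env": "LOG_LEVEL=info\n",
}


def scaffold(name: str, root: Path) -> Path:
    """Create a new service directory containing the standard files."""
    service_dir = root / name
    for rel_path, content in SKELETON.items():
        target = service_dir / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
    return service_dir


if __name__ == "__main__" and len(sys.argv) > 1:
    created = scaffold(sys.argv[1], Path.cwd())
    print(f"scaffolded {created}")
```

The real value isn’t the file copying — it’s that the skeleton encodes the organization’s decisions (base image, CI pipeline, logging library) so nobody makes them again per service.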
Sounds like a lot, doesn’t it? It is. But whether you automate it or make your development team do it manually, it all still has to be done, and it’s part of the cost of adopting microservices.