You’re looking for a new architecture for your next software project, and you’ve heard about this thing called Microservices. It sounds cool, but you’re not sure if it’s a fit. Use this handy checklist to decide whether Microservices are right for you.
- It’s a dynamic new paradigm that drastically increases complexity; what’s not to love?
- Networks are fun to troubleshoot.
- You can now use Brainfuck in a production application!
- You missed the XML wave and the Actor model wave; you’re not missing this one.
- Scaling out is so much more fun than worrying about application server performance. Throw more hardware at it!
- You have a friend who works in the server sales business and you owe them some favors.
- How else are you going to get to put “Docker” on your resume?
- Event-driven, disconnected, asynchronous programming was way too easy in a monolith.
- You also have a friend in the server logging and metrics business (is it New Relic or Splunk?) and you owe them favors (you owe a lot of people favors, don’t you?).
- You’ve taken Spolsky’s “Things You Should Never Do, Part I” as a challenge. After all, you’re not rewriting the app, you’re reimagining it.
- The Single Responsibility Principle needs to be taken to its fanatical conclusion to finally become a reality: one line of code per service.
- Your DI container has pissed you off for the last time.
- Complex deployment processes mean job security.
- “DevOps experience” makes a great resume booster.
- 100 git repositories means never having merge issues.
- How else would you get around the Mythical Man Month? 9 women can have 9 babies in 9 months, and they don’t even need to talk!
- Contract testing sounds way cooler than “integration testing”.
- What’s better than 1 REST API? 100 of them.
- You can now force your teammates to learn Haskell (They’ll thank you).
- You can now use the best tool for the job, even if it requires you to go through a few months of training to learn that new tool, and did I mention they only do training on Cruise ships to Tahiti? (it’s not your money you’re spending, after all).
- Whenever someone asks how you’ll solve an architecture issue, you can always say, “That’s a future us problem”. TAKE THAT, MONOLITHS.
- The grand total of documentation is a README in the root of each git repository.
- Monoliths generally only have one codename; with Microservices you can have hundreds. Time to bust out that Greek mythology.
- Of course your application needs to be able to support a distributed event queue; why is that even a question? You obviously need to scale out to billions of operations per second.
Microservices don’t sound like your cup of tea? Try Reasons you should Build a Monolith.
You’re building a new software project! Congratulations! You’ll make millions and people will love you. It’s going to be awesome.
Your first question (of course) is: Should you build a monolith or use Microservices? Great Question!
You should build a monolith if you:
- Have only a hammer and can see everything as a nail.
- Have already picked the language you’re going to use, right or wrong.
- Know no one on the team can possibly learn a new language. That’s insane.
- Enjoy contorting your language/framework to solve problems it was never meant to solve.
- Enjoy building a new library or framework because of the above.
- Believe wholeheartedly in the idea of one code repository.
- Can’t imagine how people would ever deploy multiple code bases.
- Enjoy the simplicity of a single, ever-lengthening build.
- Code merges are so much fun.
- Enjoy spelunking through your code to find out where you’re supposed to make that bug fix.
- Enjoy writing the reams of documentation that will show people how to navigate the project.
- Believe what’s good enough for Ruby on Rails, Django, and ASP.NET is good enough for your team.
- Can’t imagine why anyone would want to write tests against an HTTP API.
- Scoff when someone mentions a new language.
- Are pretty sure the business requirements aren’t going to change.
- Have been bitten way too many times by new languages and frameworks that just don’t work out.
- Think the idea of containers is nuts. A computer inside of a computer inside of a computer? Craziness.
- Think the network is obviously the slowest part; keep all the calls in process.
- Love complicated branching strategies; maybe you even own a gitflow T-shirt.
- Love process! Process is your friend. Code freeze, QA, UAT, deployment, change requests. No one’s getting code in without being reviewed!
- Believe in Scaling up. Scaling out is just expensive, and the network is slow!
- Believe people who allow data to be duplicated throughout the system are reckless. One authoritative place for data!
- Simple implementations are the best; no need for microservices; they’re complex.
Enjoy your newly minted monolith! It’s going to be fast. It’s going to be simple to modify. It’s going to be awesome.
With Microsoft embracing Docker, it’s now possible to release .NET Core apps in Docker containers as first-class citizens. This means that instead of creating custom Docker images from scratch, you can build on one of the multiple Docker images Microsoft has released.
The cool thing about these Docker images is that their Dockerfiles are on GitHub, which is quite handy if you like to create custom Docker images. Without further ado, here’s how I set up the project’s Dockerfile, along with a build.sh file so that I could script the process repeatedly.
```dockerfile
FROM microsoft/dotnet:1.1.0-runtime
ARG source=./src/bin/Release/netcoreapp1.1/publish
WORKDIR /app
COPY $source .
EXPOSE 5000
ENTRYPOINT ["dotnet", "MyProject.dll"]
```
Let’s take it line by line:
`FROM microsoft/dotnet:1.1.0-runtime` says to create the new image from the `dotnet` repository on Microsoft’s Docker Hub, using the tag `1.1.0-runtime`.
`ARG source=./src/bin/Release/netcoreapp1.1/publish` says to create a build-time variable called `source` that defaults to the path `./src/bin/Release/netcoreapp1.1/publish` (the default publish directory in .NET Core 1.1). This path is relative to the build context (the directory you run `docker build` from).
`WORKDIR /app` says to create an `/app` directory in the Docker container and make it the working directory.
`COPY $source .` says to copy the files located at the path in `$source` to `.`, which resolves to `/app`, since that directory was previously defined as the working directory.
`EXPOSE 5000` documents that the container listens on port 5000; to actually reach it from the host you still map the port with `-p` when you run the container.
`ENTRYPOINT ["dotnet", "MyProject.dll"]` says the entrypoint for the container is the command `dotnet MyProject.dll`.

This would be roughly the same as:

`CMD "dotnet MyProject.dll"`
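Roughly, but not exactly: the exec form and the shell form behave differently at run time. A quick side-by-side (same `MyProject.dll` as above):

```dockerfile
# Exec form: runs dotnet directly as PID 1, so the app receives
# signals such as the SIGTERM sent by `docker stop`.
ENTRYPOINT ["dotnet", "MyProject.dll"]

# Shell form: wraps the command in `/bin/sh -c`, so the shell is
# PID 1 and the app may never see those signals.
CMD "dotnet MyProject.dll"
```

If you want your app to shut down cleanly, prefer the exec form.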
So that’s the Dockerfile, but there are a few other steps to get a running container: first you have to make sure your ASP.NET Core application listens on something other than localhost (for example, by calling `.UseUrls("http://*:5000")` on the `WebHostBuilder`), and then you still have to publish the application, create the Docker image, and run a container based on that image. I created a build.sh file to do that, but you could just as easily do it with PowerShell:
```sh
#!/bin/bash
# Name used for both the image and the container
# (set this to your project's name).
SERVICE="myproject"

# change directory to location of project.json
cd ./src
# run dotnet publish, specify release build
dotnet publish -c Release
# equivalent to cd .. (go back to previous directory)
cd -
# Create a docker image tagged with the name of the project:latest
docker build -t "$SERVICE":latest .
# Check to see if this container exists.
CONTAINER=`docker ps --all | grep "$SERVICE"`
# if it doesn't, then just run this.
if [ -z "$CONTAINER" ]; then
  docker run -i -p 8000:5000 --name $SERVICE -t $SERVICE:latest
else
  # if it does exist, nuke it (-f stops it first if it's still
  # running) and then run the new one.
  docker rm -f $SERVICE
  docker run -i -p 8000:5000 --name $SERVICE -t $SERVICE:latest
fi
```
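The script decides between `docker run` and `docker rm` with an emptiness test on grep’s output. Here is the same pattern in isolation, with a plain string standing in for the output of `docker ps --all`, so you can see how `[ -z ... ]` drives the branch:

```shell
# Prints "found" if the second argument occurs in the first, "missing"
# otherwise -- the same emptiness test the build script uses.
is_listed() {
  MATCH=$(printf '%s\n' "$1" | grep "$2")
  if [ -z "$MATCH" ]; then
    echo "missing"
  else
    echo "found"
  fi
}

is_listed "abc123  myservice  Up 2 hours" "myservice"
is_listed "def456  otherservice  Up 1 hour" "myservice"
```

Because `grep` prints nothing on no match, `$MATCH` is empty and the `-z` test succeeds, which is exactly the “container doesn’t exist yet” branch above.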
My ASP.NET Core directory structure keeps build.sh and the Dockerfile at the root of the repository, with the application itself (and its project.json) under src/. This lets me keep the files I care about (build-wise) at the base of the directory, so that I can have a master bootstrap file call each directory’s build files depending on the environment. You may want to mix these, but this also lets me keep certain files outside of Visual Studio (I don’t want it to track or care about those files).
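That master bootstrap can be sketched as a small shell loop. Everything here is illustrative: the services/ directory names are assumptions, and the stub build.sh files stand in for real per-directory build scripts like the one above:

```shell
# Create two stub services whose "build" just echoes a message, so the
# sketch is self-contained; in real use these directories and their
# build.sh files would already exist.
mkdir -p services/api services/web
printf '#!/bin/sh\necho "building api"\n' > services/api/build.sh
printf '#!/bin/sh\necho "building web"\n' > services/web/build.sh
chmod +x services/api/build.sh services/web/build.sh

# The bootstrap itself: walk every directory and run its build script.
for dir in services/*/; do
  if [ -f "${dir}build.sh" ]; then
    (cd "$dir" && ./build.sh)
  fi
done
```

An environment-aware version could pick a different script per directory (build-dev.sh, build-prod.sh) based on an argument, but the loop stays the same.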
Then, all I have to do to build and deploy my ASP.NET Core 1.1 application is to run `./build.sh`, and it’ll build, publish, replace the container if need be, and start the new one.