There is no “One True Way”

Creating instructions to tell a computer to do certain things in a certain order is an exact science; you either get the instructions right (and the computer does what you want), or you get the instructions wrong (and the computer does what you told it to). There is no third way (cosmic radiation shifting bits notwithstanding).

Software Development, that is, the act of creating software to fulfill a human need, is not an exact science.  It’s not even a reproducible art (yet). If it were, we wouldn’t have so many failed projects in Waterfall, Agile, Monoliths, Microservices, TDD, Unit Testing, BDD,  RAD, JAD, SDLC, ITIL, CMMI, Six Sigma, or any other methodology that attempts to solve human problems.  If we could make it into a reproducible art, we would have already done so.

So why do we treat the act of creating software as if it’s a science? As if there is a One True Way?  We know there isn’t, since projects of all stripes succeed (and fail); and we know that, as of yet, there is no one approach for success (though there are many approaches for failure).

We even do this in objectively silly things: Tabs vs. Spaces, CamelCase vs. unix_case (or is it unix-case?), ORM vs. No ORM, REST vs. HATEOAS vs. RPC over HTTP, or anything else.  We do it in the form of “Style Guides” that detail exactly how the project should be laid out; as if the mere act of writing down our rules would bring us closer to successfully creating software.  We make rules that apply to all situations and then castigate other developers for breaking those rules.  Those rules bring us safety and comfort, even if they don’t make delivering software a success (or a priority).

Those rules we cling to for safety cripple us from making the best decision using the best information we have.

Style guides are beautiful things, and I believe in their efficacy.  By standardizing code, it becomes easier to change the code. There’s no cognitive load spent on the parts of the code that stick out, and that savings can be spent on fixing the actual problem at hand. But style guides can go too far. Think for a moment about your database names and class names for Data Access Objects (DAOs).  If you work in C#, they’re typically PascalCase.  In SQL Server, table names can be PascalCase with no issues (and they generally are).  But do that in Postgres, which folds unquoted identifiers to lowercase, and every identifier has to be quoted; your C# will look horrible:

private readonly string getByMyName = "SELECT * FROM \"my\".\"mytable\" WHERE \"myId\" = @MyId AND \"MyName\" IS NOT null";

In this case, your style guide brought you consistency across databases at the expense of developer health.

But we tend to take a good practice and morph it into a bad one through misuse.  You wouldn’t believe how many times I’ve run into an issue where I or someone else placed too much trust in an ORM, and the next thing you know we’re outside in our underpants, collecting rain water with ponchos to survive. Invariably a rule is put into place, “No ORMs” or “Stored Procedures Only”, or some other silly rule that’s only there because the development team was pwned by a SQL injection attack, or hit an N+1 problem, through misuse of an ORM.

NO ORMs. Seems silly, right?  I’ve personally witnessed it; Hell, I’ve made the rule myself. And I’ve done it for the best of reasons too:

  • Let’s not complicate our code until we understand what we’re actually building. ORMs send us down a particular path, and we don’t yet understand enough to know if that’s the path we want to be on.
  • Traditionally, ORMs handle one-to-many relationships very poorly. I’m OK with ORMs for the basic 80% of needs; it’s the other 20% they’re terrible at.
  • Why should I ask people to learn an ORM’s syntax when SQL does quite nicely?

And I was wrong. My reasoning was sound (at least in context of the information I had at the time), but it was wrong.  What I should have said was this:

You want to use an ORM? Great, go at it.  If and when it doesn’t meet our needs, we’ll revisit the decision; until then, just make sure you use a well-supported one.

And that would have been that.  But I fell into the trap of thinking I was smarter than the person doing the work; to think that I was somehow saving them from making the same mistakes I did.

There’s really only one constant I’ve learned in creating software that succeeded, and software that failed: There is no one “True Way”. There is no style guide that will save us, no magic methodology that will somehow make your organization ship software.  There’s only the day in and day out grit of your team, only their compassion for their user and each other, and their drive to ensure the software gets made.  There are wonderful tools to help your team along that journey; but they are neither one-size-fits-all or magical.

They’re just tools, and they’ll work as often as they won’t.  The deciding factor in what works is you and your team.  Your team has to believe in the tools, the product, and in each other. If they don’t, it doesn’t matter what methodology you throw in front of them, it won’t help you ship software.  So the next time you (or anyone) is making rules for your team to follow, ask yourself: “Do these rules help us ship better software?”  If they don’t, fight them.  There’s too much to do to embrace bad rules.

How to fix common organizational Mistakes .NET Developers make with Microservices

Microservices have really only become possible for .NET development with the advent of .NET Core, and because of that we have almost two decades of built-up practices that don’t apply in the world of microservices.

In case you haven’t heard of Microservices, here’s a quick ten-second primer: a microservice is a deployable focused on doing one thing (a very small thing, hence ‘micro’), and it communicates its intent and broadcasts its data over a language-agnostic network API (HTTP is a common example).

For instance, sitting in the WordPress.com editor right now, I can see maybe a dozen candidate Microservices (if this weren’t WordPress): a drafts microservice, notifications, user profile, post settings, publisher, scheduler, reader, site menu, editor, etc.

[Screenshot: the WordPress.com editor, March 2017]

Basically everything above is a microservice. All those clickables with data or behavior above? Microservices. Crazy, right?

Back of the cereal box rules for Microservices:

  • Code is not shared
  • APIs are small
  • Builds and deployments of each service are trivial

So that’s code, but what about organization? What about project setup?  Those are the pieces that are as crucial to successful microservices as anything else.

In .NET Monolithic projects, we’ve spent years hammering home ‘good code organization’, with lots of namespaces, namespaces matching directories, and multiple projects.

But thinking about those rules of organization for Monoliths, when’s the last time you were able to easily find and fix a bug even in the most well organized monolithic project?  On average, how long does it take you to find and fix the bug in a monolith? (Not even that, but how long does it take you to update your code to the latest before trying to find the bug?)

The benefits of Microservices are the polar opposite of the benefits of a Monolithic application.

An ‘under the hood’ feature of Microservices is that code is easy to change. It’s easy to change because it’s easy to find, it’s easy to change because there’s not much of it, and it’s easy to change because there isn’t a lot of pomp and circumstance around changing it. In a well defined microservice, it would take longer to write this blog post than to find the issue (I’m exaggerating, but only slightly).


If you’re developing .NET Microservices, here are some points to keep in mind, to keep from falling into the old traps of monoliths:

Keep the number of directories low: The more folders you have, the more someone has to search around for what they’re looking for.  Since the service shouldn’t be doing that much, there isn’t as much need for lots of directories.

Move classes into the file using them: ReSharper loves to ask you to move classes to filenames that match their class name.  If your class is just a DAO/POCO, rethink that. Keep it close to where it’s used. If you do split it into a separate file, think about keeping all of its complex types in that same file.

1 Microservice, 1 .NET Project, 1 source control repository: This is a microservice. Splitting things out into multiple projects in one .sln file necessarily raises the complexity and reduces the advantages Microservices have.  Yes, it feels good to put that Repository in a different project; but does it really need to be there? (Incidentally, it’s currently impossible to publish multiple projects with the .NET Core CLI)

Code organization should be centered around easily finding code: If I can’t find what your service is doing, I may just rewrite it.  Then all that time you spent on that service organization will be gone anyway. The inner-workings of your microservice should be easy to find and work with. If they aren’t, maybe it’s doing too much?

Your build process should be trivial: If your project pulls down Nuget packages from two separate repositories, it’s time to rethink your build process.

Why are you sharing code, anyway?: Private NuGet packages are monolithic thinking; they exist to make “sharing code” easy.  But in the Microservice world, you shouldn’t be sharing code, right? Duplicate it, or pull it out into its own service. Sharing it simply means you’re dependent on someone else’s code when it breaks (which is why we have microservices in the first place: so we don’t have that dependency).

Working beats elegant, every time: I love elegant code. I also love working code. Incidentally, I get paid to write working code, not elegant code.  If using a microservices based architecture allows me to move faster in development, why would I hamper that by spending time making code elegant that doesn’t need to be? There are even odds that this service won’t even exist in its current form in 6 months, let alone be around long enough for its elegance to be appreciated.

Microservices are a different paradigm for software development, in the same way agile was meant to be different than classic SDLC (Waterfall). The same thinking that built Monoliths can’t be used to build Microservices successfully. Next time you’re writing a microservice, think about what practices and inertia you have; and double check: Does this practice make sense in a Microservice?  If it doesn’t, jettison it.


Reasons you should use Microservices to build your next application

You’re looking for a new architecture for your next software project, and you’ve heard about this thing called Microservices.  It sounds cool, but you’re not sure if it’s a fit for your next project.  Use this handy checklist to decide whether Microservices are right for you.

  • It’s a dynamic new paradigm that drastically increases complexity; what’s not to love?
  • Networks are fun to troubleshoot.
  • You can now use Brainfuck in a production application!
  • You missed the XML wave and the Actor model wave; you’re not missing this one.
  • Scaling out is so much more fun than worrying about application server performance. Throw more hardware at it!
  • You have a friend who works in the server sales business and you owe them some favors.
  • How else are you going to get to put “Docker” on your resume?
  • Event-driven, disconnected, asynchronous programming was way too easy in a monolith.
  • You also have a friend in the server logging and metrics business (is it New Relic or Splunk?) and you owe them favors (you owe a lot of people favors, don’t you?).
  • You’ve taken Spolsky’s “Things You Should Never Do, Part I” as a challenge.  After all, you’re not rewriting the app, you’re reimagining it.
  • The Single Responsibility principle needs to go to its fanatical conclusion to finally become a reality: One line of code per service.
  • Your DI container has pissed you off for the last time.
  • Complex deployment processes mean job security.
  • “DevOps experience” makes a great resume booster.
  • 100 git repositories means never having merge issues.
  • How else would you get around the Mythical Man Month? 9 women can have 9 babies in 9 months, and they don’t even need to talk!
  • Contract testing sounds way cooler than “integration testing”.
  • What’s better than 1 REST API? 100 of them.
  • You can now force your teammates to learn Haskell (They’ll thank you).
  • You can now use the best tool for the job, even if it requires you to go through a few months of training to learn that new tool, and did I mention they only do training on Cruise ships to Tahiti? (it’s not your money you’re spending, after all).
  • Whenever someone asks how you’ll solve an architecture issue, you can always say, “That’s a future us problem”. TAKE THAT, MONOLITHS.
  • The grand total of documentation is a README in the root of each git repository.
  • Monoliths generally only have one codename; with Microservices you can have hundreds. Time to bust out that Greek mythology.
  • Of course your application needs to be able to support a distributed event queue; why is that even a question? You obviously need to scale out to billions of operations per second.


Microservices don’t sound like your cup of tea? Try Reasons you should Build a Monolith.

Reasons You Should Build a Monolith

You’re building a new software project! Congratulations! You’ll make millions and people will love you. It’s going to be awesome.

Your first question (of course) is: Should you build a monolith or use Microservices?  Great Question!

You should build a monolith if you:

  • Have only a hammer and can see everything as a nail.
  • Have already picked the language you’re going to use, right or wrong.
  • Know no one on the team can possibly learn a new language. That’s insane.
  • Enjoy contorting your language/framework to solve problems it was never meant to.
  • Enjoy building a new  library or framework because of the above.
  • Believe wholeheartedly in the idea of one code repository.
  • Can’t imagine how people would ever deploy multiple code bases.
  • Enjoy the simplicity of one, progressively longer, build.
  • Think code merges are so much fun.
  • Enjoy spelunking through your code to find out where you’re supposed to make that bug fix.
  • Enjoy writing the reams of documentation that will show people how to navigate the project.
  • Believe what’s good enough for Ruby on Rails, Django, and ASP.NET is good enough for your team.
  • Can’t imagine why anyone would want to write tests against an HTTP API.
  • Scoff when someone mentions a new language.
  • Are pretty sure the business requirements aren’t going to change.
  • Have been bitten way too many times by new languages and frameworks that just don’t work out.
  • Think the idea of containers is nuts. A computer inside of a computer inside of a computer? Craziness.
  • Think the network is obviously the slowest part; Keep all the calls in process.
  • Love complicated branching strategies; maybe you even own a gitflow T-shirt.
  • Love process! Process is your friend. Code freeze, QA, UAT, deployment, change requests. No one’s getting code in without being reviewed!
  • Believe in scaling up.  Scaling out is just expensive, and the network is slow!
  • Believe people that allow data to be duplicated throughout the system are reckless. One authoritative place for data!
  • Simple implementations are the best; no need for microservices; they’re complex.

Enjoy your newly minted monolith! It’s going to be fast. It’s going to be simple to modify. It’s going to be awesome.

How do I publish a .NET Core 1.1 application in a Docker container?

With Microsoft embracing Docker, it’s now possible to release .NET Core apps in Docker containers, and they’re first-class citizens. Instead of you having to create custom base images, Microsoft has released multiple Docker images you can build on.

The cool thing about these Docker images is that their Dockerfiles are on GitHub, which is quite amazing if you like to create custom Docker images.  Without further ado, here’s how I set up the project’s Dockerfile (I also created a build script so that I could repeat this process):


FROM microsoft/dotnet:1.1.0-runtime
ARG source=./src/bin/Release/netcoreapp1.1/publish
WORKDIR /app
EXPOSE 5000
COPY $source .
ENTRYPOINT ["dotnet", "MyProject.dll"]

Let’s take it line by line:

FROM microsoft/dotnet:1.1.0-runtime says to base the new image on Microsoft’s dotnet repository on Docker Hub, using the tag named 1.1.0-runtime.

ARG source=./src/bin/Release/netcoreapp1.1/publish says to create a build-time variable called source with the value ./src/bin/Release/netcoreapp1.1/publish (the default publish directory in .NET Core 1.1).  This path is relative to the build context: the directory you run docker build from.

WORKDIR /app says to create an /app directory in the Docker container and make it the working directory.

COPY $source . says to copy the files located at $source to /app, since that directory was previously defined as the working directory.

EXPOSE 5000 documents that the container listens on port 5000; it doesn’t publish the port by itself, so you still map it to a host port with -p when you run the container.

ENTRYPOINT ["dotnet", "MyProject.dll"] says the entrypoint for the container is the command: dotnet MyProject.dll.

This is roughly equivalent to:

CMD ["dotnet", "MyProject.dll"]

The difference is that arguments passed to docker run are appended after an ENTRYPOINT, whereas they replace a CMD entirely.

So that’s the Dockerfile, but there are a few other steps to get a running container: first you have to make sure your ASP.NET Core application listens on something other than localhost, and then you still have to publish the application, create the Docker image, and run the Docker container based on that image. I created a bash script to do that, but you could just as easily do it with PowerShell:

#!/bin/bash
# the name of the service; used for both the image and the container (set it to your project's name)
SERVICE="myproject"
# change directory to location of project.json
pushd ./src
# run dotnet publish, specify release build
dotnet publish -c Release
# go back to the previous directory (equivalent to cd ..)
popd
# Create a docker image tagged with the name of the project:latest
docker build -t "$SERVICE":latest .
# Check to see if a container by this name already exists.
CONTAINER=`docker ps --all | grep "$SERVICE"`
# if it doesn't, just run a new one.
if [ -z "$CONTAINER" ]; then
  docker run -i -p 8000:5000 --name $SERVICE -t $SERVICE:latest
else
  # if it does exist, nuke it (-f stops it if it's running) and then run the new one
  docker rm -f $SERVICE
  docker run -i -p 8000:5000 --name $SERVICE -t $SERVICE:latest
fi
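
The existence check in that script just tests whether grep printed anything. Here’s that pattern in isolation, with a canned listing standing in for the output of docker ps --all (the service and container names here are made up for illustration):

```shell
#!/bin/sh
SERVICE="myservice"
# stand-in for the output of `docker ps --all`
LISTING="abc123  some-image  some-other-container"
# grep prints any matching lines; `-z` then tests for the empty string
CONTAINER=$(printf '%s\n' "$LISTING" | grep "$SERVICE")
if [ -z "$CONTAINER" ]; then
  echo "no existing container; safe to run a new one"
else
  echo "container exists; remove it before running"
fi
```

One caveat with this approach: grep matches substrings, so a service named api would also match a container named myapi; anchoring the match (grep -w, or checking the name column specifically) avoids that surprise.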

My ASP.NET Core directory structure is set up as follows:

- src
    - project.json
    - //snip..
- tests
- Dockerfile
- build.ps1

This lets me keep the files I care about (build-wise) at the base of the directory, so that I can have a master bootstrap file call each directory’s build files depending on the environment. You may want to mix these, but this also allows me to keep certain files outside of Visual Studio (I don’t want it to track or care about those files).
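
As a sketch of what such a master bootstrap might look like (the layout and file names here are hypothetical, not from my actual project):

```shell
#!/bin/sh
# Hypothetical master bootstrap: run each subdirectory's build script, if it has one.
build_all() {
  root="${1:-.}"
  for dir in "$root"/*/; do
    if [ -f "$dir/build.sh" ]; then
      echo "building $dir"
      ( cd "$dir" && sh ./build.sh )
    fi
  done
}

build_all "$@"
```

Each service stays responsible for its own build; the bootstrap only knows how to find and invoke them.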

Then, all I have to do to build and deploy my ASP.NET Core 1.1 application is run that build script, and it’ll build, deploy, replace the container if need be, and start the container.

Fix for ASP.NET Core Docker service not being exposed on host

When attempting to Dockerize my ASP.NET Core microservice, I ran into an interesting issue. We use Docker’s network feature to create a virtual network for our Docker containers, but for some reason I wasn’t able to issue a curl request against the ASP.NET Docker container; it simply returned:

curl: (52) Empty reply from server

Well, crap.  Docker was set up correctly, and ASP.NET Core applications typically listen on port 5000:

Complete Dockerfile:

FROM microsoft/dotnet:1.1.0-runtime
ARG source=./src/bin/Release/netcoreapp1.1/publish
WORKDIR /app
EXPOSE 5000
COPY $source .
ENTRYPOINT ["dotnet", "MyProject.dll"]

ASP.NET Core dotnet run output:

Hosting environment: Production
Content root path: /app
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Hm. It’s listening on localhost (the docker container), and I’ve exposed port 5000, right? Maybe my port settings are wrong when I spin up the container? Let’s check:

docker run -i -p 5000:5000 --name my-project -t my-project:latest

That checks out. The syntax for docker -p is HOST:CONTAINER, and I’ve made them 5000 on both sides just for testing, but I’m still not getting a response.

I found out that, by default, ASP.NET Core applications listen only on the loopback interface; that’s why it showed me localhost:5000. A loopback-only listener can’t be reached from outside the container, even with the port published. Sort of a bonehead moment on my part, but there you are. To fix it, I can tell Kestrel which interfaces to listen on using the UseUrls() method:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseUrls("http://*:5000") // Add this line
        .UseStartup<Startup>()
        .Build();

    host.Run();
}


Now it will listen on all of the machine’s network interfaces on port 5000, which is exactly what I want.

How do I debug ASP.NET Core 1.1 applications in Visual Studio 2015?

I updated from .NET Core 1.0.1 to .NET Core 1.1.0, and I’m no longer able to ‘Start Debugging’ (F5) from Visual Studio 2015.

When I tried to debug, I’d get the following error in the output console in Visual Studio:

The program '[6316] dotnet.exe' has exited with code -2147450751 (0x80008081).
The program '[4612] iisexpress.exe' has exited with code 0 (0x0).

The latest version of .NET Core is 1.1 (1.1.0 if you’re a package manager type), but Visual Studio 2015 only supports F5 Debugging with .NET Core 1.0.1.

You can still debug .NET Core 1.1.0 Applications in Visual Studio 2015, however. Here’s how:

Open the Package Manager Console (or use cmd.exe).

Type dotnet run from either console.
You should see the application start up and report that it’s listening.

Go to the debug menu in Visual Studio and click “Attach to Process” (or use the Ctrl+Alt+P shortcut).

Look for dotnet.exe, and hold down Shift while selecting the dotnet.exe processes (one of them is hosting your application, and you can safely attach to all of them at the same time).

Click the “Attach” button.

You’ll know you’re debugging because any breakpoints you’ve set will change from hollow circles to solid red (indicating the breakpoints have been loaded).

Happy Debugging!