Five Surprises after using .NET Core for six months

I’ve been working on .NET Core for the last 6 months; from .NET Core 1.0.1 and .NET Core 1.0.0 SDK – Preview 2 (build 003131) to .NET Core 1.1.1 and .NET Core 1.0.1 SDK.

Here’s a short list of things that surprised me:

What the hell is up with the versioning?  Re-read my opening sentence and pay attention to the version numbers.  If you think they’re related to one another, you’re wrong. (At least, I think you are. I’m not sure anymore.)  Lest you think I’m alone in this, it has been brought up as a common issue on .NET Core’s GitHub page (I got tired of posting links; there are many, many more).  In fact, here’s a handy chart of the versioning as it existed until recently. Remember that “LTS” means “Long Term Support”, which for some strange reason appears right next to the phrase “Outdated”:

[Screenshot: .NET Core versioning chart]

They did a good job in the above chart (they were smart to take out SDK version numbers), but they included version numbers in the actual release notes, and I’m not sure which way is up:

[Screenshot: .NET Core release notes showing runtime and SDK version numbers]

Are you using .NET Runtime 1.1.1? If so, should you use SDK 1.0.1 or SDK 1.0.3? Remember, you can use .NET Runtime 1.0.4 with SDK 1.0.3 too.

No, those version numbers are not in sync, and good luck figuring out which SDK tooling is supported on Visual Studio for Mac, VS Code, or Visual Studio itself. (Hint: anything past SDK build 3177 is probably not supported on VS 2015.)

Unit tests don’t work. Or rather, they work until they don’t. Our team started out with XUnit, then found out that XUnit wasn’t supported with all versions of .NET Core and wasn’t well supported by ReSharper with certain versions of the SDK tooling; so we switched to NUnit, only to find out that now that we want to upgrade to the RTM SDK tooling, NUnit doesn’t work.  In short, the test runner that worked before doesn’t work now, and the one that didn’t work before mostly works now (unless you want to debug in Visual Studio).

Oh, and MS Test probably always worked.  (Except it didn’t).

There is a graveyard of OBE blog posts on .NET Core SDK tooling bugs.  Which answer on the internet is useful to you depends on which version of the .NET Core tooling you’re using.  So much so that old blog posts (from 2016, mind you) are already out of date and won’t help you with your problem, even though they’re atop Google’s results.  They’ve been Overcome By Events; in this case, the event was Microsoft.  What happened?  Microsoft decided to retain backwards compatibility (I think) with MSBuild, so project.json was jettisoned in favor of .csproj.

Versioning problems even come into play when talking about the .NET Runtime vs. .NET Core Runtime. Quick, does .NET Core have XSL support?  It has XML support, but what about XSL?  No?  When will that be coming? .NET Standard 2.0. What’s .NET Standard 2.0 you ask? GREAT QUESTION:

[Screenshot: .NET Standard version compatibility chart]

It’s not often I say this, but could the Microsoft .NET Team just adopt month/year as their versioning moniker? It’d be easier to determine if two things are supported together.

So the same release of .NET Core 1.0 works with .NET Standard 1.0-1.6; how is that possible you ask? I have no idea.  In fact, if I continue to look at this chart I may start drinking early.

Does your favorite library support .NET Core? Probably not. .NET Core support has a bunch of blockers for libraries, and it doesn’t look like they’ll all get resolved before .NET Core 2.0 is released (or is that .NET Standard 2.0? It’s both this time, but that’s happenstance). And porting to .NET Core?  It’s most likely a rewrite before .NET Core 2.0. I still think they need to be more explicit in Step 5:

[Screenshot: Microsoft’s porting guide, Step 5]


Overall, I’m glad that I’ve gotten to work on .NET Core; but given that I’ve spent a non-trivial amount of time over the past 6 months wrestling with these issues, I’m not even certain what performance issues will crop up from running .NET Core on Linux (Docker).  That’ll be for a future blog post, I’m sure.

Some Questions I have about Async/Await in .NET

I’ve been writing a new project using a microservices-based architecture, and during the development of the latest service I realized that it needs to communicate with no fewer than 7 other microservices over HTTP (it also listens on two particular queues and publishes to two message queues).

During one iteration, it could potentially talk to all seven microservices, depending on the logic path taken.  As such, a lot of time is spent talking over the network, and in a normal synchronous .NET Core application, a lot of time is spent blocking while that communication happens.  To keep the blocking from slowing down its responsiveness to the rest of the system, I ported it from a synchronous to an asynchronous microservice.  This was a feat, and took the better part of a day (for a microservice, a day feels like a really long time).   Along the way, I ran into several places where I have questions but no answers (or at least no firm understanding of whether my answer is right), so I’ll post those questions here.  You’ll find no answers here, only questions:

How far do I need to go down the async rabbit hole?

If you’re writing a .NET Core microservice, chances are you’re doing JSON serialization/deserialization.  Since JSON.NET doesn’t have async APIs, our options are to leave the call synchronous, or use Task.Run() to make it async:

  "owner": {
    "reputation": 41,
    "user_id": 6223870,
    "user_type": "registered",
    "display_name": "Harshal Zope",
    "link": ""
  "is_accepted": false,
  "score": 0,
  "last_activity_date": 1493298802,
  "creation_date": 1493298802,
  "answer_id": 43658947,
  "question_id": 39165805

// synchronous:
var owner = JsonConvert.DeserializeObject(jsonstring);

// offloaded to the thread pool:
var owner = await Task.Run(() => JsonConvert.DeserializeObject(jsonstring));

Since Microsoft recommends CPU-bound work be put in a task, at what point should that occur? Are small serializations/deserializations like the above CPU-bound?  Are big ones? What is the threshold? How do you test for it?

If you don’t put code inside an async method in a Task.Run, what happens? If it depends on previous code, it’ll run in order; but what if it doesn’t? Does it run immediately?  Besides the nanoseconds of blocking, is there any other reason to care whether everything inside an async method is awaitable?
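One way I’ve found to probe that threshold is simply to measure: time the synchronous call against the Task.Run version with a Stopwatch. This is just a sketch (the Owner POCO is a hypothetical stand-in for a real DTO):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical POCO standing in for a real DTO.
public class Owner
{
    public int Reputation { get; set; }
    public string DisplayName { get; set; }
}

public static class SerializationTiming
{
    public static async Task CompareAsync(string jsonstring)
    {
        var sw = Stopwatch.StartNew();
        var direct = JsonConvert.DeserializeObject<Owner>(jsonstring);
        sw.Stop();
        Console.WriteLine($"synchronous: {sw.ElapsedTicks} ticks");

        sw.Restart();
        // Task.Run hops to the thread pool; for small payloads the hop
        // itself often costs more than the deserialization.
        var offloaded = await Task.Run(
            () => JsonConvert.DeserializeObject<Owner>(jsonstring));
        sw.Stop();
        Console.WriteLine($"Task.Run: {sw.ElapsedTicks} ticks");
    }
}
```

It won’t answer the question in general, but it will at least tell you whether your payloads are anywhere near the point where offloading pays for itself.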

How do you deal with synchronous libraries in asynchronous code?

RabbitMQ’s .NET client famously does not support async/await (as an aside: have we not seen pressure to convert it to async because no one is using async, or because no one is using RabbitMQ in .NET?), and you’ll even get errors in some places if you try to make the code async. They put it in their user guide:

Symptoms of incorrect serialisation of IModel operations include, but are not limited to,

  • invalid frame sequences being sent on the wire (which occurs, for example, if more than one BasicPublish operation is run simultaneously), and/or
  • NotSupportedExceptions being thrown from a method in class RpcContinuationQueue complaining about “Pipelining of requests forbidden” (which occurs in situations where more than one AMQP RPC, such as ExchangeDeclare, is run simultaneously).

And Stack Overflow’s advice isn’t helpful; the answer to “How do I mix async and non-async code?” is: “Don’t do that”.  In another Stack Overflow post, the answer is, “Yes, you can do it with this code.”

What’s the answer?  Don’t do it? Keep your entire service synchronous because the message queueing system you use doesn’t support async? Or do you convert but implement this work-around for the pieces of code that need it?
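For what it’s worth, here’s the shape of one work-around I’ve experimented with (my own sketch, not RabbitMQ’s recommendation): funnel all channel operations through a SemaphoreSlim so only one runs at a time, while still exposing an async surface to the rest of the service:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical wrapper; the Action<byte[]> stands in for a synchronous
// RabbitMQ channel call like BasicPublish.
public class SerializedPublisher
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private readonly Action<byte[]> _basicPublish;

    public SerializedPublisher(Action<byte[]> basicPublish)
    {
        _basicPublish = basicPublish;
    }

    public async Task PublishAsync(byte[] body)
    {
        // Only one caller at a time touches the channel, which avoids the
        // "Pipelining of requests forbidden" errors the user guide warns about.
        await _gate.WaitAsync();
        try
        {
            _basicPublish(body);
        }
        finally
        {
            _gate.Release();
        }
    }
}
```

Whether that’s better than just keeping the whole publishing path synchronous is exactly the question I don’t have a firm answer to.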

Why is it that after 5 years, the adoption of async seems to be negligible?  Unlike some languages, where you have no choice but to embrace async, C# as a culture still seems to treat async as a second-class citizen; and the vast majority of blog posts I’ve read on the subject cover only surface-level, contrived uses, without digging deeper into the real pitfalls you’ll hit when you use async in a real application.

SynchronizationContext: When do I need to care about it? When do I not?  Do I only care about it when it’s being used inside an object with mutable state? Do I care about it if I’m working with a pure method?  What is the trigger that I can use when learning whether I need to worry about it?

It’s my experience (and partially assumption) that awaitable code that relies on other awaitable code will automatically know to wait to execute until it has the value it needs from the other awaitable code; is this true across the board?  What happens when I intermix synchronous and asynchronous code?

Is it truly a problem if I have blocking code if it’s not a costly method?  Will there be logic problems? Flow control issues?

Is it OK if I catch a TaskCanceledException to handle HttpClient.*Async() timeouts?  Should I refactor the code to use cancellation tokens, even though no user input is ever taken in? (The service itself doesn’t accept user input; it just processes logic.)
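For context, here’s the shape of the timeout handling in question (the service URL is made up). On this era of HttpClient, an elapsed Timeout does surface as a TaskCanceledException, so catching it isn’t a hack so much as the observable behavior:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class DownstreamCall
{
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(5)
    };

    public static async Task<string> GetOrNullAsync(CancellationToken token)
    {
        try
        {
            // URL is a hypothetical placeholder for a downstream service.
            var response = await Client.GetAsync("http://orders-service/api/orders", token);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
        catch (TaskCanceledException)
        {
            // Raised both when Timeout elapses and when `token` is cancelled;
            // token.IsCancellationRequested distinguishes the two cases.
            return null;
        }
    }
}
```

The open question is whether threading a real CancellationToken through the whole call chain is worth it when nothing upstream ever cancels.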

I’m not at all sure if I’m alone in having these questions and everyone else gets it, or if it’s not more widely addressed because async isn’t widely adopted.  I do know that in every .NET codebase I’ve seen since async was released, I haven’t seen anyone write new code using async (this is a terrible metric; don’t take it as some sort of scientific assertion, it’s just what I’ve seen).



There is no “One True Way”

Creating instructions to tell a computer to do certain things in a certain order is an exact science; you either get the instructions right (and the computer does what you want), or you get the instructions wrong (and the computer does what you told it to). There is no third way (cosmic radiation shifting bits notwithstanding).

Software Development, that is, the act of creating software to fulfill a human need, is not an exact science.  It’s not even a reproducible art (yet). If it were, we wouldn’t have so many failed projects in Waterfall, Agile, Monoliths, Microservices, TDD, Unit Testing, BDD,  RAD, JAD, SDLC, ITIL, CMMI, Six Sigma, or any other methodology that attempts to solve human problems.  If we could make it into a reproducible art, we would have already done so.

So why do we treat the act of creating software as if it’s a science? As if there is a One True Way?  We know there isn’t, since projects of all stripes succeed (and fail); and we know that as of yet, there is no one approach for success (though there are many approaches for failure).

We even do this in objectively silly things: Tabs vs. Spaces, CamelCase vs snake_case (or is it kebab-case?), ORM vs No ORM, REST vs. HATEOAS vs. RPC over HTTP, or anything else.  We do it in the form of “Style Guides” that detail exactly how the project should be laid out, as if the mere act of writing down our rules would bring us closer to successfully creating software.  We make rules that apply to all situations and then castigate other developers for breaking those rules.  Those rules bring us safety and comfort, even if they don’t make delivering software a success (or a priority).

Those rules we cling to for safety cripple us from making the best decision using the best information we have.

Style guides are beautiful things, and I believe in their efficacy.  By standardizing code, it becomes easier to change the code. There’s no cognitive load spent on the parts of the code that stick out, and that savings can be spent on fixing the actual problem at hand. But style guides can go too far. Think for a moment about your database names and class names for Data Access Objects (DAOs).  If you work in C#, they’re typically PascalCase.  In SQL Server, table names can be PascalCase with no issues (and they generally are).  But if you do that in Postgres, your C# will look horrible:

private readonly string getByMyName = "SELECT * FROM \"my\".\"mytable\" WHERE \"myId\" = @MyId AND \"MyName\" IS NOT null";

In this case, your style guide brought you consistency across databases at the expense of developer health.
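One escape hatch (assuming you control the schema) is to let each database keep its idiomatic casing. Postgres folds unquoted identifiers to lower case, so a snake_case schema makes the quoting disappear entirely; the table and column names below are hypothetical:

```csharp
public class MyTableDao
{
    // Same query against a snake_case Postgres schema: no escaped quotes
    // needed, because Postgres case-folds unquoted identifiers.
    private readonly string getByMyName =
        "SELECT * FROM my.my_table WHERE my_id = @MyId AND my_name IS NOT NULL";
}
```

The cost, of course, is that your SQL Server and Postgres schemas no longer match, which is exactly the trade-off the style guide was trying to avoid.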

But we tend to take a good practice and morph it into a bad one through misuse.  You wouldn’t believe how many times I or someone else placed too much trust in an ORM, and the next thing you know we’re outside in our underpants collecting rainwater with ponchos to survive. Invariably a rule is put into place, “No ORMs” or “Stored Procedures Only”, or some other silly rule that’s only there because the development team was pwned by a SQL injection attack due to misuse of an ORM, or hit an N+1 query problem, or something similar.

“No ORMs.” Seems silly, right?  I’ve personally witnessed it; hell, I’ve made the rule myself. And I’ve done it for the best of reasons, too:

  • Let’s not complicate our code until we understand what we’re actually building. ORMs send us down a particular path, and we don’t understand enough yet to know if that’s the path we want to be on.
  • Traditionally, ORMs handle one-to-many relationships very poorly. I’m OK with ORMs for the basic 80% of needs, but it’s the other 20% they’re terrible at.
  • Why should I ask people to learn an ORM’s syntax when SQL does quite nicely?

And I was wrong. My reasoning was sound (at least in context of the information I had at the time), but it was wrong.  What I should have said was this:

You want to use an ORM? Great, go at it.  If and when it doesn’t meet our needs, we’ll revisit the decision; until then, just make sure you use a well-supported one.

And that would have been that.  But I fell into the trap of thinking I was smarter than the person doing the work; to think that I was somehow saving them from making the same mistakes I did.

There’s really only one constant I’ve learned in creating software that succeeded, and software that failed: There is no one “True Way”. There is no style guide that will save us, no magic methodology that will somehow make your organization ship software.  There’s only the day-in and day-out grit of your team, their compassion for their users and each other, and their drive to ensure the software gets made.  There are wonderful tools to help your team along that journey, but they are neither one-size-fits-all nor magical.

They’re just tools, and they’ll work as often as they won’t.  The deciding factor in what works is you and your team.  Your team has to believe in the tools, the product, and in each other. If they don’t, it doesn’t matter what methodology you throw in front of them, it won’t help you ship software.  So the next time you (or anyone) is making rules for your team to follow, ask yourself: “Do these rules help us ship better software?”  If they don’t, fight them.  There’s too much to do to embrace bad rules.

How to fix common organizational Mistakes .NET Developers make with Microservices

Microservices have really only become possible for .NET Development with the advent of .NET Core, and because of that we have almost two decades of built up practices that don’t apply in the world of microservices.

In case you haven’t heard of Microservices, here’s a quick ten second primer on them: They’re a deployable focused on doing one thing (a very small thing, hence ‘micro’), and they communicate their intent and broadcast their data over a language agnostic network API (HTTP is a common example).

For instance, sitting in the WordPress.com editor right now, I can see maybe a dozen potential microservices (if this weren’t WordPress): a drafts microservice, notifications, user profile, post settings, publisher, scheduler, reader, site menu, editor, etc.

[Screenshot: the WordPress.com editor UI]

Basically everything above is a microservice. All those clickables with data or behavior above? Microservices. Crazy, right?

Back of the cereal box rules for Microservices:

  • Code is not shared
  • APIs are small
  • Build and deployment of each service is trivial.

So that’s code, but what about organization? What about project setup?  Those are the pieces that are as crucial to successful microservices as anything else.

In .NET Monolithic projects, we’ve spent years hammering home ‘good code organization’, with lots of namespaces, namespaces matching directories, and multiple projects.

But thinking about those rules of organization for Monoliths, when’s the last time you were able to easily find and fix a bug even in the most well organized monolithic project?  On average, how long does it take you to find and fix the bug in a monolith? (Not even that, but how long does it take you to update your code to the latest before trying to find the bug?)

The benefits of Microservices are the polar opposite of the benefits of a Monolithic application.

An ‘under the hood’ feature of Microservices is that code is easy to change. It’s easy to change because it’s easy to find, it’s easy to change because there’s not much of it, and it’s easy to change because there isn’t a lot of pomp and circumstance around changing it. In a well defined microservice, it would take longer to write this blog post than to find the issue (I’m exaggerating, but only slightly).


If you’re developing .NET Microservices, here are some points to keep in mind, to keep from falling into the old traps of monoliths:

Keep the number of directories low: The more folders you have, the more someone has to search around for what they’re looking for.  Since the service shouldn’t be doing that much, there isn’t as much need for lots of directories.

Move classes into the file using them: ReSharper loves to ask you to move classes into files that match their class names.  If your class is just a DAO/POCO, rethink that.  Keep it close to where it’s used. If you do split it into a separate file, think about keeping all of its complex types in the same file it’s in.
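A sketch of what that looks like (all names here are made up for illustration): the small POCO lives in the same file as its only consumer, instead of in its own Models/ directory.

```csharp
using System;

// Hypothetical handler: the only consumer of OrderMessage below.
public class OrderReceivedHandler
{
    public string Describe(OrderMessage msg) =>
        $"Order {msg.OrderId} received at {msg.ReceivedAt:o}";
}

// Small DAO/POCO used only by the handler above; keeping it in this file
// means a reader finds both halves of the story in one place.
public class OrderMessage
{
    public int OrderId { get; set; }
    public DateTime ReceivedAt { get; set; }
}
```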

1 Microservice, 1 .NET Project, 1 source control repository: This is a microservice. Splitting things out into multiple projects in one .sln file necessarily raises the complexity and reduces the advantages Microservices have.  Yes, it feels good to put that Repository in a different project; but does it really need to be there? (Incidentally, it’s currently impossible to publish multiple projects with the .NET Core CLI)

Code organization should be centered around easily finding code: If I can’t find what your service is doing, I may just rewrite it.  Then all that time you spent on that service organization will be gone anyway. The inner-workings of your microservice should be easy to find and work with. If they aren’t, maybe it’s doing too much?

Your build process should be trivial: If your project pulls down Nuget packages from two separate repositories, it’s time to rethink your build process.

Why are you sharing code, anyway?: Private NuGet packages are monolithic thinking; they exist to make “sharing code” easy.  But in the microservice world, you shouldn’t be sharing code, right? Duplicate it, or pull it out into its own service. Sharing it simply means you’re dependent on someone else’s code when it breaks (which is why we have microservices in the first place: so we don’t have that dependency).

Working beats elegant, every time: I love elegant code. I also love working code. Incidentally, I get paid to write working code, not elegant code.  If using a microservices based architecture allows me to move faster in development, why would I hamper that by spending time making code elegant that doesn’t need to be? There are even odds that this service won’t even exist in its current form in 6 months, let alone be around long enough for its elegance to be appreciated.

Microservices are a different paradigm for software development, in the same way agile was meant to be different than classic SDLC (Waterfall). The same thinking that built Monoliths can’t be used to build Microservices successfully. Next time you’re writing a microservice, think about what practices and inertia you have; and double check: Does this practice make sense in a Microservice?  If it doesn’t, jettison it.


Reasons you should use Microservices to build your next application

You’re looking for a new architecture for your next software project, and you’ve heard about this thing called Microservices.  It sounds cool, but you’re not sure if it’s a fit for your next project.  Use this handy checklist to decide if using Microservices is right for you.

  • It’s a dynamic new paradigm that drastically increases complexity; what’s not to love?
  • Networks are fun to troubleshoot.
  • You can now use Brainfuck in a production application!
  • You missed the XML wave and the Actor model wave; you’re not missing this one.
  • Scaling out is so much more fun than worrying about application server performance. Throw more hardware at it!
  • You have a friend who works in the server sales business and you owe them some favors.
  • How else are you going to get to put “Docker” on your resume?
  • Event-driven, disconnected, asynchronous programming was way too easy in a monolith.
  • You also have a friend in the server logging and metrics business (Is it New Relic or Splunk?) and you owe them favors (you owe a lot of people favors, don’t you).
  • You’ve taken Spolsky‘s “Things you should never Do part 1” as a challenge.  After all, you’re not rewriting the app, you’re reimagining it.
  • The Single Responsibility principle needs to go to its fanatical conclusion to finally become a reality: One line of code per service.
  • Your DI container has pissed you off for the last time.
  • Complex deployment processes mean job security.
  • “DevOps experience” makes a great resume booster.
  • 100 git repositories means never having merge issues.
  • How else would you get around the Mythical Man Month? 9 women can have 9 babies in 9 months, and they don’t even need to talk!
  • Contract testing sounds way cooler than “integration testing”.
  • What’s better than 1 REST API? 100 of them.
  • You can now force your teammates to learn Haskell (They’ll thank you).
  • You can now use the best tool for the job, even if it requires you to go through a few months of training to learn that new tool, and did I mention they only do training on Cruise ships to Tahiti? (it’s not your money you’re spending, after all).
  • Whenever someone asks how you’ll solve an architecture issue, you can always say, “That’s a future us problem”. TAKE THAT, MONOLITHS.
  • The grand total of documentation is a README in the root of each git repository.
  • Monoliths generally only have one codename; with Microservices you can have hundreds. Time to bust out that Greek mythology.
  • Of course your application needs to be able to support a distributed event queue; why is that even a question? You obviously need to scale out to billions of operations per second.


Microservices don’t sound like your cup of tea? Try Reasons you should Build a Monolith.

Reasons You Should Build a Monolith

You’re building a new software project! Congratulations! You’ll make millions and people will love you. It’s going to be awesome.

Your first question (of course) is: Should you build a monolith or use Microservices?  Great Question!

You should build a monolith if you:

  • Have only a hammer and can see everything as a nail.
  • Have already picked the language you’re going to use, right or wrong.
  • Know no one on the team can possibly learn a new language. That’s insane.
  • Enjoy contorting your language/framework to solve problems it was never meant to.
  • Enjoy building a new library or framework because of the above.
  • Believe wholeheartedly in the idea of one code repository.
  • Can’t imagine how people would ever deploy multiple code bases.
  • Enjoy the simplicity of one (progressively longer) build.
  • Code merges are so much fun.
  • Enjoy spelunking through your code to find out where you’re supposed to make that bug fix.
  • Enjoy writing the reams of documentation that will show people how to navigate the project.
  • Believe what’s good enough for Ruby on Rails, Django, and ASP.NET is good enough for your team.
  • Can’t imagine why anyone would want to write tests against an HTTP API.
  • Scoff when someone mentions a new language.
  • Are pretty sure the business requirements aren’t going to change.
  • Have been bitten way too many times by new languages and frameworks that just didn’t work out.
  • Think the idea of containers is nuts. A computer inside of a computer inside of a computer? Craziness.
  • Think the network is obviously the slowest part; Keep all the calls in process.
  • Love complicated branching strategies; maybe even owning a gitflow T-shirt.
  • Love process! Process is your friend. Code freeze, QA, UAT, deployment, change requests. No one’s getting code in without being reviewed!
  • Believe in Scaling up.  Scaling out is just expensive, and the network is slow!
  • Believe people that allow data to be duplicated throughout the system are reckless. One authoritative place for data!
  • Simple implementations are the best; no need for microservices; they’re complex.

Enjoy your newly minted monolith! It’s going to be fast. It’s going to be simple to modify. It’s going to be awesome.

How do I publish a .NET Core 1.1 application in a Docker container?

With Microsoft embracing Docker, it’s now possible to release .NET Core apps in Docker containers as first-class citizens. Instead of having to build your own base images, Microsoft has released multiple Docker images you can use.

The cool thing about these Docker images is that their Dockerfiles are on GitHub, which is quite handy if you like to create custom Docker images.  Without further ado, here’s how I set up the project’s Dockerfile, along with a build script so I could repeat the process.


FROM microsoft/dotnet:1.1.0-runtime
ARG source=./src/bin/Release/netcoreapp1.1/publish
WORKDIR /app
COPY $source .
EXPOSE 5000
ENTRYPOINT ["dotnet", "MyProject.dll"]

Let’s take it line by line:

FROM microsoft/dotnet:1.1.0-runtime says to base the new image on Microsoft’s dotnet repository on Docker Hub, using the tag named 1.1.0-runtime.

ARG source=./src/bin/Release/netcoreapp1.1/publish says to create a variable called source that holds the path ./src/bin/Release/netcoreapp1.1/publish (the default publish directory in .NET Core 1.1).  This path is relative to the build context, i.e. the directory you run docker build from.

WORKDIR /app means to create a directory called /app in the docker container and make it the working directory.

COPY $source . says to copy the files located at the $source to /app, since that directory was previously defined as the working directory.

EXPOSE 5000 tells docker to expose that port in the container so that it’s accessible from the host.

ENTRYPOINT ["dotnet", "MyProject.dll"] says the entrypoint for the container is the command: dotnet MyProject.dll.

This is roughly the same as:

CMD "dotnet MyProject.dll"

(with the difference that a CMD is easily overridden at docker run time, while an ENTRYPOINT is not).

So that’s the Dockerfile, but there are a few other steps to get a running container: first you have to make sure your ASP.NET Core application listens on something other than localhost (otherwise the container’s port mapping can’t reach it), and then you still have to publish the application, create the docker image, and run a docker container based on that image. I created a shell script to do that, but you could just as easily do it with PowerShell:

# change directory to the location of project.json
pushd ./src
# run dotnet publish, specifying a Release build
dotnet publish -c Release
# go back to the previous directory (undoes the pushd)
popd
# create a docker image tagged with the name of the project:latest
docker build -t "$SERVICE":latest .
# check to see if a container for this service already exists
CONTAINER=`docker ps --all | grep "$SERVICE"`
# if it doesn't, just run a new container
if [ -z "$CONTAINER" ]; then
  docker run -i -p 8000:5000 --name $SERVICE -t $SERVICE:latest
else
  # if it does exist, nuke it and then run the new one
  docker rm $SERVICE
  docker run -i -p 8000:5000 --name $SERVICE -t $SERVICE:latest
fi

My ASP.NET Core directory structure is set up as follows:

- src
    - project.json
    - //snip..
- tests
- Dockerfile
- build.ps1

This lets me keep the files I care about (build-wise) at the base of the directory, so that a master bootstrap file can call each directory’s build files depending on the environment. You may want to mix these, but this also allows me to keep certain files outside of Visual Studio (I don’t want it to track or care about those files).

Then, all I have to do to build and deploy my ASP.NET Core 1.1 application is run that build script, and it’ll build, deploy, replace the container if need be, and start the container.