Microservices after Two Years

At this point, I have two(+) years of experience with Microservices, and I’m not an expert, but I have some hard-earned knowledge distilled from working with them (and making lots of mistakes in the process). Here’s what I learned that I wish I had known going into it.

Microservices are not mini-monoliths

Jim Gaffigan has a rather funny skit about (American) Mexican food. Listen to it here before I butcher the punchline, which is that all Mexican food basically consists of a tortilla with cheese, meat, or vegetables. We tend to think of deployable software in that same way: it's all code, wrapped up with a deployment script, and sent to production. A monolith is an independent, complete application that fulfills a business function. So what's a microservice? An independent, complete application that fulfills a business function. So why aren't microservices just 'mini-monoliths'? Because microservices collaborate. A monolith does not rely on another monolith for its uptime, data, or resiliency. It is generally a self-contained view of the world, and by its nature it does not care whether anyone else exists. Your company's website is wholly independent of anything else. More critically, though, multiple teams may work on your company's website; they share code, branches, and a single production pipeline. A microservice, on the other hand, is an independent, complete application that fulfills exactly one business function. A monolith fulfills many.

A microservice understands that, while it is independent, there are zero or more consumers out there interested in what it has to say, and it is designed with that understanding in mind. A monolith is not, and does not have to be. Businesses eventually find out that they wish their monolith had been designed to share its information in a de-coupled fashion, but often too late to do anything about it easily.

Microservices are not mini-monoliths; they’re collaborators that operate independently when they need to.

Microservices require a different way of thinking about problem solving

Developers love to write code. We're so enamored with writing code that we'll write code even when no one needs us to. We'll write code to solve nagging problems on our own machines, or to automate silly things, or even to solve problems in our households. In fact, I have a new side project to set up a Raspberry Pi as a calendar viewer in my house. This is probably not unique to software development (though maybe it is? Do plumbers re-pipe their houses? Do electricians rewire theirs on a whim?), but the tendency is so pronounced in software development that we exhort new developers not to write code first.

… And then we ask them to work on a monolith. Monoliths make writing more code easy. It gets to the point where the default state is "find problem", "write code", "ship", without understanding whether the problem is best served by a bolt-on or add-on to the existing system. For small things this is not an immediate issue, but those small things add up, and over time it becomes a problem.

For instance, if you've ever tried to add a CSV import to an existing system, you've probably found out within days that the desired "CSV import" feature is really a "CSV + domain-specific logic" import function; or, almost as harmful, that a 'bulk' method of inserting wasn't part of the original requirements, necessitating a change in the API. In a monolith, it's really easy to write code for this functionality with baked-in assumptions that aren't clear, and to quietly change the API your system exposes or how it presents itself to the user. Because of the ease of 'just' writing code, it is easy to rush the implementation without regard to the design. Writing code quickly is not the job; solving problems without causing more problems is the job, and a monolith makes that hard to do.

A user wants to add a stock to their portfolio…

Microservices, on the other hand, require up-front planning before code is written, every time. Every new service, or any change to an existing one, could amount to completely replacing that service. Anything that has the potential to change a contract in the system (whether with the user or with other services) requires more understanding and up-front design than the same change in a monolith. To go back to our CSV import example: a potential way of doing it with microservices is to stand up a new CSV importer service that takes in a CSV file, does any domain-specific formatting, and emits an event or sends an HTTP request to the correct service, using that service's existing API for adding/importing information.

And now they want to add multiple through CSV.

Now, these services are necessarily coupled to each other (though the coupling goes in the right direction), and since the contract of the original service has not changed, its guarantees are kept intact. Microservices, done well, make it harder to break existing consumers. The trade-off is that more up-front planning is required when designing a solution in a microservices-based topology.
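Here's a minimal sketch of what that importer might look like (the route, payload shape, and endpoint names are all hypothetical; the point is that the domain-specific formatting lives in the importer, and the portfolio service's existing API goes untouched):

using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

[Route("api/csv-import")]
public class CsvImportController : Controller
{
    private readonly HttpClient _portfolioClient; // configured to point at the portfolio service

    public CsvImportController(HttpClient portfolioClient) => _portfolioClient = portfolioClient;

    [HttpPost]
    public async Task<IActionResult> Import(IFormFile file)
    {
        using (var reader = new StreamReader(file.OpenReadStream()))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                var fields = line.Split(',');
                // The domain-specific formatting lives here, not in the portfolio service.
                var stock = new { Symbol = fields[0].Trim().ToUpperInvariant(), Shares = int.Parse(fields[1]) };
                var body = new StringContent(JsonConvert.SerializeObject(stock), Encoding.UTF8, "application/json");
                // Reuse the existing single-add endpoint; its contract is untouched.
                var response = await _portfolioClient.PostAsync("api/portfolio/stocks", body);
                response.EnsureSuccessStatusCode();
            }
        }
        return Ok();
    }
}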

Domain boundaries are critical to Microservices success

There are three general flows to microservices (There may be more; but the types are escaping me right now):
1. Microservices that give new capabilities to an existing domain bounded context (the previous example of adding CSV import for a portfolio service as a separate microservice is an example of this — there are several trade-offs to doing that, and it depends on your constraints and desires)
2. Microservices that represent a stateless process (viz. validating a credit card)
3. Microservices that represent a stateful process or interaction (the portfolio service)

Notice that I said nothing about the size of these services; depending on whom you speak to, the size of a microservice is a mystery. I have opinions on this, of course, but the one invariant I've seen is that good microservices topologies ensure the lines are drawn at the domain's "bounded context". This is a fancy Domain Driven Design phrase that means to split up models and interactions by what they mean. To sales, a customer interaction is quite a different model and mode of interaction than a customer interaction is to customer support. By splitting them up by their 'context' (the boundaries being sales and customer support), the software can maintain independent ideas of how to interact with a customer depending on the context.

Martin Fowler’s Illustration of Bounded Contexts, source: https://martinfowler.com/bliki/BoundedContext.html
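To put the idea in code, here's a hypothetical sketch of the "same" customer modeled in two bounded contexts; the two shapes share an identity and diverge everywhere else:

using System;

namespace Sales
{
    // What "customer" means while we're trying to win the deal.
    public class Customer
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string LeadSource { get; set; }     // sales cares where the lead came from
        public bool SignedContract { get; set; }   // pre-sale vs. post-sale state
    }
}

namespace CustomerSupport
{
    // What "customer" means once they're calling us for help.
    public class Customer
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public int OpenTickets { get; set; }       // support cares about tickets, not leads
        public string SupportTier { get; set; }
    }
}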


For microservices, this typically means that your customer support portal will be a different bounded context than your sales funnel; even if they share the same properties of a customer (at least demographically). There are three ways to handle the above problem:

Method 1: Set up a separate, independent customer model in each service (sales, customer support); a customer created in one system is not necessarily referenced elsewhere (or it can be, via cross-referenced identifiers: customer_id, customer_support_id, sales_id).

Method 1, illustrated.

Method 2: Set up a “Customer” service, a sales service, and a customer support service, and both sales and customer support get customer information from the “customer” service.

Method 3: Set up a customer service, a sales service, and a customer support service; sales and customer support hold duplicated data (received through events) about what happens in the customer service, but maintain their own disparate models of what a customer means to them. From a system perspective the internal identifier is the same; how it's used varies from service to service. This means having a customer service that holds demographic information; a sales service that may or may not duplicate that demographic information but adds on its sales context; and a customer support service that duplicates the same information but adds on its customer support pieces.

Each method has its own trade-offs; but you can quickly see the maintenance issues with each:

  1. Method 1 has three different representations of a customer, each potentially in a different state in each service (a salesperson sees a customer before they've signed on the dotted line, while a customer support person always has a "post-sale" view of the customer). This is OK until you want sales to have the customer support information; then you need to do a bit of juggling to ensure a customer in the sales context is indeed the same customer in the customer support context.
  2. Method 2 allows there to be one representation of a customer that each service can "add on" to, but each downstream service is still beholden to the customer service; and which context does that live in? Both. There is also a temporal coupling factor, as each service "gets" demographic information from the customer service at read time.
  3. Method 3 allows each service to be de-coupled from the "customer" service. It allows each service to add its own data to what it means to be a customer, and it allows each service to change independently (since each service listens for events and updates its own model as it sees fit). But it also means having a unified contract for what defines the demographics of a customer, ensuring each service is set up to listen to events pertaining to customers, and ensuring each service appropriately handles being down when a customer event is emitted (event sourcing is a possible solution here).

None of these methods is "ideal" from an "easiest to develop" standpoint, and they have different levels of maintenance requirements. The crucial decisions a team must make are: what is the domain context? Is this <thing> I'm dealing with talked about differently depending on whom I talk to? And what is the maintenance cost of each approach?

If the team chooses method #1, they have a lot of distributed-systems problems that aren't easily solved; they've made interacting with the system harder. If they choose #2, two services depend on a third (not really 'independent' at that point), and they've added a request/response dependency between services that may not need to exist (and is harder to debug). If they choose #3, they have quite a bit of up-front work (defining contracts, defining patterns), but the maintenance work, reasoning about how one service interacts with another, debugging, and future expansion are far easier.
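As a sketch of what approach #3 can look like (all names here are hypothetical): the customer service emits a CustomerUpdated event, and the sales service folds it into its own model, layering its sales-only context on top. Support would have its own handler doing the same thing with its own model.

using System;
using System.Threading.Tasks;

// The shared, schema-defined contract (ideally code-generated; see below).
public sealed class CustomerUpdated
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// The sales service's own model: same internal identifier, extra sales-only context.
public sealed class SalesCustomer
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public decimal PipelineValue { get; set; } // customer support never sees this
}

// Hypothetical persistence abstraction inside the sales service.
public interface ISalesCustomerStore
{
    Task<SalesCustomer> FindAsync(Guid customerId);
    Task SaveAsync(SalesCustomer customer);
}

public sealed class CustomerUpdatedHandler
{
    private readonly ISalesCustomerStore _store;

    public CustomerUpdatedHandler(ISalesCustomerStore store) => _store = store;

    // Invoked whenever a CustomerUpdated event arrives off the bus.
    public async Task HandleAsync(CustomerUpdated evt)
    {
        var customer = await _store.FindAsync(evt.CustomerId)
                       ?? new SalesCustomer { CustomerId = evt.CustomerId };
        customer.Name = evt.Name;   // duplicate the demographics locally, so reads
        customer.Email = evt.Email; // never block on the customer service being up
        await _store.SaveAsync(customer);
    }
}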

Developer Tooling doesn’t support Microservices as well as Monoliths

We have about 25 years of experience as an industry creating tooling around building and deploying software, though it's really only in the last 15-18 years that the tooling has accelerated. Even so, we have pretty solid tooling around developing and debugging monoliths; debuggers and IDEs take monoliths for granted, as they probably should. If you write microservices that depend on other microservices over REST, you're going to have a bad time debugging services locally. Your choices range from standing up the parts of the system that collaborate, to mocking out external dependencies, to dockerizing the system's services so they can be stood up independently. Of course, once you do that you're diving into mixed networking land for Docker, and there's not a lot of tooling that makes the experience seamless. A service running outside of Docker that you're debugging is hard to wire up to services running inside a Docker network, and vice versa. Front-end development is even worse, as node.js is a requirement for building front-ends these days; try live-debugging with Docker for your UI where the source is kept locally. Not fun. Teams handle this problem in different ways, but the point is that the problem exists, and the solutions are not as mature as they are for debugging a monolith.

If you use microservices, you need to allocate a sizable chunk of time to building the tooling necessary to allow people to develop against those services.

Deployment requires better tooling with Microservices

Deployment considerations are key if you want a fast-moving organization. You can't respond to change without being able to change your software quickly, and even if you can develop changes quickly, if you can't deploy them quickly you aren't a fast-moving organization. Continuous Integration (CI) and Continuous Delivery (CD) are essential to being able to respond to change, but the products in this space reflect a monolithic view of the world. Source control is built for monoliths, CI/CD systems are built around them, and pretty much every commercial CD system is built with monoliths in mind. There are several deployment models where microservices are used, and none of them have good tooling for microservices:

  1. Deploy on-premises as a packaged solution
  2. Deploy to the cloud independently
  3. Deploy to the cloud as a packaged solution

If you sell your product to customers and they run it in their own data center, deployment method #1 is what you often deal with: your solution must be packaged up and deployed together as a single unit. Should that necessitate developing as a monolith? No, it shouldn't. However, if you have microservices, you necessarily have multiple deployable artifacts (whether they're contained in a mono-repository (all services in one source control repository) or micro-repositories (each service in its own source control repository) is a separate matter), and your CD pipeline must take that into account. The trade-offs change between micro-repositories and mono-repositories, but they remain problems unsolved by current tooling. For instance: tagging master or a release branch with what is in production, your promotion model for different internal environments, and even local deployments all need to be taken into account by the tooling. If you choose method #2 and combine it with continuous delivery, some of those trade-offs go away, as you can make a rule that the latest in master is always pushed to internal promotion environments, and the only tag happens after a particular commit has been pushed to production; but again, the tooling to make this a seamless experience is still lacking.

Microservices deliver on the promises of Object-Oriented Programming

I didn't understand the hype of object-oriented programming. I understood the fundamentals of encapsulation, abstraction, inheritance, message dispatching, and polymorphism, but I didn't understand why they were so useful. (I started with Perl and then moved to Java, so I had nothing to compare Java's OO nature to; at the time it just seemed like more work to do the same things I could do in Perl. Ahh, youth.) The SOLID principles helped later on, but I always felt like there was more hype to OO than actual benefit. After several jobs maintaining and creating object-oriented solutions, I was convinced that object-oriented programming was a pipe dream: for the 80% of us who are not "expert" programmers, it is a fad we can never make full use of, and it causes more harm than good.

That was until I started researching microservices. This was it! A fully independent object with agency that could collaborate with others, but with encapsulation ensured! The Open/Closed Principle was a requirement! Single responsibility was almost guaranteed just by the nature of the service (it says "micro" on the tin)! Inheritance was far simpler: consume what a service gives you and modify it to suit your needs (the CSV example above). You couldn't share information unless you had a common contract and used some sort of message dispatching!

This was absolutely huge for me. All of the principles I'd been trying to bring to reality for years in codebases I've worked on were here, and best of all, without the downsides of OOP in practice. It's really easy, when modifying code, to do something that breaks encapsulation, and business pressures make it even easier. With microservices, that is no longer possible. Sure, other business-induced pressures might cause problems, but they can't alter the contract of a service, and that allows the system to be reasoned about in ways OOP only promised. Perhaps best of all, microservices put up guard rails that keep the mistakes of OOP from happening, and we're all better for it.

Contracts, Patterns, and Practices should be Code generated

If you do something once, do it manually. If you do it twice, write down the steps, and do it manually. By the third time, automate it. Producing even a dozen services means either manually enforcing the structure of contracts (the format by which services communicate with each other or with the user), patterns (how you structure common infrastructural concerns), and practices (how you write software), or code-generating them for commonality. If you don't code-generate, entropy wins: even across features, services start to do the same thing in different ways, or you find a new pattern for structuring your events, and depending on which service you're in, you see a different pattern. It's untenable from a development and maintenance perspective.

Method #3 above shows a world where the customer service emits events when a customer is added or updated, allowing interested services to listen for changes and update their own data stores as necessary. Without code generation this would be a tedious process filled with error. With code generation and schema-defined models, it's a viable development model.

Can you imagine trying to update any model/contract without code generation?


There are only two sane paths: package the commonalities into utility libraries (which can really only be done for shared dependencies), or code-generate everything.

Packaging utility classes/models (like the customer model and the events above) is a valid approach. The concerns with it are taking on dependencies (even internal ones), the overhead of internal infrastructure, and the fact that every service would be required to be in the same programming language.

The latter path (code generation) is exactly what Michael Bryzek advocated in his talk Designing Microservices Architectures the Right Way, and having tried the other paths (packaging common functionality, and doing it manually), I can see its utility. The trade-off, of course, is that developing the code-generation tooling is a heavy investment of time. It requires the discipline to build that tooling first without trying to develop features, and it would likely result in no visible movement on the things the business cares about (features, revenue, etc.). In exchange, as long as you have tooling that supports a language, you can implement those models in any language you'd like.
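As an illustrative sketch (my own toy example, not Bryzek's actual tooling): the contract lives in one schema definition, and the generator emits the same model into every service that needs it.

// customer-events schema (abbreviated, JSON Schema-ish, hypothetical):
//   { "name": "customer_updated",
//     "fields": [ { "name": "customer_id", "type": "uuid" },
//                 { "name": "name",        "type": "string" },
//                 { "name": "email",       "type": "string" } ] }

// <auto-generated>
//   Generated from the customer-events schema. Do not edit by hand;
//   regenerate instead, so every service agrees on the same contract.
// </auto-generated>
using System;

public sealed class CustomerUpdated
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}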

You can’t punt on non-functional requirements

There are lots of non-functional requirements in a system that never appear on the roadmap, are never spoken about at the sales meetings, and are only tolerated by the product manager. Things like: a user should be signed out after fifteen minutes; the authorization system should incorporate roles and location; some data is transient and not part of the backup strategy, while other data needs to be backed up every minute; the system must allow 5,000 concurrent users at a time. These are non-functional requirements: qualities of the system that aren't part of the user-facing features being developed.

In a monolith, there are typically very few places to go to implement a non-functional requirement, and as we've discussed previously, IDE tooling is built for the refactoring necessary to ensure a change takes place everywhere it's needed (at least for statically typed languages; the dynamic folks have their own problems to contend with). Even if you have to implement a new feature, there's generally one place to do it.

Not so with microservices. If you implement authorization, you must implement it across all services. If you implement a timeout, you must implement it across all services. Unless your microservices are spread across hosts, any performance improvement must take into account that each service may share host resources with one or more other services. If each service uses the same server instance (i.e., every service that uses Postgres shares a Postgres server instance, even if each has its own database within it), then performance tuning and backups must take that into account. This greatly complicates performance tuning and dealing with non-functional requirements, and for the system to be easily built, those non-functional requirements need to be known at the beginning! Every delay in implementing a non-functional requirement makes it more likely that disparate changes will need to be made across several services, and that takes much longer once the services are built.
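For example, the fifteen-minute sign-out rule might be a small piece of ASP.NET Core middleware; a sketch, with a hypothetical claim name, is below. In a monolith it's registered once; with microservices, every service has to carry it (one more argument for code-generated service templates).

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Rejects requests whose authentication is older than fifteen minutes.
// In a monolith this is registered in one Startup; with microservices,
// every single service must include it.
public class SessionTimeoutMiddleware
{
    private static readonly TimeSpan Timeout = TimeSpan.FromMinutes(15);
    private readonly RequestDelegate _next;

    public SessionTimeoutMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        var issuedAt = context.User?.FindFirst("auth_time")?.Value; // hypothetical claim
        if (issuedAt != null &&
            DateTimeOffset.UtcNow - DateTimeOffset.Parse(issuedAt) > Timeout)
        {
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }
        await _next(context);
    }
}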

Event Driven Programming makes microservices work

In firmware programming, the finite state machine and events got me through the day. Each peripheral has separate states, triggered by events that may come from user input or from other peripherals (for instance, seeing a Bluetooth advertisement from a whitelisted address may trigger a connection). Since firmware by and large sits on a single-core System-on-Chip with limited or no threading, an event loop plus finite state machines is one of the best ways to make firmware work.

Finite state machines coupled with event-driven programming also have other nice properties that parlay well into microservices: events ensure each service is de-coupled from the others (there are no direct request/responses between services), and a finite state machine dictates what happens based on the current state of the service plus its input. This makes debugging a matter of knowing which state the service is in and what input it received. That's it. This greatly reduces the complexity of standing up and debugging services, and allows problems to be de-composed into events and states. If you add event sourcing into the mix, you have an event stream that records the events that occurred, so playing back issues is as simple as replaying events.

This is possible because microservices operate on network boundaries. In a monolith you're forced to debug the entire monolith at once, and hope no one wrote code with disastrous side-effects that are impossible to find through normal means. It's easier to find a needle in a small jar of needles than in a giant haystack, and that's possible because of the observable boundaries of microservices, combined with patterns that limit the amount of complexity that can lead to any given state.
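To make "state plus input" concrete, here's a toy sketch of a portfolio service modeled as a finite state machine (the states and events are invented for illustration):

public enum PortfolioState { Empty, Active, Closed }

public sealed class StockAdded { public string Symbol { get; set; } }
public sealed class PortfolioClosed { }

public sealed class Portfolio
{
    public PortfolioState State { get; private set; } = PortfolioState.Empty;

    // The next state is purely a function of (current state, event), so debugging
    // is exactly two questions: which state were we in, and what arrived?
    public void Handle(object evt)
    {
        switch (State)
        {
            case PortfolioState.Empty when evt is StockAdded:
                State = PortfolioState.Active;
                break;
            case PortfolioState.Active when evt is PortfolioClosed:
                State = PortfolioState.Closed;
                break;
            default:
                break; // the event doesn't apply in this state: log it and move on
        }
    }
}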

If you're going to start writing microservices, I highly recommend going down the path of event-driven programming, state machines, and some sort of event stream (even if you decide against event sourcing).

Choosing between REST and Events for supporting Microservices is tougher than you may think

If you've read the fallacies of distributed computing, then this section almost writes itself. Microservices are distributed systems, no matter how you shake it. One of the major problems when communicating across a network boundary is: is that service down, or am I just having a network timeout? If you're using REST, this means implementing the circuit-breaker pattern with some sort of timeout. It also means that if your services communicate with services that communicate with services through REST, the availability of that chain will eventually hover just above zero (00:00-12:31). As the video rightfully says, don't do that. I'd go so far as to say that, if at all possible, don't make calls to other services through REST.
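In C#, a library like Polly expresses that pattern concisely; here's a sketch of a timeout wrapped in a circuit breaker (the URL and thresholds are made up):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.Timeout;

public static class PortfolioClient
{
    private static readonly HttpClient Client = new HttpClient();

    // Give up on any single call after 2 seconds...
    private static readonly IAsyncPolicy Timeout =
        Policy.TimeoutAsync(TimeSpan.FromSeconds(2), TimeoutStrategy.Pessimistic);

    // ...and after 3 consecutive failures, fail fast for 30 seconds rather
    // than stacking requests up against a service that may be down.
    private static readonly IAsyncPolicy Breaker = Policy
        .Handle<HttpRequestException>()
        .Or<TimeoutRejectedException>()
        .CircuitBreakerAsync(exceptionsAllowedBeforeBreaking: 3,
                             durationOfBreak: TimeSpan.FromSeconds(30));

    public static Task<HttpResponseMessage> GetHoldingsAsync(string customerId) =>
        Policy.WrapAsync(Breaker, Timeout).ExecuteAsync(
            () => Client.GetAsync($"http://portfolio-service/holdings/{customerId}"));
}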

If you need data, have the owning service publish an event, and consume that event. This sounds great: it's de-coupled, and it's resilient to failure. However, each service must now have the means to publish to a bus, consume events off a bus, and support whatever serialization scheme you want to use. Oh, and now you need to be able to debug all of the above. If you want runtime resiliency, you must sacrifice development simplicity to get there.
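For illustration, here's roughly what that plumbing looks like with the RabbitMQ .NET client (the exchange, queue, and routing-key names are hypothetical):

using System;
using System.Text;
using Newtonsoft.Json;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class CustomerEventsBus
{
    public static void Run()
    {
        var factory = new ConnectionFactory { HostName = "rabbitmq" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare("customer-events", ExchangeType.Topic, durable: true);

            // Publish: the customer service announces that something happened...
            var evt = new { CustomerId = Guid.NewGuid(), Name = "Ada Lovelace" };
            var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(evt));
            channel.BasicPublish("customer-events", "customer.updated", null, body);

            // ...and the sales service consumes it to update its own model.
            channel.QueueDeclare("sales.customer-updated", durable: true,
                                 exclusive: false, autoDelete: false);
            channel.QueueBind("sales.customer-updated", "customer-events", "customer.updated");

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, args) =>
            {
                var json = Encoding.UTF8.GetString(args.Body);
                // deserialize and upsert into the local store here
                channel.BasicAck(args.DeliveryTag, multiple: false);
            };
            channel.BasicConsume("sales.customer-updated", autoAck: false, consumer: consumer);
        }
    }
}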

Maintaining Microservices requires strong organizational and technical leadership

“The business” does not care what the topology of your system is. They don’t care about its architecture, and they don’t care how easy it is to maintain, any more than you care whether they use Excel or QuickBooks for forecasting. The business wants two things (really it’s n of 9 things, but work with me here):

  1. Increase Revenue
  2. Reduce costs

They believe more features will increase revenue. It's a fair belief (correlation does not imply causation), but more features also increase development costs. To "the business", the way to solve this problem is not by reducing the costs, but by increasing revenues. Again, this is also fair, and in a good number of cases it's the right path.

Earlier, I mentioned that microservices keep those nasty shortcuts that cripple development teams from happening. That's a good thing; but to the business, it can also be a bad thing. See, that crippling shortcut may never happen, but adding that feature (to their way of thinking) will increase revenue. If they have to choose between helping revenue while possibly hurting future maintenance, or delaying a feature by several weeks to help future maintenance, they'll pick the path to fastest revenue, every time.

The people who keep this from happening are hopefully the organization's CTO and engineering leadership (VP or Director of Engineering, the architect, and the senior leaders of the team). They're the people with the cachet and experience to know when a shortcut is going to hurt future maintenance, and they hopefully know enough to know it's probably not a sure revenue bet either. But this requires discipline and trust on the part of the engineering leadership team. They must have gained the trust of the business by delivering what the business wants in the timeframe it wants, and they must be disciplined enough to stick to their guns. If someone says, "Well, we could do this in a week if we just hooked Service A up to Service B's database," and gets their way, you have now failed with microservices and are maintaining a future monolith. You've also lost the advantages of working with microservices.

Shortcuts are easy to say yes to, and shortcuts can greatly endanger the maintainability and health of a development team and the system.

Microservices are a technical solution to an organizational problem

While developers and consultants tend to espouse microservices in a cloud scenario, they tend to ignore that microservices are orthogonal to their deployment scenario, and orthogonal to technology stacks. Take away all the technical advantages of microservices, and you're still left with a topology that allows you to segment teams along domain boundaries and have those teams operate independently of one another. At a small enough scale, you could even have individuals own services and scale feature creation out to the number of people in your development organization. The Mythical Man-Month states that adding people to a late project makes it later, because those people have to communicate with each other. What if they didn't? Or what if you could reduce the amount of communication needed to ship a feature? Microservices let you do that. (I fall firmly in the micro-repository camp as well, so I'm about to conflate the two on purpose.) Microservices development means independent repositories, and fewer issues with merge conflicts, branching, or the collaboration needed to push out a particular feature. It also means fewer avenues for a feature to clash with existing features, since by definition the service is independent and autonomous. It means fewer parts to reason about, and that results in faster development time.

Microservices (when architected well) let you go faster and further than you otherwise could, with less need to put organizational guardrails on the development team (code reviews, gated check-ins, code freezes) to resolve team performance issues. They minimize the effect a single developer can have on the whole system. This is a great benefit if the organization does not hire well or pay well (and if every organization did, we'd have a low turnover rate in software development), as it substitutes technology for some of the human training and improvement that organizations should do but don't.

If you have all top-notch performers in a high-performing engineering organization within a high-performing business with no turnover, you don't need microservices; you're not going to make the mistakes that microservices would prevent. If, however, your organization consists of humans who are fallible, microservices provide a benefit to development that monoliths cannot.

Closing

Microservices are another tool to help make software development better and systems easier to maintain. They provide many benefits and many trade-offs relative to traditional monoliths, and it's rarely clear up front whether a system should be developed as a monolith or as microservices. Several factors can steer the choice toward one or the other, but those factors depend greatly on the individuals, organizational leadership, business model, constraints, and politics of the organization implementing those services.

These are the things I wish I had known when I started with microservices. What do you wish you had known about Microservices before working with them?

Note: Special thanks to Adam Maras for spending part of his weekend giving me feedback on this post.

Starting Again

As I typically do when I'm working remotely, I was getting an early start when I received a message from my manager in Slack: "I need you in the office today." This was unusual, and since I was dealing with a water heater leak, the earliest I'd be able to get into the office was after lunch; I said as much.

Naturally (and probably as a defense mechanism), I tweeted about it.

I went into the office after lunch, and yada yada yada, I resigned.

I tend to fully immerse myself in my work. Emily knows this and is very supportive, and also gently informs me when I've gotten in too deep. This has happened several times in my career, and is probably best described as a personal flaw wrapped up as a short-term blessing to an employer. Let's be very real: it's not healthy, it's destructive. The signs are the same for me each time:

  1. I find something I can identify with; be it a product, or the people, or the mission.
  2. I get the dopamine hit from belonging to that product/people/mission ("thing").
  3. I identify with that thing.
  4. I start to work more and more to ensure that thing is successful.
  5. I get in over my head, but since I've developed a reputation and set expectations that I'm always available, I can't just ease up overnight.
  6. I put my own interests outside of work aside, and work consumes my every waking thought. It is not a stretch to say that I'm literally working, or thinking about work, in every waking moment.
  7. My spouse and kids realize this.
  8. At this point (as things often do), something goes amiss on the work side too (success is not linear).
  9. Things start to fail (small things), the dopamine hits stop coming, and frustration mounts.
  10. I burn out and become rather useless in my day-to-day (or night-to-night, since it affects my home life too).
  11. Rinse, lather, repeat (often in the same organization, sometimes changing organizations).

I've known for a long time what I want out of life: financial freedom. I want to see my kids grow up. I want to enjoy each moment of my life. I don't want to live to work; I want to work to live. This may seem foolish, I know, but think about any time you've worked for a company whose direction changed, and suddenly the thing you identified with no longer existed. Would you still wish you "lived to work" then?

I am fortunate to have time on my hands now, with the impending birth of our third child, to make my dream happen. And so my plan:

  1. Produce a business plan for what I want to do (short term: bring in revenue to fund my long-term plan of building a software business).
  2. Execute on that plan, revising as circumstances change.
  3. Long term, build a software product business that is a great place for developers to work, and that sustains itself and the lifestyle I want. I'm not looking to cash out; I'm looking to be moderately successful, and on the occasion that I'm lucky enough to employ people, I want them to be successful too. I don't know what the software or the product looks like yet, but I do know what I want the business to be like.

With that in mind, I'm currently executing step 1. My wheelhouse is helping solve the problems businesses have, and my specialty is using software to do it (or not using software, where that's the better answer). This neatly aligns with my former job title of "Solutions Architect" and with the work I'm most used to doing. I'm shopping that around to see if there's a market for an independent solutions architect focused on solving the problems of small-to-medium businesses. If it feels vague and fuzzy right now, that's probably because it is; I am actively working to shape this vision into a market niche. If you've got any advice or feedback on this, please reach out. I'd love to hear from you.

I'm also brushing up on my networking skills. Networking is something I've neglected for a long time, not because I didn't want to do it (everyone wants to meet and know people, right?) but because it didn't align closely with the successes I was having as a full-time employee and individual contributor. That is, as they say, a myopic view of the world.

With that, and with this newfound time, I want to give back in any way I can, whether that's mentoring, quick coffee chats (virtual or otherwise), or matching up developers and recruiters. I have a network of contacts on both the recruiting and programming sides; if you're a recruiter looking for developers, or a developer looking for a good position, I can help. As they say, my DMs are always open.

How do I compile Razor Views in .NET Core (CSProj)?

In .NET Core (project.json SDK Tooling), compiling Razor views was rather simple. Add this to your project.json:

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0-preview1-final",
    "Microsoft.AspNetCore.Razor.Tools": {
      "version": "1.1.0-preview4-final",
      "type": "build"
    },
    "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Design": {
      "version": "1.1.0-preview4-final",
      "type": "build"
    }
  },
  "tools": {
    "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Tools": {
      "version": "1.1.0-preview4-final"
    }
  }
}

And add this to the postpublish section:

"dotnet razor-precompile --configuration %publish:Configuration% --framework %publish:TargetFramework% --output-path %publish:OutputPath% %publish:ProjectPath%",

And it was done.

But with the CSProj version of .NET Core, they didn't go back to the old CSProj method of doing that (would that have been too simple?); rather, Microsoft introduced a new way to compile Razor views.

To add Razor compilation to .NET Core (CSProj edition), there are two things you need to add:

  1. A DotNetCliToolReference for a specific version of the CodeGeneration tools.
  2. A new PropertyGroup containing the flag needed to compile Razor views.

For the DotNetCliToolReference, add this to your .csproj:

    <ItemGroup>
      <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0-msbuild3-final" />
    </ItemGroup>

And the Compile flag later:

  <PropertyGroup>
    <MvcRazorCompileOnPublish>true</MvcRazorCompileOnPublish>
  </PropertyGroup>

And with that, each publish should include a compilation of razor views in the output.

I've found shockingly little documentation for .NET Core tooling (CSProj), but I did find this information in an issue linked to the aspnet/templates project on GitHub.

A note about the CodeGeneration tools: even though 1.0.0.0 is a later version than msbuild3-final, I tried 1.0.0.0 and it didn't work, but msbuild3-final did. I spent enough time monkeying with this that trying to reproduce it to submit a bug report seems like wasted time. I'm going to call it a victory and say "just use msbuild3-final." Cargo-culty? Probably, but I've spent too much time getting .NET Core tooling to work to spend more time reproducing the issues.

Five Surprises after using .NET Core for six months

I’ve been working on .NET Core for the last 6 months; from .NET Core 1.0.1 and .NET Core 1.0.0 SDK – Preview 2 (build 003131) to .NET Core 1.1.1 and .NET Core 1.0.1 SDK.

Here’s a short list of things that surprised me:

What the hell is up with the versioning? Read my above sentence, and pay attention to the version numbers. If you think they're related to one another, you're wrong. (At least, I think you are. I'm not sure if I'm wrong or not.) Lest you think I'm weird, this has been brought up as a common issue on .NET Core's GitHub page (I got tired of posting links; there are many, many more). In fact, here's a handy chart of the versioning as it existed until recently (and remember, "LTS" means "Long Term Support", which for some strange reason appears right next to the phrase "Outdated"):

(screenshot: the .NET Core versioning chart)

They did a good job in the above chart (they were smart to leave out SDK version numbers), but they included version numbers in the actual release notes, and I'm not sure which way is up:

(screenshot: the .NET Core release notes, listing runtime and SDK version numbers side by side)

Are you using .NET Runtime 1.1.1? If so, should you use SDK 1.0.1 or SDK 1.0.3? Remember, you can use .NET Runtime 1.0.4 with SDK 1.0.3 too.

No, those version numbers are not in sync, and good luck figuring out which SDK tooling is supported on Visual Studio for Mac, VS Code, or Visual Studio itself (hint: anything past SDK build 3177 is probably not supported on VS 2015).

Unit tests don't work. Or they do work, until they don't. Our team started out with XUnit, then found out that XUnit wasn't supported with all versions of .NET Core, and that it wasn't well supported by ReSharper with certain versions of the SDK tooling; so we switched to NUnit, only to find out that now that we want to upgrade to the RTM SDK tooling, NUnit doesn't work. In short, the test runner that worked before doesn't work now, and the one that didn't work before mostly works now (unless you want to debug in Visual Studio).

Oh, and MS Test probably always worked.  (Except it didn’t).

There is a graveyard of OBE blog posts about .NET Core SDK tooling bugs. Which version of the .NET Core tooling you're using determines which answers on the internet are useful to you, so much so that old blog posts (from 2016, mind you) are already out of date and won't help you with your problem, even though they're atop Google's results. They've been Overcome By Events; in this case, the event was Microsoft. What happened? Microsoft decided to retain backwards compatibility (I think) with MSBuild, so project.json was jettisoned in favor of .csproj.

Versioning problems even come into play when talking about the .NET Runtime vs. the .NET Core Runtime. Quick: does .NET Core have XSL support? It has XML support, but what about XSL? No? When will that be coming? .NET Standard 2.0. What's .NET Standard 2.0, you ask? GREAT QUESTION:

(screenshot: the .NET Standard version compatibility chart)

It's not often I say this, but could the Microsoft .NET team just adopt month/year as their versioning moniker? It'd be easier to determine whether two things are supported together.

So the same release of .NET Core 1.0 works with .NET Standard 1.0 through 1.6; how is that possible, you ask? I have no idea. In fact, if I continue to look at this chart I may start drinking early.

Does your favorite library support .NET Core? Probably not. .NET Core support has a bunch of blockers for libraries, and it doesn't look like they'll all get resolved before .NET Core 2.0 is released (or is that .NET Standard 2.0? It's both this time, but that's happenstance). As for porting to .NET Core? It's most likely a rewrite until .NET Core 2.0 arrives, and I still think they need to be more explicit in Step 5:

(screenshot: Microsoft's recommended steps for porting to .NET Core, ending with Step 5)

Overall, I'm glad that I've gotten to work on .NET Core, but given that I've spent a non-trivial amount of time over the past six months wrestling with these issues, I'm not even certain what performance issues will crop up from running .NET Core on Linux (Docker). That'll be for a future blog post, I'm sure.

Some Questions I have about Async/Await in .NET

I've been writing a new project using a microservices-based architecture, and during the development of the latest service I realized that it needs to communicate with no fewer than seven other microservices over HTTP (it also listens on two particular queues and publishes to two message queues).

During one iteration, it could potentially talk to all seven microservices, depending on the logic path taken. As such, there is a lot of time spent talking over the network, and in a normal synchronous .NET Core application, a lot of time spent blocking while that communication happens. To keep the blocking from slowing down its responsiveness to the rest of the system, I ported it from a synchronous to an asynchronous microservice. This was a feat, and it took the better part of a day (for a microservice, a day feels like a really long time). Along the way, I ran into several places where I have questions but no answers (or at least no firm understanding as to whether or not my answer is right), so I'll post those questions here. You'll find no answers here, only questions:

How far do I need to go down the async rabbit hole?

If you're writing a .NET Core microservice, chances are you're doing JSON serialization/deserialization. Since JSON.NET doesn't have async methods, our options are to leave each call synchronous, or use Task.Run() to make it async. Take deserializing this payload:

{
  "owner": {
    "reputation": 41,
    "user_id": 6223870,
    "user_type": "registered",
    "profile_image": 
    "https://www.gravatar.com/avatar/e1d1beda042e5faa2177c415da848307?s=128&d=identicon&r=PG&f=1",
    "display_name": "Harshal Zope",
    "link": "http://stackoverflow.com/users/6223870/harshal-zope"
  },
  "is_accepted": false,
  "score": 0,
  "last_activity_date": 1493298802,
  "creation_date": 1493298802,
  "answer_id": 43658947,
  "question_id": 39165805
}

sync:

var owner = JsonConvert.DeserializeObject(jsonstring);

async:

await Task.Run(() => JsonConvert.DeserializeObject(jsonstring));

Since Microsoft recommends CPU-bound work be put in a task, at what point should that occur? Are small serializations/deserializations like the one above CPU-bound? Are big ones? Where is the threshold? How do you test for it?

If you don't put code inside an async method in a Task.Run, what happens? If it depends on previous code, it'll run in order; but what if it doesn't? Does it run immediately? Besides the nanoseconds of blocking, is there any other reason to care whether everything inside an async method is awaitable?

How do you deal with synchronous libraries in asynchronous code?

RabbitMQ's .NET client famously does not support async/await (as an aside, is there no pressure to convert it to async because no one is using async, or because no one is using RabbitMQ in .NET?), and you'll even get errors in some places if you try to make the code async. They put it in their user guide:

Symptoms of incorrect serialisation of IModel operations include, but are not limited to,

  • invalid frame sequences being sent on the wire (which occurs, for example, if more than one BasicPublish operation is run simultaneously), and/or
  • NotSupportedExceptions being thrown from a method in class RpcContinuationQueue complaining about “Pipelining of requests forbidden” (which occurs in situations where more than one AMQP RPC, such as ExchangeDeclare, is run simultaneously).

And Stack Overflow's advice isn't helpful; the answer to "How do I mix async and non-async code?" is "Don't do that." In another Stack Overflow post, the answer is, "Yea, you can do it with this code."

What's the answer? Don't do it? Keep your entire service synchronous because the message queueing system you use doesn't support async? Or do you convert, and implement that work-around for the pieces of code that need it?

Why is it that, after five years, the adoption of async seems negligible? Unlike some languages, where you have no choice but to embrace async, C# as a culture still seems to treat async as a second-class citizen, and the vast majority of blog posts I've read on the subject cover topical and contrived uses, without digging deeper into the real pitfalls you'll hit when you use async in an application.

SynchronizationContext: when do I need to care about it, and when do I not? Do I only care about it when it's being used inside an object with mutable state? Do I care about it if I'm working with a pure method? What is the trigger I can use, when learning, to tell whether I need to worry about it?

It's my experience (and partially assumption) that awaitable code that relies on other awaitable code will automatically wait to execute until it has the value it needs from that other awaitable code; is this true across the board? What happens when I intermix synchronous and asynchronous code?

Is it truly a problem if I have blocking code in a method that isn't costly? Will there be logic problems? Flow-control issues?

Is it OK to catch a TaskCanceledException to handle HttpClient.*Async() timeouts? Should I refactor the code to use cancellation tokens, even though no user input is ever taken in? (The service itself doesn't accept user input; it just processes logic.)
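For reference, the pattern I'm asking about looks like this; in this era of .NET, HttpClient surfaces its own timeout as a TaskCanceledException (the URL is made up):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class OtherServiceClient
{
    private static readonly HttpClient Client =
        new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

    public static async Task<string> GetThingAsync()
    {
        try
        {
            var response = await Client.GetAsync("http://other-service/api/thing");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
        catch (TaskCanceledException)
        {
            // No user-supplied CancellationToken exists here, so this is
            // (almost certainly) the client's own timeout firing.
            return null;
        }
    }
}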

I'm not at all sure if I'm alone in having these questions and everyone else gets it, or if they're not more widely addressed because async isn't widely adopted. I do know that in every .NET codebase I've seen since async was released, I haven't seen anyone write new code using async (this is a terrible metric; don't take it as some sort of scientific assertion, it's just what I've seen).

There is no “One True Way”

Creating instructions to tell a computer to do certain things in a certain order is an exact science; you either get the instructions right (and the computer does what you want), or you get the instructions wrong (and the computer does what you told it to). There is no third way (cosmic radiation shifting bits notwithstanding).

Software development, that is, the act of creating software to fulfill a human need, is not an exact science. It's not even a reproducible art (yet). If it were, we wouldn't have so many failed projects in Waterfall, Agile, Monoliths, Microservices, TDD, Unit Testing, BDD, RAD, JAD, SDLC, ITIL, CMMI, Six Sigma, or any other methodology that attempts to solve human problems. If we could make it into a reproducible art, we would have already done so.

So why do we treat the act of creating software as if it's a science? As if there is a One True Way? We know there isn't, since projects of all stripes succeed (and fail), and we know that, as of yet, there is no one approach for success (though there are many approaches for failure).

We even do this in objectively silly things: tabs vs. spaces, CamelCase vs. unix_case (or is it unix-case?), ORM vs. no ORM, REST vs. HATEOAS vs. RPC over HTTP, or anything else. We do it in the form of "style guides" that detail exactly how the project should be laid out, as if the mere act of writing down our rules would bring us closer to successfully creating software. We make rules that apply to all situations and then castigate other developers for breaking those rules. Those rules bring us safety and comfort, even if they don't make delivering software a success (or a priority).

Those rules we cling to for safety cripple us from making the best decision using the best information we have.

Style guides are beautiful things, and I believe in their efficacy. By standardizing code, it becomes easier to change: there's no cognitive load spent on the parts of the code that stick out, and that savings can be spent on fixing the actual problem at hand. But style guides can go too far. Think for a moment about your database names and class names for Data Access Objects (DAOs). If you work in C#, they're typically PascalCase. In SQL Server, table names can be PascalCase with no issues (and they generally are). But if you do that in Postgres, where mixed-case identifiers must be quoted everywhere, your C# will look horrible:

private readonly string getByMyName = "SELECT * FROM \"my\".\"mytable\" WHERE \"myId\" = @MyId AND \"MyName\" IS NOT null";

In this case, your style guide brought you consistency across databases at the expense of developer health.

But we tend to take a good practice and morph it into a bad one through misuse. You wouldn't believe how many times I've run into an issue where I or someone else placed too much trust in an ORM, and the next thing you know we're outside in our underpants collecting rain water with ponchos to survive. Invariably a rule is put into place, "No ORMs" or "Stored Procedures Only", or some other silly rule that's only there because the development team was pwned by a SQL injection attack due to misuse of an ORM, or hit by an N+1 problem, or something similar.

"No ORMs." Seems silly, right? I've personally witnessed it; hell, I've made the rule myself. And I've done it for the best of reasons, too:

  • Let's not complicate our code until we understand what we're actually building. ORMs send us down a particular path, and we don't understand enough to know if that's the path we want to be on.
  • Traditionally, ORMs handle one-to-many relationships very poorly. I'm OK with ORMs for very basic needs, but it's the other 20% they're terrible for.
  • Why should I ask people to learn an ORM’s syntax when SQL does quite nicely?

And I was wrong. My reasoning was sound (at least in the context of the information I had at the time), but it was wrong. What I should have said was this:

You want to use an ORM? Great, go at it.  If and when it doesn’t meet our needs, we’ll revisit the decision; until then, just make sure you use a well-supported one.

And that would have been that. But I fell into the trap of thinking I was smarter than the person doing the work; of thinking that I was somehow saving them from making the same mistakes I did.

There's really only one constant I've learned from creating software that succeeded and software that failed: there is no One True Way. There is no style guide that will save us, no magic methodology that will somehow make your organization ship software. There's only the day-in, day-out grit of your team, their compassion for their users and for each other, and their drive to ensure the software gets made. There are wonderful tools to help your team along that journey, but they are neither one-size-fits-all nor magical.

They're just tools, and they'll work as often as they won't. The deciding factor in what works is you and your team. Your team has to believe in the tools, the product, and each other; if they don't, it doesn't matter what methodology you throw in front of them, it won't help you ship software. So the next time you (or anyone else) are making rules for your team to follow, ask yourself: "Do these rules help us ship better software?" If they don't, fight them. There's too much to do to embrace bad rules.

How to fix common organizational Mistakes .NET Developers make with Microservices

Microservices have really only become possible for .NET development with the advent of .NET Core, and because of that we have almost two decades of built-up practices that don't apply in the world of microservices.

In case you haven't heard of microservices, here's a quick ten-second primer: a microservice is a deployable focused on doing one thing (a very small thing, hence "micro"), and it communicates its intent and broadcasts its data over a language-agnostic network API (HTTP is a common example).

For instance, sitting in the WordPress.com editor right now, I can see maybe a dozen candidate microservices (if this weren't WordPress): a drafts service, notifications, user profile, post settings, publisher, scheduler, reader, site menu, editor, etc.

(screenshot: the WordPress.com editor, with all of its widgets visible)

Basically everything above is a microservice. All those clickables with data or behavior above? Microservices. Crazy, right?

Back of the cereal box rules for Microservices:

  • Code is not shared
  • APIs are small
  • Build/deployment of that service should be trivial.

So that's the code; but what about organization? What about project setup? Those pieces are as crucial to successful microservices as anything else.

In .NET monolithic projects, we've spent years hammering home "good code organization": lots of namespaces, namespaces matching directories, and multiple projects.

But thinking about those rules of organization for monoliths: when's the last time you were able to easily find and fix a bug, even in the most well-organized monolithic project? On average, how long does it take you to find and fix a bug in a monolith? (Not even that: how long does it take you to update your code to the latest before even trying to find the bug?)

The benefits of Microservices are the polar opposite of the benefits of a Monolithic application.

An 'under the hood' feature of microservices is that code is easy to change. It's easy to change because it's easy to find, it's easy to change because there's not much of it, and it's easy to change because there isn't a lot of pomp and circumstance around changing it. In a well-defined microservice, it would take longer to write this blog post than to find the issue (I'm exaggerating, but only slightly).

If you're developing .NET microservices, here are some points to keep in mind to keep from falling into the old traps of monoliths:

Keep the number of directories low: the more folders you have, the more someone has to search around for what they're looking for. Since the service shouldn't be doing that much, there isn't as much need for lots of directories.

Move classes into the files that use them: ReSharper loves to ask you to move classes to filenames that match their class names. If your class is just a DAO/POCO, rethink that; keep it close to where it's used. If you do split it into a separate file, think about keeping all of its complex types in the same file with it.

1 microservice, 1 .NET project, 1 source control repository: this is a microservice. Splitting things out into multiple projects in one .sln file necessarily raises the complexity and reduces the advantages microservices have. Yes, it feels good to put that repository in a different project, but does it really need to be there? (Incidentally, it's currently impossible to publish multiple projects with the .NET Core CLI.)

Code organization should be centered around easily finding code: if I can't find what your service is doing, I may just rewrite it, and then all that time you spent on that service organization will be gone anyway. The inner workings of your microservice should be easy to find and work with. If they aren't, maybe it's doing too much?

Your build process should be trivial: if your project pulls down NuGet packages from two separate repositories, it's time to rethink your build process.

Why are you sharing code, anyway?: private NuGet packages are monolithic thinking, there to make "sharing code" easy. But in the microservice world, you shouldn't be sharing code, right? Duplicate it, or pull it out into its own service. Sharing it simply means you're dependent on someone else's code when it breaks (which is why we have microservices in the first place: so we don't have that dependency).

Working beats elegant, every time: I love elegant code. I also love working code. Incidentally, I get paid to write working code, not elegant code. If using a microservices-based architecture allows me to move faster in development, why would I hamper that by spending time making code elegant that doesn't need to be? There are even odds that this service won't even exist in its current form in six months, let alone be around long enough for its elegance to be appreciated.

Microservices are a different paradigm for software development, in the same way agile was meant to be different from classic SDLC (Waterfall). The same thinking that built monoliths can't be used to build microservices successfully. The next time you're writing a microservice, think about what practices and inertia you carry, and double-check: does this practice make sense in a microservice? If it doesn't, jettison it.