Myths and facts about TDD, Unit Testing, and Team Velocity

The air around TDD has a lot of myths mixed in with the reality, and those myths generally come from secondhand, blog-post-level learning about TDD and unit testing. In a way, it’s a bit like Plato’s cave: we hear about the experiences others have had with TDD and unit testing, and we try to picture those experiences solely through what we hear.

Here are some of the myths (and latent facts) about TDD, Unit Testing, and how they interact with team velocity.

Myth: Test Driven Development Slows Teams Down

Reality: It’s true that when you first start using TDD, you’ll go slower than if you didn’t, but that’s only temporarily true. Think of it this way: if I develop a feature in 1 hour but spend 6 hours debugging it, that’s worse than spending 6 hours developing the feature through TDD and 0 hours debugging it. Once a feature written through TDD gets to test, I’m confident in it. If I hadn’t built the tests first, I wouldn’t be nearly as confident (or really confident at all) that the tester won’t find an issue. After six hours of debugging, it’s quite likely I missed something and am too far into the weeds to realize it, but a fresh set of eyes (our friendly neighborhood QA) will likely find it.

Babies crawl before they walk, and they walk before they run; but no one ever suggests babies should start out running. There are too many fundamentals that get missed if you try that.

Another reality is that we all take it for granted that ‘hardening’ and ‘bug fixing’ sprints occur, and that they occur far too often, by necessity. What you may not realize is that that time counts against development too. It’s an opportunity cost: if you have to spend entire sprints “hardening” or “bug fixing”, then you aren’t able to spend those sprints delivering features and value. You’re effectively shoring up the value you’ve already tried to create.

Instead of doing that, why not build the feature through TDD?

Myth: Test Driven Development is a Testing Methodology

Reality: TDD is a development methodology. It’s a way to develop software to achieve the following:

  • Code that is designed to be testable
  • Code that is well understood
  • Code that is well specified
  • Code that is easy to change
  • Consistent and constant delivery

The fact that there are tests is almost an accident of speech. When Test Driven Development was created, tests were the primary way to assert that behavior worked a certain way, but we could just as easily have called it ‘example driven development’. TDD’s purpose is not to create tests; it’s to create code that’s easy to test and easy to change, and to allow its creators to be confident when changing the code.

It is not a replacement for the normal QA process, nor for system-level tests, though it can cut down drastically on the number of paths integration tests need to take to be useful. This is especially true of the brand of TDD I teach: FauxO (see my last post on the subject).

Test Driven Development ensures teams of humans can work together to deliver better software, faster than if they hadn’t used TDD.
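To make the “development methodology” point concrete, here’s a minimal red-green sketch. The names and the discount rule are hypothetical, and I’m sketching in Python for brevity; the same shape applies in C# with xUnit or NUnit. The test is written first, to specify the behavior, and the implementation follows.

```python
# Hypothetical example: the test below was written *before* apply_discount
# existed. It specifies the behavior (discounts are capped at 50%) first;
# the implementation is then the simplest code that makes the test pass.

def test_discount_is_capped_at_50_percent():
    assert apply_discount(price=100, percent=80) == 50
    assert apply_discount(price=100, percent=20) == 80

def apply_discount(price, percent):
    # Simplest implementation that satisfies the specification above.
    return price - price * min(percent, 50) / 100

test_discount_is_capped_at_50_percent()
```

The point isn’t the assertion library; it’s that the behavior was specified before any code existed to implement it.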


Myth: Unit Tests are a Development Methodology

Reality: Unit tests are a testing methodology, not a development methodology. The difference is subtle but important. You don’t create unit tests to determine the path software ought to take; you create them after the software exists to verify it does what you think it does, to double-check your work.

TDD, on the other hand, is a development methodology. It’s a way of developing software that puts the tests first and the implementation after. This allows you to specify the behavior you expect before the behavior itself is written. Unit testing takes the opposite approach: the tests are written after the code.

The reality is that when you create unit tests after the code is written, the tests are more brittle, and they’re necessarily coupled to every dependency the code has already accumulated.
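To illustrate the difference, here’s a sketch with a hypothetical UserService (Python for brevity; this is not anyone’s production code). A test written after the fact tends to pin down *how* the code works; a test written first pins down *what* it does.

```python
from unittest.mock import Mock

# Hypothetical service and in-memory repo, just to make the contrast concrete.
class UserService:
    def __init__(self, repo):
        self.repo = repo
    def is_active(self, user_id):
        user = self.repo.find_by_id(user_id)
        return bool(user and user.get("active"))

class InMemoryUserRepo:
    def __init__(self, users):
        self.users = users
    def find_by_id(self, user_id):
        return self.users.get(user_id)

# Written after the code: pinned to *how* the service works. A harmless
# rename of find_by_id breaks this test even though behavior is unchanged.
def test_after_the_fact():
    repo = Mock()
    repo.find_by_id.return_value = {"id": 1, "active": True}
    UserService(repo).is_active(1)
    repo.find_by_id.assert_called_once_with(1)

# Written first: pinned to *what* the service does.
def test_behavior():
    repo = InMemoryUserRepo({1: {"id": 1, "active": True}})
    assert UserService(repo).is_active(1) is True

test_after_the_fact()
test_behavior()
```

The second test survives refactoring; the first one fails the moment an internal detail changes.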

Unit Tests have their place — though I (and others) argue they should be shown the door in favor of good TDD practices and integration tests that don’t have to traverse every path through the application.

Myth: Integration Tests + Unit Tests are Good Enough

Reality: If you develop code with unit tests and integration tests, you’ll run into two problems pretty often:

1. Your integration tests necessarily have to cover lots of different paths through the application if your application wasn’t developed through TDD. This is untenable. J.B. Rainsberger famously argued that “Integrated Tests Are a Scam”, and it’s partly because of how we write code.

2. Your unit tests (again, written without TDD) are bound to the implementation, and they have to deal with all the dependencies you created, because the pain of adding a dependency wasn’t apparent until you tried to write unit tests after the fact. That means if your implementation changes or your caller changes, your tests could very well fail even though nothing of substance changed. A common smell that this is the case is extensive use of mocks and stubs.

In reality, integration tests plus unit tests are treated as good enough, when together they’re about the most painful way you can test and develop software. (I believe in integration tests, but they should be spared from having to traverse thousands of codepaths; that should be handled by code that’s been developed through TDD.)

Myth: The Goal of TDD is 100% test coverage

Reality: This is another one of those instances where ‘test’ was an unfortunate characterization for TDD. If some test coverage is good, and more is better, why not go for the most? 100%?

Besides being infeasible (frameworks and icky real-world structures like the network, disk, databases, and the system clock get in the way), there is a point of diminishing returns, and those returns hit right about the time you try to deal with the network, disk, databases, and system clock in an extensive fashion. This is why “Outside-In” TDD comes across as brittle to change: the mocks and stubs replace real-world implementations, and we frequently find ourselves wanting to change those implementations.

Also, since TDD is a development methodology, it’s not trying to be a testing methodology. Testing is a useful by-product of TDD, but it isn’t the goal. The goal is code that is built to be easily changeable and well understood, with its behavior specified.

Myth: You should aspire for 100% test coverage

Reality: If you work with very expensive equipment, the value of 100% test coverage outweighs its cost. If your software deals with safety or security, that’s also true. If you’re writing a banking back-end, it’s very likely true. But for the rest of us, you want enough test coverage that you’re confident making changes, and not so much that the tests get in the way of making changes (again, brittle tests make code hard to change).

Fun or Frustration?

You ever bowl as a kid? It was fun: they put down these giant tubes in the gutters, and you could just… bowl.

Around 13 or 14, it was no longer cool to use the gutter protectors, and I started bowling for real.

I sucked. Bigtime.

That is to be expected, right? I feel like if I bowled every day now, for 8 hours a day, I’d get to the point where I’d be good at the game, as good as I was when I had the gutter rails to help me.

I have been programming for 20 years now, and I gotta say, very few times has programming felt as effortless as bowling with the gutter rails up.

In one recent case, I’m helping a client modernize their enterprise web applications, and they assembled pieces from the Angular Material CDK to compose their workflow. One such piece was the stepper: think a “wizard” for Angular. The problem was, due to the way the application had been constructed, each step of the form was actually a child component that needed to communicate back to its parent that it was dirty, lest a user navigate to another step and lose their input data.

Fixing that issue didn’t feel fun, and it wasn’t. It meant fighting the framework every step of the way.

Too much programming is like that these days; it feels more like bowling without the gutter rails on and sucking at it.

If you put the framework first, you have to do things the framework’s way. When you’re bowling, you have to learn how to hold the ball and use your arm and wrist to give it the right amount of spin, and you have to build the muscles necessary to do that consistently. It’s not an easy process.

The big difference between bowling and relying on a framework’s opinions for your architecture is that bowling techniques still work if you pick up a new ball. If you pick up a new framework, you effectively have to start all over, learning that new framework’s opinions.

The times I’ve been happiest programming are when I could specify the solution without worrying about how a framework felt about it. Uncle Bob et al. speak of this as putting ‘scar tissue’ between your code and your framework, and treating frameworks as frenemies to your continued success.

There are two major ways and one minor way to program using TDD (technically there’s another which is entirely functional, but given that my audience is C# and .NET developers, I’ll leave that one aside):

Outside In (London School of TDD): No scar tissue between you and the framework; you mock and stub out any form of Network IO, State, or Disk IO.

Inside Out (Detroit School of TDD): You focus solely on your solution, and take what amounts to a devil-may-care attitude to the world outside of the code you own.

FauxO (popularized by Gary Bernhardt’s talk “Boundaries”): This method takes an “inside out” approach similar to the Detroit School’s, but it uses value objects at the boundaries and only TDDs to the boundary of the framework, at which point it switches over to integration tests. The framework bits are generally not unit tested.
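Here’s a sketch of what FauxO can look like in practice. The shipping domain is hypothetical, and I’m using Python for brevity (in C#, records make natural value objects): the decision logic is a pure function over immutable value objects, so it can be driven by TDD without touching any framework.

```python
from dataclasses import dataclass

# Hypothetical shipping domain, FauxO-style: immutable value objects cross
# the boundary, and the decision logic is a pure function over them.

@dataclass(frozen=True)
class Order:
    total: float
    country: str

@dataclass(frozen=True)
class ShippingQuote:
    cost: float
    free: bool

def quote_shipping(order: Order) -> ShippingQuote:
    # Pure "functional core": no IO, no framework, trivially TDD-able.
    if order.country == "US" and order.total >= 100:
        return ShippingQuote(cost=0.0, free=True)
    return ShippingQuote(cost=9.99, free=False)

# The "imperative shell" (an HTTP handler, a DB call) would translate the
# framework's request types into these value objects at the boundary, and
# would be covered by a few integration tests rather than unit tests.
assert quote_shipping(Order(total=150.0, country="US")) == ShippingQuote(0.0, True)
```

Because the core takes values in and returns values out, its tests need no mocks, no stubs, and no framework bootstrapping.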

After years of trying to make both the London and Detroit schools work in production applications, I decided to focus solely on FauxO, going deep on it as a method to develop production applications using TDD without all the pain and discomfort brought on by the other approaches.

And you know what? It’s made programming feel as fun as bowling with the gutter rails did to 11-year-old me, and that’s a good feeling.

What problems have you run into adopting one of these schools of TDD? Reply below and tell me about them.

Ego v. Confidence.

I would be outright lying if I said I didn’t have an ego attached to what I do.

I take pride in software that works; in a beautiful solution to a hard problem.

I take pride in seeing my work in the world.

I take pride knowing that I’ve solved a lot of hard problems in my career, and I feel prepared to do it again.

There are two problems here.

The first is eloquently illustrated by this meme:

The second is far more subtle:

When we’re developing software in a team, those small bits of ego add up. Because it’s not just one person on the team that thinks that way: everyone tends towards that about something.

The problem with the ego here is that it gets in the way of true confidence in your code. Ego is great; without it we may never have tried to get better. But the progress your team makes comes from true confidence, not from ego.

What’s the difference?

If I have tests that prove how the code works (and doesn’t work!) with certainty, that isn’t ego. I ‘have the goods’ to back it up, so to speak.

Contrast that with the case where there aren’t tests in place.

You, of course, know it couldn’t possibly be your code that’s causing the problem, right? That is, until it turns out to be your code that causes the problem.

For the record, this happened to me just a week ago: there’s a particularly hard-to-test aspect of Angular (a form/component unloading after a failed? save) that has been causing trouble, and because it exists at the boundaries, it’s particularly insidious to test, and it happens only intermittently in certain environments. All this means that while my ego believes it couldn’t possibly be something I wrote, deep down I don’t have the confidence to assert that, because there aren’t tests in place around it.

You probably know where I’m going with this. There’s a hierarchy to actual confidence in your system, in descending order. Incidentally, confidence here works inversely to ego.

1. Systems built through TDD (high confidence, low need for ego)
2. Systems covered with unit tests and integration tests
3. Systems covered with integration tests
4. Systems covered with no tests (low confidence, high need for ego)

Tests can give you confidence, but their confidence comes from the foundation they were built on. When that confidence isn’t there, ego takes over, and that can be bad for a software team.

P.S. For the foreseeable future, I’m migrating my in-person TDD immersion training for .NET software teams to online training. This training helps ensure the entire team is operating from the same worldview with regards to TDD and tests. It’s a chance for your team to learn TDD together, and learn where it works and doesn’t work. It’s a chance to build team trust and confidence both in themselves and their software. It’s a chance to help your team hit their deadlines without all of the panic that usually ensues. If this sounds like something your team could use to help it build better software, faster, go to https://www.doubleyourproductivity.io/paid.html to learn more.

The Divide Between Developers and QA

One common refrain I’ve seen in almost every team I’ve been a part of is that there’s a divide between QA and developers. It manifests itself in something like this:

We need a process where code isn’t just ‘thrown over the wall’.

We can’t spend time fixing that and trying to make our deadline.

We have QA who will find those issues. Let’s focus on adding features.

It’s harder for us to test this if you don’t document how it works.

Why can’t we just build a test automation framework? That’d solve our problem.

Why can’t we just have developers manually run through the test cases to help QA?

Testing is everyone’s responsibility.

We don’t have time to test now, we’ll take on that risk for delivery, we’ll test after the release is done.

If we don’t spend time focusing on quality, our stakeholders won’t trust us to deliver quality software.

And the list goes on. This tension is really common. I’ve never seen a team completely free of this tension, and I can’t help but think it’s a by-product of how we develop software.

We develop software to ‘get features out’ without much care for methodically planning each part and how it relates to the whole. And on the other hand, when we do plan methodically, we go too slowly and end up abandoning the plan halfway through the project.

QA, on the other hand, has a sole mandate: Make sure any problems a user encounters are found before the user encounters them.

Now, imagine your team changes how it develops software, without sacrificing velocity. Imagine a team where:

Developers develop software against a specification of tests; where edge cases can be added to those tests trivially.

Developers document how their API works through those tests (this is a good first pass, but is not sufficient on its own)

Developers respond to what QA finds by adding new test cases that cover the bug.

QA is able to focus solely on customer-facing problems and system-wide testing, not logic or environmental errors

QA and developers are able to communicate over a common way of setting up tests: arranging the data to be tested, acting out that test, and asserting what’s true; and those tests live in source control near the code they affect.

Developers and QA feel like they’re part of the same team because testing truly is everyone’s responsibility. Developers use TDD to ensure their designs are extensible, their logic is organized, and their expectations for the code are spelled out. QA can rely on this test output to understand which parts of the system developers have tested, and see the holes for themselves, with confidence that someone has run tests against this code before.
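That shared arrange-act-assert vocabulary can be as simple as this hypothetical cart example (sketched in Python; the structure is identical in any xUnit-style framework). QA can read the test and know exactly what was set up, what was done, and what must be true.

```python
# Hypothetical cart domain, just to show the Arrange-Act-Assert shape.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))
    def total(self):
        return sum(price * qty for _, price, qty in self.items)

def test_total_sums_line_items():
    # Arrange: set up the data under test.
    cart = Cart()
    cart.add("widget", 2.50, qty=2)
    cart.add("gadget", 10.00)
    # Act: perform the behavior.
    total = cart.total()
    # Assert: state what must be true.
    assert total == 15.00

test_total_sums_line_items()
```

A tester who has never seen the implementation can still read this and point out the edge case it’s missing.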

The world I describe is not a pipe dream. If your team embraces Test Driven Development, you’re able to cultivate the conditions that make this world possible.

I offer team immersion training in TDD for .NET teams; it helps developers and QA alike see the world from each other’s perspectives and helps close the divide between them.


What I’ve been up to

Since the Starting again blog post from January 2019, I’ve gone through several activities and phases:
1. Unsettling Ennui (I’d say depression, but this is public, and I’m not sure we’re ready for that conversation).
2. Hearing Colin Powell’s quote in my ears at least once per day: “never let your ego get so close to your position that when your position fails, your ego goes with it.”
3. Meeting with the local SCORE mentor and developing a business plan
4. Some freelancing in .NET and .NET Core.
5. Refining that business plan and trying to figure out how to niche down, who to serve, and how to serve them.
6. Discovering The Business of Authority podcast (and subsequently Ditching Hourly), and binging those.
7. Talking with potential customers and refining who I’m serving, why I’m serving them, and how I’m serving them.
8. Figuring out that “Solutions Consultant” is an utterly bland term that isn’t useful on its own.
9. Making a map of what I’m good at, what I like to do, and what there is an existing market for, and what the gaps are in that market.
10. Reaching out to friends and people I know to ask their thoughts and to re-establish those connections.
11. Having a coaching call with Jonathan Stark, where we homed in on what I could offer: helping software teams double their productivity. That left just the ‘how’ (inside baseball: I initially decided to focus on process improvements as a way to help teams go faster, but abandoned that because I don’t <3 JIRA *that* much).

I’m probably forgetting something; but it’s been a year.

Around January of 2020 (a full year later!) I finally finished the exercise of figuring out what I’m good at, what I like doing, and what there’s a market for that matches what I want to do: helping software teams build better software, faster.

The decision came from a conversation I had with Brent Ozar a year ago(!). If you don’t know Brent, he specializes in making SQL Server faster and more reliable, and he has been doing this sort of work for at least 10 years now (longer, but I think he’s been in business for himself for the past ten years). He and I were on a call and he was giving me some advice when I expressed that I didn’t really know what to offer. He said (paraphrased), “Whatever it is. Make sure you like it. A lot. Because you’ll be teaching/talking/reading about it, every day, forever.”

So when I ran down the list of things that help software teams build better software, faster, I thought about processes (requirements analysis, branching/merging, team practices, architectural practices, TDD), tools (git, JIRA, testing frameworks), and development methodologies (agile practices such as XP, Scrum, developer methodologies like TDD), and I realized that I have extensive experience in failing and succeeding with TDD, that there’s a gap in how it’s taught and how it’s presented, that it really is a huge enabler of team productivity, and that learning TDD doesn’t require a change in a team’s process culture to implement.

In other words, it’s one of the practices that a team can implement on its own, without trying to drive the larger organizational cultural change that scrum and XP require.

I’ve practiced TDD across teams and organizations, have seen where it fails and where it succeeds, and I believe it’s a good way for software teams to level-up how they build software. I believe TDD helps organizations build software sustainably, without death-marches brought on by feature explosion, and allow teams to make changes to software more quickly with confidence.

These are not new benefits of TDD. What is ‘new’ (though even this is 8 years old) is how teams should learn TDD. One of my chief complaints with how TDD is taught is that it falls apart on any application larger than a todo-list. If you practice ‘outside-in’ TDD, mocking and stubbing make your “units” brittle, causing lots of pain and changes whenever a caller changes how it arrives at a particular result. If you practice inside-out TDD, it’s far too easy to create units that have a “can’t see the forest for the trees” problem, and they run into issues as soon as you hit the boundaries of any application – the UI, Network IO, Disk IO, or the framework you’re using.

Up until this point, teams have largely either shunned TDD, adopted it only in small places, or tried to adopt it across the application and, through a patchwork of integration tests and TDD’d code, been exhausted by the whole process. Simply put, it’s easier not to adopt TDD.

There is a solution for all this, and while I heard about it in 2012, I needed to experience it and use it for myself, and so I did. It wasn’t until last year that I realized, if we’re going to get teams and businesses to adopt TDD, we need to teach it in a way that makes it sustainable. We need to teach this approach, and more importantly, show how it works (as it is a rather large departure from how we develop software now).

And so, starting in earnest this last January, I began focusing my efforts on this problem: finding ways to help teams adopt TDD practices that will help them deliver better software, faster. The ‘faster’ part sounds trite, I know. That’s because the different stakeholders in a software project have different priorities. The delivery manager wants features yesterday. The development team doesn’t want a death march. Neither wants 11th-hour bugs and a stressful launch. And yet, that happens all too often.

I believe that adopting TDD (particularly the FauxO variety described above) will help software teams and businesses that rely on project-based work to create a better experience for themselves and their customers. And since the majority of my experience is doing so in a .NET-centered world, that’s who I’m serving: .NET project teams (think internal IT teams and software development consultancies), and I’m serving them through on-site .NET TDD immersion training and remote TDD mentoring and coaching.

As an end note: if this interests you, you can subscribe to my daily emails in that nice input box below; you’ll also receive emails related to the intersection of productivity, .NET software teams, and TDD.

What costs do your team face?

Do you ever have the situation where someone on your team says,

“I’m not the best person to investigate that bug, Alice knows that part of the system better”?

“I’m blocked until Bob is available to take a look at this, as he was the original author”

“Trevor wrote that part, and since he’s left, we don’t have a good understanding of what it does, so fixing it will be hard.”

If you were to dig deeper, what would you think the problem was?

a) The entire team is not trained up on the whole code base

b) There is lots of siloed work, causing gaps in knowledge

c) There are human egos at play, not wanting to step on each other’s feet (or on their code)

d) There isn’t good enough documentation to help developers learn the code base

So, which one is it?

It could be all of them.

How do you fix it?

You could require pairing on work; that would be a slow investment, with a long term payoff (but typically worth it, no matter what)

You could have lunch and learns about parts of the code base; though this sounds only slightly more enticing than watching paint dry.

You could “require” code reviews by two or more random people; and assign metrics to how successful those were; though that would have its own problems (I don’t recommend this, but I’ve seen it happen).

You could require developers to write documentation, or hire a BA or technical writer and require them to write it. This is only slightly more successful than pushing a string, and about as useful to developers in learning the system.

Or you could do something else entirely. Something enticing to developers, useful to everyone in de-siloing knowledge, and something that shuts down the “only Trevor understands that part well enough” refrain. It could even provide a usable level of developer domain documentation.

The problems you hear from your people are symptoms: your system isn’t built to be shared. It isn’t built to be accessible. It isn’t built to be resilient against turnover.

One of the many benefits of adopting Test Driven Development is that you’re adopting a change that helps your organization become resilient to turnover and to bottlenecks in knowledge transfer, and one that allows developers across your team to understand parts of the system they have no familiarity with.

Such a change doesn’t happen overnight. It takes work and practice, and it’s as much a cultural change in how your team approaches solving a problem as it is a technical strategy.

But the real question is: Can you afford the cost of turnover, the cost of knowledge silos, and the cost of showstopper bugs at the last minute before delivery? The real alternative to Test Driven Development is to do nothing, and you already know that cost. You’ve experienced it.

To help teams achieve their full potential and to lower the cost of change while increasing team happiness and productivity, I now offer TDD immersion training and mentoring for .NET Teams.

Using TDD to actually test edge cases

I haven’t told this story yet, but I probably should soon. However because it’s part and parcel of what I’m talking about today, I need to spill the lede a little bit:

Test Driven Development is not about testing.

Sorry, had to say it. It’s called Test Driven Development because you write tests to help you iterate more quickly and solidly than you otherwise might have. Since I’m doing a lot of home DIY projects, I’ll use a DIY metaphor. TDD is like the jig you create to hold wood in place, whether to do your own architectural shingle siding, to automatically space studs at 16 inches and keep them in place while driving a screw, or to hold your door in place while you’re installing it. TDD is the set of jigs you create to tell whether the shiplap you’re installing is plumb and level in a 1950s home where nothing is plumb and level. (I wish I could say I hadn’t personally done all of those projects.)

One of TDD’s unsung strengths is the ability to be used to do all sorts of positive and negative testing that would have been more difficult with unit tests alone.

When I’m using TDD, I’ll often use it to send syntactically valid but domain-nonsense inputs to a method. I’ll use it to explore the bounds of a given problem by writing tests for as many different conditions as I can think of. A TDD ‘purist’ may think of that as wasteful, but what better time to do it than before the code leaves development?
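For example, here’s what probing with valid-but-nonsense inputs might look like, sketched in Python. The money-transfer rule and its cap are hypothetical assumptions for illustration, not any real system’s policy: a negative amount is a perfectly valid integer, but it’s domain nonsense, and TDD makes it cheap to pin that down.

```python
# Hypothetical domain rule: amounts are integer cents, must be positive,
# and (an assumed business cap) no single transfer may exceed $1,000,000.
def validate_transfer_cents(amount: int) -> int:
    if amount <= 0:
        raise ValueError("transfer must be positive")
    if amount > 100_000_000:
        raise ValueError("transfer exceeds single-transfer cap")
    return amount

def expect_rejection(amount):
    # Helper: feed in a syntactically valid but domain-nonsense value
    # and report whether the domain rule rejected it.
    try:
        validate_transfer_cents(amount)
        return False
    except ValueError:
        return True

assert expect_rejection(0)        # zero: valid int, nonsense transfer
assert expect_rejection(-500)     # negative: valid int, nonsense transfer
assert expect_rejection(10**10)   # absurdly large: exceeds the assumed cap
assert validate_transfer_cents(2_500) == 2_500  # typical case passes through
```

Each of those negative cases is a question you get to ask (and answer) before QA or a user ever does.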

Try to Defer Decisions to the Last Responsible Moment

Think back to the last few projects your team worked on. How did you spend your ‘sprint 0’ (or in waterfall your ‘requirements analysis’ phase)?

Did you spend it deciding on the technology stack your team would use? The Database, the front end framework, the message queueing stack, the server side framework, the language?

Or did you spend it building out the first feature to get feedback?

Chances are, you spent it making technology choices, but not really building anything.

Think back to the decision to choose the tech stack up front, or the database up front, or anything else that became a pillar of your application up front.

Did you know enough about the project and its intended purpose to be able to make those decisions, confident that it wouldn’t bite you later?

Put another way:

Why do we make the hard decisions up front, when we know the least? We will never know less about a project than we do in the first iteration. As project teams, we’re in precisely the worst possible place to make lasting decisions about the technology stack or framework or database.

You will never know less about what you’re working on than you do right now. Tomorrow you know more, and the next day you’ll know more than that, and so on. At some point, you’ll be able to make the hard decisions with confidence and the least risk possible.

Would you rather have a development strategy that helps you show progress while helping you defer the risky decisions, or a development strategy that requires you to have knowledge you can’t have at the beginning of a project?

Do you know the name of the development strategy that systematically helps you show progress early and often, and defer risky decisions until the risk is mitigated?

Test Driven Development.

Aim Small, Miss Small

There was something about movie epics from the 90s. They had action-packed sequences, everything turned out OK in the end, and they didn’t try to stretch their source material into three movies. (Luckily The Matrix was only one movie; I hope they do make sequels.)

In The Patriot, there’s a scene where Mel Gibson and his sons ambush a British squad, and he reminds them to “aim small, miss small.” Having never grown up around marksmanship or hunting, I didn’t understand the phrase at the time. Now, though, I get it.

If you aim at a large target, you may miss that large target and hit nothing. If you aim for a small part of that target, and you miss, you’ll still likely hit the larger target.

It’s the same way with TDD. If you try to TDD something large, you’ll likely end up with a jumble of code that tries to do too much. If you instead stop, take very small steps, and aim for very small results, you’ll get those results; and even if you miss, you’ll still have hit your target area.

As a practical example: if your application exposes a custom search feature, you may be tempted to try to TDD the whole feature at once, from the user’s perspective. This is commonly referred to as “outside-in” TDD, or the London School of TDD. Instead, break the problem down and use TDD for only a small part of it; for instance, the syntax of a search query. In fleshing out the syntax (a very small part of overall search), you’ll find answers and questions and flesh out requirements you might never have seen if you had focused on search as a whole.
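Here’s what that first small step might look like, sketched in Python (the query grammar here is hypothetical): TDDing just the tokenizing of a query into field filters and free-text terms, with no search engine anywhere in sight.

```python
# Hypothetical search-query grammar: "author:bob error" means a field
# filter (author = bob) plus one free-text term ("error").
def parse_query(query: str):
    filters, terms = {}, []
    for token in query.split():
        if ":" in token:
            field, _, value = token.partition(":")
            filters[field] = value
        else:
            terms.append(token)
    return filters, terms

# The first tests you'd write, driving out the grammar one rule at a time:
def test_bare_terms():
    assert parse_query("timeout error") == ({}, ["timeout", "error"])

def test_field_filter():
    assert parse_query("author:bob error") == ({"author": "bob"}, ["error"])

test_bare_terms()
test_field_filter()
```

Each new test (quoted phrases? escaped colons? empty queries?) surfaces a requirements question long before the feature touches a UI or a database.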

Please stop recommending Git Flow!

Git-flow is a branching and merging methodology popularized by a blog post entitled “A successful Git branching model”.

In the last ten years, countless teams have been snookered by the headline and, dare I say, lied to.

If you read the blog post, the author claims they successfully introduced it in their projects, but purposefully doesn’t talk about the project details that made it successful.

And for the rest of us, trusting the blog post is mistake #1. I’ll claim it as a truism that not all strategies work in all situations, with all people, in all contexts, and I apply that same logic to this branching model.

The end, right? Well, not quite. I can tell a few of you are unconvinced by this line of reasoning, so let’s dig deeper into why the gitflow branching model should die in a fire.

GitFlow Is Complicated on Its Face

Even before you think about Microservices, or continuous delivery, gitflow is complicated. Take a look at this image and tell me it’s immediately intuitive:

(source: https://nvie.com/posts/a-successful-git-branching-model/ )

So here you have feature branches, release branches, master, develop, a hotfix branch, and git tags. These are all things that have to be tracked, understood, and accounted for in your build and release process.

Beyond that, you also need to keep track of which branch is which, all the time. The mental model you need to retain for this to be useful carries a high cognitive load. I’ve been using git for 10 years now, and I’m still not sure I could mentally keep up with what’s going on here.

Gitflow violates the “Short-lived” branches rule

In git, the likelihood of merge conflicts on a branch grows with the number of people committing to that branch. With git-flow, that likelihood grows even faster, because three other kinds of branches (of varying lifetimes) merge into develop: feature branches, release branches, and hotfixes. The opportunities for merge conflicts no longer grow linearly; they can potentially triple.

No thank you.

While I hesitate to say “worrying about merge conflicts” alone is a valid reason not to pursue a branching strategy like gitflow, the amount of complexity introduced when all these branches come together is too much to overlook. This would be fine in an organization with a low commit velocity, but for any appreciably fast-moving organization or startup, that won’t be the case.

Gitflow abandons rebasing

I recognize rebasing is a complex topic, but it’s important to this conversation. If you pursue gitflow, you’re going to have to give up rebasing. Remember, rebasing does away with the merge commit, the point where you can see two branches coming together. And with the visual complexity of gitflow, you’re going to need to visually track branches, and that means no rebasing if you want to unwind a problem.
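To make the trade-off concrete, here’s a small sketch (assuming `git` is installed and on PATH; the repo, file names, and commit messages are all invented for illustration) that builds the same two-branch history twice: once merged, once rebased. The merge leaves a merge commit you can see in the log; the rebase produces linear history where the branch point is gone.

```python
import os
import subprocess
import tempfile


def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout


repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("checkout", "-b", "main", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)


def commit(filename):
    """Create a file and commit it."""
    with open(os.path.join(repo, filename), "w") as f:
        f.write(filename)
    git("add", filename, cwd=repo)
    git("commit", "-m", f"add {filename}", cwd=repo)


commit("base.txt")                          # shared starting point
git("checkout", "-b", "feature", cwd=repo)
commit("feature.txt")                       # work happens on the branch...
git("checkout", "main", cwd=repo)
commit("main.txt")                          # ...while main moves on in parallel

# Merging preserves the branch point: history gains a two-parent merge commit.
git("merge", "--no-ff", "feature", "-m", "merge feature", cwd=repo)
merged_log = git("log", "--oneline", cwd=repo)

# Rebasing instead replays the branch's commits on top of main: no merge
# commit, linear history, and the visual record of the branch is gone.
git("reset", "--hard", "HEAD~1", cwd=repo)  # undo the merge for the demo
git("checkout", "feature", cwd=repo)
git("rebase", "main", cwd=repo)
rebased_log = git("log", "--oneline", cwd=repo)

print("merge feature" in merged_log)   # the merge commit is visible
print("merge feature" in rebased_log)  # after rebase, it never existed
```

Gitflow depends on that visible branch structure, which is exactly what the rebased, linear history throws away.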

Gitflow makes Continuous Delivery Improbable

Continuous delivery is a practice where the team releases directly into production with each “check-in” (in reality, a merge to master), in an automated fashion. Look at the mess that is gitflow and explain to me how you’re going to be able to continuously deliver *that*?

The entire branching model is predicated on a predictable, long-term release cycle, not on releasing new code every few minutes or hours. There’s too much overhead for that; not to mention one of the central practices of CD is to roll forward with fixes, while Gitflow treats hotfixes as a separate entity to be carefully preserved, controlled, and separated from other work.

Gitflow is impossible to work with in multiple repositories

With the advent of microservices, there’s been more of a push towards the idea of micro-repos as well (cue commenter shouting “they’re orthogonal to each other”), where individual teams control their own repositories: who checks in to them, and how their workflows operate.

Have you ever *tried* a complex branching model like gitflow across multiple teams, hoping everyone stays on the same page? It can’t happen. Soon, the system becomes a manifest of the different revisions of the different repositories, and the only people who know where everything stands are the ones pounding out the YAML to update the manifests. “What’s in production” becomes an existential question, if you’re not careful.

Gitflow is impossible to work with in a monorepo as well

So if micro-repos are out due to the difficulty in coordinating releases, why not just one big branching workflow that all the microservices teams have to abide by for releases?

This works for about 3.2 seconds, or the time it takes for one team to say “This has to go out now” while the other teams aren’t ready for their work to be released. If teams are independent and microservices are supposed to be independently deployable, you can’t very well tie your workflow to the centralized branching model you created in your mono-repo.

Who should (and shouldn’t) use Gitflow?

If your organization is on a monthly or quarterly release cycle, and your teams work on multiple releases in parallel, Gitflow may be a good choice for you. If your team is a startup, or runs an internet-facing website or web application where you may ship multiple releases in a day, gitflow isn’t good for you. If your team is small (under 10 people), gitflow puts too much ceremony and overhead into your work.

If your teams, on the other hand, are 20+ people working on parallel releases, gitflow introduces just enough ceremony to ensure you don’t mess things up.

Ok, so my team shouldn’t use gitflow. What should we use?

I can’t answer that. Not all branching models work for all teams, in all contexts, and all cultures. If you practice CD, you want something that streamlines your process as much as possible. Some people swear by Trunk-based development and feature flags. However, those scare the hell out of me from a testing perspective.

The crucial point I’m making is to ask questions of your team: What problems will this branching model help us solve? What problems will it create? What sorts of development will this model encourage? Do we want to encourage that behavior? Any branching model you choose is ultimately meant to make humans work together more easily to produce software, and so the branching model needs to take into account the needs of the particular humans using it, not the word of someone on the internet who claimed their model was ‘successful’.

Author’s end note: I thought about using the ‘considered harmful’ moniker that is so common in posts like this; but then did a google search and realized someone else already wrote Gitflow considered harmful. That article is also worth your time.