The air around TDD is thick with myths mixed in with the reality, and those myths generally come from learning about TDD and unit testing through blog posts. In a way, it’s a bit like Plato’s cave: we hear about the experiences others have had with TDD and unit testing, and we try to picture those experiences solely through what we hear.
Here are some of the myths (and latent facts) about TDD, Unit Testing, and how they interact with team velocity.
Myth: Test Driven Development Slows Teams Down
Reality: It’s true that when you first start using TDD, you’ll go slower than if you didn’t, but that’s only transactionally and temporarily true. Think of it this way: if I develop a feature in 1 hour but spend 6 hours debugging it, that’s worse than spending 6 hours developing the feature through TDD and 0 hours debugging it. Once a feature written through TDD gets to test, I am confident in it. Without the tests written first, I’m not as confident (or really confident at all) that the tester won’t find an issue. After six hours of debugging, it’s quite likely I missed something; I’m too far into the weeds to realize it, but a fresh set of eyes (our friendly neighborhood QA) will likely find it.
Babies crawl before they walk, and they walk before they run; but no one ever suggests babies should start out running. There are too many fundamentals that get missed if you try that.
Another reality is that we all take it for granted that ‘hardening’ and ‘bug fixing’ sprints occur, and that they occur far too often, by necessity. What you may not realize is that that time counts against development too. It’s an opportunity cost: if you have to spend entire sprints “hardening” or “bug fixing”, then you aren’t able to spend those sprints delivering features and value; you’re effectively shoring up the value you’ve already tried to create.
Instead of doing that, why not build the feature through TDD?
Myth: Test Driven Development is a Testing Methodology
Reality: TDD is a development methodology. It’s a way to develop software to achieve the following:
- Code that is designed to be testable
- Well-understood code
- Well-specified code
- Easy-to-change code
- Consistent and constant delivery
The fact that there are tests is almost an accident of speech. When Test Driven Development was created, tests were the primary way to assert that behavior was a certain way, but we could just as easily have called it ‘example driven development’. TDD’s purpose is not to create tests; it’s to create code that’s easy to test and easy to change, and to allow its creators to be confident when changing it.
It is not a replacement for the normal QA process, nor a replacement for system-based tests, though it can drastically cut down on the number of paths integration tests need to take to be useful. This is especially true of the brand of TDD I teach, FauxO (see my last post on the subject).
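To make that path-cutting concrete, here’s a minimal sketch in Python (my choice of language; the post shows no code, and every name below is hypothetical): when the decision logic lives in a pure function, fast tests can cover its branches exhaustively, leaving integration tests to prove the wiring only once.

```python
# Hedged sketch: pure decision logic, specifiable exhaustively by fast
# tests, with I/O pushed to the edges. All names here are illustrative.

def discount_for(order_total: float, is_member: bool) -> float:
    """Pure function: every branch can be driven test-first, no I/O needed."""
    if is_member and order_total >= 100:
        return order_total * 0.10
    return 0.0


def apply_discount(order_id: str, db) -> None:
    """Thin shell around the decision: the only part integration tests
    need to touch, and only to prove the wiring once."""
    order = db.load_order(order_id)   # I/O at the edge (hypothetical db API)
    order.discount = discount_for(order.total, order.customer_is_member)
    db.save_order(order)              # I/O at the edge
```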
Test Driven Development ensures teams of humans can work together to deliver better software, faster than if they hadn’t used TDD.
Myth: Unit Tests are a Development Methodology
Reality: Unit tests are a testing methodology, not a development methodology. The difference is subtle but important. You don’t create unit tests to determine the path software ought to take; you create unit tests after the software is written to verify it does what you think it does, to double-check your work.
TDD, on the other hand, is a development methodology. It’s a way of developing software that puts the test first and the implementation later. This allows you to specify the behavior you expect before the behavior itself is written. Unit testing takes the opposite approach: the tests are written after the code.
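Here’s a minimal red-green sketch of that ordering in Python with pytest (my assumption; the post names no language or framework, and ShoppingCart is a hypothetical example). The tests come first and act as the specification; the implementation follows.

```python
# Hedged sketch of the test-first loop. These tests are written (and
# watched failing) before ShoppingCart exists; they are the specification.

def test_empty_cart_totals_zero():
    assert ShoppingCart().total() == 0


def test_total_sums_price_times_quantity():
    cart = ShoppingCart()
    cart.add(price=300, quantity=2)
    assert cart.total() == 600


# Only then is the implementation written, to satisfy the specification:
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, price: int, quantity: int) -> None:
        self._items.append((price, quantity))

    def total(self) -> int:
        return sum(price * qty for price, qty in self._items)
```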
The reality is that when you create unit tests after the code is written, the tests are more brittle and necessarily entangled with every dependency the code needs in order to run.
Unit tests have their place, though I (and others) argue they should be shown the door in favor of good TDD practices and integration tests that don’t have to traverse every path through the application.
Myth: Integration Tests + Unit Tests are Good Enough
Reality: If you develop code with unit tests and integration tests, you’ll run into two problems pretty often:
1. Your integration tests necessarily have to cover lots of different paths through the application if your application wasn’t developed through TDD. This is untenable. J.B. Rainsberger famously declared that “Integration Tests are a Scam”, and that’s partly because of how we write code.
2. Your unit tests (again, without developing them through TDD) are bound to the implementation and saddled with all the dependencies you created, because the pain of adding a dependency wasn’t apparent until you tried to write unit tests after the fact. That means that if your implementation changes or your caller changes, your test could very well fail even though nothing of substance changed. A common smell that this is the case is extensive use of mocks and stubs, as sketched below.
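Here’s a hedged Python illustration of that smell (OrderService and its collaborators are hypothetical, not from the post): the test restates the implementation’s internal calls rather than its observable behavior, so purely internal refactors break it.

```python
from unittest import mock


class OrderService:                    # hypothetical, for illustration only
    def __init__(self, repo, mailer, logger):
        self.repo, self.mailer, self.logger = repo, mailer, logger

    def place_order(self, order_id):
        self.repo.save(order_id)
        self.mailer.send_confirmation(order_id)
        self.logger.info("order placed: %s", order_id)


def test_place_order_knows_too_much():
    repo, mailer, logger = mock.Mock(), mock.Mock(), mock.Mock()
    OrderService(repo, mailer, logger).place_order(42)

    # Each assertion mirrors one line of the implementation. Swap the
    # mailer for a queue, or rename a collaborator method, and this test
    # fails even though nothing a caller can observe has changed.
    repo.save.assert_called_once_with(42)
    mailer.send_confirmation.assert_called_once_with(42)
    logger.info.assert_called()
```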
In reality, integration tests + unit tests are treated as good enough, when together they’re about the most painful way you can test and develop software. (I believe in integration tests, but they should be spared from having to go through thousands of code paths; those paths should be handled by code that’s been developed through TDD.)
Myth: The Goal of TDD is 100% test coverage
Reality: This is another one of those instances where ‘test’ was an unfortunate characterization for TDD. If some TDD is good, and more is better, why not go for the most TDD possible: 100% coverage?
Besides being infeasible (frameworks and icky real-world structures like the network, disk, databases, and the system clock get in the way), there is a point of diminishing returns, and it hits right about the time you try to deal with the network, disk, databases, and system clock in an extensive fashion. This is why “Outside-In” TDD comes across as brittle to change: the mocks and stubs replace real-world implementations, and we frequently find ourselves wanting to change those implementations.
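The system clock makes a tidy example of the difference. A hedged sketch (the names are mine, not the post’s): rather than stubbing datetime.now() deep inside the code, pass the current time in as a plain value, and the brittle mock disappears.

```python
from datetime import datetime, timedelta, timezone


def is_expired(expires_at: datetime, now: datetime) -> bool:
    """Pure: no patching of the system clock required to test it."""
    return now >= expires_at


# The real clock is read once, at the boundary:
def token_expired(token_expiry: datetime) -> bool:
    return is_expired(token_expiry, datetime.now(timezone.utc))


# A test then needs no mocks at all:
def test_token_just_past_expiry_is_expired():
    t = datetime(2024, 1, 1, tzinfo=timezone.utc)
    assert is_expired(expires_at=t, now=t + timedelta(seconds=1))
```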
Also, since TDD is a development methodology, it’s not trying to be a testing methodology; the tests are a useful by-product of TDD, but they aren’t the goal. The goal is code that is built to be easily changeable, well understood, and well specified in its behavior.
Myth: You should aspire for 100% test coverage
Reality: If you work with very expensive equipment, the value of 100% test coverage outweighs its cost. If your software deals with safety or security, that’s also true. If you’re writing a banking back-end, it’s very likely true. But for the rest of us: you want enough test coverage that you’re confident making changes, but not so much that the tests get in the way of making them (again, brittle tests make code hard to change).