“Unit Tests are a Design Smell. Do Not Write Unit Tests.”

In his talk at Rocky Mountain Ruby, “Kill ‘Microservices’ Before It’s Too Late,” Chad Fowler had a single graf that (at the time) had me in fits. He said:

“Unit tests are a design smell. Do not write unit tests, they are a design smell. (…) Tests optimize for permanence. They create more coupling because there’s necessarily another file you have to change when you want to change this and that. But, the idea is, when you’re thinking about tests as validation, and by the way I don’t think Test Driven Development is a design smell, I think that’s a really good, productive way to work. But thinking of tests as validation, it just bakes all these assumptions about your system into a file you run all the time. It creates stasis.” (emphasis mine)

Chad Fowler, on the utility of Unit Tests

As a brief aside: if you watch the talk, it’s less about microservices and more about how to create systems that are stable and resilient to change, and, paradoxically, code that is easy to change and to get rid of. Ok, brief aside done.

I want to go back to one thing Chad said: Tests optimize for permanence.

If I write a unit test (that is, a test written after the production code has been written), that test is coupled to that production code. Maybe on purpose; most likely accidentally. As Chad says, the assumptions about the system are baked into that test file as well.

What are those assumptions? Well, that this method is set up in a certain way, that it makes calls 1, 2, and 3 into the aether and receives specific payloads x, y, and z as a result of each call, and that it produces output n.

It’s really hard to tell what’s going on at this point. What is this test trying to solve for? What’s it validating? What the hell does it do? Why does it do it?

The test is coupled to the implementation, and that means any time you change the implementation (say, removing call #2 and payload y), you’re going to break the test, even if that payload and behavior were moved into call #1 or #3.
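Here’s a sketch of what that coupling looks like in practice. The `Checkout` class, its `gateway`, and the three `fetch_*` calls are hypothetical names I’m using to stand in for the “calls 1, 2, 3” above; they aren’t from Chad’s talk.

```ruby
# A hypothetical class that makes three collaborator calls
# and combines their payloads into a single output.
class Checkout
  def initialize(gateway)
    @gateway = gateway
  end

  def total(order_id)
    x = @gateway.fetch_prices(order_id)   # call #1 -> payload x
    y = @gateway.fetch_tax(order_id)      # call #2 -> payload y
    z = @gateway.fetch_shipping(order_id) # call #3 -> payload z
    x + y + z                             # output n
  end
end

# An implementation-coupled "unit test": a recording fake pins down
# *which* calls are made and in *what order*, so reshuffling the
# internals breaks it even if the final total never changes.
class RecordingGateway
  attr_reader :calls

  def initialize
    @calls = []
  end

  def fetch_prices(_id)
    @calls << :fetch_prices
    100
  end

  def fetch_tax(_id)
    @calls << :fetch_tax
    8
  end

  def fetch_shipping(_id)
    @calls << :fetch_shipping
    5
  end
end

gateway = RecordingGateway.new
total = Checkout.new(gateway).total(42)

# These assertions bake the call sequence into the test file.
# Merge the tax lookup into fetch_prices and they fail, even
# though the user-visible total is still correct.
raise "call sequence changed" unless gateway.calls == [:fetch_prices, :fetch_tax, :fetch_shipping]
raise "wrong total" unless total == 113
```

The assertions here validate the shape of the implementation, not the behavior the user cares about. That’s the stasis Chad is describing.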

This is what we think of as a brittle test, and it happens because we told developers to write unit tests.

This happens far too often, and it happens because we believe the code we’re writing is important.

It’s not. The system is important. The behavior the user wants to see is important. Its implementation in code is at best a temporary win.

This is why TDD is so powerful and useful for teams that are struggling with automated tests; it teaches teams to write tests that describe the behavior to occur, and to ignore the specific code that implements that behavior.

The code implements the behavior specified by the test; but the behavior stays even if the implementation changes.
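To contrast with the brittle test above, here’s the same hypothetical `Checkout` tested at the level of behavior. The names (`Checkout`, `all_costs`, `FakeGateway`) are again illustrative assumptions, not anything from the talk.

```ruby
# The same hypothetical Checkout, with its internals reshuffled:
# one combined call instead of three separate ones.
class Checkout
  def initialize(gateway)
    @gateway = gateway
  end

  def total(order_id)
    costs = @gateway.all_costs(order_id) # one call now, not three
    costs.sum
  end
end

# A simple fake that returns canned data; the test never inspects
# which methods were called or in what order.
class FakeGateway
  def all_costs(_order_id)
    [100, 8, 5] # prices, tax, shipping
  end
end

total = Checkout.new(FakeGateway.new).total(42)

# A behavior-level assertion: it states the outcome the user wants
# ("the total includes tax and shipping"), so it survives the
# implementation change that broke the implementation-coupled test.
raise "total should include tax and shipping" unless total == 113
```

Because the assertion describes the observable outcome rather than the call sequence, the implementation is free to change underneath it.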

This is, of course, not as easy as the writing on this page makes it sound. It’s a lot like the “draw the rest of the owl” drawing, tbh:

That’s why you’re here. That’s why I’m here. If it were easy to build a system that is testable, with tests that are resilient in the face of implementation change, there’d be no need to help educate and mentor teams on TDD. Everyone would be able to read those three rules of TDD and go about their day.

But really, what we all want, whether we use TDD or tea leaves, is to write software that our customers can use, that makes our stakeholders happy, and that is easy to change without death marches, overtime, or crunch time. We want to be productive without headaches, and the business wants software that does what it says on the tin, when they need it. We want software free of the regression bugs that haunt our team, hurt our bottom line, and cause our customers and stakeholders to lose trust in us.

P.S. If you think the outcome I describe is valuable, and you want your team to learn TDD together, consider a virtual TDD immersion training session for your team. If you’d rather do it on your own, I’m putting together a course on TDD that fills in the gaps between Fig. 1 and Fig. 2 of the owl meme. Sign up to receive updates on when that course will be ready.