Ship of Theseus

Have you ever heard of the Ship of Theseus?

The idea behind the Ship of Theseus is this: if a nautical ship (or starship, if that’s your thing) has all of its components replaced over its life, is it still the same ship?

If it is the same ship, then are the components individually important? Are they too important to replace? They’re all necessary, but is their existence in their current form what matters? Or is it the role each component serves that makes it important, and not necessarily the component itself?

Put another way,

If you throw away the code from a particular component and replace it, how important was that code?

It is both supremely important and unimportant, all at the same time.

Code itself is unimportant: what matters is that it fits together into a whole that provides value for its users. But if you can’t replace a piece of code, then it is probably the most important code you own, because it is a single point of failure.

One of the interesting aspects of Test Driven Development (particularly FauxO) is that it makes the implementing code unimportant. It takes away that single point of failure. Now, any code that passes your tests can both replace and be replaced.

There’s a lot of power there, power that isn’t available when you’re only writing unit tests. Because unit tests are written after your production code, they’re necessarily coupled to it, as we’ve talked about before. But if your code can be replaced, the power isn’t in the code any more; it’s in the whole. If a component gives you problems, replace it. You can’t do that with unit tests alone, and it’s sometimes not even possible with unit tests plus automated E2E tests, given the number of code paths your automated tests have to traverse.

“Build One to Throw Away (You will anyway)”

When I read that line from The Pragmatic Programmer in the 2000s, it shocked me.

What do you mean throw software away? I just got it working!!

20 years after its initial publication, we are still averse to throwing code away.

Why?

Some examples of this:

  • Your team uses a GenericRepository<T> that mostly works. It works for entities that are all retrieved the same way (GetById, GetAll, Find), but it fails when you need discrete child entities without their parent (think of getting all orders in the system for a report, or all orders that contain a certain product, without loading each “Customer”), and then you have this weird thing where you try to add another method to the GenericRepository to work around it, because you’ve invested in the GenericRepository<T> pattern. (This is a real example I’ve encountered in multiple places; there’s a sketch of it right after this list.)
  • A developer comes up with a new way of doing something in your system that contravenes your established convention, but is more readable and maintainable. People on the team who are afraid of change start to bring up lots of reasons why you can’t possibly do it that way, and say that this change should be ‘researched more’. (Note: if you’re asking someone to ‘do more research’ without a specific deliverable and a specific question you want answered, you’re really just softly scuttling the idea. Likewise, asking someone to ‘do more research’ when they did the research and it doesn’t give the answer you want is also scuttling the idea.)
  • About six months into your project, you realize that a relational database wasn’t the best choice of data store, or that ArangoDB is just too niche to get support for. Instead of saying, “We need to stop now; our foundation was for a house and we’re building an office building,” you plod on, introducing ever more complicated caching and retrieval mechanisms to work around the database.
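Here’s a minimal sketch of the dead end from that first bullet. The names here (IGenericRepository, IOrderRepository, Order, and the workaround method) are illustrative assumptions, not code from any particular project:

public interface IGenericRepository<T> where T : class
{
  // Works fine while every entity is fetched the same three ways...
  T GetById(Guid id);
  IEnumerable<T> GetAll();
  IEnumerable<T> Find(Func<T, bool> predicate);
}

public interface IOrderRepository : IGenericRepository<Order>
{
  // ...until a report needs every Order containing a product, without loading each
  // Customer. A one-off method gets bolted onto the "generic" abstraction because
  // we've already invested in the pattern.
  IEnumerable<Order> GetOrdersContainingProduct(Guid productId);
}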


I could go on. Over my career, I’ve heard and experienced stories of this that all come back to the same general idea:

Once it works, we don’t want to change it.

or put another way:

We get invested in how we solve problems, and don’t want to learn new things.

All of this is normal, and it’s the prime reason why we don’t change precisely when we need to. There is inertia to the old ways of thinking and doing business.

But remember the second part of the quote: You will [throw it away] anyway.

I suppose on a long enough timescale that’s a tautology, so I’ll focus on the short term.

What modules have you rewritten over the past year? Why did you rewrite them?

What changes did you make that your architecture hadn’t accounted for? What did you do when you found this out?

There’s a saying: You’ll never know less about your problem than you do right now.

Tomorrow you’ll know more than today, and so on.

If your hesitance to throw away what you built is that you’ll lose work, you’re right. You’ll lose work created with less understanding than you have right now.

You’ll lose work created (sometimes) on false premises.

The upside to throwing away work is that it gives you room to correct those false premises.

The upside to using TDD is that while you may throw away the work, you’re generally not throwing away the customer’s view into your world, which allows you to correct your thinking and verify it’s correct without disrupting your customer.

That’s very powerful, and a powerful reason to adopt TDD. You’re going to throw away your work; would you rather throw it away and replace it with confidence, or without?

An example of a non-brittle test

For the TDD course, I’ve been iterating through the course material, implementing it as I go through each lesson.

In the most recent example, I wanted to show how a test could go from failing to passing without changing any test code, and without changing the API.

Now, this is normal in TDD. The API is what the end consumer of your process will see (whether that’s another process or an external consumer).

For instance, here is the API for my budget right now:

Budget b = new Budget("name", startDate: new DateTime(2020,05,01));

The API for my budget takes in two things for its creation:

1. A name (how will you refer to this?)
2. A date the budget should start

The Budget exposes two operations:

b.Add(new BudgetItem(/*...*/));
b.CumulativeSpent(effectiveDate: new DateTime(2020,07,01).Date);

1. Adding a BudgetItem to the budget with a .Add() method
2. Telling me how much has been spent as of a certain date (the “effectiveDate” in this example).


Now, here’s the test. The test didn’t change from when it was failing (and it had been failing for a few days; I ignored it because I got ahead of myself in writing it) to when it was passing:

[Test]
public void BudgetShouldAccountForTotalsAcrossMonths()
{
  BudgetItem i = new BudgetItem("b1", 1.23M, new DateTime(2020, 05, 01).Date, new OncePerMonth(), new DateTime(2020, 05, 01).Date);
  Budget b = new Budget("Budget That Shows across Months", new DateTime(2020,1,1).Date);
  b.Add(i);
  var totalSpent = b.CumulativeSpent(new DateTime(2020, 07, 01).Date);
  Assert.That(totalSpent, Is.EqualTo(3.69M));
}

Ignore the BudgetItem’s API for now. It’s hideous and is going to be fleshed out (that’s what the next livestream will focus on: using what I’ve learned about the domain and the APIs I’ve created to create better APIs).

So how did I make the test pass?

By making a one-line change inside the .CumulativeSpent() call.
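As a rough sketch of the kind of code involved (the member names below are assumptions, not the actual course code), a CumulativeSpent that satisfies the test above might look something like this:

public decimal CumulativeSpent(DateTime effectiveDate)
{
  decimal total = 0M;
  foreach (var item in _items) // _items: the BudgetItems added via .Add()
  {
    // Whole-month occurrences from the item's start date through the effective date:
    // May, June, July 2020 => 3 occurrences of 1.23M => 3.69M in the test above.
    // (This sketch only handles the OncePerMonth recurrence used in that test.)
    var months = ((effectiveDate.Year - item.StartDate.Year) * 12)
               + (effectiveDate.Month - item.StartDate.Month) + 1;
    if (months > 0)
    {
      total += item.Amount * months;
    }
  }
  return total;
}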

Importantly, and this is the takeaway from this email, this is how change happens: not by changing what the consumer of your code does to execute it. We don’t want to change tests. If we have to change tests when we change code, it’s a sign our tests are too coupled to how the code does its job. We don’t care if there’s a Mechanical Turk manually adding up each BudgetItem; all we care about is that it gives us the cumulative spent.

In short, this change had me like:

P.S. If you aren’t already a member of the course list, and watching the livestream (or getting more updates on the course, like this one) interests you, add your email to the list at https://course.doubleyourproductivity.io. I’ll send out an email to that list when I’m about to start the livestream (it’ll be either this weekend or early next week, since my wife is still working on finishing grad school in the evenings).

When did On-Base Percentage Come About?

If you’ve watched (or read) Moneyball, then you know that On-Base Percentage (OBP) in baseball was a second-tier stat for a long time, until Sabermetrics came to be used more widely as a way for teams to gain an advantage by relying on undervalued players instead of superstars.

Wikipedia says OBP became an official statistic in 1984, but when was it created?

August 2, 1954.

30 years earlier!

Life Magazine published an article, “Goodby [sic] to Some Old Baseball Ideas,” written by Branch Rickey, which included an equation for On-Base Percentage. You can read that article here. (In the article, OBP is called “On Base Average”.)

Here’s what the calculation for OBP/OBA looks like:
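In its modern official form, the calculation works out to:

OBP = (Hits + Walks + Hit-by-Pitch) / (At Bats + Walks + Hit-by-Pitch + Sacrifice Flies)

Rickey’s 1954 version differs slightly in its details, but the idea is the same: count every time a batter reaches base, not just the hits.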

Why did it take 30 years for an objectively better way to measure getting on base to become ‘official’?

Why did it take another 20 for it to form the basis of Sabermetrics and take the baseball world by storm?

Old habits die hard.

Can you imagine how the course of history for some teams would have changed if they had paid closer attention to something that was sitting under their noses for 50+ years? The baseball world would look completely different.

But, this isn’t a baseball blog. It’s a blog about how to double your productivity, focused on strategies for software teams to produce better software, sooner.

TDD has been around for 20+ years. How many teams have you been on that have practiced TDD? For me, the answer is one. Out of all the teams I’ve been on, across industries and company sizes, I’ve only been on one team that practiced TDD before I arrived.

I’ve been with seven different companies in my career and twelve or thirteen teams, and there was one that practiced TDD before I arrived (incidentally, it’s also the same team where I first saw the value of TDD).

TDD and OBP share a lot in common as measures. They measure what is objectively important, and they are relatively simple measures. They focus on an outcome-based approach to measurement instead of a heuristic-based approach (unit tests and static analysis are heuristics), and there’s a binary result: either your code is easy to change with confidence or it isn’t.

Either you got on base or you didn’t.

Now that doesn’t mean getting on base is easy, or that making your code easy to change is easy; but the outcome is essential to software teams: Being able to make changes with ease and confidence. Being able to deliver on time, without regression bugs, and without fear.

Can you imagine how different the software world would be if all teams could do that, right now?

Special Thanks to Ben Orlin and his book: “Math with Bad Drawings” where I found out this tidbit about OBP (Pages 222-226).

What to do instead of writing unit tests (5/5)

At this point, if you’re still with me after four posts on the subject, you may wonder:

How the hell does any change ever happen?

How does any codebase ever get better?

That’s a fair question, and if we’re all putting on our big boy sweatpants and being frank with each other, it’s really hard.

Why would I do all of this?

Why would I:

  • Figure out the part of the codebase where life would be great if I could just get past the unit tests that are hard to write and have questionable value
  • Figure out that part’s cyclomatic complexity
  • Figure out whether its value is visible or invisible to business stakeholders
  • Try to sell changing that module to the business

Today, we’re going to talk about the final part, putting everything together.

At this point, you are ready to dive into the code and make changes.

I’d recommend, if this is the path you intend to go down, writing characterization tests to fully understand what the bits are doing. That is an entire series unto itself, so I won’t cover it here, but the idea is to write a test against the outermost public API for the thing you’re touching (warts and all) and have that series of tests help you understand what’s at play when you make a change.
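As a hedged sketch of what I mean (the class and the expected value here are invented for illustration, not taken from a real system), a characterization test simply pins down whatever the outermost public API does today:

[Test]
public void CharacterizesCurrentInvoiceTotal()
{
  // We don't claim 42.17M is "right"; we ran the code once, observed the output,
  // and locked it in so that any future change to it is a deliberate one.
  var calculator = new InvoiceCalculator();
  var total = calculator.CalculateTotal(customerId: 17, applyDiscounts: true);
  Assert.That(total, Is.EqualTo(42.17M));
}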

You should also write contract tests that ensure the shape of data you’re sending to the database or to the user doesn’t change.

And then you dive in. The book Refactoring by Martin Fowler is useful here, as it details step by step what to do in a given situation. You start to put parts under test and factor out changes, and you do it, again and again.

But those are the technical steps that come at the tail end of the steps I mentioned above, and those earlier steps are important.

For what? Why would I do all that?

Well, to be blunt: If you’re not developing software through TDD, this is the road you’re on for the rest of your career.
You’re on a never-ending cycle of pain in your codebase, using metrics like cyclomatic complexity to triage issues (as well as whether the pain is visible or invisible, and who it affects), and trying to sell change to your business.

That’s going to be your life.

Inertia doesn’t just apply to physics; it also applies to making change, especially change to codebases that are working and in production.

The appetite for something breaking is very low, and the unforeseen benefits of refactoring a module are too hard to quantify to get over that hurdle.

In short, if you work at a median company doing median work, the deck is stacked against you.

It is a much larger subject than one post can do justice to (or even 100 posts), but the art of selling the need to change is crucial to being able to improve your codebase if you make tests an afterthought.

The short question is: Do you want the pain associated with writing tests after? Do you want to go through these steps for the rest of your career?

If not, you have two choices: Adopt TDD, or ignore unit tests entirely.

Selling the Change

If you work on a self-organized, empowered agile team (Scrum or otherwise) and therefore don’t need to sell change, then you can skip this post. You’re already in a place where you are empowered to fix problems in the code base (your white whale, the one we’ve spent the last four posts talking about).

How do you know if you’re in a self-organized empowered agile team?

If this white whale we’ve been tracking is on your backlog (and it should be), then talk to the Product Owner and team about moving it up into the next sprint.

If there’s actual trust and agency there, then you’ll be able to work on it just by saying, “This is causing us pain and we need to fix it.”

If that doesn’t work for you, then I am sorry to be the one to tell you this, but you’re probably not on a self-organized, empowered agile team, even if your organization uses one or more of those words mashed together.

That’s ok, because you can still sell change, even if you aren’t empowered to make change.

So, you’ve found the part of the codebase that is your white whale, you’ve used metrics to determine objectively how bad it really is, and you’ve ascertained whether it’s a module that affects the customer or just the health of the system (“just”, as if that’s any less important), and you want to dive in and fix it. There’s just one teensy, tiny thing you have to do before that.

You’ve got to sell that change.

In a business that understands the value of being able to move nimbly, you wouldn’t have to sell very hard, if at all; but unfortunately not all businesses value the expertise software developers bring to the table. There is an “if it works, ship it” mentality that causes long-term problems for software projects.

What is the outcome your business will see if they let you spend x time on this?

Will they be able to see features and changes sooner?

Will this change open up new features the business/customer has been wanting, but unable to receive due to architectural limitations?

Will the system be faster?

Will developer time maintaining that part of the system (or the system in general) go down due to this change?

Will the team feel better when this part of the system is better? Will it improve morale?

It’s important to speak in terms that whoever you’re selling this change to cares about.

For instance, for business people, there are five major reasons to make change:

1. Increase Revenue
2. Lower Costs
3. Acquire New Customers
4. Retain Customers
5. Expand into New Markets

Which one (or more) of those five will your fixes enable, and how?

It’s also important to understand the person or people you’re selling this change to. They’re thinking about these things; but more importantly, they have a set of goals for this quarter that are on their mind. What are those goals? If you haven’t talked to them about those goals, you’ll want to, and you’ll want to see if your changes help further those goals.

You’re not trying to sell just anyone on your change; you want to sell a specific person. In your organization, you know who this is, whether they’re a shadow leader or the titled leader. They’re the one you need to sell your change to. There’s a saying in basketball: play the man, not the ball. Basically, that means the mechanics of basketball only get you so far; to actually win, you have to know who you’re playing, and how they play.

This is true in every aspect of your life. This information shouldn’t be used for a zero-sum win, but rather for finding a way to win that lifts everyone up. To do that, you have to understand what drives the person you’re talking to and trying to sell the change to.

Now, there’s no way for me to do justice to the art of selling in an email or blog post; literal volumes have been written on the subject. But the point is that even as software developers, we must sell our changes to other humans in order to make things better.

We as developers like to think of software development as a purely technical task, but that’s not the case. Of course, in an ideal world we’d be able to pursue positive change in our codebase simply because we need it as developers; but as long as the world isn’t ideal and you’re not on a self-organized, empowered agile team, you’ll have to sell this change to someone.

Next time we’ll talk about what to do once you’ve sold the change.

Living Room or Utility Closet?

In Part 1, you wrote down the value unit tests (not TDD, unit tests) bring you, and the pains they cause you.

In Part 2, you also found the white whale in your code base, the part of the system that gives you fits when you try to unit test it, and you figured out its cyclomatic complexity.

Today, we’re going to ask one final question, and then we’ll be ready to tackle this part of the code-base.

Is that part of the code-base important to your customer or business?

In other words, if your code-base were a house you’re trying to sell, is what you’re looking at the Living Room or the utility closet?

Both are important, but for different reasons.

The Living Room must be pristine. It must always be in a showable condition. It will set the tone for the rest of the house. After all, the buyer is going to *live* in that room (it is called a living room for a reason).

The utility closet, on the other hand, need not be showable, and often never is. “Oh, that’s the utility closet,” and the prospective buyer may peek in, but they’re not going to study it. Apart from “does this work?”, they don’t care how it looks.

Now, you and I both know the utility closet is important. Last July, under a sweltering heatwave, my 20+ year old furnace finally gave out. To make matters worse, my AC system used the old-style refrigerant that has been banned from sale and is a year or two away from the secondary market closing, meaning it would get tougher (bordering on impossible) to find replacement refrigerant should I ever need it. So I paid to have the whole thing replaced: condenser, AC unit, and furnace (as well as the hot water heater; it too was on its last legs).

I had figured this was coming, but I hadn’t put resources toward fixing it until it was too late.

And by the time it was too late, it cost me $11,136 (USD) to fix.

Anyway, back to our analogy.

If the part of the system you’re looking at is the utility closet, you have to pitch fixing it to your business stakeholders differently than if it were the living room.

The financial impact of your utility closet going down is high, typically, and the financial impact of a latent problem is also high.

The reputational impact of your living room developing a hole in the ceiling is high, but the fix can be low cost (not a serious suggestion, but putting a bucket in the attic would patch the problem).

As a software developer, those are the two levers you can pull that have merit to a business person: the reputational impact of a bug in code that is UI-visible, or an actual “we’re down and we don’t know why” bug in code that lives in your utility closet.

Better companies, of course, will understand the “long-term maintenance” argument for keeping code under test; but just because they understand it doesn’t mean it crosses their radar that making code easier to test should be an activity developers do.

So, determine if that code you’ve identified as your white whale is in the living room of your software project, or the utility closet, and next time we’ll make use of that information to help our business and our code-base get better.

What to do instead of writing Unit tests (part 2)

Yesterday, I asked you to write down the value you get from unit testing, and also to write out your problems with it.

If you haven’t already done that, please do. You can hit reply in the comments, or email me, or you can tell your text editor.

The important thing is to make your expectations concrete.

Ok, now that you’ve done that, how do you feel?

Has writing unit tests given you the value you’re looking for? Has it given you more pain than you were expecting?

If the answers are yes, and no (respectively), go about your business. Keep writing unit tests. At some point, you’ll hit the pain. It’s… inevitable.

For those of you who get value and pain from unit tests, that pain is something you want to home in on.

That pain may be a general feeling of ennui, or it may be a specific class or area of the system you’re dealing with. But, today we’re going to focus on a specific part of a system you’re dealing with right now that causes you pain for writing unit tests.

For today’s assignment, I want you to do two things:

1. I want you to write down what that area of the system is supposed to do, from the consumer’s perspective (the user if the user interacts directly, or the expectations from whatever other part of your system consumes what that piece does).

2. I want you to estimate its cyclomatic complexity (if you have a piece of software handy that does this, great; if not, you can eyeball it).

Cyclomatic complexity is essentially the count of the number of independent paths or decisions that need to be made to traverse a piece of code.

Start at 1, and add one for every condition in the if statements you encounter (case/switch is just another style of if statement). Now, this isn’t exactly correct, but for our purposes it’ll do; we’re looking for a ballpark number.
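For example, here’s a made-up method counted the way described above (the types and numbers are purely illustrative):

public decimal ShippingCost(Order order)              // start at 1
{
  if (order.IsInternational)                           // +1 => 2
  {
    if (order.WeightKg > 20 && order.IsExpedited)      // +2 (two conditions) => 4
      return 150M;
    return 80M;
  }
  switch (order.Carrier)                               // each case is another "if"
  {
    case Carrier.Ground: return 10M;                   // +1 => 5
    case Carrier.Air: return 40M;                      // +1 => 6
    default: return 25M;
  }
}
// Ballpark cyclomatic complexity: about 6.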

Write down that number.

If you want to play with software that does this for free, I recommend FxCop, but there are others.

Also, it appears there’s a PowerShell script for using FxCop to generate cyclomatic complexity graphs. I haven’t tried it, but I should.

Next time, we’ll talk about what to do with this information. For now, it’s just important that you get that information.

What should I do *instead* of writing unit tests?

In yesterday’s post, the advice was pretty clear: Avoid Unit tests.

They will cause you pain, they will turn your team off to the idea of testing, and you will rue the day you thought unit tests were a good idea.

(Also, as was pointed out to me by very smart people, it’s important to note that this is advice for the developers among us who do *not* write software libraries. If you write a software library, your library isn’t radically changing, and therefore the permanence of a unit test doesn’t have quite the same negative effect that it does in normal code.)

That advice seems pretty… harsh, doesn’t it?

But, let’s look at the landscape.

Do you write unit tests as a part of every story you write?

Do you enjoy it?

Do you get value out of it that exceeds the brittleness aspect?

Are you happy when you make a change and the unit tests break even though they shouldn’t?

With few exceptions, I’ve never seen anyone happy about unit testing.

On the other hand, I’ve seen (and experienced) Test Driven Development’s ability to help a developer break down a problem, solve that problem, and help them triangulate their way to a better codebase as a result.

I’ve not yet seen that happen for Unit Tests, writ large.

So what do we do?

The answer is an unsatisfying “it depends”, but I do know the first step.

The first step is to articulate the value you’re trying to get out of unit testing.

Write down what you think unit testing buys you, or what you hope it buys you.

Go ahead, right now. Hit reply, and in the comments write down what unit testing buys you, what you hope it buys you, and its overall value to your team.

It’s bad to take down a fence before you know why it was put up in the first place. So let’s dive into why that fence is there.

“Unit Tests are a Design Smell. Do Not Write Unit Tests.”

In his talk at Rocky Mountain Ruby, “Kill ‘Microservices’ Before It’s Too Late,” Chad Fowler had a single graf that (at the time) had me in fits. He said:

“Unit tests are a design smell. Do not write unit tests, they are a design smell. (…) Tests optimize for permanence. They create more coupling because there’s necessarily another file you have to change when you want to change this and that. But, the idea is, when you’re thinking about tests as validation, and by the way I don’t think Test Driven Development is a design smell, I think that’s a really good, productive way to work. But thinking of tests as validation, it just bakes all these assumptions about your system into a file you run all the time. It creates stasis.” (emphasis mine)

Chad Fowler, on the utility of Unit Tests

As a brief aside, if you watch the talk, it’s less about microservices and more about how to create systems that are stable and resilient to change, and, paradoxically, about code that is easy to change and get rid of. Ok, brief aside done.

I want to go back to one thing Chad said: Tests optimize for permanence.

If I write a unit test (that is, a test written after the production code has been written), that test is coupled to that production code. Maybe on purpose, most likely accidentally. As Chad says, the assumptions about the system are also baked into that test file as well.

What are those assumptions? Well, that this method is set up in a certain way, that it makes calls 1, 2, and 3 into the aether and receives specific payloads x, y, and z as a result of each call, and that it produces output n.
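Here’s a hedged illustration of what that looks like in practice, using Moq-style mocks; every type, name, and value in it is invented for the example:

[Test]
public void ProcessOrder_BakedInAssumptions()
{
  var inventory = new Mock<IInventoryService>();
  var pricing = new Mock<IPricingService>();
  var shipping = new Mock<IShippingService>();

  // Calls 1, 2, and 3 "into the aether", each with its expected payload...
  inventory.Setup(i => i.Reserve("SKU-1", 2)).Returns(true);
  pricing.Setup(p => p.Quote("SKU-1", 2)).Returns(19.98M);
  shipping.Setup(s => s.Schedule("SKU-1", 2)).Returns("TRACK-123");

  var processor = new OrderProcessor(inventory.Object, pricing.Object, shipping.Object);
  var result = processor.Process("SKU-1", 2);

  // ...and output n. Move the pricing call behind the shipping service and this test
  // breaks, even though the behavior the user sees is identical.
  Assert.That(result.Total, Is.EqualTo(19.98M));
  inventory.Verify(i => i.Reserve("SKU-1", 2), Times.Once);
}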

It’s really hard to tell what’s going on at this point. What is this test trying to solve for? What’s it validating? What the hell does it do? Why does it do it?

The test is coupled to the implementation, and that means any time you change the implementation (say, removing call #2 and payload y), you’re going to break the test, even if that call’s behavior was simply moved into call #1 or #3.

This is what we think of as a brittle test, and it was caused because we wanted developers to write unit tests.

This happens far too often, and it happens because we believe the code we’re writing is important.

It’s not. The system is important. The behavior the user wants to see is important. Its implementation in code is at best a temporary win.

This is why TDD is so powerful and useful for teams that are struggling with automated tests: it teaches teams to write tests that describe the behavior that should occur, and to ignore the specific code that implements that behavior.

The code implements the behavior specified by the test; but the behavior stays even if the implementation changes.

This is, of course, not as easy as the writing on the page. It’s a lot like this drawing, tbh:


That’s why you’re here. That’s why I’m here. If it were easy to build a system that is testable and tests that are resilient in the face of implementation change, there’d be no need to help educate and mentor teams on TDD. Everyone would be able to read the three rules of TDD and go about their day.

But really, what we all want, whether we use TDD or tea leaves, is to write software that our customers can use, that makes our stakeholders happy, and that is easy to change without death marches, overtime, or crunch time. We want to be productive without headaches, and the business wants software that does what it says on the tin, when they need it. We want software free of the regression bugs that haunt our team and our bottom line, and that cause the customer and our stakeholders to lose trust in us.

P.S. If you think the possible outcome I describe is valuable, and you want your team to learn TDD together, consider a virtual TDD immersion training session for your team. If you’d rather do it on your own, I’m putting together a course on TDD that fills in the gaps between Fig. 1 and Fig.2 of the owl meme. Sign up to receive updates on when that course will be ready.