Does The .NET Foundation support Open Source or just Microsoft?

In case you missed it, Microsoft fucked over an open source project again.

Microsoft has lacked an OS-level package manager since… well, since its inception. Chocolatey (the OS-level version of NuGet, pronounced "nougat") has existed in the community space as an OS-level package manager for a while; but that's an open source project. Another developer took a chance and created an open source Windows package manager written in .NET (which is pretty cool), called "AppGet". (Not sure if the similarity to 'apt-get' is intentional or not.)

Anyway, AppGet (GitHub, Website) is pretty snazzy from a marketing perspective, and it hits all the right notes from an OSS perspective. As to why it 'won' over Chocolatey: I'm not sure it did; but since Microsoft chose to fuck them over instead of Chocolatey, we'll go with AppGet winning*.

Microsoft decided to create its own package manager, and its team apparently decided to look at AppGet for 'inspiration'. I'll let the creator of AppGet take it from here:

When I showed it to my wife, the first thing she said was, “They Called it WinGet? are you serious!?” I didn’t even have to explain to her how the core mechanics, terminology, the manifest format and structure, even the package repository’s folder structure, are very inspired by AppGet.

If copying without attribution wasn’t bad enough (keep in mind, Oracle v. Google is being fought over the exact same issue: copying the structure of someone else’s API for commercial gain — though I won’t use this space to debate the merits of that argument, only to say that multi-billion dollar companies are fighting over this problem), Microsoft strung Keivan along for months after interviewing him, only to tell him on the eve of their announcement that he didn’t get the job, and oh, by the way they’re releasing their competitor package manager the next day.

He speaks of the impact that had:

What bothers me is how the whole thing was handled. The slow and dreadful communication speed. The total radio silence at the end. But the part that hurts the most was the announcement. AppGet, which is objectively where most ideas for WinGet came from, was only mentioned as another package manager that just happened to exist; While other package managers that WinGet shares very little with were mentioned and explained much more deliberately.

Since the publication of his Medium post, Microsoft's PM for WinGet, Andrew Clinick, released a milquetoast, oblique 'apology' with an even more abstract title, "winget install learning", where he says:

Last week we announced a package manager preview for Windows. Our goal is to provide a great product to our customers and community where everyone can contribute and receive recognition. The last thing that we want to do is alienate anyone in the process. That is why we are building it on GitHub in the open where everyone can contribute. Over the past couple of days we’ve listened and learned from our community and clearly we did not live up to this goal. More specifically, we failed to live up to this with Keivan and AppGet. This was the last thing that we wanted.

So, let’s review:

Microsoft decides in 2019 that it needs to have an OS-level package manager. Microsoft sees several open-source projects that do just that, and decides to meet with at least one of them, ostensibly to hire them. (This whole situation is even more fucked if Microsoft brought Keivan out to Redmond to brain-rape him (CW: language, sexual assault imagery).)

So, Andrew and his team meet with Keivan, and then ghost him for six months while they produce WinGet. They announce WinGet with no appreciable credit towards Keivan. After the initial uproar, the PM, Andrew Clinick, releases an embarrassingly obtuse and non-apologetic blog post, and the world goes on ticking.

Apparently, for Microsoft, that’s the end of it. They’ve done their bit, and called it a day.

If this were the end of it, it'd be pretty shitty behavior on the part of Microsoft's WinGet team, and of the .NET Developer Division for enabling this behavior towards an open-source project. But this isn't the end of it. There are other players, which is what makes this even worse, and the latest in cautionary tales for .NET open source developers: Microsoft has not changed its "Embrace, extend, extinguish" philosophy towards open source.

The 'other players' I speak of are the .NET Foundation and its Board of Directors.

How does the .NET Foundation fit in? I'm glad you asked. For the answer, I went back to Beth Massi's History of the .NET Foundation:

So, why did we need an open source software foundation? It was S. Somasegar (Soma) that pushed this idea to us. Soma was the Corporate Vice President of Developer Division at the time and our executive sponsor. Soma believed that the survival of the .NET ecosystem depended on the open source community and we needed a foundation to foster it. 
(…)
Soma knew that we needed to change the perception of Microsoft in the open source world and the creation of the .NET Foundation and the open sourcing of the platform would prove to be a strong step.

Beth goes on to write:

We also had projects from the community as well as our own that needed help; not just legal and licensing help but basic development services like code signing and CI/CD. We also had customers that needed to trust and rely on .NET. I was the community manager for the .NET platform team before any of our stuff was open source. And I was on the v-team that stood up the .NET Foundation itself. We were going through a culture change internally and our customers needed to also come with us.

(…)


Many of our customers expected all the software they used to come from Microsoft. It was a direct result of us creating a hugely successful closed source ecosystem. Microsoft also didn’t have the greatest track record with some of the open source projects we did release — where they were basically “thrown over the wall” and abandoned. The challenge was to make sure we didn’t lose trust — to make sure our customers understood that open sourcing .NET was not the end of the platform, but the beginning.

There are a few crucial points Beth makes here:

  • Microsoft had a bad reputation in the open source world
  • Microsoft would release open-source projects and abandon them
  • Microsoft needed to cultivate a reputation that it was open-source friendly, by lending support to open source projects

And so they created the .NET Foundation. Let's see what the .NET Foundation says it does, from its own front page:

Independent. Innovative. Always open source.
The .NET Foundation is an independent, non-profit organization established to support an innovative, commercially friendly, open-source ecosystem around the .NET platform.

"Established to support an innovative, commercially friendly, open-source ecosystem around the .NET platform." I wonder what that means in the context of a .NET open source project being extinguished by a corporation? Surely a .NET open source Windows package manager would qualify for the .NET Foundation's mission, right?

Maybe not, so let's dig deeper. The .NET Foundation 'about' page says:

The .NET Foundation supports .NET open source in several ways:

Promote the broad spectrum of software available to .NET developers through NuGet.org, GitHub, and other venues.

Advocate for the needs of .NET open source developers in the community.

Evangelize the benefits of the .NET platform to a wider community of developers.

Promote the benefits of the open source model to developers already using .NET.

Offer administrative support for member projects.

Support .NET community events with sponsorship and content.

If we're grading the .NET Foundation: they didn't promote AppGet, they didn't advocate for AppGet's needs to Microsoft or the larger community, they didn't 'evangelize the benefits of the .NET platform', and they didn't promote the benefits of the open source model. They get a solid D-.

Incidentally, the list of actions the .NET Foundation takes to 'support' .NET open source doesn't include anything that would actually help these projects become commercially viable.

And here we get to the problem with the .NET Foundation.

The .NET Foundation says it exists to make open source commercially viable (for whom?), but in reality it exists to further Microsoft’s reputation in the open source space, not to help the community produce ‘commercially viable open source software’.

The AppGet problem is tailor-made for what the .NET Foundation claims is its purview; but when you pull the covers back, especially on Beth's wonderful blog post about the history of the .NET Foundation, you realize that it was created to help show that Microsoft-branded open source had the same viability as the Microsoft products you buy.

Don’t believe me? Here’s Beth:

Many of our customers expected all the software they used to come from Microsoft. It was a direct result of us creating a hugely successful closed source ecosystem. Microsoft also didn’t have the greatest track record with some of the open source projects we did release — where they were basically “thrown over the wall” and abandoned. The challenge was to make sure we didn’t lose trust — to make sure our customers understood that open sourcing .NET was not the end of the platform, but the beginning.

Another telling line:

We were going through a culture change internally and our customers needed to also come with us.

So on the one hand, the .NET Foundation claims that it exists to further the community's interests around open source; when in reality, by its own history, it exists to further Microsoft's interests in open source: to give Microsoft's open source projects an air of legitimacy and corporate backing.

That’s harsh, but it’s true.

I'm not alone in thinking this; there's a wonderful blog post on this very subject, titled "The New Rules for playing in Microsoft's Open Source Sandbox".

One of the many gems from the post:

Another example story on why you can build either something on Microsoft ecosystem or build something popular, but never do both.

Tweet by @Horusiath

Go ahead, read it, it’s worth your time. If you come back and you still believe Microsoft believes in open-source software, I have a bridge to sell you.

But, that’s not quite right, is it?

Microsoft *does* believe in open source software, and the .NET Foundation does believe in supporting .NET Open Source software, as long as it’s from or controlled by Microsoft.

I've repeatedly asked for comment from the board of directors of the .NET Foundation, only to receive radio silence — and I'm a dues-paying member of the .NET Foundation!

If the .NET Foundation believes in supporting commercially viable open-source software built on .NET, then they have an obligation to step in on AppGet's behalf and, at the very least, release a statement condemning Microsoft's actions. If they believe in supporting open-source software built on .NET, then it's incumbent on them to continually promote it, not stay silent when their patron decides to steal a community project.

If Microsoft wants people to believe it's changed, it's actually got to change. There was a right way to go about releasing WinGet, and Microsoft failed to take it. It's pretty easy for a company with Microsoft's valuation (disclosure: I am a Microsoft shareholder): Buy AppGet from Keivan. You don't even have to use it. But buy it. Or pay Keivan for the insights and work that you used in producing WinGet. A solid one-day consulting fee would be $100K.

If the .NET Foundation wants the .NET community to believe it stands with us, then the .NET Foundation needs to step up and speak out against these highly corrosive actions that its patron takes. .NET Open source will never be commercially viable if big corporations like Microsoft feel free to steal the community’s work without paying for it.

It’s not about the process

“If you follow Scrum, you’ll deliver better software”

“If you do TDD, you’ll deliver better software”

Both of those statements may be true, but they’re unhelpful at their core.

For one thing, the person speaking them generally has survivorship bias; it worked for them so it should work for you too, QED.

Besides the bias inherent in those statements, they also tend to ignore the ample evidence around us that not all teams are successful with Scrum, and not all teams are successful with TDD.

But let’s flip that statement around.

Is the process responsible for your success?

The process can fail you, and you can fail. Those are independent variables; and though some processes do tend towards helping teams fall into the pit of success, the process can't guarantee an outcome, and blind reliance on a process is a sure path towards failure.

The reason why the robots haven’t taken our jobs is that Software Development is, at its core, about turning human desires into an automated and useful system.

It’s that human aspect that we try to create processes to counteract or channel. The human aspect is why we make mistakes when we’re developing code; and why TDD can help. The human aspect is why double-entry bookkeeping is the accepted method for accounting. The human aspect is why software is so darned useful in the first place. It was made for us.

Before you chide a process, or point towards a process for your success, remember that it started with humans. It started with you and your team. You’re the success story, not the process.

P.S., I started a new podcast that is centered around helping software leaders enable their teams to build better software. I recently spoke with Ben Mosior about Wardley Mapping, a technique for contextualizing your team, your business, or your software stack to better understand the landscape. It's hard to think strategically if you can't model where you're at, and Wardley Mapping helps with that. Check out the episode here.

The Build Better Software Podcast

If you've been a member of my mailing list for a while (the emails there are later republished to this blog), you'll notice a pattern: the emails are mostly short missives on viewpoints, strategies, or techniques to help teams double their productivity.

They aren’t in depth, and they aren’t meant to be — the ‘how’ is elsewhere, because email is the wrong medium for that.

While I'm putting together the TDD course (which gets into the tactical 'how' behind TDD), I'm finding myself wanting to refer to strategies and techniques that I haven't fully explained, and that I can't do justice to in an email. I also want to give voice to these lesser-known techniques and strategies, to help software leaders enable their teams to build better software.

To that end, I started a podcast focused on topics that software leaders would find important but may not know about. Because naming is hard, I called it the “Build Better Software” podcast, and you can find it at https://www.buildbettersoftware.fm.

It'll be a mostly weekly show (I'm vacationing in a few weeks, so chances are I'll miss a week or two) where I interview an expert in a given practice and we dive deep into how that practice can help software leaders (or not!), contextualizing these things we call 'best practices' into a digestible form. Are they really best practices? Do they fit your use case, team, context, business, and ability?

I hope to answer those questions in this podcast.

The first episode is on Wardley Mapping, a technique that can help you and your team contextualize your work; which is critically important in producing the right software, at the right time, for the right user.

If you have any topics you’d like explored, let me know in the comments. I’d love to explore them and share the outcome with you.

Ship of Theseus

Have you ever heard of the Ship of Theseus?

The idea behind the Ship of Theseus is that if a nautical ship (or Starship, if that’s your thing) had all of its components replaced throughout its life, is it still the same ship?

If it is the same ship, then are the components individually important? Are they too important to replace? They’re all necessary, but is their existence in their current form important? Or, is it the actions/needs each component serves that makes it important, and not necessarily the component itself?

Put another way,

If you throw away the code from a particular component and replace it; how important was that code?

It is both supremely important and unimportant, all at the same time.

Code itself is unimportant: what is important is that it fits together to make a whole that provides value for its users. But if you can't replace code, then it is probably the most important code you own, because you have a single point of failure.

One of the interesting aspects of Test Driven Development (particularly FauxO) is that it makes the implementing code unimportant. It takes away that single point of failure. Now, any code that passes your tests is able both to replace and to be replaced.

There's a lot of power there, power that isn't available when you're only writing unit tests. Because unit tests are written after your production code, they're necessarily coupled to it, as we've talked about before. But if your code is able to be replaced, the power isn't in the code any more; it's in the whole. If a component gives you problems, replace it. You can't do that with just unit tests. It's sometimes not even possible with unit tests plus automated E2E tests, due to the number of code paths your automated tests have to traverse.

“Build One to Throw Away (You will anyway)”

When I read that line from The Pragmatic Programmer in the 2000s, it shocked me.

What do you mean throw software away? I just got it working!!

Twenty years after its initial publication, we are still averse to throwing code away.

Why?

Some examples of this:

  • Your team uses a GenericRepository&lt;T&gt; that somewhat works. It works for entities that are all fetched the same way (GetById, GetAll, Find), but it fails when you need discrete child entities (think of getting all orders in the system for a report, or all orders that contain a certain product) without fetching their parent entity (in this case, the "Customer"). Then you have this weird thing where you try to add another method to the GenericRepository to work around it, because you've invested in the GenericRepository&lt;T&gt; pattern. (This is a real example I've encountered in multiple places.)
  • A developer comes up with a new way of doing something in your system that contravenes your established convention, but is more readable and maintainable. People on the team who are afraid of change start to bring up lots of reasons why you can’t possibly do it that way, and that this change should be ‘researched more’ (note: If you’re asking someone to ‘do more research’ without a specific deliverable and a specific question you want answered, you’re really just softly scuttling the idea. Also, asking someone to ‘do more research’ when they did the research and your question doesn’t have the answer you want is also scuttling the idea).
  • About six months into your project, you realize that a relational database wasn’t the best choice of data store for your project. Or that ArangoDB is just too niche to get support for, and instead of saying “We need to stop now, our foundation was for a house and we’re building an office building”, you plod on, introducing ever more complicated caching and retrieval mechanisms to work around the database.
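The GenericRepository&lt;T&gt; situation in that first bullet is concrete enough to sketch. Here's a minimal, hypothetical version: only the method names (GetById, GetAll, Find) come from the story above; the interface name, the in-memory implementation, and the Order/Customer types are invented for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch; only GetById/GetAll/Find come from the story above.
public interface IGenericRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    IEnumerable<T> Find(Func<T, bool> predicate);
}

public class Order { public int Id; public string Product; }
public class Customer { public int Id; public List<Order> Orders = new(); }

public class InMemoryRepository<T> : IGenericRepository<T> where T : class
{
    private readonly Dictionary<int, T> _store = new();
    private readonly Func<T, int> _id;
    public InMemoryRepository(Func<T, int> id) => _id = id;

    public void Save(T entity) => _store[_id(entity)] = entity;
    public T GetById(int id) => _store[id];
    public IEnumerable<T> GetAll() => _store.Values;
    public IEnumerable<T> Find(Func<T, bool> predicate) => _store.Values.Where(predicate);
}

// The pain point: to answer "all orders containing product X" you have to drag
// every Customer aggregate through memory, because Order has no repository of
// its own:
//   var orders = customers.GetAll().SelectMany(c => c.Orders)
//                         .Where(o => o.Product == "X");
```

Once that query shows up, the temptation is to bolt a one-off method onto the "generic" repository, and then it isn't generic anymore.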


I could go on. Over my career, I’ve heard and experienced stories of this that all come back to the same general idea:

Once it works, we don’t want to change it.

or put another way:

We get invested in how we solve problems, and don’t want to learn new things.

All of this is normal, and it's the prime reason why we don't change precisely when we need to. There is inertia to the old ways of thinking and doing business.

But remember the second part of the quote: You will [throw it away] anyway.

I suppose on a long enough timescale that’s a tautology, so I’ll focus on the short term.

What modules have you rewritten over the past year? Why did you rewrite them?

What changes did you make that your architecture hadn’t accounted for? What did you do when you found this out?

There's a saying: You'll never know less about your problem than you do right now.

Tomorrow you’ll know more than today, and so on.

If your hesitance to throw away what you built is that you'll lose work, you're right. You'll lose work created with less understanding than you have right now.

You’ll lose work created (sometimes) on false premises.

The upside to throwing away work is that it gives you room to correct those false premises.

The upside to using TDD is that while you may throw away the work, you're generally not throwing away the customer's view into your world, allowing you to correct your thinking and verify it's correct without disrupting your customer.

That's very powerful, and a powerful reason to adopt TDD. You're going to throw away your work; would you rather throw it away and replace it with confidence, or without confidence?

An example of a non-brittle test

For the TDD course, I've been iterating through the course material, implementing it as I go through each lesson.

In the most recent example, I wanted to show how a test could go from failing to passing without changing any test code, and without changing the API.

Now, this is normal in TDD. The API is what the end consumer of your process will see (whether that’s another process or an external consumer).

For instance; here is the API for my budget right now:

Budget b = new Budget("name", startDate: new DateTime(2020,05,01));

the API for my budget takes in two things for its creation:

1. A name (how will you refer to this?)
2. A date the budget should start

The Budget exposes two operations:

b.Add(new BudgetItem(/*...*/));
b.CumulativeSpent(effectiveDate: new DateTime(2020,07,01).Date);

1. Adding a BudgetItem to the budget with a .Add() method
2. Telling me how much has been spent as of a certain date (the "effectiveDate" in this example).


Now, here's the test. It didn't change from when it was failing (and it had been failing for a few days; I ignored it because I got ahead of myself in writing it) until when it was passing:

[Test]
public void BudgetShouldAccountForTotalsAcrossMonths()
{
  BudgetItem i = new BudgetItem("b1", 1.23M, new DateTime(2020, 05, 01).Date, new OncePerMonth(), new DateTime(2020, 05, 01).Date);
  Budget b = new Budget("Budget That Shows across Months", new DateTime(2020,1,1).Date);
  b.Add(i);
  var totalSpent = b.CumulativeSpent(new DateTime(2020, 07, 01).Date);
  Assert.That(totalSpent, Is.EqualTo(3.69M));
}

Ignore the BudgetItem's API for now. It's hideous and going to be fleshed out (that's what the next livestream will focus on: using what I've learned about the domain and the APIs I've created to create better APIs).

So how did I make the test pass?

By making a one line change inside the .CumulativeSpent() call:

Importantly (and this is the takeaway from this email), this is how change happens: not by affecting what the consumer of your code does to execute your code. We don't want to change tests. If we have to change tests when we change code, it's a sign our tests are too coupled to how the code does its job. We don't care if there's a Mechanical Turk manually adding up each BudgetItem; all we care about is that it gives us the cumulative spent.


P.S. If you aren’t already a member of the course list, and watching the livestream (or getting more updates on the course, like this one) interests you, add your email to the list at https://course.doubleyourproductivity.io. I’ll send out an email to that list when I’m about to start the livestream (it’ll be either this weekend or early next week, since my wife is still working on finishing grad school in the evenings).

When did On-Base Percentage Come About?

If you've watched (or read) Moneyball, then you know that On-Base Percentage (OBP) in baseball was a second-tier stat for a long time, until Sabermetrics came into wider use as a way for teams to gain an advantage by relying on undervalued players instead of superstars.

Wikipedia says OBP became an official statistic in 1984, but when was it created?

August 2, 1954.

30 years earlier!

Life Magazine published an article, "Goodby [sic] to Some Old Baseball Ideas", written by Branch Rickey, that included an equation for On-Base Percentage. You can read that article here. (In the article, OBP is called "On Base Average".)

Here’s what the calculation for OBP/OBA looks like:
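The image with the formula didn't survive on this page; as a stand-in, this is the modern official version of the calculation (Rickey's 1954 "On Base Average" was the same idea):

```
OBP = (Hits + Walks + Hit By Pitch) / (At Bats + Walks + Hit By Pitch + Sacrifice Flies)
```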

Why did it take 30 years for an objectively better way to measure getting on base to become ‘official’?

Why did it take another 20 for it to form the basis of Sabermetrics and take the baseball world by storm?

Old habits die hard.

Can you imagine how the course of history for some teams would have changed if they had paid closer attention to something that was sitting under their noses for 50+ years? The baseball world would look completely different.

But, this isn’t a baseball blog. It’s a blog about how to double your productivity, focused on strategies for software teams to produce better software, sooner.

TDD has been around for 20+ years. How many teams have you been on that practiced TDD? For me, the answer is one. Across seven different companies and twelve or thirteen teams in my career, exactly one practiced TDD before I arrived (incidentally, it's also the team where I first saw the value of TDD).

TDD and OBP share a lot in common as measures. They measure what is objectively important, and they are relatively simple measures. They focus on an outcome-based approach to measurement instead of a heuristic-based approach (unit tests and static analysis are heuristics), and there's a binary result: either your code is easy to change with confidence or it isn't.

Either you got on base or you didn’t.

Now that doesn’t mean getting on base is easy, or that making your code easy to change is easy; but the outcome is essential to software teams: Being able to make changes with ease and confidence. Being able to deliver on time, without regression bugs, and without fear.

Can you imagine how different the software world would be if all teams could do that, right now?

Special thanks to Ben Orlin and his book "Math with Bad Drawings", where I found this tidbit about OBP (pages 222–226).

What to do instead of writing unit tests (5/5)

At this point, if you’re still with me after four posts on the subject, you may wonder:

How the hell does any change ever happen?

How does any codebase ever get better?

That’s a fair question, and if we’re all putting on our big boy sweatpants and being frank with each other; it’s really hard.

Why would I do all of this?

Why would I:

  • Figure out the part of the codebase where life would be great if I could get past unit tests that are hard to write and have questionable value
  • Figure out that part’s cyclomatic complexity
  • Figure out whether its value is visible or invisible to business stakeholders
  • Try to sell changing that module to the business

Today, we're going to talk about the final part: putting everything together.

At this point, you are ready to dive into the code and make changes.

I'd recommend, if this is the path you intend to go down, writing characterization tests to fully understand what the bits are doing. That's an entire series unto itself, so I won't cover it here; but the idea is to write tests against the outermost public API of the thing you're touching (warts and all) and have that series of tests help you understand what's at play when you make a change.
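A tiny, hypothetical example of what that looks like, in the same NUnit style as the budget test earlier. LegacyPricing and its dozen-discount are invented purely to illustrate; your white whale will be uglier.

```csharp
using System;
using NUnit.Framework;

// Made-up stand-in for the legacy code you're characterizing.
public static class LegacyPricing
{
    public static decimal PriceFor(int quantity, decimal unitPrice)
    {
        var total = quantity * unitPrice;
        // The kind of surprise you only find by probing: a dozen or more
        // gets 10% off, rounded *down* to the cent.
        if (quantity >= 12)
            total = Math.Floor(total * 0.90M * 100) / 100;
        return total;
    }
}

public class LegacyPricingCharacterization
{
    [Test]
    public void TwelveItemsGetTheDozenDiscount()
    {
        // We assert what the code does *today* (12 × 9.99 × 0.9 = 107.892,
        // floored to 107.89), not what we think it should do.
        Assert.That(LegacyPricing.PriceFor(12, 9.99M), Is.EqualTo(107.89M));
    }
}
```

The point of a characterization test is that you don't decide the expected value; you run the code once, record what came back (warts and all), and pin it down so refactoring can't silently change it.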

You should also write contract tests that ensure the shape of data you’re sending to the database or to the user doesn’t change.
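A contract test can be as small as serializing your outbound type and pinning the result. Here's a hypothetical sketch using System.Text.Json; the OrderSummary type is invented for illustration.

```csharp
using System;
using System.Text.Json;

// Invented example type: the shape we've promised our consumers.
public record OrderSummary(int Id, decimal Total, string Status);

public static class OrderSummaryContract
{
    // Pin the serialized shape. If someone renames a property, reorders the
    // fields, or changes a type, this comparison fails before any consumer
    // notices.
    public static void Verify()
    {
        var json = JsonSerializer.Serialize(new OrderSummary(1, 9.99M, "shipped"));
        var expected = "{\"Id\":1,\"Total\":9.99,\"Status\":\"shipped\"}";
        if (json != expected)
            throw new Exception($"Contract broken: {json}");
    }
}
```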

And then you dive in. The book Refactoring by Martin Fowler is useful here; as it details step by step what to do in a given situation. You start to put parts under test and factor out changes, and you do it, again and again.

But those are the technical steps that come at the long end of the steps I mentioned above; and those steps I mentioned above are important.

For what? Why would I do all that?

Well, to be blunt: if you're not developing software through TDD, this is the road you're on for the rest of your career.
You're on a never-ending cycle of pain in your codebase, using metrics like cyclomatic complexity to triage issues (along with whether each issue is visible or invisible, and who it affects), and trying to sell change to your business.

That’s going to be your life.

Inertia doesn’t just apply to physics; it also applies to making change, especially change to codebases that are working and in production.

The appetite for something breaking is very low, and the unforeseen benefits of refactoring a module are too hard to quantify to get over that hurdle.

In short, if you work at a median company doing median work, the deck is stacked against you.

It's a much larger subject than one post (or even 100 posts) can do justice to, but the art of selling the need for change is crucial to being able to improve your codebase if you make tests an afterthought.

The short question is: Do you want the pain associated with writing tests after? Do you want to go through these steps for the rest of your career?

If not, you have two choices: Adopt TDD, or ignore unit tests entirely.

Selling the Change

If you work on a self-organized, empowered agile team (Scrum or otherwise) and therefore don't need to sell change, then you can skip this post. You're already in a place where you are empowered to fix problems in the codebase (your white whale, the one we've spent the last four posts talking about).

How do you know if you’re in a self-organized empowered agile team?

If this white whale we’ve been tracking is on your backlog (and it should be), then talk to the Product Owner and team about moving it up into the next sprint.

If there's actual trust and agency there, then you'll be able to work on it just by saying, "This is causing us pain and we need to fix it."

If that doesn’t work for you, then I am sorry to be the one to tell you this, but you’re probably not on a self-organized empowered agile team; even if your organization uses one or more of those words mashed together.

That’s ok, because you can still sell change, even if you aren’t empowered to make change.

So, you’ve found this part of the codebase that is your white whale, you’ve used metrics to determine how bad it really is objectively; and you’ve ascertained whether it’s a module that affects the customer, or just the health of the system (“just”, as if that’s any less important), and you want to dive in and fix it. There’s just one teensy, tiny thing you have to do before that.

You’ve got to sell that change.

In a business that understands the value of being able to move nimbly, you wouldn't have to sell very hard, if at all; but unfortunately not all businesses value the expertise software developers bring to the table. There is an "if it works, ship it" mentality that causes long-term problems for software projects.

What is the outcome your business will see if they let you spend x time on this?

Will they be able to see features and changes sooner?

Will this change open up new features the business/customer has been wanting, but unable to receive due to architectural limitations?

Will the system be faster?

Will developer time maintaining that part of the system (or the system in general) go down due to this change?

Will the team feel better when this part of the system is better? Will it improve morale?

It’s important to speak in terms that whomever you’re selling this change to cares about.

For instance, for business people, there are five major reasons to make change:

1. Increase revenue
2. Lower costs
3. Acquire new customers
4. Retain customers
5. Expand into new markets

Which one (or more) of those five will your fixes enable? How will they enable them?

It’s also important to understand the person or people you’re selling this change to. They’re thinking about these things; but more importantly, they have a set of goals for this quarter that are on their mind. What are those goals? If you haven’t talked to them about those goals, you’ll want to, and you’ll want to see if your changes help further those goals.

You’re not trying to sell just anyone on your change; you want to sell a specific person. In your organization, you know who this is, whether they’re a shadow leader or the titled leader. They’re the one you need to sell your change to. There’s a saying in basketball: “Play the man, not the ball.” Basically, that means the mechanics of basketball only get you so far; to actually win, you have to know who you’re playing, and how they play.

This is true in every aspect of your life. This information shouldn’t be used for a zero-sum win, but rather to find a way to win that lifts everyone up. To do that, you have to understand what drives the person you’re talking to and trying to sell the change to.

Now, there’s no way for me to do justice to the art of selling in an email or blog post; literal volumes have been written on the subject. But the point stands even for software developers: we must sell our changes to other humans in order to make things better.

We as developers like to think of software development as a purely technical task; but that’s not the case. Of course in an ideal world we’d be able to pursue positive change in our codebase because we need it as developers; but as long as the world isn’t ideal and you’re not in a self-organized empowered agile team, you’ll have to sell this change to someone.

Next time we’ll talk about what to do once you’ve sold the change.

Living Room or Utility Closet?

In Part 1, you wrote down the value unit tests (not TDD, unit tests) bring you, and the pains they cause you.

In Part 2, you also found the white whale in your code base, the part of the system that gives you fits when you try to unit test it, and you figured out its cyclomatic complexity.
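If you want to make that metric concrete, cyclomatic complexity is (roughly) the number of independent paths through a piece of code: one plus the number of decision points. Here’s a minimal sketch of that counting, assuming a Python codebase for illustration; the node list, helper name, and sample function are my own, and real tools (radon for Python, or Visual Studio’s code metrics for .NET) are far more thorough:

```python
import ast

# Decision-point node types counted here are an assumption for this
# sketch; real analyzers handle more cases (match, comprehensions, etc.).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Estimate cyclomatic complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

# Hypothetical example: an outer `if` with an `and`, a `for`,
# and an inner `if` give four decision points.
snippet = """
def ship(order):
    if order.paid and order.in_stock:
        for item in order.items:
            if item.fragile:
                pack_carefully(item)
    else:
        raise ValueError("can't ship")
"""
print(cyclomatic_complexity(snippet))  # prints 5
```

The higher that number climbs, the more test cases it takes to cover every path — which is exactly why your white whale is so painful to unit test.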

Today, we’re going to ask one final question, and then we’ll be ready to tackle this part of the code-base.

Is that part of the code-base important to your customer or business?

In other words, if your code-base were a house you’re trying to sell, is what you’re looking at the living room or the utility closet?

Both are important, but for different reasons.

The living room must be pristine. It must always be in showable condition. It will set the tone for the rest of the house. After all, the buyer is going to *live* in that room (it’s called a living room for a reason).

The utility closet, on the other hand, need not be showable, and often never is. “Oh, that’s the utility closet,” and the prospective buyer may peek in, but they’re not going to study it; apart from “Does this work?”, they don’t care how it looks.

Now, you and I both know the utility closet is important. Last July, during a sweltering heatwave, my 20+ year old furnace finally gave out. To make matters worse, my AC system used the old-style refrigerant, which has been banned from sale and is a year or two away from the secondary market closing too, meaning it would become tough (bordering on impossible) to find replacement refrigerant should I need it. So I paid to have the whole thing replaced: condenser, AC unit, and furnace (as well as the hot water heater; it too was on its last legs).

I had figured this was coming, but I hadn’t put resources to fixing it, until it was too late.

And by the time it was too late, it cost me $11,136 (USD) to fix.

Anyway, back to our analogy.

If the part of the system you’re looking at is the utility closet, you have to pitch fixing it to your business stakeholders differently than if it were the living room.

The financial impact of your utility closet going down is high, typically, and the financial impact of a latent problem is also high.

The reputational impact of your living room developing a hole in the ceiling is high, but the fix can be low-cost (not a serious suggestion, but putting a bucket in the attic would patch the problem).

As a software developer, those are the two levers you can pull that matter to a business person: the reputational impact of a bug in customer-visible UI code, or an actual “we’re down and we don’t know why” bug in code that lives in your utility closet.

Better companies, of course, will understand the “long-term maintenance” argument for keeping code under test; but just because they understand it doesn’t mean it’s on their radar that making code easier to test should be an activity developers do.

So, determine if that code you’ve identified as your white whale is in the living room of your software project, or the utility closet, and next time we’ll make use of that information to help our business and our code-base get better.