Nulls Are Not The Problem

I recently saw this promoted tweet:

[Screenshot of the promoted tweet]

Of course, this isn’t the first time Null has been brought up as a source of woe for programmers. There’s a wiki on why Null is Considered Harmful, and the most recognizable argument against null is from its inventor:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. — Tony Hoare, 2009

Since ‘null’ is clearly the source of our problems (it’s right there in the name “NullReferenceException”), we should just get rid of nulls. Problem solved!

Or, as H.L. Mencken once said:

Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.

Absolutist positions in programming immediately make me question the source. The absolute ‘badness’ of NULL is a sacred cow that needs to be slaughtered for the good of our industry.

Null, in its simplest form, is the absence of a value.  In the same way Evil is the absence of Good, or a donut hole is the absence of a donut, null is the absence of a value.

That absence, however, has meaning.  And this is where we as programmers falter.  We give that absence several distinct meanings that depend on context. We make decisions based on those meanings without clearly communicating our intent, and we forget to check whether there is an absence of a value.  How many times have you heard a programmer say “There’s no possible way that method could fail!” only to have it fail in production? I’ve personally lost count.

If we banished null to the “never use” bin of object oriented programming because we use it incorrectly, then there are a whole lot of other things we’d need to get rid of by the same reasoning.

The second issue with banning null is that the absence of a value has meaning contextually.  In a purely closed system it’s not an issue, but any system that interfaces with the outside world (meaning any program other than yours) is going to have times where it doesn’t work, or times where (for whatever reason), it doesn’t return the data you need.

If we tell programmers to write code to handle failure, why wouldn’t we tell programmers to check for data failure as well? That’s the basis of Object Oriented Programming, after all (it’s right there in the name): data and behavior mixed together in one cohesive unit.

So, if you write code that will never interface with anything else, feel free to ignore anything I’ve said. After all, you can design that uncertainty out of your system.  But for the rest of us, null is an important part of our toolkit.

In three different systems I’ve built, I had good reason to use Null:

  • Sometimes I don’t get data back from the source.  Not getting that data back means I shouldn’t draw negative conclusions about it, but its absence is a decision point for other paths in my program
  • Parsing text is a common failure;  should I always and forever represent that as something other than what it is? Is the failure to create an object from a parse an actual object itself?  It’s the absence of an object. After all, I failed to create it!
  • They haven’t set a date for an action.  Should I treat that as a value?  Or the absence of a value?

In each case, I used null (wrapped in a method that adequately describes what null means). According to the null detractors, I should have returned a constant, or thrown an exception, or something else entirely.
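To make that concrete, here’s a hedged sketch of the third case (the unscheduled date); the ScheduledAction type, the actions collection, and the method name are invented for illustration, not code from any of those systems:

// Returns the date this action is scheduled for, or null when no date has been set yet.
// The name pins null down to a single meaning: "not scheduled, and I don't know when it will be."
public LocalDate getScheduledDateOrNull(int actionId) {
  ScheduledAction action = actions.findById(actionId);
  if (action == null || action.getDate() == null) {
    return null; // the absence of a value, not a failure
  }
  return action.getDate();
}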

Sadly, none of the ways to get around null fix the problem; they only paper over it. Let’s take the first example from Yegor, the tweet’s author: returning a constant instead of a null, using the null object pattern:

public Employee getByName(String name) {
  int id = database.find(name);
  if (id == 0) {
    return Employee.NOBODY; // null object: a "real" Employee that represents no one
  }
  return new Employee(id);
}

If we don’t find an employee, returning the constant Employee.NOBODY is purely a semantic difference, and it’s potentially harmful downstream. If our payment-processing software (as a purely contrived example) gets an employee object back but doesn’t check whether that employee is actually the ‘nobody’, it will go ahead and use the object’s default property values in its calculations.  A salary of $0 really skews reports.

“But, they’ll check against Employee.NOBODY”, you say.  That’s the same as checking against null.  The only difference is that one will throw a runtime exception and the other won’t; both still have logical errors if you insist on filling the object with default values instead of null.
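To make that downstream hazard concrete, here’s a hedged sketch; the payroll report is contrived, and it assumes getSalary() returns a BigDecimal:

// A contrived payroll report: nothing here knows about Employee.NOBODY, so its
// default $0 salary quietly drags the average down. No exception, just a wrong report.
public BigDecimal averageSalary(List<Employee> employees) {
  BigDecimal total = BigDecimal.ZERO;
  for (Employee e : employees) {
    total = total.add(e.getSalary());
  }
  return employees.isEmpty()
    ? BigDecimal.ZERO
    : total.divide(BigDecimal.valueOf(employees.size()), 2, RoundingMode.HALF_UP);
}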

The second suggested way to avoid returning null is to use exceptions as flow control:

public Employee getByName(String name) {
  int id = database.find(name);
  if (id == 0) {
    throw new EmployeeNotFoundException(name); // a lookup miss becomes an exception
  }
  return new Employee(id);
}

Except, of course, exceptions should be used for handling exceptional things (it’s in the name), not for common problems. Not finding an employee (for instance, in the context of searching) should not be considered exceptional.  If anything, it’s normal. People mistype names all the time. Humans make mistakes.

Avoiding exceptions as flow control is itself another ‘best practice’. While I happen to agree with that practice, I put ‘best practice’ in scare quotes because I believe all best practices are conditional on context. I haven’t yet encountered an occasion where exceptions as flow control were a good idea, but that doesn’t mean one doesn’t exist, and I won’t tell someone “don’t do that” unless I’m looking at the code with them when I say it, and I’m saying it for that particular circumstance.
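For a sense of what that design costs the caller, here’s a hedged sketch reusing the hypothetical getByName above (searchTerm is just a stand-in); every routine lookup turns into a try/catch:

// An ordinary "not found" (someone mistyped a name) now has to be handled
// as though it were an exceptional event.
Employee employee;
try {
  employee = getByName(searchTerm);
} catch (EmployeeNotFoundException e) {
  employee = null; // and many callers will just convert it right back into a null anyway
}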

Matt Sherman, a development manager at Stack Overflow (and someone I greatly respect) has said this about Nulls:

I submit that we can avoid a large majority of null-related errors if we follow these two principles:

  • Passing a null is a programming error
  • Branching on null is a programming error

I think most of our problems trace back to imbuing null with meaning, i.e., we use it as a signal.

I submit that null is the absence of meaning. It is not a de facto Boolean describing existence. Rather, it should be treated non-deterministically.

This implies that most functions should not “handle” null. Because null is the absence of meaning, we can not use it to make a decision.

These are great ideas, but they only work for a closed system, which is a system where you control all inputs and outputs, and aren’t dependent upon any external systems or processes that might fail for unknowable reasons.

Passing a null in a system is contextual (and maybe it shouldn’t be): if an external system passes me null, I may have to make a decision about that. I can’t simply say “it doesn’t exist,” because it may exist. (I’d argue that if you want certainty about truthiness, you need true or false; but if there is a degree of uncertainty, null is useful for conveying that.)

For instance, if I get passed null for an Employee (to go back to our earlier example, because another system couldn’t find the employee), then I have to make a decision based on that, so I have to branch on null.
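As a hedged sketch of what that boundary decision might look like (externalDirectory, reviewQueue, and payroll are invented names):

// At the boundary with an external system, null is a decision point, not a programming error.
Employee employee = externalDirectory.getByName(name); // may return null: "I couldn't find them"
if (employee == null) {
  reviewQueue.add(name); // we don't know whether the employee exists, so a person gets to decide
} else {
  payroll.process(employee);
}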

My point is that we can’t wish away null or just chide programmers for using it. It’s here, and it has meaning.  What I believe we should do is reduce the number of meanings it carries. That doesn’t mean never using it or programming it away completely; it means reducing null to one meaning: “I don’t know, and don’t make any assumptions about why I don’t know.”

You should read all of Matt’s post. He brings up some great common sense approaches for reducing null and where it could be a code smell:

A common idiom in C# is to use optional parameters, with null as the default value:

void Foo(int id, Bar bar = null, Baz baz = null)

When I see a method signature like this, I have $5 that says the method’s implementation branches on those parameters. Which means that parameters are no longer just values, but are control flow. Bad semantics.

Instead, consider creating several methods with good names, and which accept and require only the parameters they use – no more optionals. “Branching” is done via method resolution, about which we can more easily reason.
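Matt’s example is C#, but the idea translates. Here’s a hedged sketch in Java (the method bodies are invented) of moving the branching out of parameter checks and into method resolution:

// Before: one method whose null parameters double as control flow.
void foo(int id, Bar bar, Baz baz) {
  if (bar != null) { /* ...the bar path... */ }
  if (baz != null) { /* ...the baz path... */ }
}

// After: separate, well-named methods that require exactly the parameters they use.
// Callers "branch" by choosing a method, which is much easier to reason about.
void fooAlone(int id) { /* ... */ }
void fooWithBar(int id, Bar bar) { /* ... */ }
void fooWithBaz(int id, Baz baz) { /* ... */ }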

Also, you should follow Matt on Twitter; he’s pretty awesome.

But, back to slaying sacred cows. Another sacred cow for Yegor is that static methods are bad:

A dry summary of [the above listed arguments] is that utility classes are not proper objects; therefore, they don’t fit into object-oriented world. They were inherited from procedural programming, mostly because we were used to a functional decomposition paradigm back then.

And much like the arguments against null, none of this is prefaced with context. So if you say statics are bad in OOP, you should also say when they’re good:

[Stack Overflow] goes to great lengths to reduce garbage collection costs, skipping practices like TDD, avoiding layers of abstraction, and using static methods. While extreme, the result is highly performing code. When you’re doing hundreds of millions of objects in a short window, you can actually measure pauses in the app domain while GC runs. These have a pretty decent impact on request performance.

Null, much like static, has its place.  We should not minimize its importance; rather, we should write our software to handle it. For developers this means:

  • Don’t shy away from using null/returning null if it fits your business case.
  • Do ensure you’re documenting (preferably through great naming) what null means in that context (see the sketch after this list).
  • Don’t abstract null away to a ‘default’ reference type if your system has consumers; they may treat a default value differently from not knowing the value, and that’s a source of bugs.
  • In your own closed systems, reduce the amount of knowledge someone needs to interact with your code. You can architect out chaos in a truly closed system and it’s worth the extra time to do that.  This may mean throwing exceptions or having a default type. This works because you control the inputs and outputs. It works because you’re in a closed system.
  • In a system where you communicate with the outside world (read: anyone other than your own program), you will encounter failure. You will encounter the absence of data. Prepare for it.
  • If you have downstream clients (and you choose to use null), don’t pass them a null that can mean several different things; and if you can possibly pass something that conveys more information, do so.  If null means “I don’t know” in your system, then it should mean that everywhere.
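Here’s the sketch promised in the second bullet, a hedged example of documenting what null means through the name and the doc comment (the org-chart lookup is invented):

/**
 * Returns this employee's manager, or null when the org chart has no entry.
 * Here null means "I don't know"; it does not mean "has no manager."
 */
public Employee getManagerOrNullIfUnknown(Employee employee) {
  return orgChart.lookupManager(employee.getId());
}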

Programming is a means of representing and automating life’s interactions with computers, and life is not nearly as well ordered as programmers want it to be.  To make better software, we should prepare ourselves and our code for external forces, not chastise them for existing.


9 thoughts on “Nulls Are Not The Problem”

  1. The issue with null in most modern languages is that it is the default for reference types. Regardless of the multiple meanings attached to null, if it were opt-in (something like Nullable<T> for structs in C#) it would be more obvious in intent and easier for client developers to reason about.

    I am going to use the term “maybe” because this is my understanding of the use of the maybe monad in functional programming (https://en.wikibooks.org/wiki/Haskell/Understanding_monads/Maybe).

    For example:
    Maybe Foo()

    Implies that there may be (Just) a value or not (Nothing). There is no gray area and it’s not open to interpretation. What’s more, this language has constructs to branch on that Maybe value (pattern matching) and call the appropriate function in each case.

    The issue is not null, its null by default.

    Thanks for the post, it was a good read 🙂

  2. The issue with null is the loss of type safety. All reference values can have a value of their type, or they can be null, even when null is not contextually valid. A far better solution is to explicitly model values that can be missing as part of the type, à la http://fsharpforfunandprofit.com/posts/the-option-type/. That way the programmer is in control. If your function should always return a value, you can exclude the null option. If your function may not always return a value, then you can explicitly represent that.

  3. Object oriented programming itself is the major problem, rather than any use(s) of null. Objects should never be the primary focus of a developer.

    CPUs are 100% procedural in their operation, anyway. Datatypes are fine, objects are fine, etc. The primary task of a dev should never, ever, be shoehorning things into an “OOP” paradigm. That is the billion-dollar mistake, to me.

    Talking about the proper use(s) of null is minutia.

  4. This is one of those things where language matters.

    The criticism of nullable values/references in statically-typed languages often comes from people working in languages with sum types. Take F#’s Option type (called Maybe in Haskell). In cases where Employee is optional, you just use the type Option<Employee>, and for a missing value you provide the value None. Anyone receiving an Option type is more or less forced by the type system to handle the None case, so it doesn’t have the same disadvantage as your Employee.NOBODY special value.

    With an Option type you can make all references non-nullable and get more safety and expressivity: If this function takes in an Employee, then it really is an Employee, not a null.

    Sum types clean up other messy areas too. Once I got used to them I started to see them as an important missing part of mainstream static type systems.

  5. My experience is that most, if not all, “valid” cases of functions returning null are due to poor API design.

    You make your point around the example of looking for an employee by name, and state it’s fine to return null if none exists. However, for me there are two cases here:
    * The “name” is a unique id. If so, how did you get it in the first place, and how is it that it doesn’t exist? Then throwing an exception is better than returning null.
    * The “name” is not unique; then you should be returning a (potentially empty) list of employees.

  6. You’ve correctly identified that sometimes we need to convey lack of information (at the I/O edges of the system) and at other times you can guarantee information will be present (in the closed system core). This is why ubiquitous nulls are insufficient. Some languages allow you to define where information may be absent and where it is mandatory. Languages with implicit null (such as Java) don’t let you make the distinction, and are therefore less expressive and more prone to error.

    It’s not “remove all nulls”, it’s “only allow null where that is a possibility”.

  7. While necessary, NULLs in RDBMSs are overused. I find that they tend to indicate a poor data model rather than an explicit design decision.
    One of the problems with NULLs is that you can’t write WHERE column != value and get the correct answer, because any comparison against NULL evaluates to unknown rather than true.
    I don’t buy into the argument that downstream systems suffer from an assigned “I don’t know” value, particularly when the upstream system comes first. Unless you are going to couple the systems together, any value/NULL handling would probably be better handled in the integration code.
