This is less a recipe or a specific issue to solve than a habit — one that facilitates faster learning and helps you integrate your system into your head space.

The idea is to, before you run your test, say out loud what you expect to happen.

Why would you do this? If you’re doing TDD and the test is new, of course you can always say “it’s gonna fail.” But how is it going to fail? If it’s a calculation, can you guess the number your code will produce? If you’re fixing a bug, did you accurately simulate the error? This is your chance to do science, not just toss code at the wall. Make a hypothesis and test it (see what I did there?).

Doing this properly also requires you to read the error message, which can provide far more information than just “red/green” or “pass/fail.”

The hope in doing this is to further strengthen your mental model of the system under test. Being able to accurately emulate our systems in our heads can prune entire branches of possibilities when debugging. When building greenfield code, calling your shot can help you spot bugs before they happen. Forcing yourself to reason about the code and stating your hypothesis out loud can cause you to double-check your thinking (especially when pair programming).

Note: From what I can find online, this idea originates from Kent Beck. The post is fairly recent, but I can definitely say I learned it much longer ago. If I had to guess, this is a reference to the billiards/pool mechanic of “calling your shot” by stating into which pocket you expect to shoot the ball.

Example

Let’s work an example.

Say you just got a bug report in from production: your famous env_add function is throwing unexpected errors.

For all you Pythonistas out there, imagine someone unfamiliar with the nuances of Python wrote this.

import os


def env_add(num: int, env: dict = os.environ) -> int:
    """
    Adds the env-configured `NUM` to the passed argument `num`
    """

    return int(env["NUM"] or "0") + num

# arbitrary test name for the sake of storytelling
def test_env_add_fails_sometimes():
    assert env_add(9, {}) == 9
    assert env_add(9, {"NUM": "1"}) == 10
    assert env_add(9, {"NUM": "0"}) == 9
    assert env_add(9, {"NUM": ""}) == 9

You’ve enumerated the cases you think could be occurring in prod and causing the errors. Now, right before you run this, you say to yourself (if it helps, you can put your hand over your heart):

I expect this to fail on the empty-string ("") assertion with an error about converting a string to an integer.

Then you run the test, and …

BOOM! It actually failed on the very first assertion, the empty dict!

And that’s how I learned that the subscript operator [] raises a KeyError when the key is not present in the dict. Instead, you should use get, which returns None (or a default you supply), if you’re accessing a key that may not exist.
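To make that distinction concrete, here’s a minimal standalone sketch (the dict contents and the "MISSING" key are just illustrative):

```python
# Subscripting raises KeyError for a missing key; .get returns a default instead.
env = {"NUM": "1"}

try:
    env["MISSING"]
except KeyError:
    print("subscript raised KeyError")

print(env.get("MISSING"))       # None — key absent, no default given
print(env.get("MISSING", "0"))  # falls back to the supplied default "0"
print(env.get("NUM"))           # key present, returns "1"
```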

For completeness, here’s the fixed example:

import os


def env_add(num: int, env: dict = os.environ) -> int:
    """
    Adds the env-configured `NUM` to the passed argument `num`
    """

    return int(env.get("NUM") or "0") + num

# arbitrary test name for the sake of storytelling
def test_env_add():
    assert env_add(9, {}) == 9
    assert env_add(9, {"NUM": "1"}) == 10
    assert env_add(9, {"NUM": "0"}) == 9
    assert env_add(9, {"NUM": ""}) == 9
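One subtlety worth calling out in the fix: both None (missing key via get) and the empty string "" (key set but empty) are falsy in Python, which is what makes the `value or fallback` pattern normalize both cases at once. A quick illustration (the fallback string "0" is arbitrary here):

```python
# None (missing key) and "" (empty value) are both falsy,
# so `value or fallback` replaces either one with the fallback.
for value in (None, "", "0", "42"):
    print(repr(value), "->", int(value or "0"))
# None -> 0
# '' -> 0
# '0' -> 0
# '42' -> 42
```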

Conclusion

Deeper understanding of our system is always valuable. When writing your tests, try reasoning out why they’re working the way they are. This simple trick will speed you up and help you learn, for the low, low cost of basically free.