Saturday, March 07, 2009

Cognitive dissonance and correctness

I don't pretend to be a psychologist, but I find the mental process of getting a program correct fascinating. It feels like the process goes something like this:
  1. Cognitive dissonance ("this can't possibly always work, can it?")
  2. Confronting the danger ("what might go wrong?")
  3. Questioning assumptions ("what am I assuming, and why do I need to assume it?")
  4. Codifying assumptions as invariants ("P")
  5. Pushing invariants through the program ("∀x. P(x)")
There are lots of techniques for #5, of course, and modularity helps make it manageable; e.g., defensively adding checks that enforce P at a boundary, so the code behind it can safely assume P holds.
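
To make that concrete, here's a minimal sketch in Python (my choice of language; the function and the particular invariant are just illustrative): the assumption "the input is sorted" is written down as an explicit check at the function's boundary, so everything after the check can safely rely on it.

def assert_sorted(xs):
    """Enforce the invariant P: xs is in non-decreasing order."""
    assert all(a <= b for a, b in zip(xs, xs[1:])), "invariant violated: xs is not sorted"

def binary_search(xs, target):
    assert_sorted(xs)  # defensively enforce P here, so the loop below may assume it
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    # return the index of target, or -1 if it is absent
    return lo if lo < len(xs) and xs[lo] == target else -1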

But the interesting part to me is that, if you have the discipline to follow it through, it's precisely that queasy feeling that something might go wrong that eventually leads you to discover the program's invariants and ultimately to get the program right.

I mentioned this to Mitch, and he added "it's that recognition of danger (#1) that keeps us honest."

2 comments:

Anonymous said...

When you codify your assumptions as tests and you have a test suite that you can trust to check them, you can let the test suite have the queasy feeling (or at least a huge, unnecessary part of it). :-)

Have you had a look at Test Driven Development and its approach to making every assumption explicit as a test?
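
For instance, a small sketch with Python's unittest (assuming the binary_search sketch from the post is saved as search_sketch.py; the module name is hypothetical): each assumption becomes a test the suite checks for you.

import unittest
from search_sketch import binary_search  # the sketch from the post, assumed importable

class TestBinarySearch(unittest.TestCase):
    # assumption: a present element is found at its index
    def test_finds_present_element(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 5), 2)

    # assumption: a missing element is reported as -1
    def test_reports_missing_element(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 4), -1)

    # assumption: the empty list contains nothing
    def test_empty_input(self):
        self.assertEqual(binary_search([], 42), -1)

if __name__ == "__main__":
    unittest.main()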

SkepticalExtropian said...

I believe he's talking more along the lines of 'Design By Contract', where the assertions are put inside the delivered code rather than in tests.

Both techniques have their strengths, with DBC making it easier not to violate encapsulation just to test something.
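
A rough sketch of that style using plain Python asserts (Python has no built-in DBC, so this only approximates it; the Account class is hypothetical): the contracts live inside the delivered code, so nothing has to reach into the private state just to check it.

class Account:
    def __init__(self, opening_balance=0):
        self._balance = opening_balance
        self._check_invariant()

    def _check_invariant(self):
        # class invariant: the balance never goes negative
        assert self._balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount):
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= self._balance, "precondition: insufficient funds"
        self._balance -= amount
        self._check_invariant()  # re-check the class invariant before returning
        return self._balance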

I tend to look at TDD as a form of validation ("Did we build the right thing?") and DBC as a form of verification ("Did we build the thing right?").