For the longest time, I followed a standard workflow for writing software. And I imagine many other developers followed the same one.
- Write your code
- Test your code
- If code fails, circle back to step 1
Even if you’re a non-programmer, it actually sounds logical, doesn’t it? After all, you’ve got to have SOMETHING to test, before you can actually test it, right?
It sure made sense to me. So that’s exactly how I went about developing software. I’d write some code, I’d test it out, quickly realize something went wrong, and have to go back to the drawing board and rewrite my code to retest it.
Nine times out of ten, I would have to go back to the drawing board, because the code I wrote ended up being massively incorrect.
When I finally got my code to do what I needed it to do, I realized I had wasted a huge chunk of my development time making stupid mistakes, committing to dumb design decisions, and going down completely wrong paths of development.
It was incredibly frustrating. It actually made me question my skills and worth as a software developer… did other software developers make as many mistakes as I did? Was I just that dense? Or an extremely slow learner?
These aren’t pleasant questions to ask oneself. After all, it’s human nature to think highly of ourselves and our skills, isn’t it? Our very self-worth is tied up in how we perceive how well (or badly) we accomplish things.
Now I can’t speak for other developers, but that feeling of self-doubt made me question what I was doing wrong.
Or was I just going through the normal process of developing software? Was this how every developer went about writing software? Writing some code, testing it, discovering lots of mistakes and flaws in the code, and repeating the process all over again?
Did I just have to teach my brain this was the normal way software gets written? With the assumption that you’re going to make lots of mistakes along the way and waste lots of time redoing your code? Was this the only way?
It turns out there was another way.
Test driven development came about to address the shortcomings of the traditional way software gets written.
It flipped the traditional way most developers write their code.
- Write a test that fails
- Write just enough code to make the test pass
- Refactor, keeping the test passing
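Here’s a minimal sketch of that cycle in Python, using a pytest-style test and a made-up `add` function purely for illustration:

```python
# Step 1 (red): write the test first. At this point add() doesn't exist yet,
# so running the test fails with a NameError -- a failing state.
def test_adds_two_numbers():
    assert add(2, 3) == 5

# Step 2 (green): write just enough code to make the test pass.
def add(a, b):
    return a + b

# Step 3 (refactor): clean up the implementation while keeping the test green.
```

Running the test before step 2 shows red; afterward the very same test goes green, and you refactor with that safety net in place.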
When I first read about this test driven development methodology, I went into major head scratching mode. Why on earth would a developer write a test for code that wasn’t even written yet?
My mind wanted to reject this new methodology as completely pointless. And believe me, a lot of my fellow developer colleagues had the same thoughts. They couldn’t figure out what writing a test BEFORE writing the code would buy you.
It took many years for me to realize the usefulness of test driven development.
Firstly, what does writing a test first accomplish?
Well, it’s important to step back and realize what a test actually IS.
A test is trying to prove that something does what it’s supposed to, right?
The reason why test driven development requires you to write the test FIRST is because it forces you, the developer, to think about how the final code should BEHAVE.
The test forces you to “pretend” the code you are trying to test has already been written. It also forces you to think about WHAT you’re trying to accomplish, not HOW to accomplish it.
Why is this important?
It helps you see the forest for the trees. Writing the test first keeps your mind from immediately diving into the low-level implementation details of a piece of code.
You’re trying to think about your FINAL DESTINATION. When your test reveals what your final destination should be, whatever code you write to help you reach that destination will not be wasted effort.
I’ve lost track of all the false starts and paths I went down coding my applications the non-test driven development way.
My mind was often fuzzy about what exactly I wanted to accomplish.
A test forces you to think about your end result.
Once you’ve written a test, you run it and watch it FAIL before writing any real code. It’s important to start from a failing state, because that makes it unmistakable when your test finally reaches a PASSING state.
There are lots of visual test tools out there that color-code your test results: red for a failing test and green for a passing one. Which is why the TDD cycle is often referred to as “red, green, refactor.”
It’s human nature to resist change. And it took me a long time to get to the point where my mind didn’t outright reject test-driven development.
And there are some other real hurdles every developer who embraces TDD must overcome.
Probably the biggest hurdle is the difficulties that come with testing existing legacy code.
Say you’re trying to write a test for an existing piece of code. Inside that piece of code is the functionality you want to test for. But there happens to be lots of other things that code is doing you don’t want your test to touch. Things like connecting to an external database. Or making calls to an external web service.
How do you write your test so it isolates those unwanted dependencies and only tests for the actual things inside that code you want it to test for?
That’s the challenge with TDD. One must learn the art of MOCKING and STUBBING out those unwanted dependencies. And oftentimes, that is not a trivial task.
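To make this concrete, here’s a minimal sketch in Python using the standard library’s `unittest.mock`; the `OrderService` class and its database dependency are invented for illustration:

```python
from unittest.mock import Mock

# Hypothetical legacy class: it pulls an order's line items from a real
# database and computes a total. We want to test the math, not the database.
class OrderService:
    def __init__(self, db):
        self.db = db  # dependency is injected, so a test can substitute it

    def order_total(self, order_id):
        items = self.db.fetch_items(order_id)  # would normally hit the DB
        return sum(item["price"] * item["qty"] for item in items)

# In the test, replace the database with a stub that returns canned data.
fake_db = Mock()
fake_db.fetch_items.return_value = [
    {"price": 10.0, "qty": 2},
    {"price": 5.0, "qty": 1},
]

service = OrderService(fake_db)
assert service.order_total(42) == 25.0
fake_db.fetch_items.assert_called_once_with(42)
```

The test never opens a connection: the stub stands in for the database, and the assertion verifies only the calculation you actually care about.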
Many developers get discouraged when they encounter these kinds of problems, and often give up on test-driven development, reasoning that it’s just too much trouble and takes too much time.
But learning how to get past these hurdles is a richly rewarding experience. When you have tests that verify your code works, you can tell your manager and your teammates with confidence that your code does what the tests say it does.
You also have the confidence to know when something you change in your code will cause it to break, because your test results will immediately show it.
This is the true power of test driven development.
With all that being said, test driven development will still only get you halfway to the goal of producing code that works in the REAL WORLD.
A test will prove your code will work under test conditions.
But a test won’t prove your code will work in a real production environment where your code lives on a networked server with real world dependencies like databases, file servers, and more.
I’ve been recently wrestling with this in some code I’ve been debugging and testing.
My tests all show my code SHOULD work, as designed. But when I deploy this to a real world environment, the code fails to do what it should.
There is definitely a difference between a test proving your code works, and deploying your code in a real environment and proving your code works.
This kind of real world testing is often referred to as “integration” or “end to end” testing. It’s making sure your stuff actually works out in the real world.
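One common way to keep the two kinds of tests separate, sketched here in Python with an invented environment variable and invented class names, is to skip the integration tests unless a real dependency is actually available:

```python
import os
import unittest

# Hypothetical setting: the integration suite only runs when a real
# database URL is configured in the environment.
DB_URL = os.environ.get("DATABASE_URL")

class TestOrderMath(unittest.TestCase):
    """Fast unit test: pure logic, no external dependencies."""

    def test_total(self):
        self.assertEqual(sum(p * q for p, q in [(10, 2), (5, 1)]), 25)

@unittest.skipUnless(DB_URL, "integration test: needs a real database")
class TestOrderAgainstRealDatabase(unittest.TestCase):
    """Slower integration test: exercises the real query path end to end."""

    def test_total_from_live_db(self):
        # In a real suite this would connect to DB_URL and run the
        # same calculation against live data.
        pass
```

The unit suite stays green on any machine; the integration suite only proves anything once it actually runs against the real environment.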
Don’t get me wrong. Test driven development has proven itself to be a valuable and viable tool in a developer’s arsenal.
But it ain’t over till the fat lady sings: you can’t really call your project done until it’s actually working in the real world.