1 + 1 = 2? Obvious, right? How about (2 + 2 x 4)^2? That's a little more complicated, but not so bad either. Over our series of blog posts about the Bloomberg Connects project, you might have been able to tell that testing has been an integral part of every step of the project. From testing code to testing our assumptions about the way our applications work, testing is what brings us from the darkness of uncertainty into the warmth of enlightenment.
As David mentioned in a previous blog post, we follow a very Agile-like process to guide our technical development. One of the most important aspects of this process is writing tests for everything we possibly can.
What does a typical test look like? Let's take our earlier example of 1 + 1 = 2. Imagine we have a block of code (we'll call this a function) that computes the sum of 1 + 1. We know that the sum of 1 + 1 should be 2. Knowing the result we're looking for, we can construct a test that helps us determine whether our function is performing up to standard. This test could simply check that the result is 2. If we wanted to get more in-depth, we could also check that the result is less than 3 and greater than 1.
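In code, that test might look something like the sketch below, using Python's built-in unittest module. The `add` function name is our own choice for illustration; it stands in for the "block of code" described above.

```python
import unittest


def add(a, b):
    """The block of code under test: compute a sum."""
    return a + b


class TestAdd(unittest.TestCase):
    def test_one_plus_one_is_two(self):
        # The simple check: the result should be exactly 2.
        self.assertEqual(add(1, 1), 2)

    def test_result_is_in_expected_range(self):
        # The more in-depth check: greater than 1 and less than 3.
        result = add(1, 1)
        self.assertGreater(result, 1)
        self.assertLess(result, 3)
```

Saving this in a file and running `python -m unittest` against it would report whether `add` is performing up to standard.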
This is a pretty simple example, but sometimes what we need to test encompasses more than one isolated block of code: we need to check how a group of disparate pieces of code work together. Tests of the former kind are called 'unit tests'; tests of the latter kind are called 'integration tests'. With a combination of unit tests and integration tests, we can cover most of the different permutations of our use cases.
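The distinction can be sketched like this. The `parse_price` and `apply_discount` helpers are hypothetical, invented purely to show two pieces of code tested in isolation and then together:

```python
import unittest


def parse_price(text):
    """Hypothetical helper 1: turn a string like '$10.00' into a float."""
    return float(text.lstrip("$"))


def apply_discount(price, percent):
    """Hypothetical helper 2: apply a percentage discount to a price."""
    return price * (1 - percent / 100)


class UnitTests(unittest.TestCase):
    # Each unit test exercises one isolated block of code.
    def test_parse_price(self):
        self.assertEqual(parse_price("$10.00"), 10.0)

    def test_apply_discount(self):
        self.assertEqual(apply_discount(10.0, 25), 7.5)


class IntegrationTests(unittest.TestCase):
    # An integration test checks that the pieces work together.
    def test_parse_then_discount(self):
        self.assertEqual(apply_discount(parse_price("$10.00"), 25), 7.5)
```

Here the integration test would still catch a failure even if each helper passed its own unit test, for example if one piece started returning a type the other couldn't handle.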
However, even an extremely comprehensive test suite doesn't prevent bugs or other strange errors from cropping up. Since tests are code too, and all code needs to be maintained, there comes a point of diminishing returns. Past that point, a programmer will have a hard time coming up with new permutations to test against, due to the inherent complexity of writing software. This isn't an excuse but a reality of projects where time and money are always constraints. Sometimes going for the 'most bang for the buck' tests is the best course of action.
So if writing tests doesn't ensure bug-free code, what do we get from them?
One of the biggest advantages of writing tests is that they give us some level of protection against regressions. As an example, imagine we found a bug in our code. Instead of just fixing the bug and being done with it, we also wrote a test covering the newly discovered case the bug exposed. Now, if another programmer ever writes code that reintroduces the same bug, our new test will catch it. While this may sound silly, reintroducing old bugs is a fairly common problem due to the sometimes amorphous nature of code.
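Here's a small sketch of that regression-testing idea. The `average` function and its empty-list bug are hypothetical examples, not code from our project:

```python
import unittest


def average(numbers):
    """Return the mean of a list of numbers.

    Suppose the original version crashed with ZeroDivisionError on an
    empty list. The fix below returns 0 instead, and the regression test
    keeps that case covered so the crash can't silently come back.
    """
    if not numbers:  # the bug fix: handle the empty-list case
        return 0
    return sum(numbers) / len(numbers)


class TestAverageRegression(unittest.TestCase):
    def test_empty_list_does_not_crash(self):
        # Regression test pinning down the case the bug exposed.
        self.assertEqual(average([]), 0)

    def test_normal_case_still_works(self):
        self.assertEqual(average([1, 2, 3]), 2)
```

If a future refactor removed the empty-list check, the first test would fail immediately rather than the bug resurfacing in production.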
Another huge advantage of having tests is that we can run them automatically. As part of our development process, we use a continuous deployment system that only deploys new code if it passes the tests. Each developer can also run the tests before attempting to push code live. This means we can often catch issues before they show up in production environments, so the end user should typically only ever see very stable versions of our software.
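Conceptually, that deploy gate boils down to something like the sketch below. The `run_tests` and `deploy` functions and the `SmokeTest` case are stand-ins for illustration; a real continuous deployment system invokes the actual test suite and deploy scripts.

```python
import unittest


class SmokeTest(unittest.TestCase):
    # Stand-in for the full test suite.
    def test_sanity(self):
        self.assertEqual(1 + 1, 2)


def run_tests():
    """Run the suite programmatically; return True only if everything passed."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()


def deploy():
    """Stand-in for the real deploy step."""
    print("deploying new version")


if run_tests():
    deploy()  # reached only when every test passed
else:
    print("deploy blocked: tests failed")
```

The key design point is the gate itself: the deploy step is unreachable unless the whole suite reports success.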
While our current development process has been pretty good, we know that we can always do better. It's part of our overarching theme of treating everything as an iterative process. As someone famous once said, "if you're not getting better, you're getting worse."