A large suite of automated tests at various levels can be a great way to quickly identify problems introduced into the codebase.
As your application changes and the number of automated tests increases over time, though, it becomes more likely that some of them will fail.
It's important to know how to handle these failures appropriately.
Figure: How not to handle automated test failures (Sander van der Wel from Netherlands, CC BY-SA 2.0, via Wikimedia Commons)
When automated tests fail due to a genuine problem in the software, this is a good thing - the tests have done their job! You should thank them and address the problem as soon as possible.
But what about test failures due to other reasons? Let's look at some common anti-patterns for dealing with such failures.
Some "reasons" for tolerating test failures include:
Tolerating test failures quickly erodes the trust in the results of the tests, to the point where the results are ignored and so they become pointless to run. This is a significant waste of your investment in building automated tests.
Anything other than a "green build" needs to be treated as a problem that the whole team takes seriously. This requires your automated tests to be reliable and stable, so that they only fail when they've identified a problem in the software.
Tip: It's better to have fewer, more reliable tests than many unreliable ones, since the results of unreliable tests don't tell you anything definitive about the state of the software under test.
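To illustrate the difference, here's a minimal sketch (assuming a Python/pytest suite; is_weekend is a hypothetical piece of code under test). A test whose outcome depends on when it runs will fail intermittently, while a deterministic equivalent only fails when the behaviour it checks is genuinely broken:

```python
from datetime import date, datetime

def is_weekend(today=None):
    """Hypothetical code under test: reports whether a date falls on a weekend."""
    today = today or datetime.now().date()
    return today.weekday() >= 5  # Saturday = 5, Sunday = 6

# Unreliable: the result depends on the day the test happens to run,
# so it fails every weekend even though the code is correct.
def test_is_weekend():
    assert is_weekend() is False

# Reliable: the input is pinned, so a failure points at a genuine
# problem in is_weekend() rather than at the calendar.
def test_saturday_is_a_weekend():
    assert is_weekend(date(2024, 1, 6)) is True   # a Saturday

def test_monday_is_not_a_weekend():
    assert is_weekend(date(2024, 1, 8)) is False  # a Monday
```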
It might be tempting to deliberately skip the failing tests to get back to a "green build" state, with the intention of fixing them later.
The first problem with this is that the failing tests you're choosing to skip might actually be finding significant problems in the software - and now you're deliberately overlooking those problems.
The second problem is that "later" never comes - higher-priority work arises, and going back to fix these tests is unlikely to get the priority it needs. Keeping track of which tests are being skipped also adds unnecessary overhead and increases the risk of problems being introduced but going undetected.
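To make the anti-pattern concrete, here's a hypothetical sketch (again assuming pytest; the test name and reason text are made up) of what "we'll fix it later" tends to look like in the codebase:

```python
import pytest

# Anti-pattern: the failure is silenced rather than investigated. If this
# test was catching a real defect, that defect now goes undetected - and
# the skip marker tends to outlive "later".
@pytest.mark.skip(reason="Started failing after the pricing change - fix later")
def test_invoice_totals():
    ...
```

Skipped tests like this are easy to forget about precisely because they no longer make the build red.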
The best measure of success is how you deal with failure. - Ronnie Radke
When an automated test fails because of a problem in the software, you should prioritise fixing the problem.
When a test fails but not because of a problem in the software, you should prioritise fixing (or removing) the test itself so that it becomes trustworthy again.
Remember that you've invested the time and effort into writing automated tests for a reason. Quite reasonably, you have doubts about your code and you write tests to help overcome these doubts.
This means the automated test code is important and needs to be high quality.
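For example (another hypothetical pytest sketch), a test that asserts behaviour the software never promised will fail for reasons that have nothing to do with a defect; rewriting it to assert only what matters restores trust in its results:

```python
def active_usernames(users):
    """Hypothetical code under test: names of active users, order unspecified."""
    return {u["name"] for u in users if u["active"]}

USERS = [
    {"name": "amy", "active": True},
    {"name": "ben", "active": False},
    {"name": "cat", "active": True},
]

# Brittle: asserts an ordering the function never promised, so it can fail
# between runs (string hashing is randomised) without any defect present.
def test_active_usernames_brittle():
    assert list(active_usernames(USERS)) == ["amy", "cat"]

# Better: asserts only the promised behaviour, so a failure means a
# genuine problem in active_usernames().
def test_active_usernames():
    assert active_usernames(USERS) == {"amy", "cat"}
```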