Automation can be an awesome part of a test strategy, but not all tests are good candidates to be automated.
Not all testing can be completely automated, due to the uniquely human skills that are required (e.g. exploration, learning, experimentation). But even for those tests that can be automated, not all of them should be.
Figure: Making wise decisions about what to automate can prevent you from wasting valuable time automating less valuable tests
If you try to "automate" bad testing, you’ll find yourself doing bad testing faster and worse than you've ever done it before. - Michael Bolton
There are multiple attributes that make a test a good candidate for automation.
Consider how often a test is run. If it is run across multiple builds, or if the same test needs to be run on different data sets or browsers, then it may be worth automating.
For example, if a test is run on Chrome, Edge, and Firefox, then automating it delivers more ROI since that is now 3 fewer tests the tester has to perform manually.
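The cross-browser idea can be sketched in a few lines. This is a minimal illustration only: `load_home_page` is a hypothetical stand-in for launching a real browser (in practice a tool like Selenium or Playwright would do that), and the expected title "Welcome" is an assumed value.

```python
# Run one check across several browsers instead of 3 manual passes.
# load_home_page is a hypothetical stand-in for driving a real browser.
BROWSERS = ["Chrome", "Edge", "Firefox"]

def load_home_page(browser: str) -> str:
    # Simulated: a real implementation would launch the browser,
    # navigate to the page, and return the rendered page title.
    return "Welcome"

# One automated loop replaces three manual test runs
results = {browser: load_home_page(browser) for browser in BROWSERS}
failures = [b for b, title in results.items() if title != "Welcome"]
print(failures)  # []
```

Once this runs in a pipeline, adding a fourth browser is one line of config rather than another manual pass.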
How easy is it for a human to test? If it requires many inputs where a human might make a mistake, then automating it could be a good idea.
For example, if there were a test for a calculator app and the tester had to enter 20 different inputs before pressing Calculate, that would be a good candidate for automation since there is a high chance of human error.
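The calculator scenario can be sketched as below. The 20 inputs and the `calculate` function are fabricated for illustration; the point is that a script enters every value exactly as specified, every time, so there is no chance of a typo on input 17 of 20.

```python
# 20 inputs that a human would have to key in by hand without error.
inputs = [3, 7, 1, 9, 4, 6, 2, 8, 5, 10,
          11, 13, 12, 15, 14, 17, 16, 19, 18, 20]

def calculate(values):
    # Hypothetical stand-in for the app's Calculate button;
    # here it simply sums the entered values.
    return sum(values)

result = calculate(inputs)
print(result)  # 210
```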
Always weigh the time to perform a test against the time to automate it. The longer it takes for humans to perform a test, the higher the value in automating it.
For example, if a test takes 1 hour for testers to perform and automating it takes 2 hours, then after only a few runs the automation will have delivered ROI. However, if a test takes 1 minute to perform but 3 days to automate, then it won't deliver ROI for a long time after automation.
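The break-even point from the example above is easy to compute. This sketch assumes 8-hour working days when converting "3 days" to minutes:

```python
import math

def break_even_runs(manual_minutes: float, automation_minutes: float) -> int:
    """Number of runs before the automation effort pays for itself."""
    return math.ceil(automation_minutes / manual_minutes)

# 1 hour manual vs 2 hours to automate: pays off after 2 runs
print(break_even_runs(60, 120))         # 2

# 1 minute manual vs 3 working days (8 h each) to automate
print(break_even_runs(1, 3 * 8 * 60))   # 1440 runs before break-even
```

A quick calculation like this, however rough, makes the "is it worth automating?" conversation concrete instead of a gut feel.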
Functionality that isn't well established or understood is risky to automate. This risk is because the test is liable to change as the requirements change.
For example, if the customer has asked for a new page and v1 has just been delivered, it isn't a good idea to automate the testing of that page yet because the client and its users will likely have many change requests in the near future.
Tests that are run on huge data sets are often impractical for humans to perform, and are often better automated.
For example, if a test needs to be run against 5,000 records then it should be automated.
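A sketch of that 5,000-record check is below. The records and the business rule (total = quantity × unit price) are fabricated for illustration; checking this by hand would take days, while a script does it in milliseconds.

```python
# Fabricated sample data: 5,000 records with a computed total.
records = [
    {"id": i, "quantity": i % 10 + 1, "unit_price": 2.5,
     "total": (i % 10 + 1) * 2.5}
    for i in range(5000)
]

# Verify the hypothetical rule total == quantity * unit_price
# for every record, collecting the ids of any failures.
failures = [
    r["id"] for r in records
    if abs(r["total"] - r["quantity"] * r["unit_price"]) > 1e-9
]
print(len(failures))  # 0
```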
Some tests are easy to judge objectively, such as the outcome of a maths equation. Those tests often work great when automated. Conversely, tests which require human judgment, such as UX, do not work well when automated.
For example, if the user needs to judge how nice the colours on a page look to the human eye, then it may not be a good idea to automate it because it's subjective.
The more value a test provides, the more likely it is to be a good candidate for automation.
For example, if a test checks whether the application crashes, and that area has a high chance of breaking, then automating it would likely be a good idea since the check will then run reliably on every build.
Some types of test just don't make sense to even try to automate, and it's not always a black and white decision.
Let's look at some example tests and why we would or wouldn't choose to automate them.
Test Scenario: Collapse the sidebar and check that the main pane resizes and displays correctly
Reason: This test is a bad candidate for automation because checking that the main pane "displays correctly" requires a human judgment call; it isn't a precise, objective check that a computer can make.
❌ Figure: Bad example - Testing a sidebar
Test Scenario: During video playback, set the “Playback” speed to 1.25 and check that the audio plays faster than before but remains clear.
Reason: In this case, the computer won't be able to easily judge whether the audio is clear or unclear.
❌ Figure: Bad example - Testing video playback
Test Scenario: Enter 100 into the amount field and check that the total invoice amount is updated to 110 (GST is added)
Reason: The test is a maths problem which is easy for a computer to evaluate.
✅ Figure: Good example - Testing a GST calculation
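The GST check above reduces to one exact assertion, which is why it automates so well. A minimal sketch, assuming the Australian 10% GST rate and a hypothetical `total_with_gst` helper:

```python
def total_with_gst(amount: float) -> float:
    # Add 10% GST and round to cents.
    return round(amount * 1.10, 2)

print(total_with_gst(100))  # 110.0
```

There is no judgment call here: the expected total is either 110 or it isn't, which is exactly the kind of objective pass/fail a computer evaluates best.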
Test Scenario: Enter “abcdefgh” into the editor, press the “Save” button and save with filename “test”. Close the editor. Open the file “test” and check that it contains “abcdefgh” only.
Reason: It's easy to evaluate the expected output with objective criteria.
✅ Figure: Good example - Testing a save button
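The save/reopen round-trip above can be sketched as below, using a plain file write and read as a stand-in for the editor's Save and Open actions:

```python
import os
import tempfile

def save(path: str, text: str) -> None:
    # Stand-in for the editor's Save action.
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

def load(path: str) -> str:
    # Stand-in for opening the file again in the editor.
    with open(path, encoding="utf-8") as f:
        return f.read()

path = os.path.join(tempfile.mkdtemp(), "test")
save(path, "abcdefgh")
content = load(path)
print(content == "abcdefgh")  # True
```

The expected output is a single exact string, so the pass/fail criteria are completely objective.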