The Debate
It seems that the manual tester is going the way of the dinosaur in the quality world. The idea is, "Why would I hire someone to run through test cases each iteration when I can hire someone to write automation that will test in a tenth of the time and will be (theoretically) more consistent?" So, do we need manual testers? Is automation the silver bullet of quality assurance? I think the answer to both is yes!
Definitions
First, let me explain what I mean when I say automation. To me, automation testing involves a framework of tests that can be run without human intervention (other than firing some event to start the process, whether it be a button click by a user, a code check in, or a time of day). Once the test pass has run, you should have a report that tells you what failed and where the problems are. Most importantly, you should be able to rerun the tests in the same environment and get the same results.
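As a minimal sketch of that definition, here is a self-contained Python example (the `add` function is just a hypothetical stand-in for real product code): the tests run without human intervention, produce a pass/fail report, and give the same results on every rerun in the same environment.

```python
import unittest

# Hypothetical function under test -- a stand-in for real product code.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    """Each test runs without human intervention and reports pass/fail."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically and collect a report; rerunning in the
# same environment yields the same results.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In practice the "event to start the process" would be a CI hook or scheduled job rather than running the file by hand, but the shape is the same: tests in, report out.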
Manual testing is any testing where a tester has to sit at a desk running the test steps, or even just watching to see whether a test passed. For example, if you are testing some video code and have to be sitting at the console to see if the video plays correctly, then even if code is running the test, you are still testing manually.
The main difference to me is that automation testing should free up the tester's time once the tests have been written. If there is still some kind of manual validation as listed above, the tester is still doing manual tests. The testing is just faster than normal.
With that understanding of how I see the two types of testing methods, let's look at the pros and cons of each methodology. For simplicity, I will not discuss points where both methods are equally good (can validate if a feature is correct) or bad (will break if requirement changes aren't communicated to the testers).
Manual or Ad Hoc Testing
With manual testing, we can react faster to changes in code. The second the developer has checked in their code (or in some cases even before, with pair testing...), the tester can validate it. Often, this is an invaluable ability when something has to go to production quickly (at least with web testing).
Manual testing can find holes in our test cases that would not otherwise have been found until the customer had the product in their hands. At one company I worked for, on the day of a release, all developers, testers, and product managers were asked to blitz the product and use it as much as possible over a period of several hours. Even with a good, working testing process, we would still find some problem that had been missed.
There are also cases where automating some feature or function is not financially smart. I think this is the exception, but it does occur. For example, you might want to write a script that migrates a database in preparation for an upcoming upgrade. Before you run the script, especially in a production environment, you want to validate that it works properly. Since you will only run it once, why go through the effort of automating it?
However, manual testing takes time, sometimes more than we can afford (though the argument should be that we can't afford not to do it...). A full manual regression pass might take a day (I know of companies where it takes several days). To add to that, with each feature we add to a product, that time increases, because we have to test everything we have tested previously and still get to the new code.
Additionally, manual testing is hard to reproduce consistently. There is the human error aspect, where a tester might miss a step, which can result in a false positive. Human error can also creep in when the environment isn't set up correctly.
One final challenge with manual testing is dealing with products that don't have a front end. For example, the only ways to test a web service are either through building a UI that can access the service (in which case the UI has to be tested as well in order to validate that the results are accurate), or through code (which might as well be added to the automation library if you are going that route).
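As a rough illustration of the coding route, here is a self-contained Python sketch (the echo service and the `check_service` helper are hypothetical): it spins up a throwaway in-process web service and validates its response directly in code, with no UI involved.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical service under test: replies to any GET with a JSON body.
class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def check_service(url):
    """Validate the service response in code -- no UI needed."""
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
    return resp.status == 200 and data["status"] == "ok"

# Start the service on a random free port, test it, then shut it down.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
result = check_service(f"http://127.0.0.1:{port}/health")
server.shutdown()
print(result)  # True
```

Against a real service you would point `check_service` at the deployed endpoint instead of an in-process stub, which is exactly the kind of check that slots into an automation library.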
Automation Testing
Automation testing was built for speed. You pay a good price (time, money, resources) to get the automation in place, but once it is there (and if it is built correctly), the tests run in a fraction of the time. As an example, I worked on a couple of projects where a regression pass took one tester several days, but the same testing took only a couple of hours through automation.
Automation testing was built to be repetitive. The tests will always run every step of the test that has been created. They don't get tired of running the same hundred tests every time a developer checks in. They will always report the same results (assuming that everything in the system is accounted for in the test). And as part of the tests, the environment can be set up automatically, which increases reproducibility.
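For instance, here is a minimal Python sketch of baking environment setup into the tests themselves (the working directory and input file are hypothetical): `setUp` rebuilds a clean environment before every test and `tearDown` removes it, so results don't depend on what ran before.

```python
import tempfile
import unittest
from pathlib import Path

class ReportTests(unittest.TestCase):
    """setUp builds a fresh, known environment before every test,
    so each run starts from the same state."""

    def setUp(self):
        # Hypothetical environment: a clean working directory
        # containing one known input file.
        self.workdir = tempfile.TemporaryDirectory()
        self.input_file = Path(self.workdir.name) / "input.txt"
        self.input_file.write_text("alpha\nbeta\n")

    def tearDown(self):
        # Throw the environment away so nothing leaks between runs.
        self.workdir.cleanup()

    def test_counts_lines(self):
        lines = self.input_file.read_text().splitlines()
        self.assertEqual(len(lines), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReportTests)
ok = unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
print(ok)  # True
```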
Automation testing can be programmed to handle combinatorial problems that would be very difficult to handle by hand. For example, I once wrote a system that would build a random genealogy family tree (based on some parameters given) that let us test many combinations that we would never have thought of as testers.
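Not the system I wrote, but a toy Python sketch of the same idea: randomly generate a family tree from a few parameters, with a seed so that any combination that exposes a bug can be reproduced exactly.

```python
import random

def build_family_tree(generations, max_children, seed):
    """Randomly generate a family tree; the seed makes any
    interesting failure exactly reproducible."""
    rng = random.Random(seed)
    counter = 0

    def make_person(depth):
        nonlocal counter
        counter += 1
        person = {"id": counter, "children": []}
        if depth < generations:
            # Random fan-out drives combinations a tester
            # would never enumerate by hand.
            for _ in range(rng.randint(0, max_children)):
                person["children"].append(make_person(depth + 1))
        return person

    return make_person(0)

def count_people(person):
    return 1 + sum(count_people(c) for c in person["children"])

tree = build_family_tree(generations=3, max_children=3, seed=42)
print(count_people(tree) >= 1)  # True: at least the root exists
```

The system under test would then be fed each generated tree, with an invariant check (a property) asserted on the output; property-based testing libraries formalize this pattern.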
On the cons side, automation does take some setup, as noted above. If you don't have a framework, one needs to be developed or adopted. Additionally, there are times when it is simple to test something manually (such as pressing a button and seeing the result) but more difficult to automate (find the correct button, call the "click" method on that button, write code to validate the result, report the result).
Automation is also difficult, at least where I live (Utah), for the reasons listed in my previous post on testing stereotypes. In a nutshell, the problem is that there aren't enough technically talented people (coders...) who want to test. I spoke with some recruiters, and they told me that in my geographic area there are more open positions than people to fill them.
Finally, an automation engineer/tester usually costs more than a manual tester. That's not always the case, but it is definitely the norm.
Answers
So again, do we need manual testing? Obviously. Ad hoc testing can't be done by computers (yet) and is a critical part of the SDLC (Software Development Life Cycle). And when some quick testing or validation needs to be done, the manual tester is the only one besides the developer who can do it.
Also, is automation testing a silver bullet? Yes, but not for everything. Automation should help reduce or even eliminate regression problems. If you have a good test suite, a developer can feel confident that they haven't broken anything that used to work, and all they need to focus on is the new code they've written.
As I have attended conferences and worked with different companies, I have also seen another trend: rather than manual testers and automation testers, we have domain experts and automation engineers. The idea is that the automation engineers write the framework in such a way that the domain expert can create and run tests without knowing how to code. So you can hire someone to focus just on knowing the product and customer needs, the domain expert, and then hire someone else to do the coding, the automation engineer. Those two working together can often move faster than a traditional shop because they can specialize and don't have to know so many different technologies. Often, this is done with a keyword-driven framework like Fit or FitNesse. I am not sure how much I buy into the whole idea, but I can see the benefits.
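As a rough sketch of how that split can work (all names here are made up, and real keyword-driven frameworks like Fit/FitNesse are far richer): the automation engineer registers keywords as code, and the domain expert writes tests as plain keyword rows without touching that code.

```python
# Keyword registry: the automation engineer's side of the contract.
KEYWORDS = {}

def keyword(name):
    """Register a function under a plain-English keyword."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("set")
def set_value(state, key, value):
    state[key] = value

@keyword("check equals")
def check_equals(state, key, expected):
    assert state[key] == expected, f"{key}: {state[key]!r} != {expected!r}"

def run_table(rows):
    """Execute a test written as keyword rows -- no coding required."""
    state = {}
    for row in rows:
        name, *args = row
        KEYWORDS[name](state, *args)
    return "PASS"

# The domain expert's side: a test that is plain data, not code.
result = run_table([
    ("set", "username", "alice"),
    ("check equals", "username", "alice"),
])
print(result)  # PASS
```

In a real shop the rows would live in a wiki page or spreadsheet the domain expert edits directly, which is exactly what tools like FitNesse provide.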
Obviously, there are related topics that could have been addressed here (test-driven development, for example), but in an effort to keep this short, I tried to focus just on the comparison between these two in general. Anyway, as always, let me know your thoughts.