That’s because agile development contradicts so much of what many Testers have been taught as ‘best practice’.
Testers might typically have gone through some recognised training such as ISEB certification.
ISEB testing qualifications, for example, specifically acknowledge iterative-incremental development models, including agile methods.
However, many testers would have undertaken their training, or gained their experience, when the waterfall model was more prevalent, and consequently may have spent years practising the V-model. With the V-model, system testing correlates directly to the system’s specification, and testing is conducted when the software is completed.
Put simply, a Tester’s life in traditional development methods was reasonably straightforward. Give a Tester a spec and a finished piece of software, and they can check it works as specified.
Forget whether what was specified was really what the user wanted, or whether all the requirements were adequately captured: if it meets the spec, the quality is supposedly good.
With agile development methods, a tester’s life is rather more complicated.
First of all, there are no big documents specifying every detail of the requirements and functionality for them to test against. Instead there are small pieces of documentation per feature, with details captured verbally through collaboration.
Secondly, the software is tested early and throughout the lifecycle while it is still being developed. In other words, it’s a moving target.
Put like that, agile testing can be a real challenge.
Add to that the idea of writing test cases up-front, before the software is developed, so acceptance tests form part of the requirements analysis.
Add to that the idea that some tests will be automated at code level and implemented by developers.
Add to that a much greater emphasis on automated regression testing, because feature-level testing has been completed while the code was still being developed.
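To make the test-first idea concrete, here is a minimal sketch of an acceptance test drafted before the feature exists, so it doubles as part of the requirements analysis. The feature, function name, and discount rules are all invented for illustration (a trivial stub implementation is included so the tests can actually run):

```python
# Hypothetical feature stub: members get 10% off orders of 50 or more.
# In practice this code would be written AFTER the tests below.
def apply_discount(total, is_member):
    if is_member and total >= 50:
        return round(total * 0.9, 2)
    return total

# Acceptance tests, written up-front. Each one is both a test case
# and a statement of a requirement.
def test_member_over_threshold_gets_discount():
    assert apply_discount(100.0, is_member=True) == 90.0

def test_member_under_threshold_pays_full_price():
    assert apply_discount(49.0, is_member=True) == 49.0

def test_non_member_pays_full_price():
    assert apply_discount(100.0, is_member=False) == 100.0
```

Read top to bottom, the three test names already describe the feature’s behaviour; that is the sense in which acceptance tests can stand in for a detailed specification.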
To cope with such demands, an agile tester’s role must change.
Some of the differences I’ve observed are as follows:
• With User Stories, there’s little difference between a requirements scenario and a test case. Some requirements are implied by the test cases.
• If test cases are written up-front, there’s a fine line between requirements analysis and test analysis.
• A Business Analyst may have special skills in workshop facilitation, interviewing, gathering business requirements, etc. But, regarding functional analysis, Tester and Analyst roles start to converge.
• If unit tests are automated, a tester needs to work with developers to ensure that the tests are complete and appropriate, and that all important scenarios have been identified.
• When system testing features later in the lifecycle, testers (and others) need to avoid duplicating coverage that has already been adequately unit tested.
• Automated regression tests need to be written and maintained.
• The tester’s involvement is increasingly important throughout the entire development lifecycle, not just in the latter stages.
• The role of tester is more aptly described as Test Analyst, and is more of a general QA role for the team, not necessarily the person who devises and executes all the tests.
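The first two points above can be illustrated with a User Story scenario written directly as an executable test, Given/When/Then style. The scenario and all names here are hypothetical, invented purely to show the shape (again with a trivial stub so the test runs):

```python
# Hypothetical feature stub: transfer funds between two accounts.
def transfer(accounts, src, dst, amount):
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

# Story: "As an account holder, I want to transfer money between
# my accounts." The test below IS the acceptance criterion.
def test_successful_transfer():
    # Given two accounts with known balances
    accounts = {"current": 100, "savings": 20}
    # When the holder transfers 30 from current to savings
    transfer(accounts, "current", "savings", 30)
    # Then both balances reflect the transfer
    assert accounts == {"current": 70, "savings": 50}
```

Whether this artefact is produced by a Tester or a Business Analyst is almost beside the point; writing it well requires both test analysis and requirements analysis, which is exactly where the roles converge.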