Is Your Testing Agile? 7 Pitfalls to Overcome
Testers used to working in an organisation that delivers projects in waterfall will be accustomed to receiving a set of requirements early in the project and building their test cases around them before receiving an operational system to test.
In agile, testers are expected to contribute to fleshing out requirements and then to test these without formal requirements documentation. They should also be comfortable testing code that is constantly changing in near real time and happy to operate as part of an agile team: one that continuously produces value at a sustainable pace and can quickly adapt to the business's changing requirements.
When transitioning from waterfall to an agile approach, it is vital to address the following seven testing pitfalls, which, if left unaddressed, can derail the entire transformation:
1. Testers are not integrated into development teams
Traditionally, the test team sat outside the development team and operated independently. In agile, testers must be integrated into the teams.
Are testers not being invited to meetings between developers and business users to discuss stories? Or, more subtly, are they invited only to be told they cannot contribute to the discussion?
One of the biggest risks here is that the tester doesn't know what's going on: changes are made to the story and the tester is unsure how to test them, resulting in rework and animosity. When the tester misses out on story conversations, there can be a tendency to simply test the developer's build without thinking about the requirements.
- Use the Power of '3' - Ensure all three disciplines (Product Owner / Developer / Tester) are invited to every discussion on stories
- Logistics - Have the testers sit with the team
- Testers to help the PO/team define stories and create acceptance tests first
- Ensure the whole team understands the story before any development starts
- Encourage testers to ask questions and offer suggestions
2. Waiting for sprintly builds
In traditional development it is common to wait for a stable build before commencing testing. In agile we want to start testing as soon as - if not before - the story development is completed.
A common mistake is for teams to continue employing a sprintly build, meaning testers must wait until near the end of the sprint before they can test. This leads to a mad rush, with stories stacking up in the test pile, environments unavailable, and stories being rolled over to the next sprint.
When story testing is delayed like this, feedback arrives too late and defects are discovered at the last minute, forcing the developer to context-switch back to resolve them. Code quality drops significantly, and automation suffers because there is no time to complete it. Testers are often left testing stories a sprint or two after they were coded.
- Regularly deploy builds to test environments
- Prioritise the implementation of continuous integration that creates deployable software
- Ensure test environments are as close to production as possible
- Test immediately, as soon as the story has been developed
- Employ pair testing
- Ensure feedback is given to developers immediately
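To make "test immediately" practical, the continuous integration build needs a fast check that runs on every commit. Below is a minimal sketch of such a smoke suite; `calculate_basket_total` is a hypothetical story under development, used purely for illustration.

```python
# A minimal smoke suite intended to run on every CI build, so feedback
# reaches developers within minutes of a commit rather than at the end
# of the sprint. `calculate_basket_total` is a hypothetical example.

def calculate_basket_total(prices, discount=0.0):
    """Sum item prices and apply an optional fractional discount."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1 - discount), 2)

def smoke_test_basket_total():
    """Quick checks a CI pipeline can run on every deployable build."""
    assert calculate_basket_total([10.0, 5.0]) == 15.0
    assert calculate_basket_total([10.0], discount=0.1) == 9.0
    assert calculate_basket_total([]) == 0.0

if __name__ == "__main__":
    smoke_test_basket_total()
    print("smoke tests passed")
```

Because the suite is fast and automated, it can gate every deployment to the test environment, giving developers feedback while the story is still fresh in their minds.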
3. No input into requirements
Agile does not rely on Business Analysts gathering requirements months in advance. Instead, stories, along with the acceptance criteria and conversations around them, serve as the 'requirements', and testers play a critical role in defining these.
When business users define stories and acceptance tests by themselves, without consulting the agile team (developers and testers), there is an increased likelihood of bugs resulting from missed requirements, forcing developers to write new code to finish the story satisfactorily. Testers often have the greatest knowledge of potential impacts to the system; without their proper input into the requirements, there is a high risk that critical issues will be found late.
- Testers to understand test variations before story development has begun and share these with the team
- Have developers refuse to start coding until acceptance tests are provided
- Testers to attend all sprint planning and refinements sessions and ask questions based on quality characteristics (testable / verifiable, consistent, unambiguous, complete, accurate)
- Testers to ask themselves 'How would I test this?'
4. Manual regression testing
Manual regression testing is boring, repetitive and error-prone. Too much time is spent testing old features rather than new ones, and because the system cannot be completely retested every time there is a new build or feature, old bugs can sneak back into the code. Features that used to work and are now broken go unnoticed, because the constantly changing code base makes maintaining manual tests too onerous.
- Automation is key, automate as much as possible
- Have testers and developers work to automate functional tests
- Find an automation tool that works for the whole team
- Design for testability (the whole team should be involved in testing)
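One lightweight pattern for automating regression coverage is a table-driven suite: each row pins down the behaviour of an existing feature so old bugs cannot silently return. This is a sketch under assumed details; `format_price` stands in for any long-shipped feature.

```python
# Table-driven regression suite: each case records behaviour the team
# has already shipped and must not break. `format_price` is a
# hypothetical, long-standing feature.

def format_price(pence, currency="GBP"):
    """Render an integer amount in pence/cents as a display string."""
    symbol = {"GBP": "£", "USD": "$", "EUR": "€"}[currency]
    return f"{symbol}{pence / 100:.2f}"

# Regression table: (input, currency, expected output).
REGRESSION_CASES = [
    (100, "GBP", "£1.00"),
    (99, "USD", "$0.99"),
    (150000, "EUR", "€1500.00"),
    (0, "GBP", "£0.00"),
]

def run_regression_suite():
    failures = []
    for pence, currency, expected in REGRESSION_CASES:
        actual = format_price(pence, currency)
        if actual != expected:
            failures.append((pence, currency, expected, actual))
    return failures  # empty list means every old behaviour still holds

if __name__ == "__main__":
    assert run_regression_suite() == []
    print("regression suite green")
```

Adding a row when a bug is fixed is cheap, so the suite grows with the product, and running it on every build catches regressions the day they are introduced rather than during a manual pass.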
5. Maintaining a traditional testing mindset
In a traditional waterfall environment the QA Manager would confirm whether the final build was ready to ship. It was the separate test team who had the best view of the end-to-end product.
In agile, whilst the team (Developers, Testers and Product Owner) should determine whether a build is ready to ship, the PO is the one who signs off based on the business needs and risks.
Agile teams who were previously used to working with a separate testing function may remain stuck in the mindset that the QAs are the quality controllers. This attitude inhibits one of the key drivers of an agile way of working: building in quality. Quality is the responsibility of the entire team; without this realisation, testers tend to become the safety net for development.
- Define 'Done' upfront so that testing is defined as part of the 'doneness' of the story
- Get the whole team to commit to owning the quality of the product
- Ensure testing is happening during the sprint
- Bring up testing issues in the team retro so the whole team can find solutions
6. Not Keeping Up With the Joneses
In waterfall, testers don't receive anything to test until the entire application is code complete, or at least until a large feature set has been developed. In agile, by contrast, testers are expected to test every story as it is finished. This means stories must be small, independent, testable pieces of work.
If stories are constantly being tested in the sprint following the one they were developed in, we know we have a problem. As with the sprintly build issue, developers receive feedback too late, the team's velocity is affected, and the following sprint becomes harder to plan.
- Include testing as part of the story estimation so it becomes part of the team's velocity
- Plan testing and development tasks together
- Ensure stories are small (usually 2-3 days of development) and testable
- Anyone can do testing tasks
- Automate as you go
7. Forgetting the Bigger Picture
Stories are bite-sized chunks of functionality, so it is easy to fall into the mindset that that is all there is and forget they are part of a larger feature, which is in turn part of a larger product.
As a tester, you know you have forgotten the bigger picture when you are finding bugs late in the release, when workflows don't make sense, or when reports aren't developed until the end because they have been forgotten about. Stories don't connect, and pieces of functionality are missing.
- When splitting out a feature into stories, think about workflows
- Think about any impacts to other parts of the system
- Find a way to build 'real world' test data
- Use business facing tests to drive development
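One way to build 'real world' test data that reflects whole workflows, rather than a single stubbed record reused everywhere, is a small data builder that produces records at a named stage of the wider process. The sketch below is illustrative only; the customer fields and workflow stages are assumptions, not taken from any real system.

```python
# A 'real world' test-data builder: tests request a customer at a
# specific point in the wider workflow, so story-level tests still
# exercise realistic cross-story state. All field names are illustrative.

def build_customer(stage="registered", **overrides):
    """Build a customer record at a given workflow stage."""
    base = {
        "name": "Ada Lovelace",
        "email": "ada@example.com",
        "stage": stage,
        "orders": [],
    }
    if stage in ("ordered", "invoiced"):
        base["orders"] = [{"id": "ORD-1", "total_pence": 2499}]
    if stage == "invoiced":
        base["invoice_sent"] = True
    base.update(overrides)
    return base

if __name__ == "__main__":
    # A business-facing check that spans stories: an invoiced customer
    # must always have at least one order.
    customer = build_customer(stage="invoiced")
    assert customer["invoice_sent"] and customer["orders"]
    print("workflow data looks consistent")
```

Because each test names the workflow stage it cares about, gaps between stories (an invoice with no order, a report with no data) surface in the tests rather than late in the release.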
Testing is a fundamental part of software development and it always will be, but to really maximise its effectiveness it needs to be integrated into, and an integral part of, the development function. Moving away from the separate test team and into the agile team is the way forward.