Scrum + Manual Testing methodology (for web apps)
Submitted by Ainars Galvans on Fri, 19/09/2008 - 07:51.
agile
I
believe there are two wrong ways to add formal QA to Scrum: to give up
manual testing skills and switch to automation, or to give up the agile
principle of quick feedback and test one iteration behind. There is a
better way. I have seen some articles circling around it with generic
phrases like collaboration, optimization, and exploratory testing, but
none describing the methodology itself.
What follows is theoretical research (based on various resources)
that tries to define such a methodology. The main idea is to add a new
role, the tester, beside the team, the Scrum master, and the product
owner. The methodology sounds very logical, so why has nothing like
this been published before? Am I missing a lot of issues with it? Is
anyone willing to apply it? Please e-mail ojnjars@inbox.lv if you are
willing to contribute to this research.
Context – Assumptions
To make things easier, we make several assumptions that clarify the
specifics of the development project:
1. The development Scrum consists of two-week sprints.
2. The software is web-based, with data in a database. The tester's
role includes testing the software in a "QA environment", while
developers still run unit tests in their own environment, which is not
as close to the production one.
3. Deployment to the QA environment follows the same manual
installation process that will take place in the production
environment, optionally including a judgment call on whether to
redeploy the data in the database. The process may take up to 30
minutes and is never done this way by developers in their own
environment.
4. Creating the installation package (the build process) may also take
up to 30 minutes, so it is executed once per night (the nightly build).
Day 1. Sprint Planning
During the first day of a sprint, at the sprint planning meeting, the
tester participates in order to help define acceptance criteria (by
asking "what if" questions of the product owner) and to estimate the
testing effort for stories. It may turn out that some stories require
little development effort but a lot of testing effort. Story selection
may then be changed to keep too many stories of that type out of a
single sprint. Alternatively, part of the testing activities may be
postponed by creating a separate story (adding what I would call
"quality debt") and putting it on the backlog.
Day 2. Test Case Drafting
The tester works "one day behind" the team. The second day of a sprint
is dedicated by the tester to drafting an initial set of test cases for
each feature included in the sprint.
Day 3-8. Normal days
Each morning, starting on day 3 of the sprint, the tester installs the
latest nightly build into the QA environment (manually, using the
installation procedure/instructions supplied by the development team)
and runs fast smoke tests; if they fail, the team applies emergency
fixes.
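The morning smoke check could be sketched as a small script along these lines. This is only an illustration of the idea, not part of the original methodology; the base URL and the list of key pages are invented:

```python
import urllib.request
import urllib.error

def smoke_test(base_url, paths):
    """Hit a few key pages of the freshly deployed build.

    Returns a list of (path, problem) pairs; an empty list means
    the smoke test passed and feature testing can start.
    """
    failures = []
    for path in paths:
        url = base_url.rstrip("/") + path
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status != 200:
                    failures.append((path, "HTTP %d" % resp.status))
        except (urllib.error.URLError, OSError) as exc:
            failures.append((path, str(exc)))
    return failures

# Hypothetical QA host and pages; substitute the real ones.
# problems = smoke_test("http://qa.example.local", ["/login", "/search"])
```

If the returned list is non-empty, the tester stops and asks the team for an emergency fix before moving on to yesterday's features.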
Features developed yesterday are tested by the tester today. As
required, the tester communicates with each developer to transfer
knowledge of the feature's functionality. The tester must also update
(complete) the test cases in the process of testing, as more test ideas
may come up during exploratory testing. If testing a feature requires
too much effort, or the tester discovers too many ideas and cannot
finish the testing within the day, they create a new story ticket for
further feature investigation. If the ticket is not closed by the end
of the sprint, it becomes "quality debt" and must be announced to the
product owner for prioritization.
Because developers do unit testing, most features should pass the
acceptance criteria. However, if for any reason they do not pass in the
QA environment, QA may return the ticket to the team. If the acceptance
criteria are met but there are further bugs, the tester reports them
separately and sends the initial story on to end-to-end testing. The
tester also has to investigate more widely (based on the project's
quality goals) and report any usability suggestions or potential
performance, security, or other threats. Such bugs must be assigned to
the product owner if they do not directly break the acceptance criteria
of a story. Bugs that the (developer) team disagrees with, or that turn
out to require too much effort, are also assigned to the product owner.
They have to be addressed during the sprint retrospective meeting. So
it turns out that during the sprint retrospective no feature can be
demonstrated if it has an open bug against it assigned to anyone but
the product owner.
Day 9. The QA day
Because the tester works one day behind, day 8 of the sprint is the
last day of new feature development. That is why the team can dedicate
day 9 to bug fixing, refactoring, and analysis of new features that
might be included in the next sprint, i.e. preparation for the next
sprint.
Day 10. Retrospective meeting
In order for a feature to be demonstrated at the retrospective meeting,
it should be implemented and tested, with all bugs either fixed or
assigned to the product owner for clarification (otherwise the feature
is postponed to the next sprint). The tester should demonstrate that
features meet the acceptance criteria and present any issues assigned
to the product owner. The product owner should decide on and prioritize
those features/defects. The team participates in the meeting to account
for its performance and perhaps counter the tester's negative opinion.
The tester also describes changes to the "quality debt": what was not
tested due to lack of time. The product owner may prioritize the
postponed tests or even allow skipping them.
More QA work after the sprint
Test cases created during the sprint shall be peer-reviewed by another
tester at any later time, for example by an end-to-end tester if there
are any. Ideally the review should be done during, or by the end of,
day 10, so that the tester has the feedback before the next planning
meeting, but that may be impossible. In any case, the tester is not
forced to respond to review feedback (fix the test cases and run
additional tests if any are identified) until the end of the sprint, so
that the commitment the tester made during sprint planning is not
compromised.
If more tests are identified during the review, they must be executed
later, and any defects they discover should be assigned to the product
owner (added to the backlog).
Testing outside sprints
Still, there are types of tests that are not directly related to the
functionality implemented in sprints, for example non-functional tests
that validate that the given architecture satisfies high-availability
requirements. The tester may not have the skills to do performance
tests, in which case a separate team or person may be assigned to
performance testing on an agreed schedule.
Moreover, it is highly desirable to develop system-level automated
regression tests (based on, but not necessarily repeating, the manual
test cases) alongside the developer-created unit tests. Those tests
target a different type of regression bug. This activity may also take
place with a significant delay after feature implementation.
The tester should make a judgment call to nominate a build that was
especially successful (without critical bugs) to be installed into a
special environment, such as a performance-testing or test-automation
environment.
From: http://www.testingreflections.com/node/view/7378