Abstract

In the past few years, I have coached several clients and numerous projects on Agile management methodology. I was asked the same question many times: can we track QA (Quality Assurance) work the way we track development? In other words, can we capture all QA activities in a backlog, do iteration and release planning based on the QA staffing plan and an assumed QA velocity, and generate QA status reports such as burn-up or burn-down charts? Before answering these questions, my first response would be: why? QA is undoubtedly a core part of an Agile project. Suppose the developers achieve their scheduled throughput per iteration; does that mean the project will be released as planned? Not necessarily. What if QA cannot finish testing all the stories from previous dev iterations? What if QA has to support interim release testing? What if QA is pulled into activities other than iteration testing? What if the QA team mixes onshore and offshore members and has to support multiple dev teams at the same time? All of these questions lie behind the very first one: "Can we track QA as we do with our Dev?"

The first project I worked on was for an investment banking client whose QA team of six people (four onshore, two offshore) supported five development teams of almost 30 developers. The team faced several issues: there was no QA estimation per story; QA spent a lot of time on untracked non-story activities; and the QA-to-developer ratio was 1:5. The second team I worked with was the QA team of an e-commerce retailer's IT department: three people supporting six developers. This team conflated dev velocity with QA velocity. By their definition, velocity was the amount of work that passed UAT in an iteration, measured in dev estimation points, while iteration planning took only dev velocity into consideration. The QA estimate for each story was well off the mark.
As with the first team, QA members were regularly pulled away to support release testing and other non-project activities. We helped both teams by adopting a tracking mechanism similar to the one used for the dev teams. We first created a complete QA backlog containing all the stories plus the non-story QA tasks, and for each story we indicated which dev iteration and which QA iteration it was scheduled in. We then produced a QA estimate for each story using the "triangulation" rule, separated QA velocity from dev velocity, planned QA iterations as we did for dev, and provided a QA status report and burn-up chart per iteration. Both projects achieved good results.
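The mechanism above can be sketched in a few lines of code. The sketch below is purely illustrative and assumes hypothetical item names, point values, and iteration numbers not taken from the paper; it shows only the idea of a QA backlog with its own estimates, a QA velocity computed separately from dev velocity, and cumulative totals suitable for a burn-up chart.

```python
# Illustrative sketch of a QA backlog tracked separately from dev.
# All names and numbers are hypothetical, not from the projects described.

from dataclasses import dataclass

@dataclass
class QaItem:
    name: str
    points: int          # QA estimate (e.g. triangulated), not the dev estimate
    qa_iteration: int    # QA iteration the item is scheduled in
    done: bool = False

backlog = [
    QaItem("story-101", 3, 1, done=True),
    QaItem("story-102", 5, 1, done=True),
    QaItem("release-regression", 8, 2),        # non-story QA task, tracked too
    QaItem("story-103", 5, 2, done=True),
]

def qa_velocity(items, iteration):
    """Points of QA work completed in a given QA iteration."""
    return sum(i.points for i in items if i.qa_iteration == iteration and i.done)

def burn_up(items, through_iteration):
    """Cumulative completed QA points per iteration, for a burn-up chart."""
    return [sum(qa_velocity(items, n) for n in range(1, k + 1))
            for k in range(1, through_iteration + 1)]

print(qa_velocity(backlog, 1))   # 8
print(burn_up(backlog, 2))       # [8, 13]
```

Keeping non-story tasks (such as the regression run above) in the same backlog is what makes the QA burn-up honest: unfinished interim work stays visible instead of silently eroding the planned velocity.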

