Building Quality In
In waterfall development, there is little opportunity for early testing. Most if not all of the delivery happens near the end of the project, and that is a really bad time to find out that we have requirements issues, quality issues, performance issues, deployment issues, etc. One of the great advantages of agile is its ability to deliver early and often. This early delivery capability lets us start testing much sooner than usual. Early and frequent testing is critical to quality, and agile's support for this early delivery model is one reason why agile projects often have lower production defect counts than systems built using traditional methods. But what happens when you test early and start to find a lot of defects? How should you handle that situation? Or should you even try to handle it? Perhaps it is simply to be expected given the fast delivery pace that agile demands. We think not. We are of the opinion that ...
A. Defects are very costly and are to be avoided where possible
B. Lean Thinking teaches us to build quality in
C. Many defects are actually the result of miscommunications and misunderstandings
D. There are simple techniques that can help
A. Defects Are Expensive
Let's walk through the sequence of events when a typical software defect is found.
1. The developer writes code that isn't going to pass testing and delivers it to the test environment
2. A tester finds the defect
3. The tester logs the defect in the ticket system
4. The developer tries to re-create the defect
5. Once found, the developer further investigates it
6. The developer, tester, and business analyst now have detailed discussions about how the feature should work
7. The developer re-codes, re-unit-tests, re-builds, and re-delivers the fix to the test environment
8. The tester verifies that the bug has been resolved and closes the ticket.
Wow! That's a lot of activity, and it takes up a lot of time for even the most trivial bugs. If teams are producing a lot of defects, then this represents a huge 'quality debt' (kind of like technical debt) that will need to be paid back out of the velocity of future sprints. Either the team will need to hold back some amount of time from future sprints to work on defects or they will need to add additional sprints to deal with the quality backlog. So what can we do with our teams to improve this situation?
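To make that drag on future sprints concrete, here is a trivial back-of-the-envelope model. All of the numbers are illustrative assumptions, not measurements from a real team:

```python
# Back-of-the-envelope model of 'quality debt': rework on escaped
# defects is paid out of the velocity of future sprints.

def effective_velocity(raw_velocity, defects_per_sprint, hours_per_defect,
                       hours_per_point):
    """Points actually available for new features once defect rework
    (find, log, reproduce, discuss, re-code, re-test, verify) is paid."""
    rework_points = defects_per_sprint * hours_per_defect / hours_per_point
    return raw_velocity - rework_points

# A team that nominally does 30 points per sprint, but lets 12 defects
# escape, each costing ~5 hours across the whole find/fix/verify cycle:
print(effective_velocity(30, 12, 5, 6))  # 20.0 -> a third of capacity lost
```

Plug in your own numbers; even modest defect counts eat a surprising share of a sprint once every step of the find/fix/verify cycle is counted.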
B. Lean Thinking and Quality
There are a couple of different approaches to Quality in general. One way to deal with quality is to simply let defects happen and then rely upon quality control and testing to find the defects. This tends to be the standard approach. We don't think too much about quality until we enter the testing cycle and then we rely upon our testers to find the issues. The problem with this approach is that it is very expensive in terms of time as we have already discussed. Each defect will need to go through all of the steps that we outlined above. And who knows whether or not we will actually find all of the defects.
The Lean world has a different approach; build quality in and create continuous flow. In this model, we do everything that we can to keep defects from entering the system in the first place. By putting some additional controls in the up-stream process steps, and by giving the team members the ability to "stop the line", we can keep many of the defects from ever entering the product in the first place. This tends to greatly improve quality and also improve schedule performance since defects are so costly in terms of time. It also aids in continuous flow since you cannot get reliable forward flow when you are always having to go backwards to fix things. This is the approach that we will further explore. In order to keep defects out, we will need to think a bit about where defects come from.
C. Many defects are the result of miscommunications and misunderstandings
The US Constitution is only about 4,000 words long. But to interpret it, we need 9 of the brightest and most educated legal scholars in the land who, at the end of the day, still cannot generally agree on what it means. These 9 people can read exactly the same document and come away with any number of interpretations, none of which exactly match each other. The same thing happens on software development projects. We can write the most beautiful, detailed, and highly elaborated requirements document the world has ever seen, and yet I am sure that if we gave the same document to 4 different software developers, 3 different testers, an architect, and a handful of end-users, we would come up with a wide array of interpretations. So while the document may be of some importance in and of itself, it is, in the end, insufficient. This is one of the reasons why we value people and interactions over documents!
Many of the things we end up calling software defects are really just misunderstandings about requirements. The BA writes the requirement with one meaning in mind, the developer interprets it slightly differently, the tester interprets it in yet another way, and who knows whether any of them understand it in the way that the end-users want to see it work. And the funny thing is that the developer's unit tests will always pass because the tests are written against the developer's own assumptions!
The User Interface is another source of a myriad of little 'defects'. It is notoriously difficult if not impossible to write a UI requirements specification that completely and accurately captures every nuance of the user experience. I don't know that we yet have the tools and taxonomy to adequately communicate the user experience. So it is not uncommon to see a plethora of UI related defects for any system involving new UI development.
Another area of concern is that developers actually have at least 3 sets of requirements but they are typically only given 1 of them up front!
1. The Requirements Document or User Story: Yep, the developer usually has something to work with here.
2. The Tester's Test Plan / Approach: It is rare for the developer to see this in advance, yet the code has to pass testing in order to be released
3. The Customer's Acceptance Plan: It is rare for the developer to see this in advance too, yet we usually ask our product owners to 'accept' the solution before it goes into production
It is interesting that there are at least 3 sets of "requirements" that must be met and yet we tend to only have one of them defined in advance of development. The agile community is certainly getting better in terms of trying to define "acceptance criteria" up front but we still have a long way to go.
So how do we deal with all of this requirements uncertainty?
D. There are simple techniques that can help
I had the great pleasure of working with Jeff "Cheesy" Morgan a while back on a consulting engagement and we worked together to put some defect prevention steps in place that I will outline here. This is just a sampling of what is possible and there are certainly other techniques out there. The goal behind all of these tools is to either keep defects from entering the sprint in the first place or to at least keep defects from escaping the sprint once they are found.
1. Requirements Maturity Definition
If you've been in the agile community for any length of time, then you have certainly seen your fair share of inadequate requirements. And even though we know that the requirements themselves will be insufficient, some level of requirements maturity is desirable. Some teams will create a "definition of done" for requirements that can help to make sure that poorly thought-out requirements do not enter the sprint and end up being the source of a bunch of defects. For example, a story might need to pass the following basic criteria before being allowed into the sprint.
- It is testable and estimable
- The story has been decomposed to the point where it is less than N points in size
- The basic UI elements have been defined and are available
- The basic business rules have been defined and are available
- The customer's acceptance tests are defined
By defining a basic level of maturity for the requirements themselves, we can help to ensure that enough details are known that the team has a reasonable likelihood of success in delivering the feature.
2. The Three Amigos
As we have discussed, even the best of requirements are open to interpretation. A practice that I like and that is growing is the use of what we call "The 3 Amigos Meeting". The 3 Amigos is an informal discussion between the person who wrote the requirement, the person who is going to be doing the coding, and the person who is going to be testing the feature. These 3 roles get together and discuss the story in some detail to make sure that they are all on the same page about what the requirement means, what the special cases are, how we will test the feature, how it should behave under special circumstances, how error conditions should be handled, etc. At the end of the discussion, we should have a much richer understanding of how the feature is going to work and, most importantly, the 3 Amigos will all be of the same mind with regards to how the feature is going to work. The result should be greater understanding, less confusion, fewer misunderstandings, and fewer defects. We have this discussion for each feature/story in the sprint prior to development.
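One useful output of a 3 Amigos discussion is a shared table of concrete examples, including the boundary and error cases, that all three roles read and that the tests are driven from. A minimal sketch, assuming an invented "order discount" story:

```python
# Shared examples agreed in a 3 Amigos discussion for a hypothetical
# "order discount" story: BA, developer, and tester all read this table.

def discount(order_total, is_member):
    """Agreed rule: members get 10% off orders of $100 or more;
    everyone else pays full price. Negative totals are rejected."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if is_member and order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total

# given (total, is_member) -> then expected price
AGREED_EXAMPLES = [
    ((100.00, True),  90.00),   # boundary case: exactly $100 qualifies
    (( 99.99, True),  99.99),   # just under the boundary: no discount
    ((150.00, False), 150.00),  # non-members never get the discount
]

for (total, member), expected in AGREED_EXAMPLES:
    assert discount(total, member) == expected
print("all agreed examples pass")
```

Because all three amigos agreed on the boundary cases up front, the "defect" where the developer assumed the boundary was exclusive and the tester assumed it was inclusive simply never happens.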
3. Test Driven Development
Test Driven Development is a powerful technique that turns the whole quality issue upside down; quality comes first. When we define in some detail how we are going to test a piece of functionality, what the test cases will be, what the data inputs will be, what the expected outputs will be, etc., we are actually specifying the true requirements in great detail. When we then code to the test plan, we are really coding to the detailed requirements. Not only do we typically get better quality, we also get automated test procedures and detailed testing plans. TDD is a powerful practice that, while difficult for many teams to implement, typically pays off in many ways.
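The rhythm looks something like this tiny sketch (the shipping rule is invented for illustration): the test is written first, fails because the code does not yet exist, and then just enough code is written to make it pass:

```python
import unittest

# Step 1: write the test first -- it *is* the detailed requirement.
# Agreed rule for this sketch: a flat $5 shipping charge applies,
# waived for orders of $50 or more.
class ShippingCostTest(unittest.TestCase):
    def test_small_order_pays_flat_shipping(self):
        self.assertEqual(shipping_cost(20.00), 5.00)

    def test_large_order_ships_free(self):
        self.assertEqual(shipping_cost(50.00), 0.00)  # boundary is inclusive

# Step 2: run the tests and watch them fail (shipping_cost doesn't exist yet).
# Step 3: write just enough code to make them pass, then refactor safely.
def shipping_cost(order_total):
    return 0.00 if order_total >= 50.00 else 5.00

if __name__ == "__main__":
    unittest.main(exit=False)  # both tests now pass
```

Note that the boundary behavior ($50 ships free) was pinned down in the test before any code existed, so there is no room for the developer's assumptions to drift from the requirement.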
4. In Flight UI Review
How often do you perform a demo to your stakeholders at the end of an iteration only to get a hundred little comments/change requests related to the user interface? This can be maddening for team members and clients alike as they try to identify, manage, and resolve all of the little things that often come up as a part of UI reviews. The UI is notoriously difficult to get right on the first pass; it is an inherently iterative activity that begs for a more collaborative process. We used to have a saying that I still like very much: "The demo should not be the first time that the customer has seen the feature!". While Scrum does demand that we perform product demos after each sprint, that doesn't mean that we have to wait until the end of the sprint to get feedback. For UI work especially, we should be collaborating with our product owners frequently throughout the sprint to catch all of those little subtle issues/changes that could be addressed right on the spot. Waiting until the end of the sprint to catch these can result in a truckload of annoying little change requests that could have been avoided.
5. Pair Programming
Like TDD, pairing is a powerful technique that pays back benefits in a variety of ways. And like TDD, it can be difficult to implement. There is a lot of existing bias against pairing, but if you can get it going, you will likely see improvements in product quality, improvements in code quality, expanded understanding of the product and the technologies by the team, cross training, elimination of single team-member dependencies, shared code ownership, etc.
The traditional approach to quality is to let bad things happen and then use various QA activities to find those defects and tease them back out again. We let bad design happen and then use design reviews to find the problems; these will need to be fixed resulting in rework. We let bad code happen and then use code reviews to find the bad coding practices; these will need to be fixed resulting in rework. We let defects enter the system and then we use testing to find the defects which will need to be fixed resulting in rework. You get the idea. The Lean model is to find where defects are entering the system and to put what are often simple steps in place to help to keep defects from ever entering the system in the first place.