The testing maturity grid concept
This is a concept I first encountered in product development, where I found it very useful for deciding whether a product was ready for release to the market. The same or similar principles can be applied to software development of any kind, whether typical IT application development, embedded software or anything else.
The basic principle is to display the number of problems/issues/bugs in a project in grid form. The "look" of the grid tells you, in a very simple way, whether your project is ready.
The grid has two axes: the first is the step in the bug workflow, the second is the severity of the bug. The typical categories are:
Bug workflow (inspired by JIRA, one of the widely used issue tracking systems):
- Opened
The bug has been entered by a beta user, a developer or a tester. It has not yet been analyzed or allocated (it can evolve to the in-progress status, or be rejected, counted as a duplicate, accepted, deferred or marked non-reproducible).
- In progress
After a first analysis, the bug has been allocated and is currently being solved, but the final solution has not been implemented yet.
- Resolved
The bug has been solved by the person it was allocated to and has typically been verified by the solver.
- Closed
The solution has been validated by an independent tester or the product owner.
- Other end states
These states are equivalent to a closed state from a bug tracking point of view. In this category (they can be recorded separately) you can find states such as: rejected (not an issue), duplicate (already recorded), non-reproducible (possibly not an issue), accepted (we and/or the customers don't care), deferred/delayed (left for a later release/version of the application, so irrelevant to this instance of the project).
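As an illustration only (not part of the original concept), these workflow states could be modelled in a few lines of Python; the names below are assumptions chosen for this sketch and would normally mirror whatever states are configured in the issue tracker.

from enum import Enum

class WorkflowStatus(Enum):
    # Active states: the bug still needs work or verification.
    OPENED = "Opened"
    IN_PROGRESS = "In progress"
    RESOLVED = "Resolved"
    # Validated by an independent tester or the product owner.
    CLOSED = "Closed"
    # Other end states, equivalent to closed from a tracking point of view.
    REJECTED = "Rejected"
    DUPLICATE = "Duplicate"
    NON_REPRODUCIBLE = "Non reproducible"
    ACCEPTED = "Accepted"
    DEFERRED = "Deferred"

# End states other than Closed, grouped as one row ("Other end state") in the grid.
OTHER_END_STATES = {
    WorkflowStatus.REJECTED,
    WorkflowStatus.DUPLICATE,
    WorkflowStatus.NON_REPRODUCIBLE,
    WorkflowStatus.ACCEPTED,
    WorkflowStatus.DEFERRED,
}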
Bug severity
- Safety/security issues
The product cannot be used by actual users, in most cases not even for testing purposes. For embedded software this can mean safety issues up to life-threatening ones (e.g. brakes not working in a car); for IT software it can be a major security issue (e.g. gaining access to strictly confidential information, with legal consequences...).
- Unsellable
A required functionality is not available or not functioning properly, making the use of (part of) the delivered product impossible (e.g. a banking application where payments don't work).
- Major
A problem a normal user would find critical or even unacceptable, but which does not make the application completely unusable (e.g. the software sometimes hangs but can be restarted).
- Minor
Even a critical user would tolerate the issue; it in no way blocks the use of the application or the product (e.g. a screen layout is not exactly as it should be).
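The severity levels can be sketched the same way, for example as an ordered enumeration; the numeric ordering is an assumption of this sketch (higher meaning more severe).

from enum import IntEnum

class Severity(IntEnum):
    # Higher value = more severe; the ordering is an assumption for this sketch.
    MINOR = 1
    MAJOR = 2
    UNSELLABLE = 3
    SAFETY_SECURITY = 4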
The maturity criteria also depend on the project phase:
- Inception
- Elaboration
- Realization
- Transition
The grid
Status \ Severity | Safety/Security | Unsellable | Major | Minor
Opened            | 10              | 50         | 75    |
In progress       | 45              | 15         | 10    |
Resolved          | 20              | 10         | 5     |
Closed            | 2               | 10         | 10    | 5
Other end state   | 1               | 20         | 5     | 15
Even the first limited prototypes, made available to demonstrate limited functionality, must not show safety or security issues. Nothing can be released, even to developers or to internal project testing, with bugs in the red area. The orange area can be acceptable during the realization phase: all types of issues except the safety/security ones can still be discovered and addressed there.
After the realization phase, the product/application is released to the final testing group (system testing, functional testing, beta testing), typically in a so-called acceptance environment. At that point, problems can still be in the yellow zone. When the product is released to actual users/customers, typically in a so-called production environment, only the green area is acceptable.
The numbers indicated in the grid are the number of bugs/issues in a given category/state; the example shown gives the status during the implementation (realization) phase.
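To make the tallying concrete, here is a minimal sketch of how issues exported from a tracker could be counted into such a grid. It reuses the WorkflowStatus, Severity and OTHER_END_STATES names from the sketches above, and the (status, severity) input format is an assumption for this example.

from collections import Counter

def build_grid(issues):
    # `issues` is assumed to be an iterable of (WorkflowStatus, Severity) pairs,
    # e.g. extracted from an issue-tracker export.
    grid = Counter()
    for status, severity in issues:
        # Fold all "other" end states into a single row, as in the grid above.
        row = "Other end state" if status in OTHER_END_STATES else status.value
        grid[(row, severity)] += 1
    return grid

# Example: one opened unsellable bug and one rejected minor bug.
issues = [(WorkflowStatus.OPENED, Severity.UNSELLABLE),
          (WorkflowStatus.REJECTED, Severity.MINOR)]
grid = build_grid(issues)
print(grid[("Opened", Severity.UNSELLABLE)])   # -> 1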
Other information
The above is one specific interpretation of the concept: each organization can determine for itself the steps in the workflow, the severity scale, and the release and quality criteria for each phase of a project. The criteria may also depend on the domain (medical devices, automotive, telecommunications, IT, public institutions...).
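One possible way to encode such organization-specific criteria is as per-phase limits on the number of unresolved bugs per severity. The sketch below builds on the grid code above; the gate names and the limits are invented purely for illustration and would be set by each organization.

# Maximum number of unresolved bugs (Opened + In progress) tolerated per
# severity at each release gate. Illustrative values only.
RELEASE_LIMITS = {
    # During realization, open safety/security issues are blocking (red zone).
    "Realization": {Severity.SAFETY_SECURITY: 0},
    # For release to acceptance testing, unsellable issues block as well.
    "Acceptance": {Severity.SAFETY_SECURITY: 0, Severity.UNSELLABLE: 0},
    # For release to actual users (production), major issues block too.
    "Production": {Severity.SAFETY_SECURITY: 0, Severity.UNSELLABLE: 0,
                   Severity.MAJOR: 0},
}

def ready_for(gate, grid):
    # Check the grid (from build_grid above) against the limits for a gate.
    for severity, limit in RELEASE_LIMITS[gate].items():
        open_count = grid[("Opened", severity)] + grid[("In progress", severity)]
        if open_count > limit:
            return False
    return True

# Example usage: print(ready_for("Production", build_grid(issues)))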
Another example of the same ideas can be found here: