
Bug Triaging Best Practices

3 minute read

Overview

Test cycles are most productive when testers have a clear understanding of the value of the bugs you are expecting. Testers are incentivized to produce valuable bugs, so when those bugs align with the expectations of the cycle, they will produce more bugs of higher value. In addition, every tester has a rating that ranks them among others in the community for future testing assignments. Every approved bug improves a tester’s rating – with boosts for higher-value bugs; every rejected bug (with the exception of WAD) hurts their rating.

 

Approvals

To set the expectation with testers as to what constitutes a valuable bug, consider the scope of the cycle and what type of issues would have the greatest impact on quality. The three value ratings are Somewhat, Very, and Exceptionally Valuable. A useful guide to judging the value of a bug is to ask yourself two questions:

  1. Does this bug fall in line with the expectations of the cycle? If you are looking for minor GUI issues and a tester finds minor GUI issues, the value in the grand scheme may be low, but for that cycle it may be high.
  2. Does this bug impact my application? Aligning bug value to severity is a straightforward way to assign value to a bug. Simply put, if a tester reports a critical bug (repeatable crash, 404 errors, etc.), assigning it a value of Exceptionally or Very Valuable is a useful way to tell the tester, “I want to see more of this.”

When the answer to both of the above questions is generally yes, but you are either unable to reproduce the issue yourself (remember, the TTL also does this as part of triage) or you cannot prioritize the issue to be fixed, designating the bug as Won’t Fix will still credit the tester without harming their reputation; it also reinforces positive testing behavior.
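The two-question guide above can be sketched as a simple decision helper. This is a hypothetical illustration only – the rating names (Somewhat, Very, Exceptionally Valuable) come from this article, but the function name, parameters, and severity labels are assumptions, not part of any platform API.

```python
# Hypothetical sketch of the two-question value guide.
# Severity labels ("critical", "major") are illustrative assumptions.

def suggest_value(in_cycle_scope: bool, severity: str) -> str:
    """Suggest a bug value rating from the two triage questions."""
    # Question 1: does the bug fall in line with the cycle's expectations?
    if not in_cycle_scope:
        return "Somewhat Valuable"
    # Question 2: does the bug impact the application? Align value to severity.
    if severity == "critical":      # repeatable crash, 404 errors, etc.
        return "Exceptionally Valuable"
    if severity == "major":
        return "Very Valuable"
    return "Somewhat Valuable"

print(suggest_value(True, "critical"))  # Exceptionally Valuable
```

In practice the judgment is yours per cycle; a helper like this only makes the reasoning explicit and repeatable across triagers.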

Want more information on best ways to manage bug approvals? Check out our article on Tips for Managing Bug Approvals.

 

Rejections

When a bug is not in scope, is a duplicate of something you’ve communicated to testers (or of an issue already reported), is not an actual bug, or does not follow proper guidelines, a rejection is appropriate. A rejection for any reason other than “Works as Designed” will impact the tester’s rating. A rejection sends the message to a tester to discontinue that line of testing and can correct behavior.

NOTE: There is no “Cannot reproduce” rejection reason – see: “Won’t Fix” for that scenario.

 

Rejection Reasons

If testers are provided a clear reason for a rejection, they learn your product and priorities better for future cycles. Rejections can be a great learning tool for testers, but the lack of a clear rejection reason, or mis-classified rejections, can dampen participation.

WHEN TO USE:

  • WAD (Works as designed):
    • Tester did not know the specifications and took a guess.
    • The bug reported is not a true bug and is simply how the system works (e.g. “Team Roster is sorted alphabetically by name and not by position”).
    • Can be used when a known issue has not been fully communicated to the testing team and it is reported in the cycle (this would otherwise be a duplicate).
  • DID NOT FOLLOW INSTRUCTIONS:
    • Tester ignored clear instructions which affected the outcome or made the report unusable.
    • These issues can often be solved by using the tester messenger instead of reject.
  • OUT OF SCOPE:
    • A bug is logged against something explicitly stated in the Out of Scope section of the test cycle. (Testers will assume that if an area is not mentioned in the out-of-scope section, and not excluded anywhere else, issues found there are valid bugs; it is therefore best to be explicit in both the scope and out-of-scope sections.)
    • If the scope is narrowly defined and reason dictates that the bug would generally be considered out of scope (e.g. the scope is to test the search results and a bug is reported against the Terms of Service link), then this reason is acceptable.
  • DUPLICATE:
    • Issues filed in same cycle that the tester could have found by searching.
    • Issues that were in a known issue list provided in the cycle.
      • When K.I. lists are extremely large, lacking in detail, or so jargon-laden that the issue would have been hard to find, it is often better to give testers the benefit of the doubt for their efforts and reject these as WAD instead.
  • OTHER: None of the above.
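The decision flow above can be summarized in a short sketch. This is an illustrative assumption, not a platform feature: the function name and the report fields are hypothetical, and only the reason labels and their ordering reflect this article.

```python
# Illustrative sketch of the rejection-reason decision flow.
# All field names in the report dict are hypothetical.

def rejection_reason(report: dict) -> str:
    """Pick a rejection reason per the guidelines above."""
    # WAD is the only reason that does not hurt the tester's rating;
    # it also covers known issues that were poorly communicated.
    if report.get("works_as_designed") or report.get("known_issue_poorly_communicated"):
        return "WAD"
    if report.get("ignored_instructions"):
        return "DID NOT FOLLOW INSTRUCTIONS"
    if report.get("out_of_scope"):
        return "OUT OF SCOPE"
    if report.get("duplicate_in_cycle") or report.get("in_known_issue_list"):
        return "DUPLICATE"
    return "OTHER"

print(rejection_reason({"out_of_scope": True}))  # OUT OF SCOPE
```

Note that checking WAD first mirrors the guidance above: when in doubt (e.g. a large, hard-to-search known-issue list), giving the tester the benefit of the doubt with WAD avoids an undeserved rating hit.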