As testing is executed by the Applause Community, testers submit issues as they encounter them. Through the triage process we – first your TTL, then potentially other team members, and even you – ensure issues are documented at the right level of detail to allow your teams to attend to, prioritize, and ultimately fix them. “Good” issues are approved, “bad” or irrelevant issues are rejected, questionable issues are returned to the testers for clarification, and so on. Learn more about best practices for issue triage here. The percentage of approved issues out of those submitted by the testers is often a key metric for assessing the Applause team’s work and, ultimately, the value you see from implementing Applause’s crowdtesting model. Over time, trends in the issue approval rate may draw your attention, especially when you see a significant decrease in the number of approved issues. Such a shift is likely to be reflected in your build’s Applause Quality Score – and not always for the better – and you may find yourself asking why, and what to do next.
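To make the metric concrete, the approval rate is simply the approved share of all submitted issues, expressed as a percentage. A minimal sketch (the function name and counts here are illustrative, not part of any Applause tool or API):

```python
def approval_rate(approved: int, submitted: int) -> float:
    """Percentage of submitted issues that were approved during triage."""
    if submitted == 0:
        return 0.0
    return 100.0 * approved / submitted

# Example: 36 of 60 submitted issues approved in a test cycle
print(f"{approval_rate(36, 60):.1f}%")  # → 60.0%
```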
First, it is important to acknowledge that the issue approval rate is the symptom, not the cause. Granted, a drop in the approval rate can be frustrating, not only because you are seeing fewer meaningful issues to work with to improve your product, but also because of the time and effort wasted triaging issues that are later rejected. At the same time, it can be embraced as a great opportunity for improvement, as it may uncover flaws in the alignment between you and your Applause team that could impact future tests. And quite often, fixing some of these flaws is quick and simple.
Here are a few directions for investigating potential reasons for a reduced approval rate:
- How clear are the directions provided to the Applause team on testing scope and goals? When many of the rejected issues are marked as “Out of scope” or “Did not follow instructions”, especially when spread across multiple testers, it often means the instructions are not clear to them.
- Was an up-to-date list of known issues provided to the Applause team prior to testing? When many of the rejected issues are marked as “Duplicate” it often means the testers are unaware of previously found issues you already prioritized.
- How clear are the release notes provided to the Applause team in describing how the product and/or new functionality will be used by the end users? When many of the rejected issues are marked as “Works as designed” it often means the intended use and benefits of the new functionality are not clear enough.
- Was there a recent change in personnel – specifically in the interface between you and the Applause team – that may have resulted in lost knowledge or misaligned perspectives? Changes are inevitable, yet documentation and knowledge transfer are key to ongoing success.
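The diagnostic questions above can be made concrete by tallying rejection reasons over a recent test cycle: a dominant reason usually points to one specific fix. A minimal sketch, assuming you have exported rejected issues with their rejection labels (the labels and data shape are illustrative, not a prescribed Applause export format):

```python
from collections import Counter

# Hypothetical export: one rejection reason per rejected issue
rejected_issues = [
    "Out of scope", "Duplicate", "Out of scope",
    "Works as designed", "Did not follow instructions",
    "Out of scope", "Duplicate",
]

reasons = Counter(rejected_issues)
for reason, count in reasons.most_common():
    print(f"{reason}: {count}")

# Many "Out of scope" rejections suggest unclear test instructions;
# many "Duplicate" rejections suggest a stale known-issues list;
# many "Works as designed" rejections suggest unclear release notes.
```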
Whatever the issue, you are advised to collaborate with your Applause team to further troubleshoot the causes of the decreased issue approval rate and identify improvement opportunities in your processes and testing strategy.