What to Do When There Are Unexpected Testing Results

Some suggestions on what to do when there are unexpected testing results.

As testing is executed by the Applause Community, trends in the number of submitted issues may draw your attention over time, especially when you see significant changes. Such a shift in the number of issues is likely to be reflected in your build’s Applause Quality Score – and not always for the better. You may find yourself asking why, and what to do next.

First, it is important to acknowledge that the submitted issues are the symptom, not the cause. While a higher number of submitted issues intuitively hints at greater quality problems with the product, it may also mean that the build was simply tested differently than before. Further investigation is therefore warranted.

It is always good practice to continually revisit and optimize the testing strategy, and it is especially important to do so when there is a reduction in submitted issues with no apparent improvement in product quality to explain it. This will increase confidence in the executed testing, ensure nothing is overlooked and prevent issues from escaping.

Whatever the cause, you are advised to collaborate with your Applause team to troubleshoot further.

Investigating the Decrease in Overall Issue Value
  • Were there any recent changes to the development process that may have resulted in improved quality? Such changes may be in processes, personnel or tools and are clearly desirable. While such changes tend to have a more gradual impact, a sudden reduction in reported issue value is certainly a possible result as well

  • Was there a change in the way testing instructions were composed and/or delivered to the testers? Lacking, unclear or confusing instructions not only prevent testers from fully understanding your product, use cases and their impact, but may also discourage the testers from participating in future testing of your product

  • Was there a recent change in personnel – specifically in the interface between you and the Applause team – that may have resulted in lost knowledge or misaligned perspectives? Changes are inevitable, yet documentation and knowledge transfer are key to ongoing success

  • How clear are the release notes provided to the Applause team in describing how the product and/or new functionality will be used by the end users? Value is often subjective and members of your Applause team need to understand what you care about

When the Product Quality Resulted in the Increase in Submitted Issues for a Specific Component
  • Is the product component disproportionately represented in the scope of new functionality delivered in the tested build? New functionality tends to be more prone to errors in design and implementation before the feature stabilizes

  • Were there any time or other constraints that dictated rushing the build through internal processes while skipping critical steps such as code reviews and unit tests?




If You Sense that the Testing Strategy Resulted in the Increase in Submitted Issues for a Specific Component
  • Was testing scope expanded beyond what is executed regularly to focus on the component? Clearly, testing for more time and/or covering more product functionality increases the probability of finding more defects

  • Is there a specific issue that’s blocking the testers from properly testing the product? Oftentimes, matters like incorrect access permissions, bad test data, a malfunctioning service or other infrastructural aspects of the testing environment may present themselves to the testers as software defects

  • Has the list of known issues – specifically those known for the product component – been changed or not kept up-to-date prior to testing? It might be that the testers are unaware of previously-found issues you already prioritized

When the Product Quality Resulted in the Increase in Overall Issue Priority
  • Were there any recent changes to the development process that may have resulted in poor quality? Such changes may be in processes, personnel or tools

  • Was testing executed at an earlier stage in the Software Development Life Cycle? Later-stage builds are expected to have better quality and see lower issue priority

  • Is the increase in overall issue priority attributable to a handful of “P1” issues or to a large number of “P2” and “P3” issues? Clearing your release blockers should have a greater positive impact on the overall priority than fixing the same number of lower-priority defects

  • Similarly, can you attribute the overall issue priority to infrastructural and/or backend changes made recently? It is not uncommon for even small infrastructural changes to result in high-priority issues
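To check whether a priority shift is driven by a few blockers or by many lower-priority issues, a quick breakdown of the submitted issues can help. Below is a minimal sketch in Python, assuming a hypothetical issue export with a `priority` field per issue; the actual field names and priority labels will depend on how you export data from your issue tracker.

```python
from collections import Counter

# Hypothetical export of submitted issues; in practice this would be
# loaded from your issue tracker's CSV/JSON export.
issues = [
    {"id": 101, "priority": "P1"},
    {"id": 102, "priority": "P2"},
    {"id": 103, "priority": "P2"},
    {"id": 104, "priority": "P3"},
    {"id": 105, "priority": "P1"},
]

# Count issues per priority level to see what drives the overall shift.
breakdown = Counter(issue["priority"] for issue in issues)
for priority in sorted(breakdown):
    print(f"{priority}: {breakdown[priority]}")
# Prints: P1: 2, P2: 2, P3: 1
```

A couple of P1 issues dominating the list points at release blockers; a long tail of P2/P3 issues points at broad, lower-impact quality erosion. The same breakdown works for severity labels such as "Critical", "High" and "Medium".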




If You Sense that the Testing Strategy Resulted in the Increase in Overall Issue Priority
  • Was there a change in the way testing instructions were composed and/or delivered to the testers? Testers may inaccurately represent the severity of issues as they document them without a proper understanding of your product, use cases, and their impact. Note that while the severity might be updated to more accurate values during triage, such frequent changes might discourage the testers from participating in future testing of your product

  • Was there a recent change in personnel – specifically in the interface between you and the Applause team – that may have resulted in lost knowledge or misaligned perspectives? Changes are inevitable, yet documentation and knowledge transfer are key to ongoing success

  • Is the scope of new functionality delivered in the tested build significantly different than usual? Increased complexity in the requirements, and “big” new features in general, may result in testers misestimating the impact of issues

When the Product Quality Resulted in the Increase in Overall Issue Severity
  • Were there any recent changes to the development process that may have resulted in poor quality? Such changes may be in processes, personnel or tools

  • Was testing executed at an earlier stage in the Software Development Life Cycle? Later-stage builds are expected to have better quality and see lower issue severity

  • Is the increase in overall issue severity attributable to a handful of “Critical” issues or to a large number of “High” and “Medium” issues? Clearing your release blockers should have a greater positive impact on the overall severity than fixing the same number of lower-severity defects

  • Similarly, can you attribute the overall issue severity to infrastructural and/or backend changes made recently? It is not uncommon to have even small infrastructural changes resulting in severe issues identified during testing




If You Sense that the Testing Strategy Resulted in the Increase in Overall Issue Severity
  • Was there a change in the way testing instructions were composed and/or delivered to the testers? Testers may inaccurately represent the severity of issues as they document them without a proper understanding of your product, use cases, and their impact. Note that while the severity might be updated to more accurate values during triage, such frequent changes might discourage the testers from participating in future testing of your product

  • Was there a recent change in personnel – specifically in the interface between you and the Applause team – that may have resulted in lost knowledge or misaligned perspectives? Changes are inevitable, yet documentation and knowledge transfer are key to ongoing success

  • Is the scope of new functionality delivered in the tested build significantly different than usual? Increased complexity in the requirements, and “big” new features in general, may result in testers misestimating the impact of issues

When the Product Quality Resulted in the Decrease in Submitted Issues
  • Were there any recent changes to the development process that may have resulted in improved quality? Such changes may be in processes, personnel or tools

  • Was testing executed at a later stage in the Software Development Life Cycle? Later-stage builds are expected to have better quality and see fewer issues

  • Is the scope of new functionality delivered in the tested build significantly reduced compared to usual? Builds with reduced scope and complexity – such as minor releases and patches – are expected not only to require less testing effort but also to yield fewer issues




If You Sense that the Testing Strategy Resulted in the Decrease in Submitted Issues
  • Were there any time or other constraints that dictated rushing the build through testing? Allowing the Applause Community sufficient time to run structured and exploratory tests on all applicable areas of your product is key. Note that this may apply more to Exploratory Testing, as tracking the execution of test cases under Structured Testing can be done easily through the Applause platform

  • Was there a change in the way testing instructions were composed and/or delivered to the testers? Lacking, unclear or confusing instructions not only send the testers to areas of your product you may not need tested, but may also discourage the testers from participating in future testing of your product

  • Are we always testing on the same device environments and geographies while ignoring others your product is used in?

  • Were the known issues recorded for the product recently updated? An expanded known-issues list – while having a positive impact on reducing “noise” and allowing testers to concentrate on “real” issues – might also result in fewer issues reported

When the Product Quality Resulted in the Increase in Submitted Issues
  • Were there any recent changes to the development process that may have resulted in poor quality? Such changes may be in processes, personnel or tools

  • Was testing executed earlier in the Software Development Life Cycle? Later-stage builds are expected to have better quality and see fewer issues

  • Is the scope of new functionality delivered in the tested build significantly more complex than usual? Increased complexity in the requirements often results in design mistakes, missed edge cases and performance issues

  • Were there any time or other constraints that dictated rushing the build through internal processes while skipping critical steps such as code reviews and unit tests?




If You Sense that the Testing Strategy Resulted in the Increase in Submitted Issues
  • Was testing scope expanded beyond what is executed regularly? Clearly, testing for more time and/or covering more product functionality increases the probability of finding more defects

  • Was testing performed on device environments and geographies not commonly tested?

  • Was testing performed on a new – potentially unstable – environment, such as a newly released mobile device or a beta version of an Operating System?

When There is a Significant Decrease in Issue Approval Rate
  • How clear are the directions provided to the Applause team on testing scope and goals? When many of the rejected issues are marked as “Out of scope” or “Did not follow instructions”, especially when spread across multiple testers, it often means the instructions are not clear to them

  • Was an up-to-date list of known issues provided to the Applause team prior to testing? When many of the rejected issues are marked as “Duplicate” it often means the testers are unaware of previously-found issues you already prioritized

  • How clear are the release notes provided to the Applause team in describing how the product and/or new functionality will be used by the end users? When many of the rejected issues are marked as “Works as designed” it often means the intended use and benefits of the new functionality are not clear enough

  • Was there a recent change in personnel - specifically in the interface between you and the Applause team - that may have resulted in lost knowledge or misaligned perspectives? Changes are inevitable, yet documentation and knowledge transfer are key to ongoing success
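As a quick diagnostic for the questions above, both the approval rate and the distribution of rejection reasons can be computed from an issue export. A minimal sketch, assuming hypothetical `status` and `rejection_reason` fields; the actual field names and reason labels depend on your tracker's export format.

```python
from collections import Counter

# Hypothetical issue export; load from your tracker in practice.
issues = [
    {"id": 1, "status": "approved", "rejection_reason": None},
    {"id": 2, "status": "rejected", "rejection_reason": "Out of scope"},
    {"id": 3, "status": "approved", "rejection_reason": None},
    {"id": 4, "status": "rejected", "rejection_reason": "Duplicate"},
    {"id": 5, "status": "rejected", "rejection_reason": "Out of scope"},
]

approved = sum(1 for i in issues if i["status"] == "approved")
approval_rate = approved / len(issues)
print(f"Approval rate: {approval_rate:.0%}")  # Prints: Approval rate: 40%

# A dominant rejection reason points at the likely root cause:
# "Out of scope" / "Did not follow instructions" -> unclear instructions;
# "Duplicate" -> stale known-issues list;
# "Works as designed" -> unclear release notes.
reasons = Counter(i["rejection_reason"] for i in issues if i["status"] == "rejected")
for reason, count in reasons.most_common():
    print(f"{reason}: {count}")
```

Tracking this breakdown across test cycles makes it easier to tell whether a drop in approval rate comes from one systemic problem (one reason dominating) or from a general misalignment spread across reasons and testers.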