Course 4: The Applause Quality Score
Learn everything about the Applause Quality Score.
With AQS it is easy to see how your quality is trending over time and to determine which factors are impacting your release quality. Leveraging years of historical quality data along with your testing results, we create a customized quality score for each build your company releases. You can view analytics, identify trends and verify coverage to make informed decisions.
To make full use of the score when evaluating a build, take three components into account: the Applause Quality Score (AQS), its corresponding Confidence Level (CL), and the underlying test result data both are based on.
The AQS is a calculated value ranging from 0 to 100 that describes the quality of a product or build, based on the testing performed and the results collected during one or more test cycles.
AQS makes it easy to see how your overall quality is trending build over build, indicating progress and/or opportunities for improvement. The AQS is designed to empower you to make critical release/don't-release decisions in a faster, easier, and more fact-based manner.
While based on sophisticated data science models, the end result is as simple as a single metric displayed across three ranges:
High - 90 to 100
Medium - 66 to 89
Low - 0 to 65
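As an illustration, the three ranges above can be expressed as a simple lookup. This is a hypothetical sketch: the band boundaries come from the list above, while the function name and error handling are our own.

```python
def aqs_band(score: int) -> str:
    """Map an Applause Quality Score (0-100) to its display range."""
    if not 0 <= score <= 100:
        raise ValueError("AQS must be between 0 and 100")
    if score >= 90:
        return "High"    # 90 to 100
    if score >= 66:
        return "Medium"  # 66 to 89
    return "Low"         # 0 to 65

print(aqs_band(92))  # High
print(aqs_band(70))  # Medium
```

Note that the bands are contiguous: any score from 0 to 100 falls into exactly one range.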
Log in to the Applause platform and navigate to “Products”.
Locate the relevant product from the products list.
Select the product name.
The product's activity dashboard is displayed, offering general product information as well as per-build "cards" that show each build's AQS, CL, and a high-level issue distribution.
To drill into more details about a specific build, click on the “See more details” link at the bottom of the build card.
The current implementation of AQS focuses on Issues collected during either exploratory or structured functional testing. The following factors are considered:
Number of Issues collected
Distribution of collected Issues across the issue lifecycle (Submitted > Approved > Exported)
The severity of issues collected
Value of Issues collected
Priority of Issues collected, as received over the Two-Way Jira Integration (where applicable)
Whether fixing the collected Issues was deemed worthwhile (i.e., were they marked as 'Won't Fix')
Duration of testing
The AQS model will be expanded and enhanced based on other functionality offered in the Applause platform, including the priority of issues as set in your Bug Tracking System, the current fix and fix verification status of issues, structured testing (test cases), reviews and ratings, non-functional testing, and more.
Evaluate the build's AQS against the objective ranges above. Naturally, you may aim for as close to a "perfect score" as possible.
Since the score is calculated from the scope of testing performed and the results obtained, make sure to also understand the nature of reported issues: their severity and value, type, components, and status. Over time you'll be able to spot irregularities that point you toward root causes.
As not all products and builds are created equal, it is also important to compare a build's score with those of preceding builds; a specific build might not earn a "high" score from the beginning, so a steady, positive trend is certainly a valid goal to maintain.
As you review build-over-build trends, factor in information not entirely visible to Applause, such as testing done outside of the Applause platform, personnel changes, and other interfering factors. As we keep enhancing the data science and machine learning models behind the AQS, it will become more effective and accurate at accounting for such uncertainties.
Next to the Quality Score, you will also find an indication of the level of confidence we have in our calculations. The Confidence Level (CL) for an individual build is based on the scale and scope of the testing conducted for the build such as duration and coverage, as well as the breadth of historical data collected on previous builds of the product or app.
Reviewing the CL is key to transforming the AQS into actionable insights. As the scope of testing changes while the build is being tested, considering the CL will help you understand how reliable the AQS for a given build is at a given moment in time.
While a low AQS may lead you to hold off on releasing the build (for instance, because it's too "buggy"), a low CL may lead you to intensify the testing so that more data can be collected – over a longer duration, across more devices or regions, involving more testers, etc.
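The distinction between acting on a low AQS versus a low CL can be sketched as a small decision helper. This is purely illustrative: the function name is ours, and the confidence labels used here are placeholders, not the platform's actual CL values.

```python
def release_guidance(aqs: int, confidence: str) -> str:
    """Sketch of the AQS/CL decision logic described above.

    `confidence` labels ("Low", "Medium", "High") are illustrative
    placeholders, not the Applause platform's actual CL values.
    """
    if confidence == "Low":
        # Not enough data behind the score: gather more signal first.
        return "Intensify testing: longer duration, more devices, regions, or testers"
    if aqs < 66:
        # Score is reliable and in the "Low" band.
        return "Consider holding the release; the build may be too buggy"
    return "Score is reliable; evaluate it against the trend of preceding builds"

print(release_guidance(40, "High"))
print(release_guidance(95, "Low"))
```

The key design point is the ordering: a low CL is checked first, because a score backed by too little testing should not drive a release decision in either direction.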
Once calculated, the CL is presented as one of three available values: