
The Applause Quality Score™


Users of the Applause Platform can now easily understand the level of quality achieved build over build, so they can adjust testing scope and strategy accordingly. This is done through an innovative quality benchmarking tool built for the enterprise, delivered in a simple, clean interface. Together, these capabilities enable you to assess the quality of executed testing, spot trends, verify coverage, and make informed decisions.

Using the Applause Quality Score to make informed decisions

To make full use of the score when evaluating a build, you should account for three components: the Applause Quality Score, its matching Confidence Level, and the underlying test result data on which both are based.

The Applause Quality Score (AQS™)

The Applause Quality Score (AQS) is a calculated value – ranging from 0 to 100 – that describes the quality of testing results for a product or build during one or more test cycles, based on the testing performed and the results collected.

AQS makes it easy to see how your overall quality is trending build over build, and to indicate progress and/or opportunities for improvement. The purpose of AQS is to make decisions about when to release a build less of a gut call and more data-driven and defensible. By providing a tangible number based on testing results, AQS empowers you to make those critical release/don't-release decisions in a faster, more fact-based manner.

While based on sophisticated data science models, the end result is as simple as a single metric displayed across three ranges:

  • High: 90 to 100
  • Medium: 66 to 89
  • Low: 0 to 65
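The range boundaries above can be expressed as a simple lookup. The sketch below is purely illustrative (the function name and structure are not part of any Applause API); it only encodes the published thresholds:

```python
def aqs_range(score: int) -> str:
    """Map an AQS value (0-100) to its published range label.

    Thresholds come from the documented ranges:
    High 90-100, Medium 66-89, Low 0-65.
    """
    if not 0 <= score <= 100:
        raise ValueError("AQS must be between 0 and 100")
    if score >= 90:
        return "High"
    if score >= 66:
        return "Medium"
    return "Low"
```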

Here are a few ideas to consider as you view a particular build's AQS:

  • Evaluate the build's AQS against the objective ranges above. Naturally, you may aim for as close to a 'perfect score' as possible.
  • Since the score is calculated from the scope of testing done and the results obtained, make sure you also understand the nature of reported issues: their severity, value, type, components, and status. Over time you'll be able to identify irregularities that point you toward root causes.
  • Not all products and builds are born equal, so it is also important to compare a build's score against those of preceding builds; a specific build might not earn a "High" score from the start, so a steady, positive trend is certainly a valid goal to maintain.
  • As you review build-over-build trends, try to bring in information not fully known to Applause about these builds, such as testing done outside the Applause Platform, personnel changes, and other interfering factors. As we keep enhancing the data science and machine learning models behind the score, it will become more effective and accurate at accounting for such uncertainties.

Note: In the future, Applause may also make industry benchmarks available for you to compare against.

AQS Confidence Level (CL)

Next to the Quality Score, you will also find an indication of the level of confidence we have in our calculations. The Confidence Level (CL) for an individual build is based on the scale and scope of the testing conducted for the build, such as duration and coverage, as well as the breadth of historical data collected on previous builds of the product or app.

Once calculated, the CL is presented as one of three available values:

  • High
  • Medium
  • Low

Reviewing the CL is key to transforming AQS into actionable insights. Because the scope of testing changes while the build is being tested, considering the CL will help you understand how reliable the AQS is for a given build at a given moment in time. In other words, while a low AQS may lead you to hold off on releasing the build (for instance, because it's too "buggy"), a low CL may lead you to intensify the testing so more data can be collected: testing longer, across more devices or regions, involving more testers, and so on.
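The interplay described above can be sketched as a small decision helper. Everything here is hypothetical: the thresholds and the policy are examples of how a team might act on the two signals, not a workflow prescribed by Applause:

```python
def recommend_action(aqs: int, confidence: str) -> str:
    """Illustrative helper combining AQS and its Confidence Level (CL).

    Policy sketch (hypothetical, not an Applause recommendation):
    a low CL means the score itself is not yet reliable, so the first
    step is to broaden testing rather than act on the score.
    """
    if confidence == "Low":
        # Collect more data: longer cycles, more devices/regions/testers.
        return "expand testing scope to raise confidence"
    if aqs >= 90:
        return "release candidate"
    if aqs >= 66:
        return "review open issues before releasing"
    return "hold the build and address defects"
```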

How is AQS calculated?

The current implementation of AQS focuses on Issues collected during either exploratory or structured Functional testing. The following factors are considered:

  • Number of Issues collected
  • Distribution of collected Issues across the issue lifecycle (Submitted > Approved > Exported)
  • Severity of Issues collected
  • Value of Issues collected
  • Where applicable: priority of Issues collected, as received over the Two-Way Jira Integration. Learn more about setting up the Two-Way Jira Integration here.
  • Whether the collected Issues were deemed worth fixing (i.e., whether they were marked as 'Won't Fix')
  • Duration of testing

The AQS model will be expanded and enhanced based on other functionality offered in the Applause Platform, including the priority of issues as set in your bug tracking system, the current fix and fix-verification status of issues, structured testing (test cases), reviews and ratings, non-functional testing, and more.

Viewing AQS for a product and build

To view AQS:

1. Log in to the Applause Platform and navigate to Products.


2. Locate the relevant product in the Products List. You may sort the table, filter it by Status and Application Type, or run a textual search.

Note: If you are managing an agency, you may also locate the product among all your agencies.

3. Click on the Product Name.

The Activity Dashboard for the product will be displayed, offering general product information as well as per-build "cards" containing each build's AQS, CL, and a high-level issue distribution.



4. To drill into more details about a specific build, click on the “See more details” link at the bottom of the build card.

Learn more about the Activity Dashboards here.

