Bug Triaging Best Practices


Test cycles are most productive when testers have a clear understanding of the value of the bugs you are expecting. Testers are incentivized to produce valuable bugs, so when those bugs align with the expectations of the cycle, they will produce more and higher-value bugs. In addition, every tester has a rating that ranks them among others in the community for future testing assignments. Every approved bug improves a tester’s rating – with boosts for higher-value bugs; every rejected bug (with the exception of Works as Designed) hurts their rating.



To set the expectation with testers as to what constitutes a valuable bug, consider the scope of the cycle and what type of issues would have the greatest impact on quality. The three value ratings are Somewhat, Very, and Exceptionally Valuable. A useful guide to judging the value of a bug is to ask yourself two questions:

  1. Does this bug fall in line with the expectations of the cycle? If you are looking for minor GUI issues and a tester finds minor GUI issues, the value in the grand scheme may be low, but within that cycle it may be higher.
  2. Does this bug impact my application? Aligning bug value to severity is a straightforward way to assign value to a bug. Simply put, if a tester reports a critical bug (repeatable crash, 404 errors, etc.), assigning it a value of Exceptionally or Very Valuable is a useful way to tell the tester “I want to see more of this.”

When the answer to the above two questions is generally yes but you are either unable to reproduce the issue yourself (remember the TTL also does this as part of triage) or you cannot prioritize the issue to be fixed, designating the bug as Won’t Fix will still credit the tester without harming their reputation; it also reinforces positive testing behavior.

Want more information on best ways to manage bug approvals? Check out our article on Tips for Managing Bug Approvals.



When a bug is out of scope, a duplicate of something you’ve communicated to testers (or of an issue already reported), not an actual bug, or does not follow proper reporting guidelines, a rejection is appropriate. A rejection for any reason other than “Works as Designed” will impact a tester’s rating. A rejection sends the message to a tester to discontinue that line of testing and can correct behavior.

NOTE: There is no “Cannot reproduce” rejection reason – see: “Won’t Fix” for that scenario.


Rejection Reasons

When testers are given a clear reason for a rejection, they learn your product and priorities better for future cycles. Rejections can be a great learning tool for testers, but the lack of a clear rejection reason or mis-classified rejections can dampen participation.


  • WAD (Works as Designed):
    • Tester did not know the specifications and took a guess.
    • The bug reported is not a true bug and is simply how the system works (e.g., “Team Roster is sorted alphabetically by name and not by position”).
    • Can also be used when a known issue has not been fully communicated to the testing team and is reported in the cycle (this would otherwise be a duplicate).
  • DID NOT FOLLOW INSTRUCTIONS:
    • Tester ignored clear instructions, which affected the outcome or made the report unusable.
    • These issues can often be resolved by using the tester messenger instead of a rejection.
  • OUT OF SCOPE:
    • A bug is logged that clearly falls within what is explicitly stated in the Out of Scope section of the test cycle. (Testers will assume that anything not mentioned in the out-of-scope section, and not excluded elsewhere, is fair game; it is therefore best to be explicit in both the scope and out-of-scope sections.)
    • If the scope is narrowly defined and reason dictates that the bug would generally be considered out of scope (e.g., the scope is to test the search results and a bug is reported against the Terms of Service link), then this reason is acceptable.
  • DUPLICATE:
    • Issues filed in the same cycle that the tester could have found by searching.
    • Issues that were on a known issues list provided in the cycle.
      • When known issues lists are extremely large, lack detail, or are so jargon-laden that the issue would have been hard to find, it is often better to give testers the benefit of the doubt for their efforts and reject these as WAD instead.
  • OTHER: None of the above.

Cycle Management Best Practices

In addition to following the [ilink url=”https://help.applause.com/bug-triaging-best-practices/”]Bug Triaging Best Practices[/ilink], we also recommend the following tips to get the most out of your cycle:


  1. If you make an update to your app during testing, inform testers immediately. Use the “Announcements” portion of the cycle chat to let your testers and PM know in real time if anything major has happened or changed during the active testing period that may affect testing efforts.
  2. Think carefully about making changes to scope during active testing. Once the scope has been established and testers have had a chance to review it and accept the cycle, it is not recommended to make scope changes. If it is necessary, it should only be done for minor changes. If you have a need for a major scope change, talk to your PM, as spinning up a brand new cycle may be a more appropriate course of action than modifying the existing cycle.
  3. Discuss expectations with your PM and TTL upfront. Typically you will see bugs start to come through after the first few hours of a cycle’s activation, and TTLs will usually start triaging bugs after about 24 hours. This could vary by project, so it’s important to discuss this process and your expectations in advance.
  4. Raise a red flag right away if you’re not seeing what you expect to in a cycle. Let your PM know if you’re not seeing as many bugs as you expected, if you encounter a questionable tester, or just generally if you think something is off. We can quickly act to correct this, swap out testers, clarify testing focus, etc.


The Testers

  1. Get to know your testers! Don’t be a stranger – they are real people, too. You can learn more about your testers by checking out their uTest profile — simply click on the tester’s name in the Dashboard, in the Testers tab in the left navigation bar, or directly within the bug report.
  2. “Favorite” your best testers. Have you noticed a rock star tester in your cycle? Would you like to see them participate in future cycles? By [ilink url=”https://help.applause.com/favorite-tester”]favoriting a tester[/ilink], they are more likely to appear in your future cycles.
  3. Don’t be afraid to ask testers to provide more information. Testers are contractors for uTest and are testing of their own free will, but that doesn’t mean you can’t ask them for clarification or push them to expand (as long as it’s within scope). Treat them like an extension of your own team and reach out to them through the Platform directly when you need to.



The Bugs

  1. Triage early and often! If you’re able to triage bugs early in the cycle, and with consistency, you may notice a big difference in the quality of results over time. Assigning values to bugs quickly shows active testers exactly what you’re looking for, and leads to more of those great results.
  2. Do not fix bugs before approving them in the Platform. This will make it harder for you to keep track of the status and progress of issues, and may also confuse things for the testers, especially if you attempt to run a bug fix verification cycle later on. The smoothest flow for you would be to review the triaged bug results in the Platform from the TTL, make your final bug value determination, push the results to your BTS, and then assign to your developers to fix. Once fixed, we can run a bug fix verification cycle for testers to confirm the fix was successful.
  3. Be careful about exporting non-approved bugs to BTS. Integration with your bug tracking system will save you tons of time and energy in sending discovered issues to your developers. Although you CAN export bugs into your BTS prior to Approving them in the Applause Platform, keep in mind that testers typically get compensated for their bug finding efforts, so it’s important to keep the community engaged and interested in your Product by going back and Approving those bugs shortly thereafter.
  4. Test one bug for BTS export before doing it in bulk. When you want to export your approved bugs to your BTS, before sending many over at once, try sending just one at first. In case there is a failure, you won’t have to redo anything major, and you’ll avoid getting bombarded with failure notifications. If there is a failure, you can try troubleshooting by reviewing the help topic on [ilink url=”https://help.applause.com/how-to-integrate-with-bug-tracking-systems/”]How to Integrate with Bug Tracking Systems[/ilink] , or reach out to your PM for help.
  5. Just because you can’t reproduce a bug doesn’t mean it’s not a bug. If a tester submits a bug and you are unable to replicate it, consider that several factors could cause this, the most common of which is the environment on which it was found. If you run into this, we recommend that you lean on the detailed reproduction steps, how often it was found to be reproducible, and the screenshots and videos provided by the testers. If it’s clear that the bug occurred, the tester should get credit by having their bug Approved. Remember that bug Rejections hurt a tester’s rating, so don’t reject a bug simply because you can’t reproduce it on a different environment. If you feel the bug isn’t important enough, or if you’re OK with it occurring on just one specific environment, you can always elect not to move that bug over to your BTS.

Test Cycle Setup Best Practices

This article will provide you with information and best practices that will help you get the most out of your test cycles. By following these guidelines, you will enable your Project Manager to set up the most thorough and comprehensive test cycles, which will ultimately provide the most relevant and actionable results to work with.

First and foremost, it is always recommended that you Clone a cycle based on a previous cycle. Cloning a cycle will carry over all scope & instructions, as well as the testing team and TTL from the previous cycle.

Setup & Scope

Testing Coverage

Specify the device/OS/carrier/geography combinations you would like to have bugs reported and/or test cases claimed against.

Defining In-Scope vs Out-of-Scope Issues

In order to ensure the success and proper focus of the test team, it is very important that our customers provide details about what they hope to accomplish in each test cycle. Clearly defining scope guidelines for testers to follow will enable the Project Manager and team to test apps with a more targeted approach and deliver results that are more relevant and useful to your business. For best results, be prepared to provide the following:

  • Testing Focus: A high level overview of the functional areas of the application for testers to focus new or regression testing on; and any steps they will need to take to access the site/app.
  • New Items/Changes For This Build: A detailed description of the release notes for the specific fixes or features implemented in the build, to provide better testing focus and results.
    • Release notes or a list of bugs/enhancements that have been addressed in the build; the titles are typically sufficient as long as they are fairly descriptive.
      • This gives testers more context and detail around what has changed so they can focus effort and test strategies more effectively.
      • Example: If you changed the search feature, it is important to add the detail of what has changed (e.g., added the ability to do wildcard searches) – testers will want to focus on the appropriate area(s) to make sure it has not broken and returns the correct information.
  • Bug Values & Examples: Provide a list of specific issues that you consider to be Exceptionally, Very, and Somewhat Valuable. This provides more focus to the testers and shows them what is really valuable to you.
  • Known Issues and Out of Scope: Your known bugs list must include BOTH your internal bugs and all bugs submitted by Applause in previous cycles. Review the Known Issue Management section for instructions.
  • Video & Tips: It is very helpful when the Test Manager working with Applause records a short video/webinar covering their expectations for the upcoming test cycle. This should include scope definition, product explanation, and any additional information that would be useful for testers. This helps us ensure that we are providing the most thorough coverage and our testers will deliver the most relevant results.
  • Out of Scope: Define areas of the app that do not need to be tested, including areas that may still be in development that you do not want the testers touching. This helps to ensure that any bugs that are filed are relevant.

Cycle Naming Convention

Give your cycle a name that has meaning, character, and personality. Testers will frequently be invited to numerous cycles at the same time, so naming your cycle accurately helps with clarity and avoids confusion. A good test cycle name typically includes the name of your company or product, the type of testing taking place, and the start date. For example:

  • Applause, Inc. – New Chat Features – 07/11/2016
  • Applause, Inc. – Regression Suite – 07/11/2016


Components

Components are a feature in the Applause Platform that allow you to set a custom list of application/product components that testers can then select when they submit bugs. The idea is to make it easier for you and the testers to categorize bugs based on the part of the application they affect or relate to, and to be able to focus testers on a certain area of the application. Components are set up in Step 5 of Product creation. If you think Components might be useful for your cycle, reach out to your PM, who can get them set up for you.

Activation Timing

When setting up and activating your cycle, there are a few important factors to take into consideration:

  • Cycle Duration: The length of a cycle will vary depending on what offering you have purchased, your specific testing goals, the maturity of your product, etc. Your PM will be able to assist in determining the ideal cycle duration, but typically we recommend no more than 3-5 days in length.
  • Lead Time: It’s important to give your PM adequate lead time before kicking off a new build so they can search for and prepare the testing team. Typically 1 business day’s notice is sufficient.


Test Cases

If you have purchased test case execution hours for your engagement, you will be able to set up and distribute test cases to your testers. The Applause Platform has a built-in test case management system for entering test cases and logging execution steps, Pass/Fail results, and screenshots. See also: Instructions for creating Test Cases in Excel.

Setup and Distribution

If you require Applause to help create test cases, please provide appropriate documentation as a guideline (e.g. use cases, requirements docs, etc). If this documentation does not exist, provide a primary point of contact for us to work with.

Any cycles with new or updated test case coverage typically require 24 hours’ notice and need to be set up with the PM so we can ensure appropriate coverage by:

  • Reviewing for clarity
  • Reviewing to confirm timing estimates
  • Making appropriate tester or environment coverage updates

When building out your test cases, there are a few things to keep in mind:

  • Environments – Determine the environments you want the test case(s) executed against
  • Number of test case executions per environment – This helps limit the number of results to a reasonable level and lets us prepare to slot testers into particular environments
  • Approximate time to complete test case – This gives the test team more info on whether they can commit to finishing the test case(s) within the time allotted against their schedule


Getting the Most out of Bug Results

Customized Bug Reports

The Applause Platform allows you to define custom fields within your bug reports at the test cycle level. These custom fields allow testers to capture additional information about a bug, beyond what the standard bug report template contains, which will then be visible upon bug submission, through the BTS export, or when exporting to CSV. Custom Fields are set up in Step 2 when creating a new test cycle.

Bug Tracking Systems

If you have an external bug tracking system, we highly recommend you integrate it with the Applause Platform. This allows for bugs found to easily flow to your BTS with little/no manual intervention, which gets the bugs into the hands of your developers faster. BTS Integration is set up when creating a Product in the Platform.

SDK Data

The Applause SDK is a mobile app quality tool that standardizes and improves mobile testing deliverables. Most of Applause’s Functional testing packages come with the SDK, which, when instrumented into your app, collects usage analytics and tracks session information and crash reports from the testers. It helps you stay on top of serious quality issues and find and fix problems immediately without compromising your users’ security. More information on the Applause SDK can be found here, or talk to your Project Manager.


Unsupported Bug Tracking Systems

The Applause Platform currently supports integration with over a dozen [ilink url=”https://help.applause.com/working-with-bug-tracking-system-bts-integration-connectors/”]Bug Tracking Systems[/ilink]. If the tool you use is not on this list, you still have a few options to easily extract bug information out of the Applause Platform.

  1. [ilink url=”http://help.applause.com/exporting-bugs-to-bts-via-email/”]The Email Connector[/ilink]: This option allows you to export bugs via email. Most tracking systems offer the ability to configure a special email inbox that can then create a ticket when an email is sent to that address.
  2. [ilink url=”http://help.applause.com/how-do-i-integrate-with-webhooks/”]Webhooks[/ilink]: Webhooks allow you to push bug details directly to an external URL.
  3. Export to CSV: You can export lists of bugs into a standard CSV format, which you can then import into your own BTS.
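
If you go the webhook route, your tracker (or a small service in front of it) just needs an HTTP endpoint that accepts the pushed bug details. The sketch below is a minimal illustration only — the payload field names (`title`, `description`, `severity`) are hypothetical placeholders, not the Applause Platform’s actual webhook schema, which is covered in the Webhooks help topic:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def payload_to_ticket(payload):
    """Map a pushed bug payload into the shape your own tracker expects.
    Field names here are hypothetical; consult the webhook help topic
    for the real schema."""
    return {
        "title": payload.get("title", "Untitled bug"),
        "description": payload.get("description", ""),
        "severity": payload.get("severity", "unknown"),
    }

class BugWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body pushed to this endpoint.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        ticket = payload_to_ticket(payload)
        # Here you would call your tracker's create-issue API with `ticket`.
        print(f"Would create ticket: {ticket['title']}")
        self.send_response(200)
        self.end_headers()

# To run the receiver (left commented so the sketch stays importable):
# HTTPServer(("", 8080), BugWebhookHandler).serve_forever()
```

The same mapping idea applies to the CSV option: each exported row becomes one ticket in whatever import format your tracker accepts.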


Additionally, for the integration options we currently support, there are a few limitations of which to be aware.

  • We only support basic authentication at this time, so if your BTS instance uses two-step authentication, it will not be able to integrate
  • Two-way integration is not currently supported, meaning:
    • We cannot automatically send non-Applause-discovered bugs to the Applause Platform or through the Applause Bug Fix Verification
    • Comments or other updates made within your BTS will not push to or update the Applause-discovered bug in the Applause Platform
    • You cannot approve or reject Applause-discovered bugs directly within your BTS interface

Communication Tools

Within a cycle, you have several different options to communicate with the testing team or get additional information.

Tester Messenger & My Notes:

Tester Messenger: Sends an email directly to the tester when you have questions you want to verify, or want to do some limited troubleshooting with them to understand the issue better. They will get the email and can respond in the messenger.
My Notes: Sends an email to the Test Team Leader only. This can be used to add a note to the ticket on export. If for any reason you’d like to make a note that will be exported with the ticket, or you want to communicate with the TTL WITHOUT the tester seeing it, use My Notes.

Chat Room:

  • Testers are required to monitor chat for updates to scope or changes in directions.
  • If you want to communicate to the group as a whole, use the chat room.
  • Your name is highlighted in a color that lets testers know you are the client.
  • Testers may also reach out to you and the TTL for clarification in the chat room.


  • If you’re unsure who to contact, email your Project Manager directly (if applicable) for any questions or assistance during a cycle.
  • When in doubt, you can reach our technical support team for assistance at support@applause.com

Tips for Managing Bug Approvals

Use Your Test Team Leader (TTL)


  • The goal of the Test Team Leader is to speed up your triage process and manage the testing group.


  • TTLs are asked to review and replicate every issue within 24 hours of filing when volume allows.
  • For each issue you should see a recommendation covering whether the TTL could reproduce it, whether it is a duplicate, whether it is in scope, and whether they recommend Approval (and at what value) or Rejection.
  • NOTE: If you are in line with the TTL’s judgment and time is pressing, you can do a batch approval by approving all tickets at the recommended value.

Report Quality

  • The TTL is asked to verify that every report contains all the information you requested, provides clear steps, and follows our reporting standards. They may also leave a comment under the recommendation section to explain further.


  • The TTL is your immediate source to help redirect testers or modify instructions. Feel free to reach out to them if something needs to change during a cycle.


Triaging Early In Cycle


  • When you start approving issues, testers tune in immediately to learn what is valuable to the client and what is being approved and rejected. Doing this early, while the cycle is active, helps keep them focused.


When to Use Works as Designed


  • When you reject for this reason, the tester’s rating is not affected, so anything the tester could not have known should be rejected as WAD.


  • The Works as Designed button (as well as Approving as “Won’t Fix”) is designed to encourage testers to submit issues even when they are unsure about them, so potential concerns aren’t missed for fear of rejection.


Bug Severity vs. Bug Value

When to Over Value a Bug Report:

Limited Scope:

  • There are times when the issue itself may not be that severe, but because of a limited scope, or a mature product, the tester has found something right on target for the task given in scope. In those situations it may be appropriate to value the bug based on how well it matches the scope rather than the priority level it will be assigned.

High Effort/High Quality Testing:

  • Occasionally you will see high-quality testing that required a lot of effort, even though the issue itself is not a big concern. Feel free to reward good testing and reporting as well as high-priority issues. This keeps testers motivated to find more.


Too Many Somewhat Valuables


  • Testers are paid according to the value you assign to the bug, and the scale is significantly weighted to incentivize looking for high-value issues. If too many of their bugs are valued at the lowest level, they will stop testing to avoid diluting their rating.


  • Assigning the right value to a bug is the ultimate way to align expectations with our testers. Approving a bug as Very or Exceptionally Valuable tells the testing pool “this is what I want to see more of” – when appropriate, of course.

Too Many Very Valuables


  • Conversely, if too many tickets are overvalued, you will see testers throw a lot more against the wall, and you may see them “over-reporting” the same issue in different locations in order to garner another high payout.