Core Framework v1.6.7

Applause Core Framework v1.6.7 Release Notes – 2/22/2017

Applause is excited to announce the release of v1.6.7.  This release is primarily focused on high-level architecture refactoring to support a Page Object Factory and a Locator Factory pattern for Native, Web, and Mobile Web tests. Also included are support for real device testing on the Test Object device lab, support for Sauce Mobile Web testing, screenshot logging support for new dashboard enhancements, and several bug fixes.  Please note that this is a breaking change and requires some reconfiguration of current projects.

New

  • Added support for the Page Object Factory pattern for Mobile and Web tests (see the sketch after this list)
  • Added support for the Locator Factory pattern for Mobile and Web tests
  • Added Browser Profile support for Firefox on Sauce and BrowserStack and moved profile support to the driver config JSON files
  • Added full support for interaction with Test Object lab real devices for Native and Mobile Web tests
  • Added full support for Sauce Mobile Web on emulators and simulators
  • Added Native JS Hover Function
  • Added the ability to specify a SauceConnect tunnel identifier
  • Updated the framework ScreenshotManager classes to log screenshots to API endpoints for better reporting
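
As a rough illustration of the Page Object Factory pattern referenced above, the sketch below uses Selenium's stock PageFactory and @FindBy support. The class names (LoginPage, HomePage) and locators are hypothetical; the framework's own factory classes and locator handling may differ.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.FindBy;
    import org.openqa.selenium.support.PageFactory;

    // Hypothetical page object: locators live with the page, not with the test.
    public class LoginPage {

        @FindBy(id = "username")
        private WebElement usernameField;

        @FindBy(id = "password")
        private WebElement passwordField;

        @FindBy(css = "button[type='submit']")
        private WebElement signInButton;

        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
            // PageFactory wires the @FindBy fields to the driver.
            PageFactory.initElements(driver, this);
        }

        // Returning the next page object lets tests chain steps; the test only
        // proceeds once the next page object has been constructed.
        public HomePage signIn(String user, String password) {
            usernameField.sendKeys(user);
            passwordField.sendKeys(password);
            signInButton.click();
            return new HomePage(driver);
        }
    }

    // Minimal placeholder so the sketch is self-contained.
    class HomePage {
        HomePage(WebDriver driver) {
            PageFactory.initElements(driver, this);
        }
    }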

Changed

  • Updated the default TestNGListener to pull the test tag values from the currently running context
  • Fixed an issue where AbstractPageChunk.getElement() did not properly wait the full 30 seconds
  • Fixed an issue where ChunkFactory did not properly invoke waitUntilVisible() (see the wait sketch below)
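
For context on the wait fixes above, the sketch below shows the kind of explicit-wait behavior being described, using stock Selenium WebDriverWait. It is illustrative only and does not reproduce the framework's AbstractPageChunk or ChunkFactory internals.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitExample {

        // Polls the DOM until the element is visible or 30 seconds elapse,
        // instead of failing on the first missed lookup.
        public static WebElement waitUntilVisible(WebDriver driver, By locator) {
            WebDriverWait wait = new WebDriverWait(driver, 30); // seconds
            return wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
        }
    }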

Core Framework v1.5.0

Applause Core Framework v1.5.0 Release Notes – 7/22/2016

Applause is excited to announce the release of v1.5.0.  This release is primarily focused on high-level architecture refactoring, the deprecation of several driver management patterns, and a move to using 100% configured drivers.  Please note that this is a breaking change and requires some reconfiguration of current projects.

New

  • Refactored the framework to be DriverWrapper rooted (see the sketch after this list)
    • DriverWrapper is now the top-level object; each driver wrapper is configured to point to its own Driver object, its own Snapshot Manager, its own query helper, and its own synchronization helper.
    • This allows us to execute test sessions with multiple drivers operating within each session for multi-headed tests.
  • Added support for configured drivers covering BrowserStack (Web and MobileWeb), TestDroid (Native ClientSide and ServerSide), DeviceCloud (Native ClientSide and ServerSide), SauceLabs (Native, Web, and MobileWeb), and AWS (ServerSide Native)
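
As an illustration of what a DriverWrapper-rooted design implies, the sketch below shows a wrapper that owns its driver plus its own helpers, so a single test session can hold several wrappers at once. The helper names (SnapshotManager, QueryHelper, SyncHelper) are placeholders, not the framework's actual classes.

    import org.openqa.selenium.WebDriver;

    // Hypothetical sketch only; the real DriverWrapper API may differ.
    public class DriverWrapper {

        private final WebDriver driver;
        private final SnapshotManager snapshotManager;
        private final QueryHelper queryHelper;
        private final SyncHelper syncHelper;

        public DriverWrapper(WebDriver driver) {
            this.driver = driver;
            this.snapshotManager = new SnapshotManager(driver);
            this.queryHelper = new QueryHelper(driver);
            this.syncHelper = new SyncHelper(driver);
        }

        public WebDriver getDriver() { return driver; }
        public SnapshotManager getSnapshotManager() { return snapshotManager; }
        public QueryHelper getQueryHelper() { return queryHelper; }
        public SyncHelper getSyncHelper() { return syncHelper; }
    }

    // Minimal placeholder helpers so the sketch compiles.
    class SnapshotManager { SnapshotManager(WebDriver driver) { } }
    class QueryHelper { QueryHelper(WebDriver driver) { } }
    class SyncHelper { SyncHelper(WebDriver driver) { } }

A multi-headed session would then simply construct two or more wrappers, for example one pointing at a web driver and one at a mobile driver.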

Changed

  • Removed all references to “browser” and replaced them with “driver”
  • Removed all client-side driver objects and code for the TestDroid connection.  These are now configured as JSON files.

 

Core Framework v1.4.19

Applause Core Framework v1.4.19 Release Notes – 6/13/2016

Applause is excited to announce the release of v1.4.19.  This release is primarily a performance optimization release for web-based automation tests.  The following key changes are included:

New

  • Implemented the ability to set the web element timeout value by setting the JVM property webElementTimeOutSeconds (see the sketch after this list).
    • The expected value of this parameter is an integer representing the maximum number of seconds to wait for a web element to be displayed before failing the test.
    • The default value is 30 seconds.
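
As an illustration, the property can be passed on the java command line (for example, -DwebElementTimeOutSeconds=45) and read roughly as in the sketch below; where the framework actually consumes the value is not shown here.

    // Hedged sketch: reads the JVM property described above, defaulting to 30 seconds.
    public class TimeoutConfig {

        public static int webElementTimeoutSeconds() {
            return Integer.getInteger("webElementTimeOutSeconds", 30);
        }

        public static void main(String[] args) {
            System.out.println("Web element timeout: " + webElementTimeoutSeconds() + "s");
        }
    }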

Changed

  • Optimized the getWebElement() call to suspend thread execution only if the element cannot be retrieved immediately.
  • Optimized WebElementQueryHelper to inject Sizzle only if jQuery and the custom jQuery are not available (see the sketch below).
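
The jQuery check described above can be approximated with a JavascriptExecutor probe like the sketch below. The actual WebElementQueryHelper implementation may differ, and the sizzleSource parameter is assumed to hold the Sizzle library text loaded from a resource.

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;

    public class SizzleInjector {

        // True if the page already exposes jQuery, in which case injecting
        // Sizzle is unnecessary.
        public static boolean hasJQuery(WebDriver driver) {
            Object result = ((JavascriptExecutor) driver)
                    .executeScript("return typeof window.jQuery !== 'undefined';");
            return Boolean.TRUE.equals(result);
        }

        public static void injectSizzleIfNeeded(WebDriver driver, String sizzleSource) {
            if (!hasJQuery(driver)) {
                ((JavascriptExecutor) driver).executeScript(sizzleSource);
            }
        }
    }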

Developer

  • Added debug metrics to output the total sum of implicit wait times during test execution (see the sketch below)
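
A sketch of the kind of bookkeeping this implies: timing each wait call and accumulating a running total for an end-of-run debug summary. The class and method names are illustrative, not the framework's actual metrics code.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.function.Supplier;

    // Illustrative only: sums time spent in wait calls for a debug report.
    public class WaitMetrics {

        private static final AtomicLong TOTAL_WAIT_MILLIS = new AtomicLong();

        // Wraps any wait call, adding its duration to the running total.
        public static <T> T timed(Supplier<T> waitCall) {
            long start = System.currentTimeMillis();
            try {
                return waitCall.get();
            } finally {
                TOTAL_WAIT_MILLIS.addAndGet(System.currentTimeMillis() - start);
            }
        }

        public static long totalWaitMillis() {
            return TOTAL_WAIT_MILLIS.get();
        }
    }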

 

What are the advantages of the Applause Automation Framework?

Applause has created a best-in-class automation framework based on popular open-source tools and technologies. Unlike most other frameworks that focus on either mobile OR web, the Applause automation framework supports native, mobile web, web, and hybrid applications. There are many other benefits that our framework provides:

  • By combining locator and test data isolation with a page object design pattern, the framework minimizes code duplication and keeps maintenance effort low.
  • When the framework executes a script, each step returns a page object. Execution halts until that object appears, which makes runs more stable. In the case of a failure, the next page object never loads, and the logs show exactly where execution stopped.
  • The framework keeps querying whether an element is present. If an element cannot be found, it pauses for a few milliseconds and then retries the search.
  • The framework understands context and browser/device type, and loads the proper locator map (on iOS, for example, it pulls the iOS locators); see the sketch after this list.
  • Our automation code never goes onto the devices.
  • The framework allows tests to be grouped by functional area, priority, component, feature, etc., so they can be executed in a specific order.
  • Because we already have the core framework built out, it is less expensive than designing and building out a new solution.
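
To illustrate the locator-map idea from the list above, the hypothetical sketch below resolves the same logical element name to a different locator depending on the current platform. The class, element, and locator values are assumptions, not the framework's actual API.

    import java.util.HashMap;
    import java.util.Map;

    import org.openqa.selenium.By;

    public class LocatorMap {

        public enum Platform { WEB, IOS, ANDROID }

        private final Map<Platform, Map<String, By>> locators = new HashMap<>();

        public LocatorMap() {
            Map<String, By> web = new HashMap<>();
            web.put("signInButton", By.cssSelector("button[type='submit']"));

            Map<String, By> ios = new HashMap<>();
            ios.put("signInButton", By.name("Sign In")); // e.g. an accessibility label

            Map<String, By> android = new HashMap<>();
            android.put("signInButton", By.id("com.example:id/sign_in"));

            locators.put(Platform.WEB, web);
            locators.put(Platform.IOS, ios);
            locators.put(Platform.ANDROID, android);
        }

        // The page object asks for "signInButton"; the map supplies the locator
        // appropriate to the platform the test is currently running against.
        public By get(Platform platform, String elementName) {
            return locators.get(platform).get(elementName);
        }
    }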

Who are the members of the automation team?

Project Manager (PM) / Solutions Delivery Manager (SDM): Your primary point of contact, responsible for coordination between your team and Applause.

Lead Software Development Engineer in Test (Lead SDET): An assigned automation expert who will lead all technical aspects of the project. Will work directly with your stakeholders and will actively manage the automation team.

Software Development Engineer in Test (SDET): Team of engineers who develop and maintain automation scripts per test cases and the Applause common coding standards.

Automation Support Engineer (ASE): Team of engineers who oversee the execution and maintenance of the automation. Will review and correct script failures, and verify and log application bugs.

Test Case Writer (TCW): An assigned test case creation expert who will work with you to document or modify test cases to ensure they are appropriate for automation.

What is a test case?

There are several different ways to define a test case or scenario:

  • Atomic test scenario – This is how we automate and what we mean when we say ‘test case’. It’s a self-contained, atomic set of steps that produces a specific and measurable expected result. We generally try to have no more than 1-3 assertions per scenario.
  • End to end test scenario – A linked set of atomic scenarios that run in a specified and controlled sequence. Typically one large end-to-end scenario may become 2-3 atomic scenarios.

This is important to differentiate, because an atomic test scenario will take us X lines of code, while an end-to-end test scenario (which some may simply call a test) could be X plus hundreds of lines of code and involve asserting 20-50 conditions.
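
For a sense of scale, a hypothetical atomic scenario written with TestNG might look like the sketch below: one self-contained flow with a small number of assertions. The page and method names are stand-ins, not actual project code.

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class AtomicScenarioExample {

        // Minimal stand-in for an application page; real suites would use page objects.
        static class DashboardPage {
            boolean isLoaded() { return true; }
            String welcomeText() { return "Welcome, Demo User"; }
        }

        static DashboardPage signIn(String user, String password) {
            // In a real suite this would drive the UI; stubbed so the example runs.
            return new DashboardPage();
        }

        // Atomic scenario: one self-contained flow with 1-3 assertions.
        @Test(groups = { "bvt", "login" })
        public void validUserCanSignIn() {
            DashboardPage dashboard = signIn("demo.user", "correct-password");
            Assert.assertTrue(dashboard.isLoaded(), "dashboard should load after sign-in");
            Assert.assertEquals(dashboard.welcomeText(), "Welcome, Demo User");
        }
    }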

What occurs during the Quick Start phase?

The 6-week Quick Start is designed to get you up and running with an enterprise-grade test automation environment, an initial set of automated scenarios, and the knowledge to extend test coverage.

The Quick Start consists of six 1-week sprints during which Applause will perform the following:

  • Setup and integration of the solution (framework, repository, continuous integration, reporting dashboard)
  • Workshop with your team to review test cases, prioritize backlog, and perform a live coding session
  • Automate initial scenarios (BVT suite) and provide regular execution on real devices
  • Training with your team covering how to maintain and extend the solution
  • Develop a Go-Forward plan covering post-Quick Start strategy and roles

What happens when a script fails?

During the Quick Start, Applause will investigate and correct any script failures (that are not related to application code).

After the Quick Start, investigating and correcting a failure would fall on your team or Applause, depending on what is specified in your Go-Forward plan.

If your plan includes hours for additional script development and maintenance:

  • Automation Support Engineers (ASEs) will investigate any script failures that occur during your scheduled runs.
  • If the script fails due to a problem with the script itself, ASEs will attempt to adjust the script and then run it again. If they are unable to correct the script, they will escalate the issue to the SDET team.
  • If the script fails due to an application bug, ASEs will log the issue as a bug, which will then be pushed to your bug tracking system and the Automation Dashboard.