testtalks.com founder Joe Colantonio, a longtime advocate and public figure in the test automation space, published a blog post back in January with 7 predictions for how automation testing would evolve in 2019. We were encouraged to find that 6 of the predictions align with what mabl does today, with 3 of them explicitly describing test maintenance, auto-healing tests, and using ML to improve your test automation. In this article, we go into depth on how mabl embodies each of these predictions and why these practices will improve the QA process.
Joe actually gave us a shout-out in this section, linking to the podcast he recorded in 2017 with Dan Belcher, one of our co-founders. In the podcast, Dan details a survey mabl ran of over 100 companies on the struggles testing teams face; the number one struggle was the ability to create and maintain test automation. This was before coding work on mabl had even started, so we naturally made alleviating the pains of test creation and maintenance two of our main goals. To simplify test maintenance, we decided to integrate auto-healing into every test, a topic Joe explores further in his next prediction.
Auto-healing is a technology that lets a test search for a correct alternative when a step can’t be resolved, and Joe’s theory is that more test automation tools will begin using it in 2019. Its biggest benefit is the potential to remove the need for human intervention every time your application changes, which is one of the hardest hurdles for test automation to clear. In mabl, auto-healing means that when an element in a step can’t be found, mabl automatically looks for elements similar to the one used in the step and, when it finds one, completes the test with it. You can see this in the insights mabl sends you: by the time an insight about an auto-heal reaches you, the test has already been run once with the auto-healed change and has passed. The report testers receive is therefore much more than a simple “this test failed because of a UI change” response; the suggested new element has already produced an instant fix of the test. The reporting of these changes to testers that we described in the last section happens through easily accessible insights delivered via Slack, email, or directly in the app. This capability eliminates much of the time testers spend maintaining tests when small UI changes occur, freeing them to focus on better test coverage and exploratory testing, something machines can’t do.
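To make the mechanism concrete, here’s a minimal sketch of the auto-healing idea in Python with Selenium. It illustrates the general technique only, not mabl’s actual implementation; the `find_with_healing` function and the `trained_attrs` dictionary of attributes recorded at training time are names invented for this example.

```python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, css_selector, trained_attrs):
    """Find an element by its recorded selector, falling back to the
    most similar element on the page when the selector no longer matches."""
    try:
        return driver.find_element(By.CSS_SELECTOR, css_selector)
    except NoSuchElementException:
        pass  # the UI changed out from under the test; try to heal

    def similarity(element):
        # Count how many attributes recorded at training time still match.
        score = sum(
            element.get_attribute(name) == value
            for name, value in trained_attrs.items()
            if name != "text"
        )
        if trained_attrs.get("text") and trained_attrs["text"] in (element.text or ""):
            score += 1
        return score

    candidates = driver.find_elements(By.CSS_SELECTOR, "*")
    best = max(candidates, key=similarity, default=None)
    if best is not None and similarity(best) > 0:
        # A real tool would also record this heal and surface it for review.
        return best
    raise NoSuchElementException(f"no healing candidate for {css_selector}")

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page under test
# The button's id changed from "submit-btn" to "submit-button": the recorded
# selector fails, but the class and text captured at training time still
# identify the right element, so the step completes anyway.
button = find_with_healing(driver, "#submit-btn",
                           {"class": "btn btn-primary", "text": "Submit"})
button.click()
```

The key design point is that the fallback search happens automatically at runtime, so a renamed id doesn’t stop the test, and the heal is recorded so a human can review it afterward, which is exactly the role the insight plays.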
Joe’s next point is about these types of insights, specifically how they can surface the specific errors in your test runs and take you straight to them, without you having to dig through countless test run logs to find them yourself. Besides insights that point you to the specific steps where a test failed, two of mabl’s more distinctive insight types focus on visual and performance aspects of your application beyond what the test steps themselves check. As a particular test runs repeatedly, mabl builds a visual model from screenshots taken during the test and a page load time model for each page from the recorded runtimes. If a journey’s runtime falls significantly outside the range predicted by the model, mabl alerts you to the unexpected behavior, and similarly sends an alert if a visual deviation from the model occurs.
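As a rough illustration of the performance side, here’s a hedged sketch of what alerting on an out-of-range runtime can look like. A plain mean-plus-standard-deviations band stands in for whatever model mabl actually fits; the function name, thresholds, and sample data are invented for this example.

```python
import statistics

def is_anomalous(history_ms, new_runtime_ms, k=3.0, min_samples=5):
    """Flag a runtime that falls outside the band predicted from history.

    A simple mean +/- k standard deviations band; a real product would fit
    a richer model, but the alerting logic has the same shape.
    """
    if len(history_ms) < min_samples:
        return False  # not enough runs yet to predict a range
    mean = statistics.fmean(history_ms)
    spread = statistics.stdev(history_ms)
    return abs(new_runtime_ms - mean) > k * spread

# Hypothetical page load times (ms) recorded across previous runs of a journey.
load_times = [820, 790, 845, 810, 833, 805]
print(is_anomalous(load_times, 2400))  # True: far outside the predicted band
print(is_anomalous(load_times, 815))   # False: within normal variation
```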
In this section, Joe makes the interesting observation that a new crop of proprietary vendor tools is starting to be adopted by teams whose primary testing tool has been Selenium, itself the tool that displaced the browser-based automation vendors of the past. No single magical tool will solve all the test automation problems we’re facing in one fell swoop, so these tools specialize in more specific aspects of testing and use new technologies to cater to “different teams with different needs, styles, and preferences,” as Joe puts it. Many of these tools, including mabl, apply AI and ML to enhance test stability and extract additional information from test runs.
Joe says that in this new crop of tools, we are seeing evolutions of the old capture/playback tools, which were notoriously unreliable and hard to maintain. To combat those problems, the mabl trainer retains the basic capture/playback workflow of training a test by mimicking the user’s journey, and does exactly as Joe predicted: it “use[s] machine learning to help improve reliability at runtime”. Auto-healed steps combat the issue of tests failing after code changes. Maintenance is also streamlined through a comprehensive dashboard where you can review test output, recorded journey steps, and insights in plain English, as well as edit the journey at any time. All tests run in the cloud, so there is no test infrastructure of your own to maintain either.
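For readers who never used the old tools, here’s a toy sketch of the capture/playback idea: the trained journey is stored as data and replayed step by step. The step format, selectors, and URLs are invented for illustration; the point is that a modern tool routes the element lookups through machine learning (for example, an auto-healing finder like the earlier sketch) rather than a raw selector match.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# A trained journey captured as plain data rather than code.
journey = [
    {"action": "visit", "url": "https://example.com/login"},
    {"action": "type", "selector": "#email", "value": "user@example.com"},
    {"action": "type", "selector": "#password", "value": "correct-horse"},
    {"action": "click", "selector": "#log-in"},
]

def replay(driver, steps):
    """Play back the recorded steps against a live browser session."""
    for step in steps:
        if step["action"] == "visit":
            driver.get(step["url"])
        elif step["action"] == "type":
            # An auto-healing tool would replace these raw lookups with a
            # similarity search when the selector no longer matches.
            driver.find_element(By.CSS_SELECTOR, step["selector"]).send_keys(step["value"])
        elif step["action"] == "click":
            driver.find_element(By.CSS_SELECTOR, step["selector"]).click()

replay(webdriver.Chrome(), journey)
```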
According to Joe, the best definition of continuous testing is “the ability to instantly assess the risk of a new release or change before it affects customers.” Essentially, by testing before the release goes out, as it goes out, and while it’s out, you find the bugs in your software rather than letting your users find them first. This is the kind of process where mabl can really shine, with its capacity for continuous testing and its numerous CI/CD integrations. mabl’s own Lisa Crispin is currently writing a blog series on helping your team adopt holistic, or “shift left / shift right,” testing throughout the infinite loop of building, delivering, and learning from new features that constitutes continuous testing.
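As a rough sketch of what such a gate can look like in a pipeline, the following Python script triggers a cloud test run after a deploy, polls for the result, and fails the build on a regression. The endpoint, response fields, and environment variables are placeholders invented for this example, not mabl’s actual API.

```python
import os
import sys
import time

import requests

# Placeholder endpoint and credentials: the general shape of a deploy gate.
API = "https://api.example.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['TEST_API_TOKEN']}"}

def run_tests_and_gate(plan_id, timeout_s=900):
    """Kick off a cloud test run and fail the pipeline if it doesn't pass."""
    run = requests.post(f"{API}/plans/{plan_id}/runs", headers=HEADERS).json()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{API}/runs/{run['id']}", headers=HEADERS).json()
        if status["state"] in ("passed", "failed"):
            break
        time.sleep(10)  # poll until the cloud run settles
    else:
        sys.exit("test run timed out")
    if status["state"] != "passed":
        sys.exit(f"release blocked: test run {run['id']} failed")

if __name__ == "__main__":
    run_tests_and_gate(os.environ["TEST_PLAN_ID"])
```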
Thanks again to Joe for the insightful article! We were happy to be included in it and especially excited we could expand on it with our own perspective.
To test out these features of mabl yourself, sign up for a free trial or request a demo for a personalized walkthrough.