In an ideal world, transitioning from manual testing to automated testing would be like breathing a sigh of relief. Things aren’t that simple, however. As it turns out, some skills developed via manual testing don’t carry over to an automated testing context. In fact, there’s almost a signature that manual testers leave if they try to perform automated testing without sufficient training. In any event, here’s why you should always fully onboard your manual testers when you transition to an automated testing tool.
Manual versus automated testing
If you’re a manual tester, then you are effectively the software agent that’s executing a test. In a UI test, for example, you’ll literally run through a workflow and note the things that look out of place. Your eyes are the test tool, and your understanding of the application is how you validate it. You know, from using the application, how it’s supposed to work—and conversely you know when it’s not working. None of this can be expressed as code.
When a manual tester is given an automated testing tool with little to no training, these ingrained behavior patterns often lead them to create extremely long tests with no assertions. These tests take a while to run, are more likely to produce false positives, and tend to fail in ways that don’t yield meaningful data for improving application quality.
Here are a few examples of how a manual testing mindset can make automated testing less useful.
Testing without assertions
To start with, think about a sample workflow that’s designed to test a login page. First you tell the software to enter the email and password and log in, then you tell it to fill out a contact form. So far, so good, except you didn’t add any assertions. What happens next?
With assertions in place, an automated testing solution like mabl would know something like, “after login, the contact page will load. If the contact page doesn’t load, the test fails.” Without assertions, mabl won’t know that the test has failed if the contact page doesn’t load. Instead, it moves on to the next step, which is “put contact information into contact form.” Then mabl will try to find a page element that looks like a contact form (in this case, the login form), dump all of its contact information into the login form, and nonsensically pass the test.
Including those assertions is a critical best practice: it produces more reliable automated tests that yield meaningful results.
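To make this concrete, here’s a minimal sketch of that login-then-contact-form workflow written as a Playwright test in TypeScript. This is not mabl’s low-code interface, and the URLs and selectors are hypothetical; the point is simply where the assertion goes and what it protects against.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical URLs and selectors, shown only to illustrate the role of assertions.
test('login leads to the contact page', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.fill('#email', 'user@example.com');
  await page.fill('#password', 'secret');
  await page.click('button[type="submit"]');

  // The assertion: if the contact page never loads, the test fails right here
  // instead of blindly typing contact details into the login form.
  await expect(page).toHaveURL(/\/contact/);
  await expect(page.locator('form#contact')).toBeVisible();

  await page.fill('#name', 'Test User');
  await page.fill('#message', 'Hello!');
  await page.click('button#send');
});
```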
Adding unnecessary test steps
Let’s say that you’re testing the web application for an airline. If you’re testing the ability to search for a flight, it’s easy enough to instruct the test to input search parameters and then validate what comes back. Meanwhile, if you’re testing the ability to book a flight, you can skip the search step entirely, since you’re not testing it, and start the test at the URL for a specific flight.
If you’re a manual tester, however, you’ve never had the ability to start a test in the middle of a workflow. Even if you’re only testing the booking capability, you still have to begin by searching for a flight, so you might as well test that too. When manual testers transition to automated testing, that habit carries over as a tendency to over-test.
Over-testing presents a real problem because failures obscure meaningful information. If the search functionality fails in our example, the test stops there, and you still don’t know whether the booking functionality works properly.
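For contrast, here’s a rough sketch (again Playwright-style TypeScript, with a hypothetical deep link and selectors) of a focused booking test that starts at the flight’s URL instead of re-testing search:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical deep link to a specific flight; search gets its own, separate test.
test('book a specific flight', async ({ page }) => {
  await page.goto('https://example-airline.com/flights/BOS-SFO/AB123');

  // Assert the booking page actually loaded before interacting with it.
  await expect(page.locator('#booking-form')).toBeVisible();

  await page.fill('#passenger-name', 'Test User');
  await page.click('button#confirm-booking');

  // A failure here points straight at booking, not at search.
  await expect(page.locator('.confirmation-number')).toBeVisible();
});
```

A failing run of this test tells you the booking flow broke; a failing run of the search test tells you search broke. Neither result hides the other.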
Out in the real world, however, we see examples that are much more extreme. Imagine a test with 500 conditional steps: if any one of them fails, the entire test fails. It’s hard to extract useful information from a test like that, because you have to drill deep into it just to find the exact point of failure.
A useful rule of thumb: if seeing the word “failure” next to a test’s name isn’t enough to tell you what broke, you should probably rethink your testing strategy.
Ignoring advanced functionality
Lastly, there’s a tendency among manual testers to ignore more advanced, and more valuable, functionality. In mabl, for example, you can combine an API call with a UI test. Using the API call, you can create, edit, or delete data whenever you want to. Manual testers, however, often go to the extra step of creating a dummy record in the UI just so they have something to test or delete with an API test.
What’s more efficient, however, is to combine the two tests. You can start with an API test that makes a call to create a test record, and then have mabl automatically spin up your UI tests to work with this API-generated data, without needing to first create the test record in the UI. By daisy-chaining tests together like this, you can test larger portions of the application faster and more efficiently.
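As a rough illustration of the pattern (Playwright-style TypeScript rather than mabl’s own interface, with a hypothetical API endpoint and selectors), the API call seeds the data and the UI steps consume it:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical API endpoint and UI: the record is created via the API,
// so the UI portion of the test never has to build it through forms.
test('edit a record created via the API', async ({ page, request }) => {
  // Step 1: an API call creates the test record.
  const response = await request.post('https://example.com/api/records', {
    data: { name: 'Test Record' },
  });
  expect(response.ok()).toBeTruthy();
  const { id } = await response.json();

  // Step 2: the UI test deep-links to the record and works with it directly.
  await page.goto(`https://example.com/records/${id}/edit`);
  await expect(page.locator('#record-form')).toBeVisible();
  await page.fill('#record-name', 'Renamed Record');
  await page.click('button#save');
  await expect(page.locator('.save-confirmation')).toBeVisible();
});
```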
Here at mabl, we know that good testing habits don’t emerge overnight. We’re dedicated to providing not just the most advanced automated testing tools, but also the highest standard of training. Let us know how we can help!
To see how easy it is to create reliable automated tests, sign up for a trial of mabl today!