Priceline is the home of modern travel experiences. Whether it’s finding the perfect place to stay for a much-needed getaway, snagging a reservation at the hottest spot in town, or taking to the open road in a rental car, Priceline can get you there.
Managing quality across this wide range of personalized experiences demands an extensive and adaptable software testing strategy. Dynamic content dominates the customer experience, from sorting algorithms to recommendations and closed user groups. The intense level of customization delivers an exceptional experience for users, but creates challenges for the QA and development teams.
When Test Automation Architect Antony Robertson joined Priceline in 2015, their software testing strategy was 100% manual. Regression testing, feature (A/B) testing, web/API testing, and end-to-end testing were all performed step-by-step by a dedicated team of QA professionals. Between product complexity and rapid-fire delivery cycles, the team simply didn’t have the bandwidth to build or buy a test automation tool that would have a steep learning curve and require ongoing maintenance. Yet manual testing was increasingly proving to be too time-intensive and ineffective for scaling quality across dynamic user experiences.
From Manual to Automated Testing
Antony started building towards Priceline’s automated testing future by gathering requirements from the QA team. At the time, only quality engineers were engaged in testing, supporting an extensive testing strategy that included core functional testing, GUI testing, end-to-end testing, regression testing, cross-browser testing, and more. As Antony dove into the challenges limiting testing, he found that Priceline’s complex technical needs had prevented them from investing in a test automation tool that could reduce the time needed for testing. So he decided to build his own.
The Rise of Autobot and Early Automated Testing Efforts
The earliest iteration of Priceline’s homegrown testing framework, which came to be fondly known as Autobot, was the first step towards test automation. As the entire Priceline organization became more invested in the value of test automation, Antony created new, more capable iterations of Autobot that incorporated a growing number of requests from the team.
The first version of Antony’s homegrown test automation framework was CLI-only, which was challenging for those new to automated testing. Though the Priceline team was used to working with their existing technology stack (Node.js, WebdriverIO, Mocha, Chai, and page object models), they weren’t used to leveraging these tools for testing, which added friction to adoption. V0 was also hardcoded, which made it difficult to adapt to quickly changing user needs as customers searched for different hotels, restaurants, or travel options.
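For illustration, a hardcoded V0-style check built on that stack might have looked roughly like the sketch below; the selectors, URLs, and dates are assumptions, not Priceline’s actual code.

```javascript
// Illustrative only: everything is hardcoded inline, so any change to the
// page or the scenario means editing the test itself.
const { expect } = require('chai');

describe('hotel search (hardcoded V0 style)', () => {
  it('searches for a New York hotel on fixed dates', async () => {
    await browser.url('https://www.priceline.com/');

    // Destination and dates are baked directly into the test
    await $('#hotel-destination').setValue('New York');
    await $('#hotel-checkin').setValue('2025-06-01');
    await $('#hotel-checkout').setValue('2025-06-03');
    await $('button=Find Your Hotel').click();

    // Hardcoded expectation about the results page
    const results = await $$('[data-testid="hotel-result"]');
    expect(results.length).to.be.greaterThan(0);
  });
});
```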
With that feedback in mind, Antony created V1 of his automated testing framework. He began by creating abstraction layers that reduced how often his teammates needed to rewrite the same code as they created tests. This included parameterized functions that the team could simply copy and paste into their test scripts. This effort to streamline testing workflows paid off as more people adopted automated testing. More automated testing, however, introduced a new challenge: test maintenance.
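A rough sketch of what such a parameterized, copy-pasteable helper could look like (the function name, parameters, and selectors are hypothetical):

```javascript
// Hypothetical parameterized helper: teammates call one function with their
// own arguments instead of repeating the same hardcoded steps in every test.
const { expect } = require('chai');

async function searchHotels({ destination, checkIn, checkOut }) {
  await browser.url('https://www.priceline.com/');
  await $('#hotel-destination').setValue(destination);
  await $('#hotel-checkin').setValue(checkIn);
  await $('#hotel-checkout').setValue(checkOut);
  await $('button=Find Your Hotel').click();
  return $$('[data-testid="hotel-result"]');
}

// A test then reduces to a call plus an assertion.
it('finds hotels in Chicago', async () => {
  const results = await searchHotels({
    destination: 'Chicago',
    checkIn: '2025-07-10',
    checkOut: '2025-07-12',
  });
  expect(results.length).to.be.greaterThan(0);
});
```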
To manage test maintenance as automated testing increased, Antony introduced test templates for common test types. For example, if a Priceline QA engineer or developer wanted to test a hotel search happy path, they had a standardized template to use. This involved encapsulating the homepage as one entity in a page object model, then creating a single test script that covered the homepage, the search page, and the checkout: the entire customer journey. This made creating an automated test as easy as adding conditional logic and incorporating individual test steps.
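As a sketch of the idea, a template built on page objects might look roughly like this; the class names, selectors, and conditional step are illustrative assumptions rather than Priceline’s actual templates.

```javascript
// Hypothetical page objects and journey template for a hotel search happy path.
const { expect } = require('chai');

class HomePage {
  async open() { await browser.url('https://www.priceline.com/'); }
  async searchHotels(destination, checkIn, checkOut) {
    await $('#hotel-destination').setValue(destination);
    await $('#hotel-checkin').setValue(checkIn);
    await $('#hotel-checkout').setValue(checkOut);
    await $('button=Find Your Hotel').click();
  }
}

class SearchResultsPage {
  async resultCount() { return (await $$('[data-testid="hotel-result"]')).length; }
  async selectFirstResult() {
    const results = await $$('[data-testid="hotel-result"]');
    await results[0].click();
  }
}

class CheckoutPage {
  async isLoaded() { return $('[data-testid="checkout-form"]').isDisplayed(); }
}

// Teams copy the template, adjust parameters, and add conditional steps
// for their product's variations.
async function hotelHappyPath({ destination, checkIn, checkOut, signedIn = false }) {
  const home = new HomePage();
  const search = new SearchResultsPage();
  const checkout = new CheckoutPage();

  await home.open();
  if (signedIn) {
    await $('button=Sign In').click(); // conditional step, illustrative only
  }
  await home.searchHotels(destination, checkIn, checkOut);
  expect(await search.resultCount()).to.be.greaterThan(0);
  await search.selectFirstResult();
  expect(await checkout.isLoaded()).to.equal(true);
}
```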
The test templates resulted in Priceline’s “Prometheus moment” as automated testing reached parity with manual testing, at least in terms of effort and accuracy. Having simple, clear-cut scenarios meant that the team was able to shift from 100% manual testing to 20% automated testing. Antony explained the mindset shift that took place:
“People realized their boring work could be done with the push of a button, and the conversation went from ‘what’s going to happen to my job’ to ‘what more can I do in my job.’ They could spend more time doing higher impact tasks; the harder edge cases, the corner cases, the tasks that are really valuable and have true monetary value when they create outages, but are also hard to test.”
Integrations, Maintenance, and Scaling Automated Testing
Once the V1 Autobot proved the viability of automated testing, Antony was deluged with further requests from the Priceline team. The first step was moving off the CLI and into a full test automation platform. Then, Antony began adding further abstraction layers that made it easier to configure different permutations of tests, including virtual machines for different types of mobile devices. He also introduced new ways to import data into the testing framework for more realistic testing scenarios. Each product team gained expanded test templates and new levels of data to inform and execute their quality strategies.
But again, more testing introduced more test maintenance challenges and demands for greater functionality from the Autobot. Antony wanted to remove any human bottlenecks to testing, such as needing to be called whenever a developer’s or product owner’s code reached a particular environment. So his team began brainstorming how to integrate automated testing into their CI/CD pipelines. As a GitHub shop, they built a GitHub Action that allowed people to trigger tests as needed. Additional tooling pulled test results, sent alerts to Splunk or Slack, and allowed code to pass to the next logical environment if it met minimum pass percentages set by Priceline QA engineers.
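A simplified sketch of that kind of gating step, with an assumed results-summary format and threshold variable (not Priceline’s actual tooling):

```javascript
// Hypothetical gating step run from CI after the test suite finishes: it reads
// a results summary and fails the pipeline (non-zero exit) if the pass
// percentage falls below the threshold set by QA engineers.
// The results file format and variable names are assumptions for illustration.
const fs = require('fs');

const MIN_PASS_PERCENTAGE = Number(process.env.MIN_PASS_PERCENTAGE || 95);
const results = JSON.parse(fs.readFileSync('test-results/summary.json', 'utf8'));

const passRate = (results.passed / results.total) * 100;
console.log(`Pass rate: ${passRate.toFixed(1)}% (minimum ${MIN_PASS_PERCENTAGE}%)`);

if (passRate < MIN_PASS_PERCENTAGE) {
  // A non-zero exit code stops the CI job, so the build never promotes to the
  // next environment; alerting to Slack or Splunk would hang off the same
  // result in a separate step.
  process.exit(1);
}
```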
New Abstraction Layers Democratize Automated Testing, But Infrastructure Challenges Build
These improvements and expansions empowered everyone in the Priceline organization to participate in software testing. Different teams had access to data from their specific products, could trigger tests as needed, and could share results with their preferred toolset.
The final lingering challenges to a fully scalable, high-performing automated testing strategy were infrastructural. Antony wanted his entire team, from QA and developers to product owners to the CTO, to be able to test whatever they wanted. The infrastructure to enable this, however, is complex. Priceline used Sauce Labs for virtual machines, which allowed for parallelization, an essential component of a software quality strategy at Priceline’s scale. But those parallel sessions were being maxed out as more people began participating in automated testing. Antony calculated how much time was being spent waiting for testing capacity and its true cost to the company, and realized this was an unsustainable limitation. So he created an individualized type of parameterization by injecting environment variables from the UI, which enabled the Priceline team to change tests extremely quickly.
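As an illustration, environment-variable-driven parameterization in a WebdriverIO config might look roughly like this; the variable names and internal URLs are assumptions.

```javascript
// Sketch of a WebdriverIO config driven by injected environment variables:
// which environment to hit, which browser to use, and how many parallel
// sessions to request are decided at trigger time, not in test code.
const BASE_URLS = {
  dev: 'https://dev.hotels.example.internal',      // illustrative URLs
  staging: 'https://staging.hotels.example.internal',
  prod: 'https://www.priceline.com',
};

exports.config = {
  baseUrl: BASE_URLS[process.env.TEST_ENV || 'staging'],
  maxInstances: Number(process.env.MAX_PARALLEL || 5),
  capabilities: [{ browserName: process.env.BROWSER || 'chrome' }],
};
```

Individual tests can read scenario data (for example, a destination or travel dates) from the same injected variables, so a single test body serves many permutations.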
Parameterization resolved the parallelization issues, but unleashed a new wave of automated testing that strained Priceline’s infrastructure. Testing was happening hard and fast, and not all services in non-production environments could handle the load. To get around this, Antony built a queuing mechanism that staggered the tests within a test plan, running each at a randomized interval. The workaround was effective at mitigating the capacity challenge.
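A minimal sketch of such a randomized stagger, assuming each test exposes a run() method (the interfaces here are hypothetical, not Autobot’s actual design):

```javascript
// Instead of firing every test in a plan at once against fragile
// non-production services, each test starts after a random delay,
// spreading the load over a window (here up to 60 seconds).
function randomDelay(maxMs) {
  return new Promise((resolve) => setTimeout(resolve, Math.random() * maxMs));
}

async function runTestPlan(tests, maxOffsetMs = 60000) {
  await Promise.all(
    tests.map(async (test) => {
      await randomDelay(maxOffsetMs);
      await test.run();
    })
  );
}
```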
Building a Culture of Quality with Mabl
Mabl shared Antony’s commitment to making automated testing as easy and accessible as possible. Thanks to the simplicity of low-code, Priceline was able to quickly onboard dozens of team members. Just as new versions of Autobot generated new levels of excitement, Priceline team members were eager to see how they could explore new areas of testing with mabl.
Once initial test cases were built out and the QA team felt comfortable testing complex user journeys, the team migrated the entirety of their web tests (~5K tests) into mabl. That momentum resulted in developers asking for the same deploy triggers and automatic gating they had had with Autobot.
Luckily, mabl made it easy to accommodate these (many) requests. Antony was able to build automated triggers and deployment actions with Priceline’s CI/CD pipelines. With just a few shell scripts, the team eliminated several human touch points that accounted for multiple work hours per deployment. Several teams have developed such highly tuned tests that they can deploy to production with zero human intervention (so long as all tests pass).
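For a rough idea of what such a deployment trigger can look like, here is a sketch written in Node.js for consistency with the earlier examples (Priceline’s actual integration used shell scripts). The endpoint and fields follow mabl’s Deployment Events API, but treat the details as illustrative and defer to mabl’s current documentation.

```javascript
// Illustrative deployment trigger: notifies mabl that a deployment happened so
// the associated test plans run. Requires Node 18+ for the global fetch.
const MABL_API_KEY = process.env.MABL_API_KEY;

async function triggerMablDeployment(environmentId) {
  const response = await fetch('https://api.mabl.com/events/deployment', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // mabl API keys are sent via basic auth with the literal username "key"
      Authorization:
        'Basic ' + Buffer.from(`key:${MABL_API_KEY}`).toString('base64'),
    },
    body: JSON.stringify({ environment_id: environmentId }),
  });
  if (!response.ok) {
    throw new Error(`mabl deployment event failed: ${response.status}`);
  }
  // The response includes an event id that later pipeline steps can poll
  // to gate promotion on the test results.
  return response.json();
}
```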
Armed with a reliable and intuitive test automation platform, engineers began attending Antony’s weekly office hours to learn more about how they could use mabl, especially flows, JavaScript Snippets, and end-to-end testing templates. Once they understood the basics of mabl and how to use reusable components, they were able to immediately start contributing to software testing efforts and building a culture of quality. Hundreds of users now run UI and API tests in mabl’s unified test automation platform, making it easier to collaborate on quality even as deployments increase.
Antony summarized the importance and impact of this culture of quality:
“In the world of test automation, our effectiveness is measured by the combined results of velocity and quality. Meaning to say, we want to deploy our code as often and as fast as possible with no bugs. Rushing code to production without proper testing is a recipe for disaster and, conversely, being slow to market with new features leaves you in the dust.
At Priceline, fostering a culture of quality has been one of the primary reasons for our success with mabl. Make test creation a breeze, make deployments faster and safer by gating based on test results, and, lastly, make test results actionable. Build it and they will come, or, rather, test.”
Teams can start building their own cultures of quality with mabl by registering for a free 14-day trial.