One year after ChatGPT’s launch, artificial intelligence (AI) continues to dominate the hype cycle across software development, software testing, and test automation. As exciting as it is to see new AI capabilities and tools emerge, time-crunched QA teams have to weigh how much time and effort to invest in learning AI skills. Between increasing the output of development pipelines, improving DORA metrics and DevOps maturity, and expanding their team’s test automation strategy, software testers need to focus their limited time and effort on skills that will help them navigate the long-term future of AI in software testing.
The Different Types of AI Tools
Before diving into how AI will impact test automation and software testing, it’s important to understand the different types of AI.
- Expert systems combine human expertise with machine speed and efficiency. Best suited for accelerating simple decision making and task automation, expert systems power many low-code test automation platforms, including mabl.
- Machine learning tools learn patterns from large amounts of data. Ideal for drawing conclusions about new data in scenarios with many or highly specific parameters, machine learning commonly powers content recommendations on streaming platforms.
- Machine vision is a subset of machine learning focused on image recognition. Existing tools include Google Translate’s image-based translation and facial recognition in photos.
- Natural language processing uses machine learning to make decisions about text. Current AI tools using natural language processing include Grammarly.
- Generative AI learns patterns and generates similar data based on a given input. One of the buzziest and fastest-growing areas in AI, generative AI already powers tools like ChatGPT, Bard, Bing, and GitHub Copilot.
That last item, GitHub Copilot, is just one of a growing range of AI tools built specifically for software development. With more organizations reducing budgets and facing competitive markets, there’s no question that the coming year will be focused on integrating AI tools into development pipelines.
Understanding AI in Software Development
Popular software development forum (and mabl customer) Stack Overflow found that 70% of developers were already using AI tools as part of their work or planning to start using them soon. Gartner further affirmed this trend in a recent article, predicting that 70% of professional developers will be using AI-powered tools by 2027.
For now, though, software testing isn’t where AI tools see the most use. The same Stack Overflow survey found that the vast majority of early AI adopters were using AI to write code (82.55%), followed by debugging (48.89%), documenting code (34.37%), and learning about their codebase (30.1%). Only 23.87% reported using AI tools for software testing.
Despite the slower adoption of AI in software testing, Stack Overflow found that 55.17% of developers were interested in using AI for testing, the highest level of interest across all use cases. AI is clearly seen as a productivity booster for many developers, and quality teams have an opportunity to play an important role in ensuring code quality through this transformation.
AI Test Automation Tools and Applying AI to Software Testing
Mabl’s 2022 Testing in DevOps Report asked 560 software developers and QA professionals how they spend their software testing time. Despite the wide range of tasks involved in software testing, test planning/test case management and test maintenance emerged as the clear “winners,” cited by 56% and 39% of respondents, respectively, as their most time-consuming activities.
When it comes to improving test coverage and achieving quality engineering goals, test planning and test maintenance aren’t the most effective ways for QA teams to spend the bulk of their time. Luckily, AI tools can help reduce the burden of these tasks so quality professionals can invest more time in higher-impact work.
AI Test Automation Tools Will Help Decide What to Test
Test case management is both an art and a science: as much as QA teams can predict customer needs based on current user behavior patterns, sudden changes, whether from app updates, new features or products, or shifting consumer trends, can still disrupt existing habits. Generative AI has the potential to augment quality engineering expertise and knowledge by further refining the test planning process.
Large language models (LLMs) generate insights from language, particularly text. Fortunately for quality professionals, a significant amount of text specific to their product and users already exists across help documentation, Frequently Asked Questions (FAQs), company websites, and internal documentation. These documents and web pages contain a wealth of information that can be used to define product features, testing needs, and user behaviors.
Analyzing these documents manually would take software testers days, if not weeks, but AI is extremely well-suited to leveraging these massive datasets. ChatGPT is famously (or infamously) good at summarizing information from across the internet and adapting those summaries into different styles of writing. Recommending test cases based on documentation, help docs, and/or user manuals isn’t all that different. We see a future where generative AI tools for software testing help quality professionals reduce the amount of time spent on test case management and test planning without risking quality gaps.
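To make that workflow concrete, here’s a minimal sketch of a documentation-to-test-cases helper. It assumes the OpenAI Python SDK (v1+), an OPENAI_API_KEY in the environment, and a hypothetical help doc path; the model name and prompt wording are illustrative, not a description of any particular product’s feature.

```python
# Minimal sketch: asking an LLM to propose test cases from existing help docs.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set in the
# environment; model name, prompts, and file path are illustrative only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_test_cases(doc_path: str, max_cases: int = 5) -> str:
    """Summarize one help document into candidate end-to-end test cases."""
    doc_text = Path(doc_path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": "You are a QA engineer drafting test cases.",
            },
            {
                "role": "user",
                "content": (
                    f"Based on this help document, propose up to {max_cases} "
                    "user-facing test cases as numbered steps with expected "
                    f"results:\n\n{doc_text}"
                ),
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(suggest_test_cases("docs/checkout-faq.md"))  # hypothetical path
```

A quality professional still reviews and curates the suggestions; the point is to turn days of document analysis into a short editing pass.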
AI Will Reduce Test Maintenance for Software Testing Teams
The second most time-consuming software testing task, test maintenance, is also well-suited for AI support. While AI capabilities in test automation solutions have been helping quality teams reduce the burden of test maintenance for years, generative AI has the potential to further reduce test maintenance efforts.
Imagine a team shifts from the Ant Design library to Material Design for its UI styling and components. That change impacts everything from the UI’s appearance to how components are structured and identified. Now imagine the same team is also making significant design changes, including text, button formatting, and components. Earlier AI tools would have a hard time handling this level of change.
Generative AI, on the other hand, will have a more nuanced understanding of these changes because LLMs have been trained on text across the web. These tools will be able to understand the difference between a button and a heading, and how that impacts the text in those features. Consider a button that reads “start your order.” If that button changes to say “begin shopping,” large language models will recognize that although the phrasing is different, the context and intent are the same.
Pulling from the context of other buttons with active keywords, ancestor elements, and other application data, generative AI will be able to identify the new version of this button even when major changes have been made to the application.
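As a toy illustration of that idea, the sketch below uses the open-source sentence-transformers package to compare a recorded button label against candidate labels in an updated UI. It shows semantic matching in general, not mabl’s actual auto-healing implementation, and the labels are hypothetical.

```python
# Minimal sketch of semantic element matching using sentence embeddings.
# Assumes the open-source sentence-transformers package; labels are
# hypothetical and this is not any vendor's production healing logic.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")


def best_match(old_label: str, candidates: list[str]) -> tuple[str, float]:
    """Return the candidate label whose meaning is closest to the original."""
    old_vec = model.encode(old_label, convert_to_tensor=True)
    cand_vecs = model.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(old_vec, cand_vecs)[0]  # cosine similarity per candidate
    best = int(scores.argmax())
    return candidates[best], float(scores[best])


# A test step recorded against "start your order" can be re-targeted to the
# renamed button because the embeddings are close, while "view cart" is not.
label, score = best_match("start your order", ["begin shopping", "view cart", "help"])
print(label, round(score, 2))
```

In a real test automation tool, a similarity score like this would be just one signal, weighed alongside element attributes, ancestor context, and position in the page.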
Building AI Skills for Test Automation and Software Testing
It’s important to note that while impressive advances have been made in AI, it’s still highly error-prone, particularly in nuanced scenarios. As these capabilities emerge and evolve, software testers, developers, and QA professionals with the right skills will be essential for harnessing AI-backed test automation tools effectively.
Soft Skills Complement AI in Quality Engineering
The emergence and growth of AI tools for software testing only adds to the importance of soft skills in quality engineering. Though AI is excellent at detecting patterns and summarizing vast amounts of data, these tools can’t identify which problems need to be solved, consider new ways to improve processes, or make real decisions. Soft skills like critical thinking, empathy, and problem solving are essential for creating and scaling software testing strategies that add value for the company and the customer.
Technical Knowledge Limits AI Risk in Software Testing
There’s also a trust gap in AI: 55% of developers are interested in using AI for testing, but just 3% trust the output of AI tools. It’s clear that AI won’t replace technical skills, merely shift them.
While software testers may not need to learn how to code as low-code tools and AI democratize test automation, skills like prompt engineering will help QA teams fine-tune their requests through iteration and minimize the risk of poor output. As AI test automation tools take on a greater role in generating test cases and updating tests, quality professionals will need the skills to check that output for accuracy and effectiveness.
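As a hypothetical example of that iteration, compare a vague first-draft prompt with a refined version that constrains scope and format, making the output far easier to verify:

```python
# Hypothetical prompts for generating login-page test cases; the refined
# version pins down count, coverage, and format so a QA engineer can
# quickly check the model's output for accuracy and completeness.
FIRST_DRAFT = "Write tests for the login page."

REFINED = """Write exactly 5 test cases for a web app's login page.
For each case give a one-line title, numbered steps, and the expected result.
Cover: valid login, wrong password, locked account, empty fields, and
password reset. Format the output as a Markdown table. Do not invent UI
elements beyond: email field, password field, 'Sign in' button, and
'Forgot password' link."""
```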
Building Holistic Software Testing Skills: AI, Test Automation, and UX
One of the most important assets in the era of AI isn’t a single skill at all, but the ability to contextualize data across software testing and quality engineering disciplines. Software testers already have a diverse range of expertise spanning manual testing, automated testing, communication, and user behavior. As AI reduces the burden of rote tasks, quality engineers can focus on higher-value work like exploratory testing, collaborating across the company, and improving test coverage.
Navigate the Changing Field of Quality with In-Demand Software Testing Skills
As disruptive as AI will be to software development and software testing, people will undoubtedly play a critical role in ensuring that AI tools are used effectively. With their valuable set of soft skills and technical knowledge, QA professionals are well-positioned to learn AI skills for software testing and test automation.
See how mabl makes it easy to harness AI for automated testing with our 14-day free trial!