
Testing Chapter 5

The document discusses potential future directions of AI-driven test automation. It describes enhancing existing tools, replacing full test automation stacks, and developing self-testing adaptive AI systems. The long term vision is for AI systems to incorporate adaptive machine learning and self-testing frameworks to validate runtime changes.

Chapter 5.

Future Directions

Now that you’ve taken the tour of AI-driven test automation, let me take you on a journey into the future of this emerging
technology. In this chapter, I share my thoughts on how AI-driven test automation tools will evolve over the next decade. In my
opinion, the AI-for-automation evolution (or revolution, if you watch enough Hollywood movies) involves three steps. Within
the context of software testing, these include using AI for the following:

1. Enhancing existing tools and frameworks at each testing level and dimension
2. Full stack replacement of entire test automation tool sets
3. Adaptive ML systems designed with self-testing capabilities

Although I have zero faith in my ability to predict the future, I do think it’s important to spend some time discussing and
theorizing about the future directions, paths, and intersections of AI and testing, prior to wrapping up this report.

Enhancing Existing Tools

Considering the current state of the art, I believe the immediate future of AI-driven test automation lies in the
continued integration of AI into existing tools that target different testing levels, quality attributes, and application
domains. As shown in Figure 5-1, researchers and practitioners are likely to continue tackling each concern somewhat in
isolation, until the individual areas become stable and reliable.

Figure 5-1. In the near term, AI enhances existing tools that tackle individual testing concerns

Full Stack Replacement

After its successful application and adoption in specific testing areas, I foresee AI-driven automation shifting toward tighter tool
integrations and a more holistic testing solution. By building on past experiences, sharing training data, and cross-pollinating
various techniques and lessons learned, practitioners will be able to develop general ML models and classifiers for automating all
kinds of testing tasks. Figure 5-2 depicts the idea that in the medium term, AI replaces full stack testing with capabilities that
address both core and cross-cutting concerns across multiple application domains.

Figure 5-2. In the medium term, AI replaces full stack test automation across application domains

Initially, full stack solutions like the one shown in “Self-Testing Adaptive AI” may still require regular human intervention for
complex testing decisions. However, over time, as the AI trains on that human feedback and the technology advances, the bots
will eventually take over those testing tasks.

Self-Testing Adaptive AI

Until now, I’ve focused your attention on what AI can do for testing and test automation. However, I believe that AI
needs testing just as much as testing needs AI, if not more. Technology giants like Microsoft, IBM, and Google are
all grappling with ethical, fairness, privacy, security, and a slew of other issues surrounding AI.1 As more
governments and businesses incorporate AI into their decision making, there is increasing risk that data biases and
blind spots can lead to discrimination, financial loss, or even death. With so much on the line, it is important to bring
a testing mindset and perspective to the world of AI. Even though an ML system may be complex, it can be
quantified, measured, and controlled, not just with statistics but with thorough testing.

AI that incorporates both online and adaptive ML presents an interesting validation and verification challenge.
Unlike offline ML systems that learn from well-organized datasets through single-batch processing, online ML
leverages multiple data sources that are continuously delivering real-time data via sensors. These types of systems
learn and adapt their behavior at runtime in response to environmental changes. Online ML is necessary in highly
dynamic environments, where it is typically infeasible to stop the system, re-collect data, retrain, and retest
previously learned models whenever the environment changes.
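The offline-versus-online distinction above can be made concrete with a small sketch. Everything below is illustrative and not from this report: the class name `OnlineLinearModel` and the method `learn_one` are hypothetical, and the model is deliberately minimal (one weight, stochastic gradient updates). The point is the shape of the learning loop: instead of retraining on a fixed batch, the model updates itself one sample at a time, so it can keep adapting when the data stream drifts at runtime.

```python
# Illustrative sketch only: a minimal online (incremental) linear model.
# Unlike an offline model trained once on a fixed dataset, this one
# updates its weights per sample, adapting to environmental change.

class OnlineLinearModel:
    def __init__(self, lr=0.1):
        self.w = 0.0   # single weight, for simplicity
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return self.w * x + self.b

    def learn_one(self, x, y):
        # One stochastic-gradient step on squared error: the essence of
        # online ML -- adapt immediately, never revisit old batches.
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

model = OnlineLinearModel()
xs = [i / 10 for i in range(10)]           # inputs in [0, 1]

# Phase 1: the environment follows y = 2x.
for _ in range(200):
    for x in xs:
        model.learn_one(x, 2.0 * x)
before_drift = model.predict(0.5)          # approaches 1.0

# Phase 2: the environment changes to y = 3x; the same model
# adapts in place -- no stop, re-collect, retrain, retest cycle.
for _ in range(200):
    for x in xs:
        model.learn_one(x, 3.0 * x)
after_drift = model.predict(0.5)           # approaches 1.5
```

This is exactly the property that makes such systems hard to validate: the model that ships is not the model that runs an hour later.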

Testing adaptive ML requires testing to occur online while the system is operating.2 Figure 5-3 presents a long-term, future vision
of AI. In that future, AI-based systems incorporate adaptive ML to keep pace with changes in user needs and application
environments. A self-testing framework, which leverages AI for automated testing, is responsible for validating dynamic runtime
adaptations. While the idea of self-testing in adaptive systems is not new, the technology is finally here to make a practical
solution to these challenges a reality.

Figure 5-3. In the long term, self-testing is an implicit feature of adaptive AI/ML systems
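As a thought experiment, the gate that such a self-testing framework would place around runtime adaptations can be sketched in a few lines of Python. This is a hypothetical sketch, not an API from any real framework: the names `adapt` and `passes_self_tests` are mine, and the "models" are plain functions standing in for learned behaviors. The idea is that a candidate adaptation is promoted only if it passes an automated validation suite; otherwise the known-good behavior stays live.

```python
# Hypothetical sketch of a self-testing gate for runtime adaptations.

def passes_self_tests(model, validation_suite):
    """Run every check against the candidate; all must pass."""
    return all(check(model) for check in validation_suite)

def adapt(current_model, candidate_model, validation_suite):
    # Validate the runtime adaptation online, before it goes live.
    if passes_self_tests(candidate_model, validation_suite):
        return candidate_model      # promote the adaptation
    return current_model            # reject: keep the known-good model

# Toy example: "models" are functions; checks encode invariants that
# any acceptable adaptation must preserve.
current = lambda x: x + 1
candidate_good = lambda x: x + 2
candidate_bad = lambda x: None      # violates the output invariant

suite = [
    lambda m: m(0) is not None,     # output must exist
    lambda m: m(1) > m(0),          # monotonic on sample inputs
]
```

In a real system the checks would themselves be AI-generated and the promotion step would feed human feedback back into training, but the control flow, test before you trust the adaptation, is the core of the vision in Figure 5-3.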

Conclusion

Software test automation has a bright future ahead of it, and I believe it’s largely due to AI. Throughout this report, you’ve seen
several examples where AI is already helping to bridge the gap between human-driven and machine-driven testing. AI is testing
at different technical levels, validating quality attributes such as performance and accessibility, and allowing automated testing to
reach areas like gaming, where it has been lacking for some time. These advances are not only important to the software-testing
field but may come full circle to benefit AI itself. AI-based systems that incorporate online and adaptive learning will need an
automation framework that is dynamically adaptive and embedded in the runtime environment. The only lingering question that I
still have about that future is, who will test the AI that is testing the AI? To avoid this infinite loop, I hope that the engineering
community continues to make investments at the intersection of AI and software testing, including the role that humans will play
in a future where AI tests software.

1 Tom Simonite, “Tech Giants Grapple with the Ethical Concerns Raised by the AI Boom,” MIT Technology Review Magazine,
March 30, 2017.
2 Tariq M. King, “A Self-Testing Approach to Autonomic Software” (PhD diss., Florida International University, 2009).
