The role of AI and Machine Learning in SW testing – Part 1

June 29, 2020


Nowadays we hear and see the acronyms ML and AI spread across news, articles, papers and so on, standing for “Machine Learning” and “Artificial Intelligence”, respectively. But what do they actually mean?

Artificial Intelligence represents the capability of a machine to demonstrate intelligence: any device that perceives its surrounding environment and takes actions to achieve an objective can be considered intelligent. In practice, this label applies when a machine is able to mimic human cognitive functions.

Machine Learning is the field of Artificial Intelligence that relies heavily on algorithms and statistical techniques, not only to simulate human behaviour, but also to give computer systems the ability to “learn” from data without being explicitly programmed. The basic premise is an algorithm that receives input data and uses it to predict an output, updating its predictions as new data becomes available. Such systems learn from and make predictions on data, going beyond strictly static software by making decisions based on a model built from sample inputs.
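This premise can be illustrated with a toy example (not from the article): a model that predicts an output from an input and updates its parameters each time a new observation arrives. Everything below, including the hidden rule being learned, is an illustrative assumption.

```python
# Minimal sketch of the premise above: an algorithm that receives input
# data, predicts an output, and updates itself as new data becomes
# available. This toy model fits y ≈ w*x + b by online gradient descent.

def make_model(lr=0.01):
    state = {"w": 0.0, "b": 0.0}

    def predict(x):
        return state["w"] * x + state["b"]

    def learn(x, y):
        # Update the parameters from a single new observation
        # (online learning: no full retraining, no explicit rules).
        error = predict(x) - y
        state["w"] -= lr * error * x
        state["b"] -= lr * error

    return predict, learn

predict, learn = make_model()
# Stream in samples of a hidden rule, y = 2x + 1, which the model
# was never explicitly programmed with.
for _ in range(2000):
    for x in [0.0, 1.0, 2.0, 3.0]:
        learn(x, 2 * x + 1)

print(round(predict(4.0), 2))  # close to 9.0
```

The point is not the algorithm itself but the loop: predictions improve as data flows in, with no human editing the decision logic.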

Quality Assurance and the need to keep up with technology

According to Google’s CEO, Sundar Pichai, we are experiencing “an important shift from a mobile-first world to an Artificial Intelligence (AI)-first world”. Gartner agrees, asserting that the “coming years will mark the beginning of the democratisation of AI” and that “a much broader range of companies and governments will use AI” in the future. A Gartner survey also found that:

  • 35% of the AI initiatives are on the radar, but no action is planned;
  • 25% are in short-term planning/actively experimenting;
  • 21% are in medium or long-term planning;
  • 4% have no interest;
  • Only 4% have already invested and deployed.

The software industry has never been so complex and volatile, and systems grow irregularly from one day to the next. Every day we become more dependent on technology, and the cost of delivering poor-quality software keeps rising, so new ways to deliver better quality are being sought.

The first challenge encountered in AI lies not in the technology itself, but in identifying specific applications for it; according to the State of Testing Survey 2017 and the World Quality Report 2018/2019 (WQR), one of the promising fields in which to apply AI is Quality Assurance.

Merging Quality Assurance and Artificial Intelligence

For testing purposes, we can consider AI either as a tool or as the final product under test. In other words, we can use AI to test, and we can test AI itself. In this article we will reflect on the first approach.

AI-driven testing tools and levels

One trending application of AI to software testing is Test Automation. According to automation test architect and blogger Joe Colantonio, we can divide test automation tools into three waves:

  • The first was accomplished by tools such as WinRunner, Silk Test or QTP, and made automation of regression tests feasible for the first time;
  • The second began with Selenium, “focusing more on developers and programming best practices when creating automated tests”;
  • The third wave revolves around AI and ML, where “companies are rushing to create tools they can pitch as ‘AI-driven’”. Some of the best-known third-wave tools are Applitools, ReTest, Test.AI, SauceLabs, Testim, SeaLights, Mabl and ReportPortal.

Besides this, according to Gil Tayar, senior architect at Applitools, AI-driven testing can be divided into six levels according to its degree of automation:

  • Level 0 – Test scripts are written by the tester and run every release. Each time developers change the application, the tester must modify the scripts manually;
  • Level 1 – An AI system starts to be applied; the better it can “see” the application, the more autonomous the testing activity becomes. At this level, the AI system can check whether a test passes, but when it fails, the AI cannot determine whether the failure is real or why it happened. The tester is still needed to verify every change;
  • Level 2 – The AI system can group changes, since it is able to understand, semantically, which ones are the same change. It can then tell the human when changes are the same and ask whether to accept or reject them all as a group;
  • Level 3 – In addition to running tests, the AI looks at hundreds of test results and sees how things change over time. By applying ML techniques, it can detect anomalies in those changes and submit only the anomalous ones to a human for verification;
  • Level 4 – The AI system understands the application under test and, using ML techniques, is able to write the tests itself;
  • Level 5 – The AI can interact with team members and stakeholders, understand the application and fully run the tests by itself.
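The Level 3 idea of flagging only anomalous results for human review can be sketched with a simple statistical check. This is an illustrative assumption of how such a detector might work; the function name, the data and the z-score threshold are ours, not any vendor’s API.

```python
# Hedged sketch of Level 3: scan many past test results and surface
# only anomalies for a human. Here, a run is "anomalous" if its
# duration deviates by more than `threshold` standard deviations
# from the historical mean (a basic z-score test).
from statistics import mean, stdev

def anomalous_runs(durations, threshold=3.0):
    """Return indices of runs whose duration is a statistical outlier."""
    mu, sigma = mean(durations), stdev(durations)
    if sigma == 0:  # all runs identical: nothing to flag
        return []
    return [i for i, d in enumerate(durations)
            if abs(d - mu) / sigma > threshold]

# 30 stable runs around 1.0 s, then one sudden 5.0 s run.
history = [1.0, 1.1, 0.9, 1.05, 0.95] * 6 + [5.0]
print(anomalous_runs(history))  # → [30]
```

A real Level 3 tool would of course track far richer signals than duration (screenshots, DOM diffs, error logs), but the principle is the same: statistics decide what a human needs to look at.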

Currently, we are in the third wave, with tool vendors working on Level 2 tools and large companies using advanced Level 1 tools.

In Part 2 of this article, we will go through the advantages and challenges of AI in testing activities, along with the application of Machine Learning to Test Suite Optimisation.



Written by
Edgar Oliveira
Sofia Rebelo
Guilherme Maia