Wayne Roseberry’s Post

It took me a while to write this, but I felt the public discussion about AI as a tool to support product testing needed some real accounts from people who have actually done it. I have several years of that experience, and it was fascinating. #softwaretesting #aitesting #softwaredevelopment

Lessons learned using an AI army to test Microsoft Office

Wayne Roseberry on LinkedIn

Hey, Stephen Spady, it is not lost on me that you were doing this kind of thing more than 20 years prior.

Jim Hazen

Software Test Automation Architect and Performance Test

12mo

Wayne, great article. Fascinating, intriguing, and interesting how this came together and worked. Boatloads of information and food for thought, both positive and negative (points you made). You allowed the AI agents to act like different types of users (novice, intermediate, advanced, expert) to go around the applications and learn their behaviors/capabilities. It seems they started off as novice-type users and learned and grew into experts, thus hitting upon areas of common functional use and others of statistically little use. I can't imagine the size of the logs/data and the database/datastore that contained the knowledge. You'll need to explain more about how the "rewards" factored into things.

This is an example of what a large corporation with vast resources (time, money, people, equipment) can do. The question is how this applies to the 90% of us who are in small and medium-sized companies. As you said, there are always people with misconceptions around this (automation in testing, and now AI). How can we deal with them (how did you really deal with them) so that we don't get painted into a corner right off the bat? This is something I've dealt with for over 30 years. I see AI as adding more to deal with.

Martin Gijsen

I help automation-in-testing seniors level up to consulting

12mo

Very interesting, thanks Wayne Roseberry. It does raise a question for me about the AI side of this. At first I thought this was simply model-based testing, but that is not the case, as the model is created by the tool while exploring and there are no expected outcomes. And the exploring itself is not AI as I understand the term; it is just a smart algorithm doing a smarter version of monkey testing. So I guess the AI bit is strictly in the reward mechanism? Can you say a bit more about how that worked? (The AI label is attached to just about anything these days, so it helps to distinguish between real AI and really smart tools.)
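For readers wondering what a reward mechanism for an exploratory agent might look like, here is a minimal sketch in Python. A novelty-plus-crash reward is a common pattern for this kind of agent; the function name, signature, and weights below are illustrative assumptions, not details of the Microsoft tool.

```python
# Hypothetical sketch of a reward function for an exploratory testing
# agent: crashes pay a lot, new UI states pay a little, and revisiting
# familiar states pays almost nothing. All values are assumptions.

def reward(state_signature, visit_counts, crashed):
    """Score the outcome of one action for a reinforcement-learning agent."""
    if crashed:
        return 100.0  # a crash is the jackpot for a bug-hunting agent
    visits = visit_counts.get(state_signature, 0)
    visit_counts[state_signature] = visits + 1
    return 1.0 / (1 + visits)  # diminishing reward for familiar states
```

Under a reward like this, the agent drifts toward unexplored corners of the application, which would match the corner-case coverage the article describes.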


This is a huge success story from my perspective. Yes, it didn't replace the actual testing YET, but it accomplished a lot: reproducing hard-to-reproduce scenarios and covering corner cases that there is usually no time to cover is amazing on its own. While it cannot do what an army of automation testers can do, as a next level, can we educate it a bit more and point it toward the scenarios we care about, so it tests what we actually want and need to test in addition to everything random? Can we produce some sort of scenario list it should pay attention to?
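One hedged sketch of what "pointing it at scenarios we care about" could look like: bias the agent's action selection toward the next step of a scenario from a curated list, with random exploration as the fallback. The SCENARIOS table, the step format, and the bias value are all hypothetical, not a feature of the tool described in the article.

```python
import random

# Hypothetical scenario list: each entry names the UI actions that make
# up a flow we care about. Format and contents are assumptions.
SCENARIOS = {
    "save_as_pdf": ["File", "Save As", "PDF"],
    "mail_merge":  ["Mailings", "Start Mail Merge"],
}

def pick_action(available_actions, taken_actions, bias=0.7):
    """With probability `bias`, advance a listed scenario; otherwise
    fall back to uniform random exploration."""
    for steps in SCENARIOS.values():
        progress = sum(1 for a in taken_actions if a in steps)
        if progress < len(steps) and steps[progress] in available_actions:
            if random.random() < bias:
                return steps[progress]  # take the scenario's next step
    return random.choice(available_actions)  # pure exploration
```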

Bill Kirst

Leading Change in the Era of AI | Storyteller | Poet | Adobe | Podcast Host - "Coffee & Change" | ex-Microsoft, IBM

12mo

“Tools enhance and extend our capabilities. Rarely do they replace them.” That principle is core to success in a human-centric approach. Thanks for this in-depth analysis. It assures me and others of the continued value of change management in the partnership efforts of AI enablement.

Mansoor Shaikh

Vice President - JPMorganChase | 3X-Oracle Cloud | AWS | Java | Rest Assured | Selenium | Cucumber | TestNG | Spring boot | Kafka | Micro services | jMeter | BlazeMeter | SRE

12mo

Wayne Roseberry Thank you for sharing. It is very insightful. Microsoft has the capacity to deploy AI agents in huge numbers, as you described above, but many companies won't have that capacity. There is a lot of pressure to embed AI in the testing domain. People are sold on the idea that AI is the future. I believe many will resort to popular tools that have AI and that work.

Ahtisham Ilyas

Top Software Testing Voice | Software Test Engineer | Automation Specialist: Web, Mobile, API | Scrum Fundamental Certified

12mo

Interesting read. There are still many points I did not get, so I will read it again, but what I like most about the article is where you wrote: "Testing methods complement each other more than they replace each other." By the way, what was the accuracy, in percent, while performing testing via those AI agents?

Mike Verinder

Strategic Test and Dev Engineering | Digital Transformation | Product and Marketing Advisory | Community Led Growth | Leading Communities + 400,000 Members

11mo

The life of a great software engineer who curiously focuses on QA. I love this article, Wayne Roseberry. It made me think creatively through the problem and map out solutions. Thanks for the journey!


I have done a similar test, of course not on that scale. I used a program that was developed to search for errors/crashes in SUTs, running and learning via a Q-learning algorithm. It found some interesting bugs.
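For anyone curious what that approach looks like, below is a minimal tabular Q-learning loop for crash-hunting. The SUT driver interface (reset/actions/step/done) is a hypothetical stand-in, and the learning rates and reward values are arbitrary illustrations, not the commenter's actual program.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
Q = defaultdict(float)                   # (state, action) -> estimated value

def q_learn(sut, episodes=1000):
    """Drive a system under test via Q-learning, rewarding crashes."""
    for _ in range(episodes):
        state = sut.reset()
        while not sut.done():
            actions = sut.actions(state)
            if random.random() < EPSILON:
                action = random.choice(actions)                     # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])  # exploit
            next_state, crashed = sut.step(action)
            r = 10.0 if crashed else 0.0                            # crash = payoff
            best_next = max((Q[(next_state, a)] for a in sut.actions(next_state)),
                            default=0.0)
            Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
            state = next_state
```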

Daniel Toms

CI - Build - Quality - Automation - Productivity

12mo

This is great. Thanks for writing it.
