James Bach’s Post

James Bach

Founder of Rapid Software Testing Methodology, Instructor, Consultant

After playing with ChatGPT for a while, and then trying the OpenAI Playground as well, I have witnessed it contradict itself, make up fake "facts," write paragraphs of repetitive fluff, and be bad at mathematics. It's like one of those veggie burgers that get a lot of hype, and then you try it and it's kind of okay, except mushy and doesn't really taste like a burger. And then you look at the packaging and it says in distinct large type "This is not food. Eat at your own risk."

Here is a taste of how it responds. Is James Bach associated with RST?

"As I mentioned earlier, I'm sorry but I don't have any information on James Bach. I am not familiar with him or his work, so I cannot say whether he is associated with Rapid Software Testing or not. As a large language model trained by OpenAI, I don't have access to the internet and am not able to browse the web or access any information that is not part of my training data. I can only provide information and answer questions based on what I have been trained on, which includes a wide range of general knowledge but does not include specific information about individuals or organizations. Is there something else I can help you with?"

Then I immediately asked it: Who is the creator of Rapid Software Testing methodology?

"Rapid Software Testing (RST) is a testing methodology developed by James Bach. Bach is a well-known figure in the software testing industry and has been a pioneer in the field of exploratory testing. He has written extensively about testing and has developed a number of innovative testing techniques, including RST. In RST, Bach emphasizes the importance of using a variety of testing techniques to quickly and effectively assess the quality of a software application. He also emphasizes the need for testers to be flexible and adaptable in their approach to testing, and to be able to think critically and creatively in order to find defects and bugs in the software."

(All that is accurate.)

The previous day I asked it about myself and it wrote: "He is the founder of the Rapid Software Testing methodology and the co-founder of the International Software Testing Qualifications Board (ISTQB)." This is crazy wrong. I am actually a famous enemy of the ISTQB. I am emphatically NOT a co-founder of it.

I believe ChatGPT is a dangerous technology that will hurt a lot of people. I quail to think of how many people will just believe whatever it says.

James Bach

One use I see for ChatGPT is the generation of test data. For example, I just asked it to generate 30 LDAP queries that are intended to cover different LDAP behaviors. I haven't tried them out yet to see if they are syntactically and semantically correct, and it has been too long since I have worked on an LDAP system to critique the queries correctly myself - but if it is like the code it generates in other languages, I am going to guess it is probably functionally correct. This then becomes a corpus of stuff to run against an LDAP server. I don't necessarily need to know what to expect. I just want a bunch of stuff so I can watch what happens. I can follow up later to figure out whether the queries are at all ill-formed. What it did was save me some tedious typing. And even if they are ill-formed... that's always a nice thing to toss in the mix anyway. The problem vector I see is when the way someone uses this tool relies on the correctness of its content.
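For illustration, here is a minimal sketch of that "run the corpus and watch what happens" workflow, using Python's ldap3 library. The host, bind credentials, base DN, and the queries.txt file of generated search filters are all hypothetical stand-ins, not anything from the post itself:

```python
# Sketch: replay a generated corpus of LDAP search filters against a server
# and log whatever happens. No expected results are needed -- the point is
# observation, not pass/fail verdicts.
from ldap3 import Server, Connection, ALL
from ldap3.core.exceptions import LDAPException

server = Server("ldap://ldap.example.com", get_info=ALL)  # hypothetical host

# One ChatGPT-generated LDAP filter per line, e.g. "(objectClass=person)".
with open("queries.txt") as f:
    filters = [line.strip() for line in f if line.strip()]

conn = Connection(
    server,
    user="cn=tester,dc=example,dc=com",  # hypothetical bind DN
    password="secret",
    auto_bind=True,
)

for i, flt in enumerate(filters, start=1):
    try:
        ok = conn.search("dc=example,dc=com", flt, attributes=["cn"])
        # Record the raw outcome so ill-formed or surprising queries stand out.
        print(f"[{i}] {flt!r}: ok={ok}, "
              f"result={conn.result['description']}, entries={len(conn.entries)}")
    except LDAPException as e:
        # Client-side rejections (e.g. invalid filter syntax) are logged, not
        # suppressed -- ill-formed queries are interesting test data too.
        print(f"[{i}] {flt!r}: raised {type(e).__name__}: {e}")

conn.unbind()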

Radosław Szmit

Chief Business Officer | DAC.digital & punktum | Gravel 🚲 Rider

1y

It is clearly stated in the terms and conditions that "1. Beta Services. This section governs your use of services or features that OpenAI offers on an alpha, preview, early access, or beta basis (“Beta Services”). Beta Services are offered “as-is” to allow testing and evaluation, and Content you provide to Beta Services may be used to improve the Services." So it is an open service designed to gather more data from its users, so that the model improves with every question asked and every piece of feedback given. And I agree that too many users do not care and take the answers for granted. And we'll have too much fake content that has little to do with reality. Maybe there will now be a standard #writtenbyhuman hashtag to inform the public?

The whole world is now feeding it with more learning data. I am not saying it will immediately become better or less biased, but I am pretty convinced that releasing this as a beta and having it go viral will give it a powerful data-ingestion boost.

Catherine Oates

Content Specialist, Educator

1y

I agree with Sigurdur. I've been training it on some aspects of history and it "learns" rapidly. It reminds me of the first week of Wikipedia, when it had about 20 entries, and even those were short and often inaccurate.

I feel as if some people are missing the significance of the part about ChatGPT creating fiction that feels plausible. Why and how it does this are inscrutable on a case-by-case basis: two different untrue statements may come from two different causes inside its engine. This results in unpredictability. Motives are only a metaphor for an AI engine. Unlike with humans, we do not share in the nature of its motivations. A human has motives for reasonable behavior that we can understand and decide whether or not to trust, and when trust is lacking, humans have means of building it. The basis for trusting ChatGPT on whether or not it will say something untrue is almost zero. The burden falls on the person conversing with it to scrutinize every detail. This is a profound limitation if one needs truthful content.

Noel Wurst

Storytelling...at every opportunity

1y

I’ve spent way too much time this week internally philosophizing about all there is to say (and dread!) about ChatGPT, Lensa, and other AI tools. And I say it was too much time because I’m no closer to any sort of relief, positive outlook, or defined stance on any of it than I was before. One of my primary worries is that I keep hearing the same tired selling points of lots of automation technologies: “It does the manual, mundane, repetitive, error-prone tasks…to free up time for (insert persona) to focus on more important, strategic, valuable activities.” I’ve always cringed at any marketing that described testing as “mundane” and “less valuable,” and now I’m seeing writing and content creation labeled the same way. What ARE these “more important, strategic, and valuable activities” I keep hearing about, and what would lead you to believe that those won’t be next on the chopping block? I often struggle to think of an activity that bot-dealers aren’t foaming at the mouth to say, “You still have humans doing THAT? Why? Let AI do it, YOU go focus on something more strategic…just make sure to let us know what that is so we can make a new bot for it and then tell everyone how stupid it is that you’re doing it manually.”

Asia Zaniewicz

Masters&Robots | Business Developer | Tech Enthusiast | Lifelong Learner

1y

The biggest problem with ChatGPT is that it produces a lot of plausible-sounding BS, but so do many people, I suppose 🙈

Jon Bach

Award-winning speaker (2024): Enterprise Program Management | Quality Engineering & Leadership | Customer Centricity | Rapid Software Testing practitioner and advocate

1y

“Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.” Just like we flawed humans mostly do. We can’t question every assumption we have, and models (always limited, by definition) help us be more efficient. But I would like to see ChatGPT ask questions before answering, or at least question or display its assumptions. My wish is the same for humans.

Djuka Selendic

Software tester, test manager, practitioner of Rapid Software Testing

1y

It spits out test cases like ISTQB-taught people do, in the boring and predictable verify-and-validate style. It seems the bot has "learned" from the crappy and inaccurate data about testing that circulates from the ISTQB and similar certifiers. It mimics the current and sad state of the testing industry, where a whole swath of people have learned the wrong things from that same inaccurate and misleading data, leading to incompetence and very little in the way of critical thinking about software testing. Those who are talented have to deal with the fallout of this incompetence every day, because there are more of them than there are of us, making it so none of us can have nice things. Now that incompetence is amplified by this technology.
