After playing with ChatGPT for a while, and then trying the OpenAI Playground as well, I have witnessed it contradict itself, make up fake "facts," write paragraphs of repetitive fluff, and be bad at mathematics.

It's like one of those veggie burgers that get a lot of hype, and then you try it and it's kind of okay, except mushy and it doesn't really taste like a burger. And then you look at the packaging and it says in distinct large type, "This is not food. Eat at your own risk."

Here is a taste of how it responds. I asked: Is James Bach associated with RST?

"As I mentioned earlier, I'm sorry but I don't have any information on James Bach. I am not familiar with him or his work, so I cannot say whether he is associated with Rapid Software Testing or not. As a large language model trained by OpenAI, I don't have access to the internet and am not able to browse the web or access any information that is not part of my training data. I can only provide information and answer questions based on what I have been trained on, which includes a wide range of general knowledge but does not include specific information about individuals or organizations. Is there something else I can help you with?"

Then I immediately asked it: Who is the creator of Rapid Software Testing methodology?

"Rapid Software Testing (RST) is a testing methodology developed by James Bach. Bach is a well-known figure in the software testing industry and has been a pioneer in the field of exploratory testing. He has written extensively about testing and has developed a number of innovative testing techniques, including RST. In RST, Bach emphasizes the importance of using a variety of testing techniques to quickly and effectively assess the quality of a software application. He also emphasizes the need for testers to be flexible and adaptable in their approach to testing, and to be able to think critically and creatively in order to find defects and bugs in the software."
(All of that is accurate.)

The previous day I asked it about myself and it wrote: "He is the founder of the Rapid Software Testing methodology and the co-founder of the International Software Testing Qualifications Board (ISTQB)." This is crazy wrong. I am actually a famous enemy of the ISTQB. I am emphatically NOT a co-founder of it.

I believe ChatGPT is a dangerous technology that will hurt a lot of people. I quail to think of how many people will just believe whatever it says.
James Bach
One use I see for ChatGPT is the generation of test data. For example, I just asked it to generate 30 LDAP queries that are intended to cover different LDAP behaviors. I haven't tried them out yet to see if they are syntactically and semantically correct, and it has been too long since I worked on an LDAP system to critique the queries myself, but if it is like the code it generates in other languages, I am going to guess it is probably functionally correct. This then becomes a corpus of stuff to run against an LDAP server. I don't necessarily need to know what to expect; I just want a bunch of stuff so I can watch what happens. I can follow up later to figure out whether the queries are at all ill-formed. What it did was save me some tedious typing. And even if some are ill-formed... that's always a nice thing to toss in the mix anyway. The problem vector I see is when the way someone uses this tool relies on the correctness of its content.
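As a sketch of that follow-up step (not part of the original comment), a rough well-formedness check over a generated batch of LDAP search filters can separate the syntactically broken ones from the rest before running them against a server. The filter strings below are illustrative examples, not the actual queries ChatGPT produced, and the check covers only a subset of RFC 4515 (balanced parentheses and basic structure), not schema-level validity:

```python
# Rough well-formedness check for LDAP search filters (RFC 4515 subset).
# A sketch only: checks balanced parentheses and basic structure,
# not semantic validity against any directory schema.

def is_well_formed(filter_str: str) -> bool:
    s = filter_str.strip()
    if not (s.startswith("(") and s.endswith(")")):
        return False
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # closing paren with no matching open
                return False
    if depth != 0:                 # unbalanced parentheses overall
        return False
    body = s[1:-1]
    if not body:
        return False
    if body[0] in "&|!":           # composite filter: AND / OR / NOT
        return body[1:].startswith("(")
    return "=" in body             # simple item: attr=value (=, >=, <=, ~=)

# Hypothetical generated corpus, mixing valid and ill-formed filters.
queries = [
    "(cn=James*)",                      # well-formed substring match
    "(&(objectClass=person)(uid=jb))",  # well-formed AND
    "(|(cn=foo)(cn=bar)",               # missing closing paren
    "(!(mail=*))",                      # well-formed NOT
    "cn=James",                         # missing outer parentheses
]
for q in queries:
    print(f"{'OK ' if is_well_formed(q) else 'BAD'} {q}")
```

Ill-formed entries need not be discarded; as the comment notes, they are useful to keep in the mix as negative test data, just labeled so that a server error on them is expected rather than surprising.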
The whole world is now feeding it with more learning data. I am not saying it will immediately become better or less biased, but I am pretty convinced that releasing this as a public beta, and its going viral now, will give it a powerful data-ingestion boost.
I agree with Sigurdur. I've been training it on some aspects of history and it "learns" rapidly. It reminds me of the first week of Wikipedia, when it had about 20 entries, and even those were short and often inaccurate.
It gave me sole credit for SBTM… “Session-based test management (SBTM) is a testing methodology that was first developed by Jon Bach, a software tester and consultant.”
ChatGPT = AI InfoWars, only somehow less intelligent.
I knew you were secretly the founder of ISTQB. Who else would stand to gain from your attacks on the board? Finally it all makes sense, thanks ChatGPT.
Ahah
The most important problem is that if a real person slanders you (as in the case of you and the ISTQB), you can sue him or her. Good luck with that if an AI does it.
Copyright infringements and lawsuits should eventually end this circus show.
It is clearly stated in the terms and conditions that: "1. Beta Services. This section governs your use of services or features that OpenAI offers on an alpha, preview, early access, or beta basis ('Beta Services'). Beta Services are offered 'as-is' to allow testing and evaluation, and Content you provide to Beta Services may be used to improve the Services."

So it is an open service designed to gather more data from its users: the model improves with every question asked and every piece of feedback given. And I agree that too many users do not care and take the answers for granted. We will end up with too much fake content that has little to do with reality. Maybe there will now be a standard to inform the public, with a #writtenbyhuman hashtag?