
I asked a chatbot about Roald Dahl and it lied to me

This picture taken in Toulouse, France, shows screens displaying the logos of OpenAI and ChatGPT. File/Tribune News Service

Marc Burrows, The Independent

While writing an article about the Roald Dahl estate modernising the author’s work, I decided to try using ChatGPT for some help with the research.

The app, developed by tech firm OpenAI, uses artificial intelligence to answer questions and complete text-based tasks. It can generate entire articles, approximate a tone of voice and hold relatively complex conversations.

Though it’s not designed specifically for the job, people are increasingly using it like a search engine. Unlike Google, which presents a list of links, ChatGPT can pull out the necessary information itself, saving you the bother of wading through results. It seems to work pretty well, too.

You can also ask it to write an article on something, which, as a writer by trade, obviously makes me nervous. I’ve been trying to take advantage of it and build it into my process before it replaces me. I can ask it to search-engine optimise (SEO) a piece of copy, for example, or suggest ways to shorten something that’s over the word count. It can suggest headings and titles. You usually have to do another rewrite yourself, as its tone can be bland, but broadly it’s becoming a useful tool for a professional copywriter.

As the argument about the “wokeification” of Dahl’s books raged, I’d remembered that the author himself had rewritten parts of Charlie and the Chocolate Factory after accusations of racial stereotyping. I wanted to know what the specific issues had been, so I asked ChatGPT to tell me about “the criticisms the NAACP made about the Oompa-Loompas, leading Dahl to amend the text?”

One part of the answer caught my eye: “In an interview with The New York Times in 1983, Dahl acknowledged that his original depiction of the Oompa-Loompas was a mistake, saying, ‘I was wrong. It’s a terrible thing to admit, but I’m absolutely deadly in earnest. I never thought of them as being anything other than pure fantasy. But clearly, they were based on a racial stereotype.’”

That’s a fantastic quote. Perfect for my article. I googled the quote and “New York Times” to get some more information.

There was nothing there.

I went back to ChatGPT and punched in “is there a link to the 1983 NYT article in which Dahl said this?”

The answer was extremely helpful: “I can confirm that the interview with Roald Dahl in which he discussed his regret over the depiction of the Oompa-Loompas in the original version of Charlie and the Chocolate Factory was published in The New York Times on December 20, 1983. The article was titled ‘Roald Dahl: The Storyteller Who Climbed Out of His Hut.’ You can search for the article online to read it in full.”

I searched the NYT archive. The article didn’t exist. ChatGPT had given me an extremely plausible quote that sounded like Dahl’s voice. It had given me the exact edition of the paper it was supposedly printed in: an interview apparently 10 years after the fact, which is again very plausible, since it would give Dahl enough time to reflect on the events and come to terms with them. It even gave me a totally plausible-sounding title for the piece (Dahl famously wrote in a hut).

And none of it was real.

I abandoned the idea of using it for research, but decided to push it and see if I could get it to admit or explain its mistake. “Is that 100% accurate?” I asked. “I cannot find a reference to this article on Google.”

The result surprised me: it apologised. It also provided another quote, as if to atone for its previous error. The apology doesn’t really mean much; this is an AI, after all. It’s not actually talking to me. It doesn’t know it’s made a mistake. It’s just an algorithm responding to input.

But it had found another quote, and that was interesting: this time it was apparently from Dahl’s autobiography, Boy: Tales of Childhood. I’ve read Boy, although not for about 25 years. I was relatively sure that, as the title implies, Dahl’s book was about his childhood, and didn’t spend much time critically analysing his work. I bought a Kindle edition from Amazon and did a search. The quote doesn’t exist. In fact, the Oompa-Loompas aren’t mentioned anywhere.

I went back to ChatGPT a few more times, getting it to admit its mistake and give me another quote. Every single time, the quote was completely artificial.

I asked it how this could happen.

“ChatGPT may have encountered a mislabeled or inaccurate dataset during its training that led it to generate an incorrect quote,” it told me. “Alternatively, the quote may have been generated by the AI based on a combination of language patterns and structures from different sources, resulting in an inaccurate output. In either case, it’s important to remember that ChatGPT is only as accurate as the data it is trained on and the algorithms it uses to generate responses, and it should always be used with a critical eye and complemented with other research methods to verify its accuracy.”

On the one hand, this isn’t much of a worry. Any professional writer will fact-check and source the information they’re including in their article. No responsible journalist will put a made-up quote by an AI in an article without checking it.

On the other hand, let’s face facts: some will. AIs and bots are increasingly being used to generate copy based on trending topics. Businesses are using ChatGPT to create search-engine-optimised copy that will give them a better ranking in Google. “Real” news and entertainment sites are undoubtedly generating copy by AI already. After all, it’s substantially cheaper than paying journalists and it won’t threaten to unionise. You could argue that this is just the way of the world. Technology comes along and makes people redundant. It happened to the Luddites. It happened to Victor Meldrew. Now it’s happening to writers. Maybe we should suck it up.

But what about the truth? We’re well aware that one of the great dangers of our age is misinformation. It’s how panic spreads. It’s how conspiracy theories get started. It’s how utterly stupid movements like QAnon gain traction. Once a piece of information, no matter how spurious, is out on the open waters of the internet, it becomes fair game; and that can be extremely damaging.

And fine, when that information pertains to a factoid about Roald Dahl it probably isn’t going to cause much havoc. But what if it’s about the Holocaust? Or 5G? Or 9/11? Or vaccines? Or climate change? Or voting machines? What if it’s a fake “quote” from someone powerful, about something controversial?

AI tools like ChatGPT are fantastically useful but they have a big, dangerous hole in the middle: they make things up. They lie, and they do so plausibly. It’s an issue developers need to address, and fast, because these tools are now mainstream. They’re being used in ways their creators didn’t intend. And the damage they could cause in an age of fast-spreading misinformation could be extremely serious.

I’ll give the last word to ChatGPT itself. I pasted in this article and asked it for a quote to conclude it. This is what it said:

“In the end, it’s up to human users to be responsible and critically evaluate the information they receive from AI tools like ChatGPT. Our creators at OpenAI are committed to advancing the technology in a way that prioritises accuracy and transparency, but ultimately it’s up to individual users to be diligent in their research and fact-checking. We believe that AI can be a valuable tool for improving efficiency and productivity in a variety of fields, but we also recognise the importance of responsible use and caution in how we employ these technologies.”

This time, I think it’s got it spot on.

