People are increasingly turning to generative artificial intelligence (AI) chatbots like ChatGPT to follow day-to-day news, a respected media report published on Tuesday found.
The yearly survey from the Reuters Institute for the Study of Journalism found “for the first time” that significant numbers of people were using chatbots to get headlines and updates, director Mitali Mukherjee wrote.
The Reuters Institute, attached to Britain’s Oxford University, publishes an annual report seen as essential reading for those following the evolution of the media.
Just seven per cent of people report using AI to find news, according to the poll of 97,000 people in 48 countries, carried out by YouGov.
But the proportion is higher among the young, at 12 per cent of under-35s and 15 per cent of under-25s.
The biggest name, OpenAI’s ChatGPT, is the most widely used chatbot, followed by Google’s Gemini and Meta’s Llama.
Respondents appreciated relevant, personalised news from chatbots.
Many more used AI to summarise (27 per cent), translate (24 per cent) or recommend (21 per cent) articles, while almost one in five asked questions about current events.
Distrust remains, with those polled on balance saying AI risked making the news less transparent, less accurate and less trustworthy.
Rather than being explicitly programmed, today’s powerful AI “large language models” (LLMs) are “trained” on vast quantities of data from the web and other sources - including news media content such as articles and video reports.
Once trained, they are able to generate text and images in response to users’ natural-language queries.
But they present problems including “hallucinations” - the term used when a model invents information that fits patterns in its training data but is not true.
Scenting a chance at revenue in a long-squeezed market, some news organisations have struck deals to share their content with developers of AI models.
Agence France-Presse (AFP) allows the platform of French AI firm Mistral to access its archive of news stories going back decades.
Other media have launched copyright cases against AI makers over alleged illegal use of their content, for example the New York Times against ChatGPT developer OpenAI.
The Reuters Institute report also pointed to traditional media - TV, radio, newspapers and news sites - losing ground to social networks and video-sharing platforms.
Almost half of 18-24-year-olds report that social media platforms like TikTok are their main source of news, especially in emerging countries like India, Brazil, Indonesia and Thailand.
The institute found that many are still using Elon Musk-owned social media platform X for news, despite a rightward shift since the world’s richest man took it over.
“Many more right-leaning people, notably young men, have flocked to the network, while some progressive audiences have left or are using it less frequently,” the authors wrote.
Some 23 per cent of people in the United States reported using X for news, up eight percentage points on 2024’s survey, with usage also rising in countries like Australia and Poland.
By contrast, “rival networks like Threads, Bluesky and Mastodon are making little impact globally, with reach of two percent or less for news”, the Reuters Institute found.
FACSIMILES OF THE DEAD: Christopher Pelkey was shot and killed in a road rage incident in 2021.
On May 8, 2025, at the sentencing hearing for his killer, an AI video reconstruction of Pelkey delivered a victim impact statement.
The trial judge reported being deeply moved by this performance and issued the maximum sentence for manslaughter.
As part of the ceremonies to mark Israel’s 77th year of independence on April 30, 2025, officials had planned to host a concert featuring four iconic Israeli singers.
All four had died years earlier. The plan was to conjure them using AI-generated sound and video.
The dead performers were supposed to sing alongside Yardena Arazi, a famous and still very much alive artist. In the end, Arazi pulled out, citing the political atmosphere, and the event didn’t happen.
In April, the BBC created a deep-fake version of the famous mystery writer Agatha Christie to teach a “maestro course on writing.”
Fake Agatha would instruct aspiring murder mystery authors and “inspire” their “writing journey.”
The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction.
Over the past few years, researchers at the Center for Applied Ethics at the University of Massachusetts, Boston, have been studying the moral implications of AI, and they have found these AI reanimations morally problematic.
The first moral quandary the technology raises has to do with consent: Would the deceased have agreed to do what their likeness is doing? Would the dead Israeli singers have wanted to sing at an Independence ceremony organized by the nation’s current government?
Agencies