Technology reporter

A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.
Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded the chatbot's maker, OpenAI, be fined.
It is the latest example of so-called "hallucinations", where artificial intelligence (AI) systems invent information and present it as fact.
Mr Holmen says this particular hallucination is very damaging to him.
"Some think that there is no smoke without fire, and the fact that someone could read this output and believe it is true is what scares me the most," he said.
OpenAI has been contacted for comment.
Mr Holmen was given the false information after he used ChatGPT to search for: "Who is Arve Hjalmar Holmen?"
The response he got from ChatGPT included: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.
"He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020."
Mr Holmen said the chatbot got their age gap roughly right, suggesting it did have some accurate information about him.
Digital rights group Noyb, which has filed the complaint on his behalf, says the answer ChatGPT gave him is defamatory and breaks European data protection rules on the accuracy of personal data.
Noyb said in its complaint that Mr Holmen "has never been accused nor convicted of any crime and is a conscientious citizen."
ChatGPT carries a disclaimer which says: "ChatGPT can make mistakes. Check important info."
Noyb says that is insufficient.
"You can't just spread false information and in the end add a small disclaimer saying that everything you said may not be true," Noyb lawyer Joakim Söderberg said.

Hallucinations are one of the main problems computer scientists are trying to solve when it comes to generative AI.
These are when chatbots present false information as fact.
Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it hallucinated false headlines and presented them as real news.
Google's AI Gemini has also fallen foul of hallucination. Last year it suggested sticking cheese to pizza using glue, and said geologists recommend humans eat one rock per day.
It is not clear what it is in the large language models, the technology which underpins chatbots, that causes these hallucinations.
"This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?" said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow.
Prof Stumpf says that can even apply to the people who work behind the scenes on these kinds of models.
"Even if you are more involved in the development of these systems, quite often you do not know how they actually work, why they're coming up with this particular information that they came up with," she told the BBC.
ChatGPT has changed its model since Mr Holmen's search in August 2024, and now searches current news articles when it looks for relevant information.
Noyb told the BBC that Mr Holmen had made a number of searches that day, including putting his brother's name into the chatbot, and it produced "multiple different stories that were all incorrect."
They also acknowledged the previous searches could have influenced the answer about his children, but said large language models are a "black box" and OpenAI "doesn't reply to access requests, which makes it impossible to find out more about what exact data is in the system."