In an alarming incident that highlights the potential dangers of Artificial Intelligence (AI), a Norwegian man named Arve Hjalmar Holmen has filed a formal complaint against OpenAI, the developer of ChatGPT. The complaint stems from a damaging “hallucination” in which the chatbot falsely claimed that Holmen had killed two of his sons and been sentenced to 21 years in prison. Frightened by the ramifications of such misinformation, Mr. Holmen has turned to the Norwegian Data Protection Authority, demanding accountability and an inquiry into OpenAI’s practices.
In this context, “hallucination” refers to the phenomenon in which AI systems such as ChatGPT fabricate information and present it as fact. Instances of such AI misrepresentation are becoming increasingly common, raising concerns about the reliability and accuracy of AI outputs. Holmen is particularly disturbed by the possibility that readers might take the fabricated account at face value. “Some think that there is no smoke without fire – the fact that someone could read this output and believe it is true is what scares me the most,” he said.
Holmen first encountered the fabrication when he asked ChatGPT, “Who is Arve Hjalmar Holmen?” The response included troubling details of a supposed tragedy involving his children: two young boys, aged seven and ten, allegedly found dead in a pond near their home in Trondheim, Norway. Holmen, who is in fact a father of three sons, was deeply affected by the fabricated story, which underscores broader questions about how AI handles personal data and the damage that erroneous statements can cause.
The digital rights organization Noyb has stepped in to support Holmen’s complaint. It stresses the seriousness of the misinformation, noting that Holmen has never been accused or convicted of any crime and is a conscientious citizen. The complaint argues that the AI-generated content violates European data protection rules on the accuracy of personal data, and that the disclaimer appended to ChatGPT’s responses, “ChatGPT can make mistakes. Check important info.”, is insufficient. Noyb lawyer Joakim Söderberg argued, “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
The problem of AI hallucinations has drawn growing attention from computer scientists and technology developers, and this incident is not an isolated one. Apple suspended its news summarization tool earlier in the year after it fabricated headlines and presented them as real news, and Google’s AI, Gemini, attracted criticism for ludicrous suggestions such as using glue to stick cheese to pizza. Such instances raise important questions about the trustworthiness and accountability of AI systems as they become ever more integrated into everyday life.
Mr. Holmen had made other queries that same day, including a search for his brother’s name, which also produced several erroneous accounts. Noyb acknowledges that these earlier searches may have contributed to the misleading response about Holmen’s children, but laments the AI’s “black box” nature: it remains difficult to understand how data is processed or which specific information is used to generate a response. Since Mr. Holmen’s query in August 2024, OpenAI has updated ChatGPT’s model so that it can draw on current news articles, which may reduce the likelihood of such hallucinations.
Unsettling as it is, Mr. Holmen’s experience underscores critical concerns about the ethics and practices of AI developers. The tension between technological advancement and user safety is palpable, raising essential questions about how misinformation can affect individuals’ lives and reputations. As AI continues to evolve, addressing the risks and responsibilities that come with its use will be vital to maintaining trust and safety in digital interactions.