OpenAI Sued For Libel Over False Claims By ChatGPT
A radio host is suing OpenAI for what he calls an "AI hallucination."
OpenAI, the company behind the generative AI platform ChatGPT, is being sued for libel after a man was allegedly a victim of an “AI hallucination.” As reported by Gizmodo, a journalist asked the AI chatbot to provide a summary of the case The Second Amendment Foundation v. Robert Ferguson. In its reply, the AI allegedly stated that a radio host named Mark Walters was accused of embezzling money from The Second Amendment Foundation (SAF) even though Walters wasn’t involved in the case whatsoever.
In response, Walters is suing OpenAI over ChatGPT’s mistake, accusing the company of libel and of damaging his reputation. The lawsuit is the first of its kind, and it could well shape how the legal system treats libel claims against generative AI going forward. However, Gizmodo spoke to a legal expert who said the merits of this particular case are shaky, though that doesn’t mean stronger suits won’t follow.
The embezzlement claim isn’t the only falsehood ChatGPT allegedly produced in response to firearms journalist Fred Riehl’s prompt about The Second Amendment Foundation v. Robert Ferguson. The chatbot allegedly stated that Walters was the foundation’s chief financial officer and treasurer, a position that supposedly allowed him to embezzle funds for “personal expenses,” manipulate “financial records and bank statements to conceal his activities,” and fail “to provide accurate and timely financial reports.” As Walters’ lawsuit points out, none of this is true: Walters was never the SAF’s CFO or treasurer, nor was he ever employed by the foundation at all.
The 30-page SAF v. Ferguson complaint doesn’t mention Walters at all, and the lawsuit alleges that ChatGPT doubled down when Riehl asked it to clarify Walters’ role in the case, going on to cite a non-existent passage from the complaint and getting the case number wrong in the process. Riehl contacted the attorneys involved in the case to verify the facts and left Walters’ name out of his eventual story.
OpenAI co-founder and CEO Sam Altman has previously acknowledged that ChatGPT’s “hallucinations” are a known issue the company is actively working on to improve the model’s accuracy. Still, the alleged damage is done, and one of Walters’ attorneys said the mistake could harm Walters’ reputation, “exposing him to public hatred, contempt, or ridicule.” The only question that remains is whether the lawsuit has any teeth.
Gizmodo also consulted University of California, Los Angeles School of Law professor Eugene Volokh about the libel claim, and he doesn’t think it’s particularly strong. Volokh, who is writing a law review article on legal liability for AI models’ output, points out that Walters’ suit doesn’t show what actual damage has been done to his reputation. To recover damages from OpenAI, Walters would likely have to prove that the false statements were made with “knowledge of falsehood or reckless disregard of the possibility of falsehood,” a standard that is difficult to meet when the statements were generated by a large language model rather than a person.
However, this isn’t the first time OpenAI’s ChatGPT has had a “hallucination,” and it likely won’t be the last. Volokh noted that someone could certainly win a libel case against the company in the future if they can show they lost money or a job because of a “hallucination” produced by the AI. For now, we’ll just have to wait and see how this lawsuit plays out for Walters.