ChatGPT Is Constantly Lying And AI Experts Say It’s Going To Get Worse

By Jason Collins | Updated

For all their worth, various AI models and tools are known for producing inaccurate information in response to users’ inquiries. Unfortunately, this isn’t limited to chatbots such as ChatGPT; it extends to image-generating AI as well, with many users criticizing gaming companies for using AI content in their games—Duke Nukem was one of the recent examples, in which an image-generating AI inaccurately drew Duke’s hands.


With that said, image-generating AI isn’t as widespread as chatbots, with the latter being used for everything from giving out medical advice to writing scientific papers and legal documents. The biggest issue, as reported by AP News, is hallucination: the chatbots’ tendency to make inaccurate information sound accurate. Worse still, some experts in the field believe that the problem isn’t fixable and that AI chatbots will forever output falsehoods.

And therein lies the problem: having your paper written by an AI bot that’s equally capable of inventing facts and mixing them in with accurate information is a recipe for disaster.

Daniela Amodei, co-founder and president of Anthropic, stated that, in her view, no current AI model is entirely free of hallucination, as these models are mostly designed to predict the next word—and there will always be some rate at which they do so inaccurately.
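To illustrate the point (this is not any company’s actual architecture, just a minimal sketch with made-up words and probabilities), a language model effectively samples each next word from a probability distribution, so even unlikely—and potentially wrong—continuations have a nonzero chance of being chosen:

```python
import random

# Hypothetical next-word distribution a model might assign after the
# prompt "The capital of Australia is". Probabilities are invented
# purely for illustration.
next_word_probs = {
    "Canberra": 0.80,    # correct continuation
    "Sydney": 0.15,      # plausible-sounding but wrong
    "Melbourne": 0.05,   # also wrong
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Sample one word in proportion to its assigned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Over many generations, the wrong answers still show up roughly 20% of the time.
samples = [sample_next_word(next_word_probs) for _ in range(10_000)]
error_rate = sum(word != "Canberra" for word in samples) / len(samples)
print(f"Observed error rate: {error_rate:.1%}")
```

Because generation is probabilistic rather than a lookup of verified facts, some error rate is baked into the approach itself—which is the crux of Amodei’s observation.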


This doesn’t sound encouraging, considering that many companies have embraced AI text generators as the driving force behind their document and written content creation. Some publications have even restructured their entire operations around AI, relegating humans to mere editors of AI-generated text.

The latest example comes from the world of gaming, where World of Warcraft players purposefully generated enough false information to trick an AI-powered news aggregation website into publishing a whole article about something that never actually happened.


It’s important to note that there’s some nuance here. Sure, there will always be some rate at which AI models provide inaccurate information. There will also always be some rate at which charlatans pass for experts. The key is to minimize those “failure” rates, and some AI experts, such as OpenAI CEO Sam Altman, believe that the hallucination problem will be significantly reduced over time—with the first noticeable results appearing in as little as 18 months.

Others, such as Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, believe that the problem isn’t fixable at all due to an inherent mismatch between the AI technology and the proposed use cases.

This is a real issue, considering that Google is pitching its news-writing AI product to news organizations, for which factual accuracy is paramount. Furthermore, most AI systems currently aggregate and process existing data, which could produce a “fruit of the poisonous tree” effect in which falsehoods propagate into new content—or outright fake news.


Major AI companies are currently fighting an uphill battle to stop their virtual children from lying to the masses. It won’t be an easy fight, and a positive outcome is far from certain.