What are AI Hallucinations and how to prevent them?

Noa Azaria
7 min read Jan 15, 2024

You’re likely already aware of AI hallucinations. Perhaps you’ve seen a funny chatbot response blasted on Twitter. While they may be amusing, they pose real risks to AI integrity. Imagine asking an AI for a recipe and being told to mix up chlorine gas (yes, this actually happened). Not ideal, right?

In this article, we’re going to cover everything you need to know about AI hallucinations, from causes and types to mitigation techniques.  

What are AI hallucinations?

An AI hallucination occurs when an AI system, such as a chatbot, generates a response that is inaccurate or completely fabricated. This happens because tools like ChatGPT learn to predict the words that best fit your prompt; they don’t actually reason logically or critically. The confident-sounding misinformation that results is what we call an “AI hallucination”.

What are the main causes of AI hallucinations?

It’s important to distinguish between hallucinations in foundational models and those in retrieval-augmented generation (RAG) systems (an increasingly popular LLM use case). The former stem from how creators like OpenAI, Google, and Meta build and train their models, and are largely beyond user control. RAG issues, by contrast, can be addressed more practically. Understanding this difference is key to tackling the unique challenges of each system and improving its performance.

Foundational models

AI hallucinations in foundational models such as Gemini, ChatGPT, BERT, and others stem from a variety of issues. These directly impact how foundational models interpret and generate responses. Here, we specifically focus on the causes behind hallucinations in these foundational models:

  • Bad training data: The quality of data significantly affects AI performance. Poor-quality or biased data can lead to skewed outputs, while insufficient data may result in inaccurate responses.
  • Overfitting: When AI becomes too aligned with its training data, it struggles with new, unseen data. This overfitting results in errors during content generation.
  • Encoding and decoding errors: Mistakes in processing information can lead to irrelevant or incorrect outputs, as the system misinterprets the input data.
  • Adversarial attacks: Inputs that are specifically designed to confuse AI can exploit vulnerabilities. Addressing these requires robust security measures to prevent misleading the system.

RAG (retrieval-augmented generation) models

In the commercial realm of LLM-based products, organizations tend to use RAG. While many claim that RAG solves the hallucination problem, it doesn’t: grounding a model in retrieved documents reduces some errors but introduces failure modes of its own. This section outlines the causes of hallucinations specifically associated with the use of RAG:

  • Inaccurate context retrieval: When the retrieval mechanism fetches irrelevant or low-quality information, it directly impacts the quality of the generated output, leading to hallucinations or misleading responses.
  • Ineffective queries: Poorly formulated prompts can mislead the retrieval process, resulting in the generation of responses based on incorrect or inappropriate context.
  • Complex language challenges: Challenges in understanding idioms, slang, or accurately processing non-English languages can cause the system to generate incorrect or nonsensical responses. This is compounded when the retrieval component struggles to find or interpret contextually relevant information in these languages.
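The first cause above — low-quality retrieval feeding the generator — can be seen in a minimal sketch of the retrieval step. The scoring function and all names here are illustrative toys (a real pipeline would use embeddings and a vector store), but the idea is the same: filter context by relevance before it reaches the LLM, so irrelevant passages never become raw material for a hallucinated answer.

```python
# A toy sketch of the retrieval step in a RAG pipeline, showing how a
# relevance threshold keeps low-quality context out of the prompt.
# The keyword-overlap score is illustrative, not a real retriever.

def relevance(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve(query: str, corpus: list[str], threshold: float = 0.5) -> list[str]:
    """Return only documents that clear the relevance threshold.
    Irrelevant context passed downstream is a common source of RAG
    hallucinations, so we drop it here rather than hand it to the LLM."""
    scored = [(relevance(query, doc), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True) if score >= threshold]

corpus = [
    "The James Webb Space Telescope launched in December 2021.",
    "Bananas are rich in potassium.",
]
context = retrieve("When did the James Webb Space Telescope launch?", corpus)
# Only the relevant document survives; the prompt sent to the LLM is then
# built from `context` plus the user's question.
```

In production this filtering happens inside the vector search (via similarity cutoffs or rerankers), but the principle holds: the generator can only be as grounded as the context it receives.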

AI Hallucinations vs AI Biases: What’s the Difference?

It’s important to distinguish between AI hallucinations and AI biases. Biases result from skewed training that leads to consistent error patterns; for example, an AI that frequently misidentifies wildlife photos because it was trained mostly on city images. Hallucinations, on the other hand, occur when AI makes up information out of thin air. Both are issues that need addressing, but they stem from different root causes.

Researchers at USC have identified bias in a substantial 38.6% of the ‘facts’ employed by AI.

5 Types of AI Hallucinations

  1. Fabricated content: AI creates entirely false data, like making up a news story or a historical fact.
  2. Inaccurate facts: AI gets the facts wrong, such as misquoting a law. 
  3. Weird and off-topic outputs: AI gives answers that are unrelated to the question. This leads to bizarre or confusing responses.
  4. Harmful misinformation: Even without being prompted to, AI might produce offensive or harmful content. 
  5. Invalid LLM-generated code: When tasked with generating code, AI might produce flawed or completely wrong code.

Why AI Hallucinations are a big problem

When AI gets things wrong, it’s not just a small mistake—it can lead to ethical problems. This is a big issue because it makes us question our trust in AI. It’s especially tricky in key industries like healthcare or finance, where wrong info can cause real harm. 

Here’s why AI hallucinations matter a lot:

  • Misinformation spread: This can mislead users and perpetuate discrimination and fake news. 
  • Trust erosion: Frequent hallucinations erode trust in AI. This leads to skepticism about its reliability.
  • Reliability concerns: This raises doubts about the AI’s capability to consistently provide accurate and reliable outputs.
  • Ethical implications: They may amplify biases or lead to questionable ethical outcomes.

In the commercial context, AI hallucinations present additional threats to defend against:

  • Brand reputation: AI hallucinations can harm a company’s reputation, reducing customer trust and loyalty.
  • Product liability: Inaccuracies in critical industries could lead to serious legal issues.
  • User experience degradation: Unreliable AI outputs frustrate users, affecting engagement and adoption.
  • Competitive disadvantage: Companies with more reliable AI solutions have a market advantage over those with hallucination-prone products.
  • Increased costs: Addressing AI hallucinations involves additional expenses, from technical fixes to customer service.

How to mitigate AI Hallucinations?

Reducing the occurrence of AI hallucinations involves several strategies:

1. Implement AI Guardrails: Proactive measures that filter and correct AI outputs in real time to mitigate hallucinations and prevent malicious attacks. Guardrails continuously check the reliability of every interaction, safeguarding brand reputation and user trust.

2. Enhance AI knowledge base: Broadening the AI’s training data to include a wider variety of sources can reduce inaccuracies.

3. Robust Testing: Regularly testing AI against new and diverse scenarios ensures it remains accurate and up-to-date.

4. Encourage verification: Users should be encouraged to verify AI-generated information, fostering healthy skepticism towards AI responses.
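The guardrail idea in step 1 can be sketched as a post-generation check. This is a deliberately crude heuristic — it flags answer sentences with no lexical overlap with the retrieved context — and real guardrail products use much stronger checks (NLI models, fact-checking services). All names here are hypothetical, but the shape of the technique is accurate: inspect the output before it reaches the user.

```python
# A minimal sketch of an output guardrail: flag answer sentences that
# have no lexical overlap with the source context. The keyword-overlap
# heuristic is purely illustrative; production systems use stronger
# grounding checks such as entailment models.

def grounded(sentence: str, context: str, min_overlap: int = 2) -> bool:
    """A sentence counts as 'grounded' if it shares at least
    min_overlap content words with the source context."""
    stop = {"the", "a", "an", "is", "in", "of", "to", "and"}
    s = {w.strip(".,").lower() for w in sentence.split()} - stop
    c = {w.strip(".,").lower() for w in context.split()} - stop
    return len(s & c) >= min_overlap

def guardrail(answer: str, context: str) -> list[str]:
    """Return the sentences in the answer NOT supported by the context."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if not grounded(s, context)]

context = "The Eiffel Tower is 330 metres tall and stands in Paris."
answer = "The Eiffel Tower is 330 metres tall. It was painted blue in 1999."
flagged = guardrail(answer, context)
# The unsupported second sentence is flagged for correction or removal
# before the answer is shown to the user.
```

Flagged sentences can then be removed, regenerated, or replaced with an “I don’t know” response, which is exactly the kind of real-time filtering the guardrails approach describes.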

Real examples of hallucinations

Google’s Bard hallucination

Google’s Bard AI made a factual hallucination about the James Webb Space Telescope, causing a significant drop in Alphabet Inc.’s market value.

Microsoft Bing’s misinformation

Microsoft’s Bing chatbot provided incorrect information about election-related questions.

ChatGPT’s creative fiction

Instances where ChatGPT generated entirely fake bibliographies or legal citations.

Final thoughts

AI hallucinations present a significant challenge, not just for casual users but for technology leaders striving to make generative AI reliable and trustworthy. Solutions like Aporia Guardrails are key in ensuring AI applications remain accurate, enhancing both user trust and the overall AI experience. By understanding and addressing the causes of AI hallucinations, we can pave the way for more dependable and ethical AI applications.

Want to learn more about mitigating hallucinations in real time?

Get a live demo and see Aporia Guardrails in action. 


Your best AI hallucination questions

What are AI Hallucinations?

AI hallucinations happen when AI, like LLMs, produces false or nonsense information as facts. This is due to the AI following learned patterns from its training, not actual facts or logic.

Why are AI hallucinations a problem?

They spread false info, causing confusion or misinformation. In critical areas like healthcare, this can lead to serious issues, reducing AI reliability and trust.

How can you mitigate AI hallucinations?

  • Add proactive guardrails to ensure safety and trust.
  • Improve training data quality and diversity.
  • Use strong validation and error checks.
  • Update AI models with new, correct information.
  • Use user feedback for improvements.

What is an example of generative AI hallucinations?

Generative AI might invent a fake study, presenting fabricated data as real. 

What is an example of a hallucination in ChatGPT?

ChatGPT might wrongly claim a book or person exists, creating fake quotes or facts.

How often does an AI chatbot hallucinate? 

This varies by the AI’s complexity and data. Every kind of LLM is prone to hallucinations. 

How can you tell if an AI chatbot is hallucinating?

Check for factual errors, inconsistencies, or unlikely details. Verify against trusted sources. Overly specific answers in vague areas might also indicate hallucinations.
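One practical way to automate the “check for inconsistencies” advice above is a self-consistency test: ask the model the same question several times and treat low agreement as a hallucination warning sign. The sketch below fakes the model call with canned answers (`ask_model` is a stand-in, not a real API); in practice you would sample a real LLM with temperature above zero.

```python
# A toy self-consistency check: sample the same question several times
# and flag low agreement as a possible hallucination.
from collections import Counter

def ask_model(question: str, sample: int) -> str:
    # Stand-in for a real LLM call made with temperature > 0; the canned
    # answers simulate one inconsistent sample out of five.
    canned = ["1969", "1969", "1969", "1971", "1969"]
    return canned[sample % len(canned)]

def self_consistency(question: str, n: int = 5) -> tuple[str, float]:
    """Return the majority answer and its agreement ratio across n samples."""
    answers = [ask_model(question, i) for i in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

answer, agreement = self_consistency("What year was the Moon landing?")
# A low agreement ratio is a signal to verify the answer against a
# trusted source before relying on it.
```

High agreement doesn’t prove an answer is correct (a model can be consistently wrong), but low agreement is a cheap and useful red flag.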

