
Mind Tricks of AI: Fiction vs. Reality

Understanding AI hallucinations: The fascinating phenomenon of fabricated responses

In the much-hyped, fast-evolving world of Artificial Intelligence, the concept of “hallucinations” has emerged as a serious concern, and it carries a curious parallel to human experience (albeit in a digital, non-sensory context). Much like a person who sees or hears things that aren’t actually there, AI can produce outputs that, while seemingly plausible, have no basis in reality. This blog post describes the fascinating phenomenon of AI hallucinations, exploring their nature, their causes and how we might deal with them.

At Adevinta, we leverage the power of generative AI not just through integrating tools like GitHub Copilot into our IDEs, but also with our own in-house AI assistant, Ada. Ada is a versatile tool capable of reformulating or translating messages, summarising long documents or Slack threads, drafting emails, reports and much more.

Ada has been designed and trained to minimise AI hallucinations: it grounds its answers in its training data, admits when it doesn’t know something rather than making up an answer, and prioritises accurate, reliable information. Even so, no AI system is entirely flawless.

What do we mean when we talk about hallucinations?

In human terms, hallucinations are sensory experiences that seem real but are entirely created by the mind: seeing, hearing or otherwise perceiving things that do not exist in the real world. Commonly linked with mental health conditions such as schizophrenia, the influence of drugs and other extreme situations, hallucinations can be vivid and often indistinguishable from real perceptions.

When it comes to generative AI, hallucinations refer to fabricated outputs (text, images, commands, etc.) that do not accurately reflect the AI’s training data or real-world facts. Instead, the AI generates content that appears coherent and plausible but is, in fact, incorrect or entirely fictional.

For example, a language model might confidently produce an answer that includes invented statistics or fake references to non-existent sources. In software development, it’s common to ask about integrating existing code with a specific service. This is where AI sometimes hallucinates, suggesting deprecated or unsupported libraries, non-existent API endpoints, or methods that simply don’t exist. Models often struggle to distinguish reliable, official documentation from forums, tutorials or inaccurate examples, leading them to occasionally provide “hallucinated” information that sounds correct but isn’t.
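
To make this concrete, here is a hypothetical illustration: the endpoint URL is invented, and the hallucinated call is deliberately one that does not exist in the real requests library.

```python
import requests

# Hallucinated suggestion: looks plausible, but requests.get_json() does not
# exist in the requests library, so this line would raise an AttributeError.
# data = requests.get_json("https://api.example.com/v2/users")

# Working equivalent using the real API: fetch the response, then decode JSON.
response = requests.get("https://api.example.com/v2/users", timeout=10)
data = response.json()
```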

Why do hallucinations occur?

AI hallucinations originate from various causes, such as gaps or inaccuracies in the training data, overfitting to certain data patterns, or the inherent probabilistic nature of AI models. Because generative models do not usually have direct feedback loops to correct their outputs or verify information against a reliable source, they are prone to producing inaccurate or nonsensical responses.

The significance of AI hallucinations depends largely on the application. In high-stakes scenarios, such as healthcare or legal advice, inaccuracies can be highly problematic, leading to misinformation and real-world consequences. However, in more creative applications, such as brainstorming, storytelling or artistic generation, hallucinations might be less concerning or even desirable as part of the creative process.

In Natural Language Processing, probabilistic methods are applied to sequence-generation tasks such as machine translation, text summarisation and speech recognition. In these applications the model selects the most likely next token in a sequence (such as the next word in a sentence), and this can, unintentionally, lead to hallucinations.
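
As a minimal sketch (the candidate tokens and probabilities below are invented), next-token generation boils down to choosing from a probability distribution, so a plausible but wrong continuation can sit only a whisker behind the right one:

```python
# Toy next-token distribution for the prefix "The capital of Australia is".
# The candidate tokens and probabilities are invented for illustration.
candidates = {
    "Canberra":  0.46,  # correct continuation
    "Sydney":    0.41,  # plausible but wrong continuation
    "Melbourne": 0.13,
}

# Greedy decoding: commit to the single most probable token.
next_token = max(candidates, key=candidates.get)
print(next_token)  # "Canberra" here, but a slightly different training set
                   # could just as easily have put "Sydney" on top.
```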

Generative models don’t possess true knowledge or understanding; they merely approximate responses that statistically align with the patterns they have learned. Models can be adjusted, though, and their parameters fine-tuned for the intended use case. One of these adjustable parameters is called “temperature”, which controls the randomness of the model’s answers. With a low temperature, a model tends to generate more predictable responses, making it better suited to tasks that require precise or fact-based answers. With a high temperature, it tends to generate more diverse and imaginative responses, which can be useful for creative writing or brainstorming, although this may lead to less coherent or factually incorrect outputs. A video from IBM Technology gives a clear and pedagogical explanation of why LLMs hallucinate.
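
Here is a minimal sketch of how temperature works, assuming invented scores for three candidate tokens: dividing the raw scores by the temperature before the softmax sharpens the distribution when the temperature is low and flattens it when it is high.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float):
    """Return a sampled token and the temperature-scaled softmax distribution."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    return token, probs

# Invented raw scores (logits) for three candidate next tokens.
logits = {"Canberra": 2.0, "Sydney": 1.6, "Melbourne": 0.4}

_, cold = sample_with_temperature(logits, temperature=0.2)  # sharp, predictable
_, hot = sample_with_temperature(logits, temperature=2.0)   # flat, more creative
print(cold)  # roughly {'Canberra': 0.88, 'Sydney': 0.12, 'Melbourne': 0.00}
print(hot)   # roughly {'Canberra': 0.44, 'Sydney': 0.36, 'Melbourne': 0.20}
```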

A cure for hallucinations

Researchers are actively exploring different strategies to reduce AI hallucinations. Techniques include: Reinforcement Learning from Human Feedback (RLHF), fine-tuning models with domain-specific data, incorporating fact-checking mechanisms, and using Retrieval-Augmented Generation (RAG), where the AI grounds its answers in documents retrieved from a knowledge base. As explained in the blog post Entropy, Finally A Real Cure To Hallucinations?, a new method called Entropix tries to address the problem and reduce hallucinations in Large Language Models by using uncertainty modelling, a feature previously overlooked.
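
As a rough sketch of the RAG idea (the document store and helper names below are placeholders, not a real Adevinta or vendor API), the model is asked to answer only from snippets retrieved for the question:

```python
# Minimal RAG sketch: retrieve relevant snippets first, then ask the model to
# answer only from them. DOCS and the final generate() call are placeholders.
DOCS = [
    "Ada can summarise long documents and Slack threads.",
    "Ada can reformulate and translate messages.",
    "GitHub Copilot is integrated into our IDEs.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What can Ada summarise?"))
# The resulting prompt is then sent to the model, e.g. answer = generate(prompt).
```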

Entropix takes a different approach by measuring and leveraging entropy in model predictions. Instead of forcing the model to choose a word immediately, this method allows the model to pause and evaluate its certainty before selecting the next token, potentially leading to more accurate outputs. The approach has sparked significant interest in the AI community and may revolutionise how LLMs handle hallucinations.
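
This is not the Entropix implementation itself, but the underlying idea can be sketched as follows: compute the entropy of the next-token distribution and, when it is high, treat that as a signal to pause (re-sample, gather evidence or ask for clarification) rather than committing to a token. The distributions and threshold below are invented.

```python
import math

def entropy(probs: dict[str, float]) -> float:
    """Shannon entropy (in bits) of a next-token distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Invented next-token distributions for two different prefixes.
confident = {"Paris": 0.95, "Lyon": 0.04, "Nice": 0.01}
uncertain = {"Canberra": 0.36, "Sydney": 0.34, "Melbourne": 0.30}

ENTROPY_THRESHOLD = 1.0  # illustrative cut-off, not a value taken from Entropix

for name, dist in [("confident", confident), ("uncertain", uncertain)]:
    h = entropy(dist)
    action = "commit to top token" if h < ENTROPY_THRESHOLD else "pause: re-sample or gather evidence"
    print(f"{name}: entropy = {h:.2f} bits -> {action}")
```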

Despite all these efforts, completely eliminating hallucinations remains a challenge due to the fundamental design of generative models and the unpredictable nature of language.

Can we mitigate hallucinations as users?

From a user perspective, the best defence against AI hallucinations is verification: double-check information against reliable sources, especially when the output is critical.

Certain red flags can quickly alert us to a possible AI-generated hallucination. Be wary of overly specific claims without citations, statements that seem too good (or too bad) to be true, inconsistencies or contradictions, and answers in areas where the AI might lack sufficient training data.

We must treat AI as a useful tool, but not as an infallible one, and provide feedback to AI platforms when hallucinations are encountered. This can help improve the model’s performance over time.

We often treat generative AI tools like search engines. For years, we’ve been accustomed to learning new things by typing a brief query into Google and skimming the results. However, the way we craft AI prompts significantly influences how likely hallucinations are to appear. Clear, specific and well-structured prompts help guide the model towards more reliable answers. Avoiding ambiguous or overly broad prompts, including relevant context, and asking for citations or sources further helps to obtain accurate answers and mitigate the risk of hallucinations.

The idea is to guide the AI with data we provide, rather than expecting it to explore unknown territory. Focusing the model on the information at hand helps keep hallucinations to a minimum.
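
For example (the report excerpt and figures below are invented), a prompt can hand the model the data to work from and explicitly forbid going beyond it:

```python
# The report excerpt and numbers below are invented for illustration.
report_excerpt = (
    "Q3 traffic grew 12% quarter over quarter, while infrastructure costs "
    "remained flat."
)

prompt = (
    "Using only the report excerpt below, list the key takeaways. "
    "If something is not stated in the excerpt, do not infer it; say so instead. "
    "Point to the sentence each takeaway comes from.\n\n"
    f"Report excerpt:\n{report_excerpt}"
)
print(prompt)
```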

Conclusions

AI hallucinations are an intriguing and complex challenge in the realm of generative AI. By understanding their origins and learning to detect and mitigate them, we can improve our interactions with these systems. While ongoing research continues to enhance the reliability of AI, human oversight remains crucial. As we harness the potential of AI, it’s essential to stay critical and engaged, recognising both the powerful capabilities and the current limitations of these technologies.
