Hallucination
Definition
A hallucination occurs when an AI model gives an answer that sounds correct but is actually wrong or made up. This happens when the model fills in gaps or guesses rather than drawing on real knowledge.
Example
An AI that says Abraham Lincoln was born in 1950 (he was born in 1809) is hallucinating.
How It’s Used in AI
Hallucinations can happen in language models like ChatGPT or Claude when the model responds confidently without enough knowledge to back up its answer. They're common in content generation, summarization, and Q&A tools. Developers try to reduce them with better training data, careful prompt design, or fine-tuning.
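As a rough illustration of the prompt-design approach, the sketch below builds a context-grounded prompt that tells the model to answer only from supplied text and to say "I don't know" otherwise. This is a minimal sketch, not a guaranteed fix; the `ask_llm` function is a hypothetical placeholder for whatever LLM client you actually use, and the prompt wording is just one common pattern.

```python
def build_grounded_prompt(question: str, context: str) -> str:
    """Build a prompt that constrains the model to the supplied context.

    Asking the model to answer only from given text, and to admit when the
    answer is not there, is one common way to reduce hallucinations.
    """
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply exactly: "
        '"I don\'t know."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder -- wire this to your LLM provider of choice."""
    raise NotImplementedError


if __name__ == "__main__":
    context = "Abraham Lincoln was born on February 12, 1809, in Kentucky."
    prompt = build_grounded_prompt("When was Abraham Lincoln born?", context)
    print(prompt)  # inspect the grounded prompt before sending it to a model
```

Grounding the model in a specific source narrows the space in which it can guess, which is why retrieval-style prompting like this tends to cut down on fabricated answers.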
Brief History
The term “hallucination” became popular around 2020 with the rise of large language models. As these tools became more widely used, users noticed how often they could confidently return wrong answers.
Key Tools or Models
Hallucinations are studied in models like GPT-4, Claude, and Gemini (formerly Bard). Techniques such as RLHF (reinforcement learning from human feedback), fact-checking APIs, and context-aware prompting are used to minimize them.
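To make the fact-checking idea concrete, here is a deliberately naive sketch that flags an answer as suspect when its key tokens never appear in the source text. It assumes you have the source document on hand; real fact-checking APIs rely on retrieval and entailment models rather than substring matching, so treat this only as a toy illustration.

```python
import re


def answer_is_grounded(answer: str, source_text: str) -> bool:
    """Return False if key tokens in the answer (capitalized words and
    3-4 digit numbers) never appear in the source text.

    A toy stand-in for real fact-checking: production systems use retrieval,
    entailment models, or dedicated APIs instead of substring matches.
    """
    key_tokens = re.findall(r"\b(?:[A-Z][a-z]+|\d{3,4})\b", answer)
    source_lower = source_text.lower()
    return all(tok.lower() in source_lower for tok in key_tokens)


source = "Abraham Lincoln was born on February 12, 1809, in Kentucky."
print(answer_is_grounded("Lincoln was born in 1809.", source))  # True
print(answer_is_grounded("Lincoln was born in 1950.", source))  # False -> likely hallucinated
```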
Pro Tip
Always double-check AI answers when accuracy matters. AI sounds confident even when it's wrong.
Related Terms
LLM (Large Language Model), Prompt Engineering, AI Alignment