Hallucination

A hallucination occurs when an AI model generates output that sounds confident and plausible but is actually false, fabricated, or misleading.

For example, the model might invent a quote, cite a non-existent article, or confidently give the wrong answer to a factual question. This happens because the model is predicting text based on patterns in its training data, not verifying facts in real time.

Hallucinations are one of the biggest challenges in AI today, especially in high-stakes areas like healthcare, law, and education. Using techniques like RAG (retrieval-augmented generation) or adding clear context in your prompt can help reduce them.
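Below is a minimal sketch of the RAG idea described above: retrieve relevant snippets, put them in the prompt as context, and instruct the model to stay grounded in that context. The document store, the keyword-overlap retriever, and the prompt wording are all illustrative assumptions; a real system would typically use embedding-based retrieval and an actual model call.

```python
# A toy sketch of retrieval-augmented generation (RAG).
# Assumptions: a small in-memory document store and a keyword-overlap
# retriever stand in for a real vector database and embedding search.

from typing import List

DOCUMENTS = [
    "The Eiffel Tower was completed in 1889 and stands in Paris, France.",
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The Great Wall of China was built over many centuries by several dynasties.",
]

def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by shared words with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved context and ask it to admit uncertainty."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to a model of your choice;
    # the model call itself is omitted here.
    print(build_prompt("When was the Eiffel Tower completed?"))
```

The key point is the instruction to answer only from the supplied context and to say "I don't know" otherwise, which gives the model an alternative to inventing an answer.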
