AI hallucination is the phenomenon where a large language model generates plausible-sounding but factually incorrect or fabricated information. Hallucinations occur when the model fills gaps in its knowledge with confident-sounding inference rather than verified facts.
Hallucination is the primary accuracy risk in AI search for brands. An AI system that hallucinates about a brand, inventing founding dates, misattributing services, or fabricating locations, damages brand representation in ways the brand cannot see unless it actively monitors AI outputs. Hallucination mitigation strategies such as structured data, Wikidata entries, and authoritative third-party coverage reduce the knowledge gaps that hallucinations fill. Regular LLM probing to detect hallucinations is part of a complete AI search management program.
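As a minimal sketch of what such probing might look like, the following Python example asks a model a few factual questions about a brand and flags answers that omit the verified fact. The brand name, model name, probe questions, and the simple substring check are illustrative assumptions, not a prescribed method; a production monitoring setup would use stricter comparison and review.

```python
# Sketch: probe an LLM with factual brand questions and flag possible hallucinations.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# "Acme Analytics" and the expected facts below are hypothetical example data.
from openai import OpenAI

client = OpenAI()

PROBES = [
    {"question": "When was Acme Analytics founded?", "expected": "2012"},
    {"question": "Where is Acme Analytics headquartered?", "expected": "Austin"},
    {"question": "What services does Acme Analytics offer?", "expected": "marketing analytics"},
]

def probe_brand_facts(model: str = "gpt-4o-mini") -> list[dict]:
    """Ask each probe question and flag answers missing the expected fact."""
    results = []
    for probe in PROBES:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": probe["question"]}],
        )
        answer = response.choices[0].message.content or ""
        results.append({
            "question": probe["question"],
            "answer": answer,
            # Naive substring check; real monitoring would match dates and
            # entities more carefully and route flagged answers to a human.
            "possible_hallucination": probe["expected"].lower() not in answer.lower(),
        })
    return results

if __name__ == "__main__":
    for result in probe_brand_facts():
        flag = "CHECK" if result["possible_hallucination"] else "ok"
        print(f"[{flag}] {result['question']}\n  -> {result['answer']}\n")
```

Run on a schedule and across multiple AI systems, this kind of probe surfaces fabricated founding dates, locations, or services early, so corrections (structured data updates, Wikidata edits, outreach for authoritative coverage) can target the specific gaps the model is filling.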