

Core concept · Emerging

Definition

Context poisoning is an adversarial attack on AI systems in which malicious content is injected into the retrieval context (through prompt injection in retrieved documents, manipulated knowledge base entries, or contaminated external sources) to cause the system to generate false, misleading, or harmful outputs.

Context poisoning is primarily a security concern rather than an optimization concern, but it has practical implications for brands. A brand's Wikidata entry, Wikipedia article, and key third-party descriptions are all potential vectors for context poisoning if they can be edited by malicious actors. Monitoring these sources for unauthorized or inaccurate edits, and keeping the brand's authoritative sources well maintained, reduces the risk of poisoned context corrupting AI representations of the brand.
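One common mitigation on the retrieval side is to screen passages for injection-style phrasing before they enter the model's context. The sketch below is illustrative only: the pattern list, function names, and filtering policy are assumptions for demonstration, not a production defense (real systems typically use trained classifiers or provenance checks rather than a handful of regexes).

```python
import re

# Hypothetical patterns for illustration; a real deployment would rely on
# a maintained injection classifier or source allow-list, not this list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .{0,40}system prompt",
    r"you are now",
]

def is_suspicious(passage: str) -> bool:
    """Flag retrieved text containing common prompt-injection phrasing."""
    lowered = passage.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def filter_context(passages: list[str]) -> list[str]:
    """Drop suspicious passages before they reach the model's context."""
    return [p for p in passages if not is_suspicious(p)]
```

A filter like this only reduces exposure to crude injections; it does not address poisoned facts in otherwise benign-looking text, which is why monitoring authoritative sources remains the complementary control described above.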

Related Concepts

Knowledge graph poisoning

Entity injection

Brand grounding

Hallucination mitigation

Synthetic brand signal

Relevant Plate Lunch Collective Services

Entity SEO Context Map