
Definition

LLM decay is the gradual degradation of a model’s parametric knowledge over time as the world changes and the model’s training data ages. Because the model’s weights do not change between training runs, the gap between what the model believes and what is currently true widens continuously until the model is retrained. LLM decay is the mechanism behind training cutoff risk for brands: a brand that was accurately represented in training data at cutoff becomes progressively less accurately represented as time passes and the brand evolves. Repositioned brands, brands that have launched new products or services, and brands that have corrected past inaccuracies in their public record are all subject to decay. The retrieval layer partially compensates for decay on queries where retrieval is triggered — but for the share of queries answered from parametric memory, the decayed belief is what the model expresses.
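The interaction between parametric decay and retrieval compensation described above can be sketched as a toy model. Everything here is illustrative and assumed for the example, not drawn from any measured data: facts are treated as changing independently at a constant monthly rate (so parametric accuracy decays exponentially), and a fixed share of queries is assumed to trigger retrieval.

```python
import math

def parametric_accuracy(t_months: float, change_rate: float) -> float:
    """Toy model: share of a brand's facts still correct t months after
    training cutoff, assuming each fact changes independently at a
    constant monthly rate (exponential decay). Illustrative only."""
    return math.exp(-change_rate * t_months)

def expected_answer_accuracy(t_months: float,
                             change_rate: float,
                             retrieval_share: float,
                             retrieval_accuracy: float = 0.95) -> float:
    """Blend the two answer paths: retrieval-grounded answers track
    current sources, while the rest are answered from (decayed)
    parametric memory. All parameters are hypothetical."""
    parametric = parametric_accuracy(t_months, change_rate)
    return (retrieval_share * retrieval_accuracy
            + (1 - retrieval_share) * parametric)
```

Under these made-up numbers — 2% of facts changing per month, retrieval triggered on 40% of queries — parametric accuracy a year after cutoff falls to roughly 0.79, while the blended answer accuracy sits higher, around 0.85, because retrieval partially compensates. The model expresses the decayed belief only on the non-retrieval share.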

Common Misconception

A common misconception holds that LLM decay affects all brands equally. It does not — it disproportionately affects brands undergoing change. A stable, well-established brand with an accurate parametric representation decays slowly because the underlying facts are not changing. A repositioning brand, or any brand in a fast-moving category, decays faster because the gap between the model’s fixed belief and current reality widens faster.
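The differential decay described above can be made concrete with the same kind of toy model: the expected share of a brand's facts that have changed since cutoff grows with the brand's rate of change. The rates below are invented for illustration, not measured.

```python
import math

def knowledge_gap(t_months: float, change_rate: float) -> float:
    """Toy model: expected share of a brand's facts that have changed
    since training cutoff, assuming a constant monthly change rate.
    Purely illustrative."""
    return 1 - math.exp(-change_rate * t_months)

# Hypothetical rates, 18 months after cutoff:
stable = knowledge_gap(18, 0.005)        # slow-moving brand, ~9% of facts stale
repositioning = knowledge_gap(18, 0.05)  # brand mid-repositioning, ~59% stale
```

The same elapsed time produces a very different gap: under these assumed rates, the stable brand's parametric representation is still mostly accurate, while the repositioning brand's is mostly stale.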

Related Terms

Parametric knowledge

Training cutoff

Parametric inertia

Knowledge conflict

Parametric belief

Relevant Plate Lunch Collective Services

Entity SEO Context Map

AI Search Visibility Assessment