
Definition

In the context of language models, weights are the numerical parameters learned during training that encode the model's knowledge, associations, and behavioral patterns. They make up the model's neural network and determine how the model responds to any given input.

Model weights are where brand knowledge lives in LLMs. A brand that was well represented in training data has its entity, associations, and attributes encoded in the model's weights — accessible without retrieval. A brand absent from training data has no weight-based representation and depends entirely on real-time retrieval to appear in AI responses. Understanding weights helps explain why different AI models represent the same brand differently, and why training data presence is a long-term brand asset.
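The distinction between weight-encoded knowledge and retrieval dependence can be sketched with a toy example. This is purely illustrative (the entity names, the dictionary-as-embedding-table, and the `association_strength` helper are all hypothetical, not any real LLM's internals): an entity present in training data ends up with a learned weight vector, while an absent entity has no representation at all and would need retrieval at inference time.

```python
# Illustrative sketch only: a model's "knowledge" of an entity is
# just numbers in its learned weight matrices.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding table learned during pre-training.
# "acme" appeared in the training corpus, so it has a learned vector;
# "newbrand" did not, so there is no row for it at all.
learned_weights = {
    "acme": rng.normal(size=4),
    "coffee": rng.normal(size=4),
}

def association_strength(a: str, b: str):
    """Cosine similarity between two entities' weight vectors,
    or None if either entity has no weight-based representation."""
    va, vb = learned_weights.get(a), learned_weights.get(b)
    if va is None or vb is None:
        return None  # no encoded knowledge: must rely on retrieval
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(association_strength("acme", "coffee"))     # a number: encoded in weights
print(association_strength("newbrand", "coffee")) # None: absent from training data
```

The point of the sketch is that "brand presence in the weights" is not a metaphor: it is literally whether the training process produced parameters that relate the brand's entity to other concepts.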

Training corpus

Pre-training

LLM brand memory

Foundation model

Inference

Relevant PLC Services

AI SEO

Entity SEO