

Groundedness

Whether a model's answer is supported by the provided source documents.

Groundedness measures whether the claims in a model's output are traceable to the source documents it was given — not to the model's training knowledge. A grounded answer cites only what's in the context; an ungrounded one introduces facts from training data or invents them.

This is the primary quality metric for RAG systems, where the whole point is to answer from your documents, not from general knowledge. You can measure it automatically with an LLM-as-judge checking each claim against the retrieved context, or with specialized grounding models (Azure AI Language has a built-in groundedness detection API).
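To make the LLM-as-judge approach concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK; the prompt wording, the model name, and the SUPPORTED/UNSUPPORTED labels are illustrative choices, not a fixed recipe.

```python
# Minimal LLM-as-judge groundedness check (illustrative sketch).
# Assumes the OpenAI Python SDK; prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are a strict fact-checker.

Context:
{context}

Claim:
{claim}

Is the claim fully supported by the context alone?
Answer with exactly one word: SUPPORTED or UNSUPPORTED."""


def claim_is_grounded(claim: str, context: str, model: str = "gpt-4o-mini") -> bool:
    """Ask the judge model whether a single claim is supported by the context."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic judging
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(context=context, claim=claim),
        }],
    )
    return response.choices[0].message.content.strip().upper() == "SUPPORTED"


def groundedness_score(claims: list[str], context: str) -> float:
    """Fraction of the answer's claims that the judge finds supported."""
    if not claims:
        return 1.0
    return sum(claim_is_grounded(c, context) for c in claims) / len(claims)
```

Splitting the answer into atomic claims first (one sentence per claim is a workable start) keeps each judgment binary and makes the resulting score easy to interpret.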

Low groundedness almost always has one of two causes: retrieval didn't surface the relevant document (a retrieval problem), or the model fell back on its training knowledge when the retrieved context was insufficient (a prompting problem, usually fixed with an explicit instruction like "if the answer is not in the documents, say so").
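On the prompting side, that guard is a single instruction in the prompt template. A minimal sketch, with illustrative wording you would tune for your own system:

```python
# Sketch of a RAG prompt that tells the model to admit when the context
# is insufficient rather than fall back on training knowledge.
# The instruction wording below is illustrative, not canonical.
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using ONLY the documents below.\n"
        "If the documents do not contain the answer, reply exactly:\n"
        '"I could not find this in the provided documents."\n\n'
        f"Documents:\n{context}\n\n"
        f"Question: {question}"
    )
```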
