
Glossary Term

Hallucination

When a model confidently states something that isn't true.

Hallucination is when a model produces a plausible-sounding but factually wrong output — a fake citation, an invented API method, a name and date that never existed. It happens because models predict likely text, not true text; "likely" and "true" are correlated but not identical.

The reliable fixes, in order: (1) Grounding — give the model the source documents and tell it to answer from them only. (2) Structured outputs — JSON schemas with required fields force the model to commit to specific, checkable claims. (3) Verification — for high-stakes answers, have a second model or a deterministic check confirm the output before it reaches the user. Telling the model "don't hallucinate" does almost nothing. Designing the system so it can't hallucinate does almost everything.
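A minimal sketch of how the three fixes compose, under stated assumptions: `call_model` is a placeholder for whatever LLM client you use (not a real API), and the field names `answer`, `source_index`, and `quote` are illustrative, not a standard schema.

```python
import json

def answer_from_sources(question: str, sources: list[str], call_model) -> dict | None:
    # (1) Grounding: put the source documents in the prompt and instruct the
    #     model to answer only from them.
    context = "\n\n".join(f"[{i}] {doc}" for i, doc in enumerate(sources))
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Respond as JSON with keys: answer, source_index, quote. "
        "If the sources do not contain the answer, set answer to null.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    # (2) Structured output: require specific fields so the answer can be checked.
    raw = call_model(prompt)
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output fails closed, not open
    if not all(key in result for key in ("answer", "source_index", "quote")):
        return None

    # (3) Verification: a deterministic check that the quoted evidence really
    #     appears in the cited source before the answer is shown to a user.
    if result["answer"] is not None:
        idx = result["source_index"]
        if not isinstance(idx, int) or not (0 <= idx < len(sources)):
            return None
        if result["quote"] not in sources[idx]:
            return None  # the model "cited" text that isn't there

    return result
```

The design choice that matters: every failure mode (malformed JSON, missing fields, a quote that isn't actually in the cited source) returns None instead of a confident answer, so the system degrades to "I don't know" rather than to a fabrication.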

Bring this to your business

Knowing the term is one thing. Shipping it is another.

We do two-week AI Sprints — one term, one workflow, into production by Day 10.