Glossary · AI

What is Hallucination?

When an LLM generates plausible-sounding but factually incorrect information.

By Anish · Founder · Vedwix

Definition

Hallucinations are confident but wrong outputs — fabricated citations, made-up case law, invented APIs. They're the central reliability problem in production LLM apps. Mitigations include retrieval-augmented generation (RAG) with citations, structured output, eval harnesses, refusal training, and explicit "I don't know" prompting.
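
A minimal sketch of the last two mitigations — grounding the prompt in retrieved sources and explicitly allowing "I don't know". This is an illustration, not Vedwix's implementation; call_llm is a hypothetical placeholder for whatever model client you use, and the citation format is assumed.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub; swap in your real model client here.
    return "I don't know."


def answer_with_citations(question: str, passages: list[dict]) -> str:
    """Ask the model to answer only from the supplied passages, citing them by id."""
    context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite source ids in square brackets after each claim. "
        "If the sources do not contain the answer, reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    passages = [{"id": "doc-1", "text": "The contract was signed on 2021-03-04."}]
    print(answer_with_citations("When was the contract signed?", passages))
```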

Example

Asked about a real lawyer, an LLM cites a court case that doesn't exist, complete with a fully formatted citation.

How Vedwix handles hallucination in client work

We design systems to refuse rather than hallucinate. Every output that affects a user gets a citation or a fallback.
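
One way to enforce that rule is a post-generation gate: if an answer carries no citation that matches a retrieved source, return a safe fallback instead. A hedged sketch under assumed conventions (citations written as [doc-1]; the fallback message is illustrative), not Vedwix's actual code:

```python
import re

FALLBACK = "I can't verify that from the available sources."


def cited_ids(answer: str) -> set[str]:
    # Citations are assumed to look like [doc-1]; adjust to your own format.
    return set(re.findall(r"\[([\w-]+)\]", answer))


def gate(answer: str, retrieved_ids: set[str]) -> str:
    cites = cited_ids(answer)
    if not cites or not cites <= retrieved_ids:
        return FALLBACK  # refuse rather than pass along an ungrounded claim
    return answer


if __name__ == "__main__":
    print(gate("The contract was signed on 2021-03-04 [doc-1].", {"doc-1"}))
    print(gate("The contract was signed in 1999.", {"doc-1"}))
```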

Dealing with hallucination?

We ship this.

If hallucination is a risk in your production LLM system, we can help — from architecture review to full implementation.

Brief us

Working on hallucination mitigation?

Brief Vedwix in three sentences or fewer.

Start a project