
How LexCodex avoids hallucinations — verified citations and primary sources

Published 4 May 2026 · 7 min read · By GD · LexCodex

Hallucinations are the single biggest risk when using generative AI in legal work. When an LLM invents a case that doesn't exist, cites the wrong section of a statute, or fabricates a judge's name — and presents it with high confidence — the worst-case consequence is a lawyer filing a brief built on imaginary precedent.

It has already happened. In New York in 2023, attorney Steven Schwartz was sanctioned after filing a brief against Avianca Airlines that cited six cases fabricated by ChatGPT. Similar incidents have since been reported in the UK, Australia, Canada and Sweden.

For legal AI, credibility is binary: one hallucinated reference among a hundred correct ones is worse than no references at all, because the user can't tell which ninety-nine to trust. This article explains how LexCodex is built to minimise that risk.

What is a hallucination — in practice?

Hallucinations in LLM contexts aren't random errors. They are statistically plausible but factually false outputs. The model produces text that looks like real cases — correct citation format, plausible judge names, reasonable year — but where the actual case doesn't exist.

Three common classes of hallucination in legal AI:

  1. Fabricated cases: "NJA 2018 p. 423" that doesn't exist, or where the citation is real but the substance is wrong
  2. Incorrect statutory references: The AI cites "Contracts Act § 36(2)" where the section exists but subsection (2) doesn't apply
  3. Made-up doctrine quotes: Quotes from established legal authors that sound plausible but don't appear in any actual text

The underlying cause: LLMs are not databases. They are probabilistic language models. When you ask a raw LLM about Swedish tort law, you get the most likely sequence of words based on training data — not search results from a verified legal source.

LexCodex's three-layer protection

To handle this, LexCodex uses three complementary techniques. None of them is enough alone — the combination is the point.

Layer 1: Every legal claim is bound to a verified primary source

When LexCodex generates an analysis, every legal claim points to a specific URL in a verified primary source. The URL patterns are predefined in code — the AI cannot invent a URL unless it matches a verified pattern.

lagen.nu · Swedish statutes + travaux préparatoires
Sveriges Domstolar · Swedish Supreme/appellate courts
Riksdagen · Swedish bills, committee reports
Arbetsdomstolen · Swedish Labour Court
EUR-Lex · EU law + CJEU
IMY · Swedish DPA decisions (GDPR)
JO · Parliamentary Ombudsman
Finansinspektionen · Swedish financial supervision
Konkurrensverket · Swedish competition authority
Lovdata · Norwegian statutes
Stortinget · Norwegian parliament + NOU
Datatilsynet · Norwegian DPA

When you click on a cited source, you land directly on lagen.nu or Sveriges Domstolar — not on a LexCodex-cached version. You verify against the original yourself. No intermediate step where hallucinations could sneak in.
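What does "predefined in code" mean in practice? A simplified TypeScript sketch (not our production code; the patterns here are abbreviated and illustrative) of how an allowlist check can work:

```typescript
// Simplified allowlist sketch (illustrative, not production code).
// A citation URL is accepted only if it matches the pattern of a
// verified primary source; everything else is rejected.
const VERIFIED_SOURCE_PATTERNS: RegExp[] = [
  /^https:\/\/lagen\.nu\/.+/,           // Swedish statutes + travaux préparatoires
  /^https:\/\/www\.domstol\.se\/.+/,    // Sveriges Domstolar
  /^https:\/\/eur-lex\.europa\.eu\/.+/, // EU law + CJEU
  /^https:\/\/lovdata\.no\/.+/,         // Norwegian statutes
  // ...one pattern per verified source in the list above
];

function isVerifiedCitationUrl(url: string): boolean {
  return VERIFIED_SOURCE_PATTERNS.some((pattern) => pattern.test(url));
}

// Citations that fail the check never reach the user as links.
const citations = [
  "https://lagen.nu/1915:218#P36",
  "https://made-up-legal-site.example/case/123",
];
const verified = citations.filter(isVerifiedCitationUrl); // keeps only the first
```

The point of the design is that verification happens outside the model: the LLM proposes, deterministic code disposes.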

Layer 2: Instructed uncertainty — "say 'I don't know' rather than fabricate"

The system prompt that controls the AI contains explicit instructions: when the model is uncertain, it should say "I'm uncertain — consult the primary source" rather than fabricate a reference. This goes against an LLM's default behaviour, which is to produce fluent text even when the underlying knowledge is thin.
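The exact prompt is internal, but the shape of the instruction is roughly this (illustrative wording only, not the production prompt):

```typescript
// Illustrative wording only; the production system prompt is internal
// and considerably longer.
const UNCERTAINTY_INSTRUCTION = `
If you are not certain that a case, statute section, or quote exists,
say so explicitly: "I'm uncertain — consult the primary source."
Never invent a citation. An honest "I don't know" is always better
than a plausible-sounding fabrication.
`;
```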

Concretely, this means that as a user you will more often see answers like "I don't have specific case law on this point" or "I'm uncertain — consult the primary source" instead of a confident-sounding fabrication.

It's not the AI "giving up". It's the AI being instructed to distinguish between "I'm confident about this" and "I'm uncertain". That distinction is what separates a usable legal AI from a dangerous one.

Layer 3: Extended thinking — multi-step reasoning before answering

LexCodex uses Anthropic Claude's extended thinking feature. This means the AI reasons through several steps internally before giving an answer. For legal analysis this is critical: a contractual clause may have three different legal effects depending on context, and misinterpretations most often happen when the AI jumps directly from question to answer without intermediate analysis.
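For the technically curious: extended thinking is enabled with a single parameter on Anthropic's Messages API. A sketch (the model id and token budgets below are placeholders, not our production values):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Placeholder model id and budgets; production values differ.
const response = await client.messages.create({
  model: "claude-sonnet-4-5",
  max_tokens: 16000, // must exceed the thinking budget
  thinking: { type: "enabled", budget_tokens: 8000 },
  messages: [
    { role: "user", content: "Which legal effects can clause 7.2 have under Swedish law?" },
  ],
});

// The response contains thinking blocks and text blocks; only the
// final text answer is shown to the user.
const answer = response.content.find((block) => block.type === "text");
```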

In practice, extended thinking isn't visible in the output — you only see the final answer. But the reasoning process that produced it is more rigorous. For complex questions (compliance analysis, AI Act classification) it makes a substantial difference in quality.

What the layers do NOT protect against

Let's be honest: the three layers minimise fabricated citations but do not eliminate them. And they don't protect against every type of AI error.

Specifically, they don't protect against misreadings of a correctly cited source (the link is real, the conclusion drawn from it isn't), against the law having changed since the model was trained, or against errors in applying a general rule to your specific facts.

This is why we consistently write "AI-assisted analysis — verify against the primary source before deciding" in disclaimer banners. It's not legal cover — it's an accurate description of what the tool does.

How to verify the protections work

Tests you can run yourself when evaluating LexCodex (or any legal AI):

  1. Ask about an obscure point. Something where published case law probably doesn't exist. A good AI should say "I don't have specific case law on this" — not invent one.
  2. Click every link. Each cited source must lead to a verifiable URL — not 404, not a hub page, but the exact document being cited. A small script can automate the first pass (see the sketch after this list).
  3. Ask the same question with different phrasings. If the answer flips between opposing recommendations, the AI is uncertain — it should say so.
  4. Ask for interpretation of a known doctrine passage. Quotes from established legal authors are easy to verify. The AI should either cite correctly or say "I can't reproduce specific passages".
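Test 2 is easy to partially automate. A minimal sketch, assuming you have already extracted the cited URLs from an analysis into a list (the example URL is lagen.nu's page for the Contracts Act, § 36):

```typescript
// Minimal link check for test 2 (sketch).
const citedUrls: string[] = [
  "https://lagen.nu/1915:218#P36",
  // ...the rest of the citations from the analysis
];

for (const url of citedUrls) {
  const res = await fetch(url, { method: "HEAD", redirect: "follow" });
  // res.ok covers HTTP 200-299; anything else deserves a manual look.
  // Note: a 200 only proves the page exists. Whether it is the exact
  // document being cited, and not a hub page, still needs a human eye.
  console.log(res.ok ? "OK  " : "FAIL", res.status, url);
}
```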

Summary

Hallucination prevention isn't a single silver-bullet feature — it's an architectural question. LexCodex's three layers (verified primary sources, instructed uncertainty, extended thinking) reduce risk substantially without eliminating it. The rest is on you as the user: verify, click, check.

For legal AI used in client work, hallucination protection isn't a "nice-to-have" — it's the precondition for the tool being usable at all. We treat it accordingly.

Want to test it yourself? Create a free account — 3 analyses per month permanently, no card details required. Try with your own questions and evaluate the references.

Read more

EU AI Act for lawyers — case study where LexCodex classified itself · Swedish legal AI 2026 — market overview


⚠ AI-assisted analysis is not legal advice. Always verify against the primary source before acting.