
EU AI Act for lawyers: what you need to know before 2 August 2026

Published 3 May 2026 · 8 min read · LexCodex editorial team

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 but applies in stages. The biggest milestone for law firms and in-house teams is 2 August 2026, when transparency requirements and high-risk-system obligations begin to apply.

For lawyers, this raises two concrete questions:

  1. Are the AI tools I use (ChatGPT, Claude, Legora, Juno, LexCodex) "AI systems" under the regulation — and do I as a user have to comply with anything?
  2. How do I classify my clients' AI systems when they ask?

This article answers both. We also ran our own AI Act tool on LexCodex itself and share the result as a case study.

Timeline: what applies when?

Date          What starts to apply
1 Aug 2024    AI Act entered into force
2 Feb 2025    Prohibited AI practices (Art. 5)
2 Aug 2025    GPAI obligations (Art. 51-55)
2 Aug 2026    High-risk systems + transparency obligations (Art. 50)
2 Aug 2027    High-risk systems embedded in Annex I products

For most lawyers, 2 August 2026 is the relevant milestone — that's when all limited-risk systems must meet the Art. 50 transparency requirements.

The risk classes: a quick walkthrough

The AI Act sorts AI systems into four risk classes:

  1. Prohibited (Art. 5): Subliminal manipulation, social scoring (by public or private actors), real-time biometric identification in public spaces, emotion recognition in workplaces or schools, etc. Penalty: up to EUR 35 million or 7% of global turnover.
  2. High-risk (Annex III): AI systems in recruitment, education, credit scoring, critical infrastructure, law enforcement, the administration of justice (courts and judicial authorities only), border control. Requires extensive documentation, risk management, human oversight, registration in an EU database. Penalty: up to EUR 15 million or 3% of turnover.
  3. Limited risk (Art. 50): AI systems that interact directly with natural persons or generate synthetic content. Requirements: inform the user about AI interaction, label AI-generated content. No other mandatory requirements.
  4. Minimal risk: Everything else. No AI Act-specific obligations (but GDPR and other legislation still apply).

Important nuance about Annex III point 8 (judiciary): The text is specific — it covers "AI systems intended to be used by a judicial authority or on their behalf". That means courts and judicial authorities, not private law firms or in-house lawyers. Your AI tool that helps you analyse contracts is not automatically high-risk just because the subject matter is legal.

For lawyers: is your AI tool an AI system?

Short answer: yes, but the provider carries the main responsibility.

When you, as an attorney or in-house lawyer, use ChatGPT, Claude, Legora, Juno or LexCodex, you are a "deployer" under the regulation. The provider (OpenAI, Anthropic, the Legora company, Norstedts, LexCodex/Nordicbysight) carries the heavy obligations around making the system available.

As a deployer you have obligations under Art. 26, but only if you use a high-risk system, and most legal AI tools are not high-risk.

In practice: if you use a legal AI tool that helps you review contracts, draft NDAs, or research case law, it is most likely limited risk. You only need to check that the provider's transparency information is in place; producing it is the provider's job.

For lawyers: how do you classify clients' AI systems?

This is where it gets interesting. Clients will ask: "is our AI system high-risk?" and you need to be able to answer.

A four-step process:

  1. Is it prohibited? Walk through the Art. 5 list. If yes: stop the system.
  2. Does it fall under Annex III? Walk through Annex III points 1-8. Common relevant areas: HR/recruitment (point 4), credit scoring (point 5), education (point 3).
  3. Is it a system that interacts with humans or generates synthetic content? Then it is limited risk with Art. 50 requirements.
  4. Otherwise: minimal risk.
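For firms that want to encode this triage in an internal checklist or intake form, the four steps can be sketched as a simple decision function. This is an illustrative sketch only: the field names and the yes/no simplification are ours, not terms from the regulation, and each answer still requires legal analysis of Art. 5, Annex III and Art. 50 before it can be ticked.

```python
from dataclasses import dataclass

@dataclass
class SystemFacts:
    """Simplified yes/no answers about an AI system (illustrative only)."""
    prohibited_practice: bool      # matches any Art. 5 practice?
    annex_iii_area: bool           # falls under Annex III points 1-8?
    interacts_or_generates: bool   # interacts with humans or generates synthetic content?

def classify(facts: SystemFacts) -> str:
    """Walk the four steps in order; the first match wins."""
    if facts.prohibited_practice:
        return "prohibited (Art. 5)"
    if facts.annex_iii_area:
        return "high-risk (Annex III)"
    if facts.interacts_or_generates:
        return "limited risk (Art. 50)"
    return "minimal risk"

# A contract-review assistant: no Art. 5 practice, no Annex III area,
# but it generates text and interacts with users -> limited risk.
print(classify(SystemFacts(False, False, True)))  # → limited risk (Art. 50)
```

Note that the order matters: a recruitment screening tool that also chats with candidates is high-risk (Annex III point 4), not merely limited risk, because the Annex III check comes first.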

The most common pitfall we see: clients classify their AI system as high-risk "to be safe". That's expensive — high-risk means extensive documentation requirements, a risk management system, quality management, registration in an EU database and ongoing monitoring. Classify correctly, not conservatively.

Case study: we classified LexCodex ourselves

We thought it was reasonable to eat our own dog food. On 3 May 2026 we ran LexCodex's own AI Act tool on LexCodex as an AI system. The result:

Classification: Limited risk (not high-risk under Art. 6; Art. 50 applies)
Applicable obligations: Art. 50 transparency requirements (inform users about AI interaction + label AI-generated content). Art. 25 downstream value chain (provide users with necessary information).
Documentation: Formal classification report (10 sections following an auditor-friendly structure).

Why not high-risk? LexCodex is a legal AI tool aimed at private lawyers and in-house teams — not at courts or judicial authorities. Annex III point 8 ("AI systems intended for judicial authorities") therefore does not apply. None of Annex III points 1-7 (recruitment, education, credit etc.) are relevant either.

Why limited risk and not minimal? LexCodex generates synthetic textual content (legal analysis) and interacts directly with users. Art. 50 therefore applies. We already meet the requirements: a disclaimer banner above every tool form, a footer disclaimer on every page, all output marked as AI-generated.
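The Art. 50 pattern described above (label every output, keep a disclaimer attached) can be reduced to a small sketch. The `ToolResponse` type and `render` helper below are hypothetical names of ours, not part of any product or of the regulation; the point is simply that the AI-origin label travels with the content rather than being added ad hoc.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolResponse:
    """Hypothetical wrapper that keeps the AI-origin label attached to output."""
    text: str
    disclaimer: str = "This content was generated by an AI system and is not legal advice."

def render(resp: ToolResponse) -> str:
    # Art. 50 pattern: every piece of output is visibly marked as AI-generated
    # and accompanied by a disclaimer, so the label cannot be dropped by accident.
    return f"[AI-generated] {resp.text}\n\n{resp.disclaimer}"

print(render(ToolResponse("Clause 4.2 likely caps liability at the contract value.")))
```

Because the wrapper is frozen and the label lives in `render`, no code path can emit unlabelled text without deliberately bypassing the type.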

GPAI conditions: LexCodex is built on Anthropic Claude (a GPAI). The GPAI obligations under Art. 51-55 sit with Anthropic as the provider, not with LexCodex as a downstream provider building on it. Our obligation is to follow Art. 25 — which we do through transparency disclaimers.

Concrete checklist for lawyers

Before 2 August 2026, work through the following:

  1. Inventory the AI tools you use. ChatGPT, Claude, Legora, Juno, LexCodex, Lexnova, JP Infonet, Harvey — what licences does the firm have? What are they used for?
  2. Verify the provider's transparency information. Are you and your clients told that the result is AI-generated? Is there a disclaimer in clear form?
  3. Review client agreements. Do you have clauses about AI use? Clients will start asking.
  4. Update information to clients. When you use AI for their matter, inform them (privileged documents should not pass through AI without zero data retention (ZDR) and a data processing agreement (DPA)).
  5. Classify clients' AI systems when they ask. Use an AI Act tool (ours or equivalent) for formal classification. Document the result.
  6. Train colleagues. Many in the firm don't know what the AI Act means. A 30-minute walkthrough goes a long way.
  7. Monitor the Commission's delegated acts. Specific technical requirements (labelling, audit logging) will be set out in delegated acts during 2026-2027.

Summary

The EU AI Act is not the end of AI in legal practice; it is a framework for responsible AI use. For most law firms and in-house teams using established legal AI tools, the situation is relatively simple: the tools are limited risk, the provider carries the main obligations, and your job is to verify that the transparency information is in place.

If you want to test how the classification works in practice — try our AI Act tool. It's free to start and takes ~10 minutes to fill in. The result is a formal report you can share with clients or use internally.

Frequently asked questions

When does the EU AI Act take effect?

The AI Act entered into force on 1 August 2024. Prohibited AI practices began applying on 2 February 2025. GPAI obligations apply from 2 August 2025. High-risk systems plus transparency obligations apply from 2 August 2026. High-risk systems under Annex I products apply from 2 August 2027.

Is my law firm's AI use an "AI system" under the AI Act?

As a user of an AI tool you are a "deployer" under the regulation. The obligations sit primarily with the provider. As a deployer you have Art. 26 obligations only if you use a high-risk system — which most legal AI tools are NOT.

How do I classify an AI system under the AI Act?

Four steps: (1) Is the system prohibited under Art. 5? (2) Does it fall under Annex III (high-risk)? (3) Does it fall under Art. 50 transparency requirements (limited risk)? (4) Otherwise it is minimal risk with no extra requirements. For lawyer-facing AI tools the most common outcome is limited risk.

What does "limited risk" mean under the AI Act?

Limited-risk systems have transparency requirements under Art. 50: users must be informed that they are interacting with AI, and AI-generated content must be labelled. No other mandatory requirements. For legal AI tools that produce analysis, limited risk is the most common outcome.

Classify an AI system with our tool

3-minute question flow, formal classification report on demand. Free during the Pro trial.

Open the AI Act tool →

⚠ This is a general overview of the EU AI Act, not legal advice. For specific classification questions consult a qualified lawyer or use a structured classification tool that documents your assumptions.