EU AI Act for lawyers: what you need to know before 2 August 2026
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 but applies in stages. The biggest milestone for law firms and in-house teams is 2 August 2026, when transparency requirements and high-risk-system obligations begin to apply.
For lawyers, this raises two concrete questions:
- Are the AI tools I use (ChatGPT, Claude, Legora, Juno, LexCodex) "AI systems" under the regulation — and do I as a user have to comply with anything?
- How do I classify my clients' AI systems when they ask?
This article answers both. We also ran our own AI Act tool on LexCodex itself and share the result as a case study.
Timeline: what applies when?
| Date | What starts to apply |
|---|---|
| 1 Aug 2024 | AI Act entered into force |
| 2 Feb 2025 | Prohibited AI practices (Art. 5) |
| 2 Aug 2025 | GPAI obligations (Art. 51-55) |
| 2 Aug 2026 | High-risk systems + transparency obligations (Art. 50) |
| 2 Aug 2027 | High-risk systems embedded in products covered by Annex I |
For most lawyers, 2 August 2026 is the relevant milestone — that's when all limited-risk systems must meet the Art. 50 transparency requirements.
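The staggered schedule reduces to a simple date lookup. Here is a minimal sketch in Python (purely illustrative; the dates come from the table above, while the function and variable names are our own invention):

```python
from datetime import date

# Application dates from the table above (the 1 Aug 2024 entry into
# force is omitted, since it creates no obligations by itself).
MILESTONES = [
    (date(2025, 2, 2), "Prohibited AI practices (Art. 5)"),
    (date(2025, 8, 2), "GPAI obligations (Art. 51-55)"),
    (date(2026, 8, 2), "High-risk systems + Art. 50 transparency"),
    (date(2027, 8, 2), "High-risk systems in Annex I products"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the obligations that already apply on a given date."""
    return [label for start, label in MILESTONES if today >= start]

print(obligations_in_force(date(2026, 9, 1)))
# -> the first three entries apply; the Annex I product rules do not yet
```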
The risk classes: a quick walkthrough
The AI Act sorts AI systems into four risk classes:
- Prohibited (Art. 5): Subliminal manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces or schools, etc. Penalty: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
- High-risk (Annex III): AI systems in recruitment, education, credit scoring, critical infrastructure, law enforcement, the administration of justice (systems intended to assist judicial authorities), border control. Requires extensive documentation, risk management, human oversight and registration in an EU database. Penalty: up to EUR 15 million or 3% of global annual turnover, whichever is higher.
- Limited risk (Art. 50): AI systems that interact directly with natural persons or generate synthetic content. Requirements: inform the user about AI interaction, label AI-generated content. No other mandatory requirements.
- Minimal risk: Everything else. No AI Act-specific obligations (but GDPR and other legislation still apply).
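Note that the fine ceilings work as "whichever is higher", not a choice in the offender's favour. A small worked example (our own illustration; the amounts are the ones quoted in the list above):

```python
# Fine ceilings quoted above: (fixed cap in EUR, share of global
# annual turnover). The applicable ceiling is the HIGHER of the two.
PENALTY_CAPS = {
    "prohibited": (35_000_000, 0.07),  # Art. 5 violations
    "high_risk": (15_000_000, 0.03),   # high-risk obligations
}

def fine_ceiling(risk_class: str, global_turnover_eur: float) -> float:
    fixed_cap, turnover_share = PENALTY_CAPS[risk_class]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with EUR 1bn global turnover: 7% (EUR 70m) exceeds
# the EUR 35m fixed cap, so EUR 70m is the applicable ceiling.
print(f"{fine_ceiling('prohibited', 1_000_000_000):,.0f}")  # 70,000,000
```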
For lawyers: is your AI tool an AI system?
Short answer: yes, but the provider carries the main responsibility.
When you, as an attorney or in-house lawyer, use ChatGPT, Claude, Legora, Juno or LexCodex, you are a "deployer" under the regulation. The provider (OpenAI, Anthropic, the Legora company, Norstedts, LexCodex/Nordicbysight) carries the main obligations connected to placing the system on the market.
As a deployer you have:
- For minimal/limited-risk systems: essentially no AI Act-specific obligations. But GDPR, confidentiality, professional ethics and similar rules apply, of course.
- For high-risk systems (if you use them): Art. 26 imposes requirements on monitoring, documentation, human oversight and incident reporting. However, most legal AI tools are not high-risk.
In practice: if you use a legal AI tool that helps you review contracts, draft NDAs, or research case law, it is most likely limited risk. Your main task is to check that the provider's transparency information is in place; supplying it is the provider's job.
For lawyers: how do you classify clients' AI systems?
This is where it gets interesting. Clients will ask: "is our AI system high-risk?" and you need to be able to answer.
A four-step process (a code sketch of the same logic follows the list):
1. Is it prohibited? Walk through the Art. 5 list. If yes: stop the system.
2. Does it fall under Annex III? Walk through Annex III points 1-8. Common relevant areas: HR/recruitment (point 4), credit scoring (point 5), education (point 3).
3. Does it interact with humans or generate synthetic content? Then it is limited risk with Art. 50 requirements.
4. Otherwise: minimal risk.
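The same decision logic as a minimal sketch. The three booleans compress the actual legal analysis (each must be answered by walking through Art. 5, Annex III and Art. 50 for the concrete system), so this orders the questions rather than replaces the assessment; all names are our own:

```python
def classify(prohibited: bool, annex_iii: bool,
             interacts_or_generates: bool) -> str:
    """Four-step AI Act triage as described above."""
    if prohibited:                 # step 1: Art. 5 list
        return "prohibited - stop the system"
    if annex_iii:                  # step 2: Annex III points 1-8
        return "high-risk"
    if interacts_or_generates:     # step 3: Art. 50 triggers
        return "limited risk (Art. 50 transparency)"
    return "minimal risk"          # step 4: everything else

# Example: a contract-review assistant that chats with users and
# drafts text falls through to step 3.
print(classify(prohibited=False, annex_iii=False,
               interacts_or_generates=True))
# -> limited risk (Art. 50 transparency)
```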
The most common pitfall we see: clients classify their AI system as high-risk "to be safe". That's expensive — high-risk means extensive documentation requirements, a risk management system, quality management, registration in an EU database and ongoing monitoring. Classify correctly, not conservatively.
Case study: we classified LexCodex ourselves
We thought it was reasonable to eat our own dog food. On 3 May 2026 we ran LexCodex's own AI Act tool on LexCodex as an AI system. The result:
Applicable obligations: Art. 50 transparency requirements (inform users about AI interaction and label AI-generated content), plus Art. 25 value-chain responsibilities (provide users with the necessary information).
Documentation: Formal classification report (10 sections following an auditor-friendly structure).
Why not high-risk? LexCodex is a legal AI tool aimed at private lawyers and in-house teams — not at courts or judicial authorities. Annex III point 8 ("AI systems intended for judicial authorities") therefore does not apply. None of Annex III points 1-7 (recruitment, education, credit etc.) are relevant either.
Why limited risk and not minimal? LexCodex generates synthetic textual content (legal analysis) and interacts directly with users. Art. 50 therefore applies. We already meet the requirements: a disclaimer banner above every tool form, a footer disclaimer on every page, all output marked as AI-generated.
GPAI considerations: LexCodex is built on Anthropic's Claude (a GPAI model). The GPAI obligations under Art. 51-55 sit with Anthropic as the provider, not with LexCodex as a downstream user. Our own obligation is Art. 25, which we meet through the transparency disclaimers.
Concrete checklist for lawyers
Before 2 August 2026, work through the following:
- Inventory the AI tools you use. ChatGPT, Claude, Legora, Juno, LexCodex, Lexnova, JP Infonet, Harvey — what licences does the firm have? What are they used for?
- Verify the provider's transparency information. Are you and your clients told that the output is AI-generated? Is the disclaimer clearly visible?
- Review client agreements. Do you have clauses about AI use? Clients will start asking.
- Update information to clients. When you use AI for their matter, inform them (privileged documents should not pass through an AI tool without zero data retention (ZDR) and a data processing agreement (DPA) in place).
- Classify clients' AI systems when they ask. Use an AI Act tool (ours or equivalent) for formal classification. Document the result.
- Train colleagues. Many in the firm don't know what the AI Act means. A 30-minute walkthrough goes a long way.
- Monitor the Commission's delegated acts. Specific technical requirements (labelling, audit logging) will be set out in delegated acts during 2026-2027.
Summary
The EU AI Act is not the end of AI in legal practice — it is a framework for responsible AI use. For most law firms and in-house teams using established legal AI tools, the situation is relatively simple:
- The tool is most likely limited risk or minimal risk
- The provider carries the main responsibility for compliance
- You as a deployer have lighter obligations (inform clients, document use)
- When clients ask about their own AI systems: use a structured classification method instead of guessing
If you want to test how the classification works in practice — try our AI Act tool. It's free to start and takes ~10 minutes to fill in. The result is a formal report you can share with clients or use internally.
Frequently asked questions
When does the EU AI Act take effect?
The AI Act entered into force on 1 August 2024. Prohibited AI practices began applying on 2 February 2025. GPAI obligations apply from 2 August 2025. High-risk systems plus transparency obligations apply from 2 August 2026. High-risk systems under Annex I products apply from 2 August 2027.
Is my law firm's AI use an "AI system" under the AI Act?
The tools themselves are AI systems under the regulation, but as a user of one you are a "deployer", and the obligations sit primarily with the provider. As a deployer you have Art. 26 obligations only if you use a high-risk system, which most legal AI tools are not.
How do I classify an AI system under the AI Act?
Four steps: (1) Is the system prohibited under Art. 5? (2) Does it fall under Annex III (high-risk)? (3) Does it fall under Art. 50 transparency requirements (limited risk)? (4) Otherwise it is minimal risk with no extra requirements. For lawyer-facing AI tools the most common outcome is limited risk.
What does "limited risk" mean under the AI Act?
Limited-risk systems have transparency requirements under Art. 50: users must be informed that they are interacting with AI, and AI-generated content must be labelled. No other mandatory requirements. For legal AI tools that produce analysis, limited risk is the most common outcome.
Classify an AI system with our tool
3-minute question flow, formal classification report on demand. Free during the Pro trial.
Open the AI Act tool →
⚠ This is a general overview of the EU AI Act, not legal advice. For specific classification questions consult a qualified lawyer or use a structured classification tool that documents your assumptions.