The EU AI Act Explained: What Businesses Need to Know in 2026
The EU AI Act entered into force on 1 August 2024. Since then, obligations have been rolling in on a staggered timeline. The most significant deadline for most businesses, full compliance for high-risk AI systems, lands on 2 August 2026.
Most SMEs using AI tools right now are not aware of what the law requires. Some are unknowingly using tools that are already non-compliant. Others are using third-party AI tools in HR, customer service, or operations without checking whether those tools meet the Act's standards.
This guide explains what the EU AI Act is, which risk category your AI use likely falls into, what the fines look like, what is already banned, and what practical steps to take before August 2026.
This is not legal advice. For specific guidance on your situation, consult a qualified legal professional. This is a plain-language overview that will help you understand your obligations and have a more informed conversation with your legal team and your technology partners.
What is the EU AI Act?
The EU AI Act is the world's first comprehensive legal framework specifically regulating artificial intelligence. It applies to any organisation that develops, deploys, imports, or distributes AI systems that affect people in the European Union, regardless of where the organisation itself is based.
That last point is important. If you are a Dutch business using an AI tool built by a company anywhere in the world to process job applications, generate client recommendations, assess credit risk, or make decisions that affect people, the Act applies to you as the deployer, not just to the company that built the tool.
The Act works on a risk-based model. The higher the potential harm of an AI system, the more obligations are placed on the businesses using or building it. A playlist recommendation engine sits at minimal risk with almost no specific requirements. An AI system that influences whether someone gets a job, a loan, or medical care is classified as high risk and carries significant compliance obligations.
What are the key EU AI Act deadlines?
The Act does not apply all at once. It rolls out in phases, and understanding which phase applies to your business is the first practical step.
1 August 2024 was when the Act entered into force across all EU member states, including the Netherlands.
2 February 2025 is when the prohibitions came into effect. A defined list of AI practices became illegal from this date. These are already active now.
2 August 2025 is when obligations for general-purpose AI model providers began. Companies building on top of foundation models like GPT or Claude need to understand what their model provider is responsible for and what passes to them.
2 August 2026 is the critical deadline for most businesses. This is when high-risk AI systems must be fully compliant with the Act's requirements, and for most organisations it is the date to plan around.
2 August 2027 is the deadline for high-risk AI that is embedded into regulated products such as medical devices, machinery, and lifts.
What are the 4 risk tiers in the EU AI Act?
Understanding your risk classification is the single most important step in EU AI Act compliance.
Unacceptable risk - banned outright
These AI practices are prohibited and have been since 2 February 2025. This includes AI systems that manipulate people through subliminal techniques, exploit vulnerabilities of specific groups, enable social scoring by public or private organisations, perform real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), infer emotions in workplaces or educational settings, and predict criminality based on profiling alone. If any tool you use falls into this category, you are already non-compliant.
High risk - significant obligations from August 2026
This is where most Dutch businesses using AI in meaningful processes need to pay close attention. According to the Dutch government's business portal, high-risk AI systems include those used in hiring and HR decisions, credit scoring and financial assessment, access to essential services, educational assessment, law enforcement, migration and border control, and administration of justice. If you use AI to screen job candidates, assess creditworthiness, or support decisions that significantly affect someone's life, this tier most likely applies to you.
Limited risk - transparency obligations only
AI systems that interact with people or generate content, including chatbots and AI-generated text, must clearly disclose that AI is involved. From August 2026, customer service chatbots must tell users they are talking to an AI. This obligation is light compared to high risk but is already partially active.
Minimal risk - no specific requirements
AI used for spam filters, games, and manufacturing quality control falls here. No specific EU AI Act obligations apply beyond general good practice.
What is already banned since February 2025?
The prohibitions section of the EU AI Act came into force on 2 February 2025, which means certain AI practices are already illegal in the EU today.
The banned practices include AI that uses subliminal techniques to distort someone's behaviour in a harmful way, AI that exploits the vulnerabilities of specific groups such as children or people with disabilities, government and private sector social scoring systems, AI that predicts criminal behaviour based solely on profiling or personality traits, biometric categorisation systems that infer sensitive attributes like political opinions or sexual orientation from biometric data, and real-time remote biometric identification in public spaces by law enforcement, with narrow exceptions.
A practical check is worth doing here. Many modern HR tools, marketing personalisation platforms, and risk scoring systems embed AI in ways that their users do not fully understand. The fact that a vendor describes their tool as "AI-powered" without calling it high risk does not protect you as the deployer. You are responsible for checking what the tool actually does.
What are the EU AI Act fines?
The penalty structure under the EU AI Act is steeper than GDPR. Article 99 of the Act sets out a 3-tier fine structure.
Using prohibited AI practices carries fines of up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with high-risk system obligations carries fines of up to 15 million euros or 3% of global turnover. Providing incorrect or misleading information to authorities carries fines of up to 7.5 million euros or 1% of turnover.
For SMEs, fines are capped at the lower of the percentage and absolute thresholds. A startup with 2 million euros in annual revenue faces a maximum Tier 1 fine of around 140,000 euros rather than 35 million. That is still a material number for a small business, and the reputational damage from a public enforcement action is harder to quantify and potentially more costly.
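The SME capping rule is simple arithmetic, and it is easy to check against your own numbers. A minimal sketch of the logic described above (illustrative only, not legal advice; the tier thresholds are passed in as parameters rather than hardcoded):

```python
def max_fine_eur(turnover_eur: float, absolute_cap_eur: float,
                 pct_cap: float, is_sme: bool) -> float:
    """Ceiling on a fine under the tiered structure described above.

    Larger companies face the higher of the absolute and percentage
    figures; SMEs face the lower of the two.
    """
    percentage_based = turnover_eur * pct_cap
    if is_sme:
        return min(absolute_cap_eur, percentage_based)
    return max(absolute_cap_eur, percentage_based)

# The startup example from the text: 2M EUR turnover, Tier 1 ceiling
# (35 million euros or 7% of turnover).
print(max_fine_eur(2_000_000, 35_000_000, 0.07, is_sme=True))  # 140000.0
```

Running the same inputs with is_sme=False returns the full 35 million euro ceiling, which is the gap the SME cap is designed to close.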
In the Netherlands, the Dutch Data Protection Authority has been designated as a key enforcement body. Enforcement is expected to begin in earnest from August 2026, though prohibitions are already enforceable now.
Who does the EU AI Act apply to?
The Act distinguishes between different players in the AI supply chain, each with different obligations.
Providers develop and place AI systems on the market. If you are building an AI-powered product or SaaS platform and selling or licensing it to others, you are a provider. Providers carry the heaviest obligations: technical documentation, conformity assessments, CE marking for high-risk systems, and registration in the EU database.
Deployers are organisations that use AI systems in the course of their professional activities. This is most SMEs. If you use an AI hiring tool, a credit scoring system, or an AI-powered document processor in your business, you are a deployer. Deployers are responsible for using systems only in their intended way, ensuring human oversight where required, and for public sector organisations, conducting fundamental rights impact assessments.
Importers and distributors who bring AI systems into the EU from outside or distribute them to others also carry specific obligations.
The key point for most SMEs: even if you did not build the AI tool and are just using a SaaS subscription from a vendor, you still bear compliance obligations as the deployer. Checking your vendor contracts is one of the first practical steps to take.
How to prepare for the EU AI Act: 6 practical steps
Step 1: Inventory all AI tools in use
Many organisations do not have a complete picture of the AI running in their business. This includes third-party SaaS tools, AI features embedded in CRM or HR platforms, and any custom-built AI systems. Start by listing every tool that uses AI in any business function.
Step 2: Classify each tool by risk tier
For each tool, determine which risk tier applies. The central question is whether the AI influences a decision that significantly affects a person's access to opportunities, services, or rights. If yes, it is almost certainly high risk.
Step 3: Update your vendor contracts
For every high-risk AI tool from a third-party vendor, your contract should now include clauses addressing EU AI Act compliance. If it does not, engage your vendors and update those agreements before August 2026. As a deployer you are exposed if a vendor's system is non-compliant.
Step 4: Implement human oversight for high-risk decisions
High-risk AI systems require meaningful human oversight. A human must be able to understand, review, and override the AI's output before it affects someone. Document how this is implemented for each high-risk system.
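In software terms, one common way to implement this is a gate where no AI output takes effect until a named reviewer records a decision. A minimal sketch of that pattern, with hypothetical names and fields (the Act requires meaningful oversight but does not prescribe this exact structure):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AIOutput:
    subject_id: str
    recommendation: str
    rationale: str        # plain-language explanation the reviewer can question

@dataclass(frozen=True)
class HumanDecision:
    output: AIOutput
    reviewer: str
    accepted: bool        # the reviewer can override the AI either way
    note: str

def finalise(output: AIOutput, decision: Optional[HumanDecision]) -> str:
    """Only a recorded human decision can make an AI recommendation effective."""
    if decision is None:
        raise RuntimeError("no human review recorded; output must not take effect")
    if not decision.accepted:
        return f"overridden by {decision.reviewer}: {decision.note}"
    return f"approved by {decision.reviewer}: {output.recommendation}"
```

The point of the gate is that the happy path is impossible without a human in the record: the system fails closed, and every final decision carries the name of the person who made it.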
Step 5: Establish AI literacy in your organisation
The obligation for staff to be AI literate has been active since February 2025. This does not mean everyone needs to understand machine learning. People who work with AI tools should understand how they work, what their limitations are, and what risks are involved.
Step 6: Build your compliance documentation
For high-risk systems you develop or deploy, maintain records demonstrating compliance. You should be able to show, if asked by an authority, that you assessed the risk and took appropriate measures.
What does the EU AI Act mean if you are building AI software?
If you are building an AI automation system or an AI-powered SaaS platform, the EU AI Act has direct implications for how you design and build.
The classification of your platform matters first. If you are building a subsidy analysis tool, a credit assessment platform, a hiring assistant, or any system that influences significant decisions affecting people, you are likely building a high-risk AI system. That means designing for compliance from the start: meaningful human oversight, transparent outputs that users can understand and question, proper logging and documentation, and data governance that meets the Act's requirements.
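As an illustration of the logging point, a high-risk platform might record each decision as a self-verifying audit entry. This is a sketch under assumed field names: the Act requires logging capability for high-risk systems, but it does not mandate this particular format.

```python
import datetime
import hashlib
import json

def audit_entry(system_id: str, input_summary: str, output_summary: str,
                reviewer: str, decision: str) -> dict:
    """Build one audit-trail record for a high-risk AI decision (illustrative)."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": reviewer,
        "decision": decision,
    }
    # A checksum over the record lets an auditor detect after-the-fact edits.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["checksum"] = hashlib.sha256(payload).hexdigest()
    return entry
```

Entries like this, written append-only, give you exactly the artefact an authority or enterprise client will ask for: who decided what, based on which output, and when.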
Compliance is also cheaper to build in than to retrofit. A platform designed with risk documentation, human oversight controls, and explainable outputs will cost less to keep compliant than one that needs restructuring after launch. The implementation timeline for organisations with any complexity is typically 12 to 18 months, which is why starting now matters.
For the businesses your platform serves, EU AI Act compliance can be a practical selling point in enterprise and public sector sales. A platform that makes it straightforward for clients to demonstrate human oversight of AI recommendations is more valuable than one that does not.
How Codelevate approaches this
At Codelevate, we include an EU AI Act compliance review as part of the discovery process on every AI platform build. This means we help you classify what you are building, identify which obligations apply to you as the provider, and design the architecture to address them from the start.
Specifically, this includes discussing with you where human oversight should sit in the user flow, how to structure logging and audit trails for high-risk decisions, what transparency disclosures need to be visible to users, and what documentation should be produced and maintained as part of the project.
We are not a law firm and we do not provide legal advice. But we understand the technical requirements the Act places on software builders, and we make sure those requirements are thought through before a line of code is written. You can read more about how we work on our process-to-product service page.
What happens if you wait?
The temptation for most businesses is to wait and see. To monitor what other companies do, watch whether the deadline shifts, and act when enforcement starts rather than when the law applies.
The problem with this is that compliance takes time. For a high-risk AI system, the process of conducting a risk assessment, implementing oversight, creating technical documentation, updating vendor contracts, and training your team is not a one-week exercise.
There is also a secondary risk that is harder to quantify but real: enterprise clients, investors, and public sector partners are increasingly including AI Act compliance status in their due diligence and procurement processes. A business that cannot demonstrate a credible compliance posture is at a growing disadvantage in those conversations.
The businesses that start now have time to build compliance thoughtfully. The businesses that wait until August 2026 will be managing a crisis on a short timeline.
Conclusion
The EU AI Act is not something coming in the future. It is an active legal framework with live prohibitions, rolling deadlines, and real financial penalties. The most significant deadline for most Dutch businesses is 2 August 2026.
The right response is a structured review of what AI you are using, how it is classified, what your vendors are responsible for, and what you need to do. Most of that work is straightforward once you start. The mistake is not starting.
If you are building an AI platform and want to make sure compliance is part of the build from day one, book a free scan with us. We will map what you are building, flag where the EU AI Act applies to your situation, and give you a written plan.
Book your free scan here.
