Guide to the EU AI Act for businesses outside the EU
A comprehensive overview of the 2024 EU AI Act, its key provisions and the timeline for compliance
Basics of the EU AI Act
When?
1 August 2024 to 2 August 2027
The EU AI Act came into force on 1 August 2024. However, its provisions apply to AI Systems and businesses in phases, from 2 February 2025 until near-complete implementation in August 2027. By 2 August 2026, most of its key provisions will be in effect.
What?
This is a new law that applies directly in all EU Member States, in addition to other EU regulations and directives. It is intended to govern the USE, DISTRIBUTION, DEVELOPMENT and DEPLOYMENT of AI Systems, a term which the Act defines very broadly.
How?
The Act classifies AI Systems into four risk categories: Unacceptable Risk, High Risk, Limited Risk and Minimal Risk.
The Act imposes restrictions on AI Systems according to their risk level. Unacceptable Risk AI Systems are in principle completely prohibited. High-Risk AI Systems are the regulatory focus of the Act, with numerous requirements and obligations. Other AI Systems are mainly subject to obligations relating to AI literacy and, in some cases, transparency. A clear identification of the risk level of each AI System is therefore critical. General Purpose AI models (GPAI models) are regulated separately under the EU AI Act and are subject to many technical and organizational obligations.
Impact on NON-EU Business
The EU AI Act is extraterritorial and applies to businesses outside the EU, including in Asia, in a wide range of circumstances.
Businesses in Asia with no presence in the EU will be subject to the EU AI Act in the following circumstances:
- providers placing on the market or putting into service AI systems or placing on the market GPAI models in the EU;
- providers and deployers of AI systems that are located in Asia, where the output produced by the AI system is used in the EU;
- product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trade mark;
- importers located in the EU making AI systems available in the EU for first use;
- distributors making AI systems available in the EU for distribution or use.
Examples of when the EU AI Act may apply to businesses outside the EU
- A Korean company makes available online an AI system that creates corporate photographs of individuals, and users in the EU access it, upload photos of themselves and create images using the AI system.
- A company based in Singapore that develops a corporate training system that uses AI for personalized training and is hired by a Thai multinational company with some employees in the EU.
- An Indian company that provides HR services to a US company including recruitment and that uses AI systems for first level screening of applicants including applicants that may be based in the EU applying for jobs in Asia.
- A car manufacturer in Japan that sells cars under its brand around the world including in the EU that uses an AI-enabled assisted braking system.
- A Taiwanese company that distributes an AI credit rating system that it bought or licensed from a developer and sells it to businesses in the EU.
- A company based in India that purchases a customer support AI system already available in the EU and further licenses it within the EU.
What do businesses need to do?
1. Assess your existing technology and systems to identify any AI applications you may be using. Are there AI Systems used in your services or products?
2. Classify the risk associated with the type of AI System you are dealing with: The EU AI Act classifies AI Systems into four risk categories: Unacceptable Risk, High Risk, Limited Risk, Minimal Risk.
3. Develop a compliance plan (including a gap analysis and gap “closing”). The AI Act is geared towards the regulation of High-Risk AI Systems; if you provide or deploy High-Risk AI Systems, you need to assess and prepare immediately for compliance with the upcoming requirements. If you deploy a Limited Risk or Minimal Risk AI System, you need to meet transparency and AI literacy requirements and, potentially, codes of conduct. If you provide or deploy Unacceptable Risk AI Systems, these cannot be offered or used in the EU at all.
- Identification of responsibilities
- Identification of processes
- Identification of AI systems
- Identification of Agreements
- Qualification of AI systems
- Identification of applicable legal framework
- Identification of compliance requirements (e.g. technical and organizational measures (“TOMs”), transparency)
- Identification of gaps
- Definition of priorities
- Closing of gaps
- Trainings
- Transparency notices
- Technical implementation
- Organizational measures
- Documentation
- Policies, standards
- Periodic review
- Adjustments
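The inventory, qualification and gap-closing steps above can be tracked with one record per AI System. Below is a minimal, illustrative Python sketch of such an inventory; all class names, fields and example entries are assumptions for illustration, not structures prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # The four risk categories under the EU AI Act
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative structure only)."""
    name: str
    role: str                                        # e.g. "provider" or "deployer"
    risk_tier: RiskTier
    gaps: list[str] = field(default_factory=list)    # open compliance gaps
    closed: list[str] = field(default_factory=list)  # remediated items

    def close_gap(self, gap: str) -> None:
        # Move a gap from "open" to "closed" once remediated
        if gap in self.gaps:
            self.gaps.remove(gap)
            self.closed.append(gap)

inventory = [
    AISystemRecord("cv-screening-tool", "deployer", RiskTier.HIGH,
                   gaps=["human oversight", "log retention"]),
    AISystemRecord("marketing-chatbot", "provider", RiskTier.LIMITED,
                   gaps=["transparency notice"]),
]

# Prioritise High-Risk systems first, as the compliance plan above suggests
high_risk_first = sorted(inventory, key=lambda r: r.risk_tier is not RiskTier.HIGH)
```

A register like this also supports the periodic-review step: re-running the qualification against each record surfaces systems whose risk tier or gap list has changed.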
What are the penalties for non-compliance?
Non-compliance with the Act's various prohibitions and provisions attracts administrative fines ranging from EUR 7.5m to EUR 35m or, in the alternative, 1% to 7% of total worldwide annual group turnover for the preceding financial year, whichever is higher.
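Because the cap is the higher of the fixed amount and the turnover percentage, the same tier can produce very different maximum fines for groups of different sizes. A small illustrative helper, assuming the figures quoted above (the tier values passed in are examples, not legal advice):

```python
def max_fine(fixed_cap_eur: int, turnover_pct: int, turnover_eur: int) -> int:
    """Maximum administrative fine for a tier: the HIGHER of a fixed amount
    and turnover_pct % of total worldwide annual group turnover for the
    preceding financial year (all amounts in integer EUR)."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct // 100)

# Top of the range quoted in the text: EUR 35m or 7% of turnover.
large_group = max_fine(35_000_000, 7, 2_000_000_000)  # 7% of 2bn = 140m > 35m
small_group = max_fine(35_000_000, 7, 100_000_000)    # 7% of 100m = 7m < 35m cap
```

For a group with EUR 2bn turnover the turnover-based figure governs (EUR 140m); for a EUR 100m group the fixed cap of EUR 35m governs.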
What is an Unacceptable Risk AI System?
These are AI systems that contravene the fundamental values of the EU. Examples include:
- AI systems that manipulate human behaviour and cause harm
- Social scoring or rating systems, whether carried out by public or private actors
- Predicting the criminal propensity of individuals based solely on profiling or scoring
- AI systems that categorize people according to sensitive traits, e.g. race, religion or political beliefs
What is a High-Risk AI System?
Products
AI systems that are used as a safety component of a product, or that are themselves the product, where such products are regulated by the EU product safety legislation listed in Annex I of the Act, may be considered High-Risk AI Systems. These products include medical devices, recreational craft and personal watercraft, lifts, toys, equipment, unmanned aircraft, personal protective equipment, vehicles, etc.
AI systems in the following areas, as listed in Annex III of the Act, may be considered High-Risk AI Systems:
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
What are the requirements relating to High-Risk AI Systems?
- Risk Assessment and Management must be implemented, documented and maintained as a continuous iterative process to run through the life cycle of the AI System.
- Data Governance and quality criteria must be adopted with respect to data sets that are used for training, validation and/or testing of AI models.
- Technical documentation meeting the specified requirements must be drawn up before the AI System is placed on the market or put into service and must be kept up to date.
- Record-Keeping – High risk AI Systems must technically allow for the automatic recording of events and log-keeping for the entire life of the system.
- Transparency and provision of information to deployers to ensure the operation of the system is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately including information on the characteristics, capabilities and limitations of performance.
- Human Oversight – High-Risk AI Systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons, in order to prevent or minimise the risks to health, safety or fundamental rights that may emerge.
- Accuracy, Robustness and Cybersecurity – systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, meeting the standards and methodologies to be developed for this purpose.
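The record-keeping requirement above means the system must technically support automatic, lifetime event logging. A minimal sketch of what such append-only logging could look like; the field names and event types are illustrative assumptions, as the Act mandates automatic recording of events but does not prescribe a log format.

```python
import io
import json
import time

def log_event(stream, event_type: str, details: dict) -> None:
    # Append one timestamped, machine-readable record to an append-only log.
    # Field names ("ts", "event", "details") are assumptions for illustration.
    record = {"ts": time.time(), "event": event_type, "details": details}
    stream.write(json.dumps(record) + "\n")

# Example: events recorded automatically over the life of the system
# (an in-memory buffer stands in for durable, retained storage).
buf = io.StringIO()
log_event(buf, "inference", {"model": "credit-scoring-v2", "outcome": "declined"})
log_event(buf, "human_override", {"operator": "reviewer-17"})
```

One line of JSON per event keeps the log both machine-parseable and easy to retain for the entire life of the system, which is what the record-keeping requirement demands.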
What are the key obligations of Providers of High-Risk AI Systems?
- ensure that their High-Risk AI Systems are compliant with the requirements, in particular technical and organizational requirements;
- indicate on the High-Risk AI System, their name, registered trade mark, and address;
- maintain a quality management system in place;
- maintain documentation relating to technical specifications, quality management and conformity, etc;
- maintain the logs automatically generated by the AI system;
- ensure that the High-Risk AI System undergoes the relevant conformity assessment procedure, prior to its being placed on the market or put into service;
- make a declaration of conformity;
- affix the CE marking to the High-Risk AI System;
- comply with registration obligations to register on the EU database.
What are the key obligations of Deployers of High-Risk AI Systems?
- Deployers of High-Risk AI Systems must take appropriate technical and organisational measures to ensure the systems are used in accordance with the instructions.
- Deployers must provide competent human oversight by natural persons who have training, authority and support.
- The deployer should ensure that input data is relevant and sufficiently representative taking into account the intended purpose of the High-Risk AI System.
- Deployers should monitor the operation of the High-Risk AI System on the basis of the instructions for use and inform providers in accordance with the requirements. Where deployers consider that the use of the High-Risk AI System may result in certain specified risks, they shall notify the provider and the relevant authority, suspend the use of that system and immediately notify if any serious incident occurs.
- Deployers should keep the logs automatically generated by that High-Risk AI System.
- Before use at the workplace, employers must inform affected workers that they will be subject to the use of the High-Risk AI System.
- Deployers of high-risk AI systems that make decisions related to individuals shall inform the persons that they are subject to the use of the High-Risk AI System.
What are the key obligations of Importers and Distributors?
Distributors as well as Importers of High-Risk AI Systems must mainly verify that the Provider has complied with its obligations, refrain from making a non-conforming system available on the market, and inform the Provider and the competent authorities of any non-conformity.
Final point – Don’t forget compliance with local requirements
Many countries are putting in place guidelines, frameworks and recommendations relating to the use and deployment of AI Systems. Some countries have independent laws and regulations applicable to the use of AI Systems. It is important to ensure that you also understand what local law requirements you may be subject to, in addition to the EU AI Act.
Future
The European Commission is continuing to work on standards and detailed regulations, and we expect to see more rules and regulations being issued.