On 01.08.2024, the new EU Regulation laying down harmonized rules on artificial intelligence (AI Regulation) entered into force. With the AI Regulation, the European Union aims at a comprehensive, risk-based regulation of artificial intelligence systems. With complex and sometimes highly differentiated rules, the EU seeks to minimize potential AI risks and to protect the safety and rights of EU citizens. The AI Regulation is product safety legislation with a graduated, risk-based approach: the higher the classified risk of an AI system, the more extensive and stricter the regulatory requirements for its addressees, in particular providers and operators of AI systems.
Companies using AI systems must implement a number of AI compliance measures under the AI Regulation. These initially include an internal inventory, documentation and analysis of the AI systems in use, training measures to build AI competence for their use within the company, a risk assessment of those systems, and the preparation of user information and risk management measures.
I. Gradual applicability of the AI Regulation over time
Although the AI Regulation has already entered into force, its provisions become applicable in stages.
The general provisions of the AI Regulation, Art. 1-4, including the obligation to build AI competence through training (Art. 4), as well as the prohibition of AI systems posing unacceptable risks, Art. 5, apply from 02.02.2025.
The obligations relating to general-purpose AI models (GPAI) apply from 02.08.2025. Beyond that, the AI Regulation is generally applicable two years after its entry into force, i.e. from 02.08.2026. The more extensive rules on the high-risk systems listed in Annex I of the AI Regulation only apply after three years, i.e. from 02.08.2027.
Regardless of the timing, companies are advised to familiarize themselves with the regulations applicable to their AI systems as early as possible and to take the necessary measures for AI compliance.
II. Definition of the AI system, Art. 3 No. 1 AI Regulation
The material applicability of the AI Regulation depends crucially on whether the tool used falls under the definition of an AI system.
According to Art. 3 No. 1 of the AI Regulation,
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Recital 12 of the AI Regulation provides further explanation of this complex definition. A central element of the new definition is the capability of such systems to derive certain results, such as content, predictions, recommendations or decisions, from the input they receive, for instance by means of machine learning techniques or algorithms. Through its capacity for autonomous learning, reasoning and modeling, an AI system goes beyond simpler conventional data processing systems, which operate exclusively on the basis of rules defined by natural persons for the automatic execution of operations. Some argue that the input of training data into the system is decisive for qualification as an AI system.
Beyond well-known systems such as ChatGPT, DeepL, Gemini or Alexa, it will not always be easy to determine whether an application qualifies as an AI system within the meaning of the AI Regulation. For example, are word processing programs that flag possible grammatical errors already AI systems, or programs that make text suggestions based on previous user input or a database? Until such questions are clarified through the practical application of the AI Regulation, users are advised to interpret the definition broadly.
It should also be noted that certain AI systems are excluded from the scope of the AI Regulation (see Art. 2 of the AI Regulation). These include AI systems used purely privately, for national security, defense and military purposes, or for research and development, AI systems in the development phase before being placed on the market and, to a certain extent, open-source systems as well as AI systems grandfathered under Art. 111 of the AI Regulation.
III. Addressees of the AI Regulation
The AI Regulation addresses certain “actors”, Art. 3 No. 8 AI Regulation, including providers, operators, product manufacturers, authorized representatives, importers and distributors.
In practice, the most important of these are providers, Art. 3 No. 3 AI Regulation. Providers are persons, authorities, institutions or other bodies that develop an AI system or a general-purpose AI model, or have one developed, and place it on the market or put it into service under their own name or trademark, whether for payment or free of charge.
Equally important are operators (referred to as “deployers” in the official English version), Art. 3 No. 4 AI Regulation. These are persons, authorities, institutions or other bodies that use an AI system under their own responsibility. Operators within the meaning of this definition do not include those who use an AI system exclusively in the course of a personal, non-professional activity.
The AI Regulation imposes specific, differentiated obligations on providers and operators.
It should be noted that not only actors established within the EU but also actors in third countries (e.g. the USA, UK or China) can be addressees of the AI Regulation if they place AI systems on the market in the Union or if the outputs generated by their AI systems are used as intended within the EU. As with the GDPR in the area of data protection, the market location principle applies here as well.
IV. Categorization and classification of AI systems
a) AI systems and AI models
In addition to the key definition of the AI system, the AI Regulation also operates with the concept of the AI model. An AI model is the functionality behind the system, i.e. the mathematical or algorithmic core of the AI system: typically a trained statistical function or a neural network that uses machine learning to recognize patterns in data and make predictions.
For providers (not operators) of general-purpose AI models (GPAI), e.g. the large language models (LLMs) underlying services such as ChatGPT, special regulatory requirements apply in accordance with Art. 51 et seq. AI Regulation.
b) Risk classes
A central element of the AI Regulation is the categorization of AI systems into risk classes. A distinction is made between prohibited AI systems, high-risk AI systems, AI systems with limited risk and AI systems with minimal risk. Depending on the classification of the AI system, the AI Regulation imposes graduated requirements, ranging from an outright ban through detailed obligations and certain transparency duties to freedom from regulation.
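For illustration only, this tiered logic can be pictured as a simple lookup from risk class to regulatory consequence. The following Python sketch uses our own labels and names; none of them are terms or structures prescribed by the AI Regulation.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk tiers of the AI Regulation (illustrative labels)."""
    PROHIBITED = "unacceptable risk, Art. 5"
    HIGH_RISK = "high risk, Art. 6 et seq."
    LIMITED_RISK = "limited risk, Art. 50"
    MINIMAL_RISK = "minimal risk"

# Graduated regulatory consequences, as described in the text above.
CONSEQUENCES: dict[RiskClass, str] = {
    RiskClass.PROHIBITED: "ban on placing on the market, putting into service and use",
    RiskClass.HIGH_RISK: "extensive requirements (Arts. 8-15) incl. conformity assessment",
    RiskClass.LIMITED_RISK: "transparency obligations under Art. 50",
    RiskClass.MINIMAL_RISK: "no specific obligations; voluntary codes of conduct, Art. 95",
}

for risk_class in RiskClass:
    print(f"{risk_class.name}: {CONSEQUENCES[risk_class]}")
```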
ba) Prohibited AI systems, Art. 5 AI Regulation
Art. 5 of the AI Regulation prohibits the placing on the market, putting into service and use of certain AI systems that are associated with unacceptable risks.
This includes AI systems with the aim of manipulating people subliminally and thus causing significant harm (Art. 5 a) AI Regulation) or AI systems with the aim of exploiting the weakness or vulnerability of certain persons or groups of persons to cause them significant harm (Art. 5 b) AI Regulation).
Art. 5 c) of the AI Regulation prohibits AI systems for social scoring applications that lead to discriminatory results, unjustified disadvantage or exclusion of certain natural persons or groups of persons.
The further prohibitions in Art. 5 d)-h) of the AI Regulation, each subject to certain exceptions, cover risk assessment systems for predicting criminal offences, the untargeted scraping of facial images, emotion recognition in the workplace and in educational institutions, biometric categorization and real-time remote biometric identification in publicly accessible spaces.
Prohibited AI systems within the meaning of Art. 5 of the AI Regulation, which already applies, must be discontinued immediately. This can be done, for example, by deleting or destroying such a system or by modifying it so that it is no longer prohibited. Contracts concerning prohibited AI systems are likely to be null and void for violation of a statutory prohibition.
bb) High-risk AI systems, Art. 6 et seq. AI Regulation
The rules on high-risk AI systems form the central and most comprehensive part of the AI Regulation. AI systems which, in the view of the legislator, pose a high risk to health, safety or fundamental rights may only be placed on the market, put into service or used within the EU if a number of mandatory requirements, including a conformity assessment, have been met.
bba) Classification
Art. 6 of the AI Regulation distinguishes between two categories of high-risk AI systems.
First, according to Art. 6 (1) of the AI Regulation, a high-risk AI system exists if the system is intended to be used as a safety component of a product covered by the legislation listed in Annex I of the AI Regulation, or is itself such a product. In addition, the product safety legislation listed in Annex I must require a third-party conformity assessment for that product. According to Art. 43 (3) of the AI Regulation, the additional requirements of the AI Regulation for the high-risk AI system are to be examined as part of the existing conformity assessment procedure applicable under Annex I. It remains to be seen how these requirements will be implemented in practice.
Second, pursuant to Art. 6 (2) of the AI Regulation, stand-alone systems falling under Annex III of the AI Regulation are deemed high-risk AI systems. These include biometric systems, AI systems used as safety components of critical infrastructure, and certain AI systems in the areas of education, employment and personnel management, access to and use of essential private and public services and benefits, law enforcement, migration and asylum, as well as the administration of justice and democratic processes (e.g. elections).
The classification and categorization as a high-risk AI system will pose practical difficulties in many cases. Art. 96 of the AI Regulation therefore provides that the EU Commission will issue guidelines on the practical implementation of Art. 6 and other provisions.
bbb) Regulatory requirements for high-risk AI systems
If an AI system is classified as a high-risk AI system, Articles 8-15 of the AI Regulation impose extensive requirements that must be met before it is placed on the market. These include in particular:
- a risk management system (Art. 9),
- data and data governance requirements for training, validation and test data (Art. 10),
- technical documentation (Art. 11),
- record-keeping and logging capabilities (Art. 12),
- transparency and the provision of information to operators (Art. 13),
- human oversight (Art. 14), and
- an appropriate level of accuracy, robustness and cybersecurity (Art. 15).
All of these requirements will entail considerable additional work, especially for providers and, in some cases, operators of high-risk AI systems. Further clarifications are to be provided by supplementary technical standards, Art. 40 AI Regulation.
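Purely as a working aid, the requirements listed above can be tracked in a simple checklist. The following Python sketch (with our own labels and a hypothetical status dictionary) shows one possible way to keep sight of open items; it is not an official compliance tool.

```python
# Requirements for high-risk AI systems, Arts. 9-15 AI Regulation.
HIGH_RISK_REQUIREMENTS = {
    "Art. 9": "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation",
    "Art. 12": "record-keeping and logging",
    "Art. 13": "transparency and information for operators",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness and cybersecurity",
}

def open_items(status: dict[str, bool]) -> list[str]:
    """Return all requirements not yet marked as fulfilled."""
    return [
        f"{article}: {requirement}"
        for article, requirement in HIGH_RISK_REQUIREMENTS.items()
        if not status.get(article, False)
    ]

# Hypothetical status: only the risk management system is in place so far.
print(open_items({"Art. 9": True}))
```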
cc) AI systems with limited risk
If AI systems are neither prohibited nor subject to the strict requirements for high-risk AI systems described above, they may fall into the category of AI systems with limited risk. This applies to systems such as customer service chatbots, AI-generated content such as deepfakes, or simple development systems. Such systems are subject to the transparency obligations under Art. 50 of the AI Regulation: reasonably well-informed and attentive users must be informed that they are interacting with an AI, and artificially generated audio or video content must be labeled as AI-generated.
dd) AI systems with minimal risk
Such AI systems are not subject to any specific obligations under the AI Regulation. They are likely to include, for example, spam filters or video games with AI elements. Providers or operators of such systems may in future voluntarily submit to certain codes of conduct (see Art. 95 of the AI Regulation).
V. Administration and sanctions
The AI Regulation provides for the establishment of various authorities at EU and national level. At EU level, these include the AI Office and the European AI Board made up of representatives of the EU Member States. The process of setting up these bodies is not yet complete. At national level in Germany, the Federal Network Agency (Bundesnetzagentur) is expected to be designated as the supervisory authority for enforcing the AI Regulation.
From 02.08.2025, the sanctions regime of the AI Regulation also applies, with the exception of Art. 101. Depending on the type and severity of the infringement, severe fines are possible, ranging up to EUR 35 million or 7% of total worldwide annual turnover for the use of prohibited AI systems. In addition, infringements of the AI Regulation can trigger further claims, for example under the GDPR or civil law claims for damages. In some cases, violations of the AI Regulation will also render the AI system offered legally defective.
VI. Outlook
Any actor that places on the market or uses AI systems with more than minimal risk in the EU will be subject to additional compliance obligations to a certain extent. These include, at a minimum, promptly building AI competence and meeting transparency obligations.
The first step will be to take an inventory of AI systems in a directory that systematically records and documents the AI used in the company. This will include, among other things, the name of the tool, its functionality and intended areas of application, the persons involved within the company, data sources and any processing of personal data; one possible structure is sketched below.
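One way to set up such a directory is a structured record per AI system. The following Python sketch uses hypothetical field names reflecting the data points mentioned above; the AI Regulation does not prescribe any particular format.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the company's AI inventory; all field names are illustrative."""
    tool_name: str                    # name (and version) of the tool
    functionality: str                # what the system does
    intended_uses: list[str]          # intended areas of application
    internal_stakeholders: list[str]  # persons/departments involved in the company
    data_sources: list[str]           # relevant data sources
    processes_personal_data: bool     # flags parallel GDPR obligations
    risk_class: str = "unclassified"  # to be filled in during categorization

# Hypothetical example entry.
inventory = [
    AISystemRecord(
        tool_name="Support chatbot v2",
        functionality="answers customer questions in a dialogue interface",
        intended_uses=["first-level customer support"],
        internal_stakeholders=["customer service", "IT"],
        data_sources=["FAQ database", "anonymized chat transcripts"],
        processes_personal_data=True,
    ),
]
```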
On that basis, an initial categorization can be made, including a determination of the resulting compliance obligations. Even though the AI Regulation, unlike data protection law with its data protection officer, does not necessarily require the appointment of an AI officer, it is advisable to appoint a trained AI manager within the company with interfaces to legal, data protection and IT. If the categorization reveals that high-risk or even prohibited AI systems are present in the company, extensive compliance measures must be taken to meet the requirements of the AI Regulation.
Taking into account the results of the inventory and the initial categorization of the AI systems used, companies should already now build sufficient AI competence, Art. 4 AI Regulation, through adequate training of the employees concerned.