L&M Finance Group

European Parliament adopts the world's first law on artificial intelligence

AI is a technology that allows computers and machines to imitate human intelligence through simple algorithmic actions, using multi-level neural networks for cyclic (continuous) training of the system; on certain tasks, the problem-solving capacity of such systems significantly exceeds that of the human brain. Does it sound complicated?

Let's consider an alternative option, agreed on the basis of the Organisation for Economic Co-operation and Development (OECD) definition and enshrined in the text of the AI Law.

An AI system means a machine-based system that, in order to achieve explicit or implicit goals, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that may affect physical or virtual environments. Different AI systems differ in their level of autonomy and adaptability after deployment.

This definition focuses on two key points: AI operates with varying levels of autonomy (automatically processing data of different volumes) and infers from the input data it receives how to generate results (that is, it learns independently on an ongoing basis).

This is, in essence, the position taken by the AI Law, because the ability of an AI system to draw inferences goes beyond basic data processing and enables learning, reasoning, or modeling. However, the text also sets out conditions under which the use of certain AI is permitted, including that all AI systems developed, distributed and used in the EU must be secure, transparent, traceable, non-discriminatory and environmentally friendly.

Regulating AI and ensuring human oversight of AI systems is part of the EU's digital strategy.

Let's turn to the text of the Law to determine what exactly it will regulate.

SUBJECT OF THE LAW

The Law obliges all EU Member States to fulfill the basic conditions set forth in its text, namely:
  • harmonization of the rules for placing AI systems on the market, putting them into service and using them in the EU;
  • harmonization of transparency rules for certain AI systems, including powerful general-purpose models;
  • prohibition of certain AI practices (training, use, development) classified as unacceptable under the Law's risk classification;
  • defining special requirements for high-risk AI systems and obligations for operators of such systems to fulfill such conditions;
  • defining the rules of market monitoring, supervision and management;
  • implementation of measures to support innovations in the development of AI systems in accordance with the requirements of the text of the Law.

SCOPE OF THE LAW
  • applies to suppliers, i.e. organizations that develop AI systems for the purpose of placing them on the market or putting them into operation under their own name or trademark, whether for a fee or free of charge;
  • applies to importers and distributors of AI systems in the European Union;
  • applies to deployers, defined as natural or legal persons who use an AI system under their own authority in the course of their professional activities;
  • applies to suppliers and deployers of AI systems that are domiciled or located in a third country, where the output produced by the AI system is used in the European Union;
  • applies to Product Manufacturers who place on the market or put into operation an AI system together with their product and under their own name or trademark;
  • applies to Authorized Representatives of suppliers that are not registered in the European Union;
  • applies to Authorized Authorities of EU Member States;
  • applies to AIs that meet the basic requirements of the Law in terms of safety, transparency, traceability, non-discrimination and environmental friendliness;
  • applies to AI created exclusively for military, defense, or national security purposes but used, temporarily or permanently, for other purposes, such as civilian or humanitarian purposes, law enforcement, or public safety;
  • applies to Affected Persons located in the European Union.

It can therefore be concluded that the AI Law has extraterritorial effect on most of the entities to which it applies, although its main effect is directed at the EU territory where the output of the AI system is used.

EXCLUSIONS AND PROHIBITIONS

  • does not apply to AI specifically designed and put into operation for the sole purpose of research and development;
  • does not apply to any research, testing, and development activities with respect to AI prior to market introduction or commercialization, but this exemption does not apply to real-world testing;
  • does not apply to systems released under free and open-source licenses, unless such systems qualify as high-risk, prohibited, or generative AI systems;
  • does not apply to AI systems used exclusively for military, defense, or national security purposes, regardless of the type of entity engaged in such activities;
  • does not apply to AI that does not meet the basic requirements of the Law regarding safety, transparency, traceability, non-discrimination and environmental friendliness;
  • does not apply to AI created for civilian or humanitarian purposes, law enforcement purposes or public safety, but used temporarily or permanently for other purposes, such as military, defense or national security;
  • prohibits certain AI applications that threaten the rights of citizens, including biometric categorization systems based on sensitive characteristics and the inappropriate collection of facial images from the Internet or CCTV footage to create facial recognition databases;
  • prohibits the placement on the market, commissioning or use of certain AI systems designed to distort human behavior, which may lead to physical or psychological harm;
  • prohibits emotion recognition in the workplace and schools, social scoring, predictive policing (where based solely on profiling a person or assessing his or her characteristics), as well as manipulating human behavior or exploiting human vulnerabilities.

Thus, the Law restricts the use of AI for ethical reasons, prohibiting actions that violate the rights of citizens, such as the collection of biometric data without permission and the use of facial recognition to create databases. Nor is it permitted to use AI for emotion recognition, social scoring, or manipulation of human behavior. These restrictions are aimed at protecting the privacy and rights of citizens in the context of AI development and application.

However, the Law allows law enforcement agencies to use biometric identification systems only in exceptional situations, with judicial authorization and restrictions on time and place. Real-time biometric identification may be deployed only for compelling reasons, such as searching for missing persons or preventing terrorist attacks. The use of biometric identification systems through AI is considered high-risk and requires judicial authorization linked to criminal activity.

RISK-BASED APPROACH

The AI Law uses a risk-based approach that categorizes AI programs into various levels of risk, determining the degree of regulatory oversight required. Thus, the law introduces a four-tiered structure for classifying AI risks:

Unacceptable risk. Systems and certain AI methods that are considered a clear threat to people's security, livelihoods, and rights are classified as posing unacceptable risk and will be banned. The relevant list in the AI Law includes AI systems that manipulate human behavior or exploit human vulnerabilities (e.g., age or disability) with the purpose or effect of distorting that behavior. Other examples of prohibited AI include biometric systems such as emotion recognition in the workplace or real-time categorization of people. Examples range from social scoring by governments to voice-assisted toys that encourage dangerous behavior.

Legal regulation: Prohibited.


High risk. AI systems designated as high-risk pose, given their intended use, a high risk of harm to human health and safety or to fundamental rights, taking into account both the severity of the potential harm and the likelihood of its occurrence, and are used in a number of specifically defined areas. They will have to meet strict requirements, including risk mitigation systems, high-quality datasets, activity logging, detailed documentation, clear user information, human oversight, and a high level of reliability, accuracy, and cybersecurity. Examples include critical infrastructure such as energy and transportation; medical devices; systems that determine access to educational institutions or jobs; resume-scanning tools that rank job applicants using automated algorithms; and remote biometric identification systems.

Legal regulation: Permitted after a pre- and post-marketing risk assessment.


Limited risk. Systems that pose a limited risk to individuals and are therefore subject only to certain transparency obligations. Providers must ensure that AI systems designed to interact directly with individuals are designed and programmed so that people are informed that they are interacting with an AI system. Chatbots are a typical example: the user must be told that they are talking to a machine and not a human. Likewise, providers of AI systems that generate or manipulate deepfakes must disclose that the content has been artificially created or manipulated.

Legal regulation: Allowed with minimal transparency requirements.


Minimal risk. AI systems posing no or minimal risk carry no additional obligations; examples include AI-enabled video games and spam filters. The EU Commission has stated that most AI systems currently in use fall into this category. Organizations can adopt their own voluntary codes of conduct if they wish.

Legal regulation: Permitted without obligation.
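
As a rough illustration of this classification logic, the following sketch maps the four tiers to their regulatory treatment. The names and structure are illustrative, not part of the Law:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified illustration of the AI Law's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, manipulative systems
    HIGH = "high"                  # e.g. critical infrastructure, hiring tools
    LIMITED = "limited"            # e.g. chatbots, deepfake generators
    MINIMAL = "minimal"            # e.g. video games, spam filters

# Regulatory treatment per tier, as summarized in the section above.
REGULATORY_TREATMENT = {
    RiskTier.UNACCEPTABLE: "Prohibited",
    RiskTier.HIGH: "Permitted after pre- and post-market risk assessment",
    RiskTier.LIMITED: "Permitted with minimal transparency requirements",
    RiskTier.MINIMAL: "Permitted without obligation",
}

print(REGULATORY_TREATMENT[RiskTier.HIGH])
```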


GOVERNING BODY

In February 2024, the AI Office was established to oversee the implementation and enforcement of the AI Law together with Member States. The Office aims to create an environment where AI technologies respect human rights and dignity. It also promotes cooperation, innovation, and research in the field of AI among stakeholders. In addition, it engages in international dialogue and cooperation on AI, recognizing the need for global harmonization of the governance of these technologies.

The AI Office, in its role as an overseer of the Law, will sit within the Commission to monitor effective implementation and compliance.

All complaints and claims regarding violations of the Law, as well as proposals and recommendations for changes and/or improvements, shall be submitted to the AI Office as the regulator.

The competence of the AI Office also includes assessing the AI system, in particular:

- compliance assessment, where the information collected under the Office's powers to request information proves insufficient (in this way, the Office verifies whether the collected information satisfies the requirements set for the request);

- additional study of systemic risks (conducted following a qualified report by a scientific panel of independent experts; the Office analyzes possible systemic dangers).

Legal proceedings may be delegated to the national level of the EU Member State of which the victim or the violator is a citizen.

PENALTIES

Violations involving prohibited AI applications will be subject to fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

Failure by any person to provide accurate information about AI systems may result in a fine of up to EUR 7.5 million or 1% of global annual turnover, whichever is higher.

Non-compliance of AI systems with the Law may result in fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher, including for:

  • failure by the Supplier to meet the requirements for high-risk AI systems;
  • failure to fulfill the obligations imposed on the Authorized Representative to place AI systems on the market;
  • failure to fulfill the obligations imposed on the Importer to place AI systems on the market;
  • failure to fulfill the obligations imposed on the Distributor to place AI systems on the market;
  • failure to comply with the obligations imposed on AI Deployers to conduct a data protection impact assessment within the framework of Directive (EU) 2016/680 for the purposes of preventing, investigating, detecting or prosecuting criminal offenses;
  • failure to fulfill the duties and requirements imposed on the Authorized Bodies (established at the national level of each Member State);
  • non-compliance with transparency conditions for suppliers, developers, and users.

The Law also provides for fines for small and medium-sized enterprises and startups that violate AI legislation. The fines are determined using the same thresholds as in the previous three paragraphs, but in the opposite direction: for SMEs, the applicable cap is whichever of the two amounts is lower.
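
To illustrate how these caps combine, the sketch below computes the maximum fine under stated assumptions. The function and the turnover figures are hypothetical; only the thresholds come from the text above:

```python
def fine_cap(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative calculation of the maximum fine under the AI Law.

    Thresholds: EUR 35M / 7% for prohibited practices, EUR 15M / 3% for
    other non-compliance, EUR 7.5M / 1% for supplying incorrect information.
    For SMEs and startups, the lower of the two amounts applies.
    """
    thresholds = {
        "prohibited_practice": (35_000_000, 0.07),
        "non_compliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, pct = thresholds[violation]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with EUR 1 billion turnover committing a prohibited practice:
print(fine_cap("prohibited_practice", 1_000_000_000))            # 70,000,000 (7% > EUR 35M)
# The same violation by an SME with EUR 10 million turnover:
print(fine_cap("prohibited_practice", 10_000_000, is_sme=True))  # 700,000 (EUR 0.7M < 35M)
```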

In addition, the Law allows an administrative penalty to be adjusted on a case-by-case basis, taking into account the material circumstances, including:

  • the nature, severity and duration of the infringement and its consequences, taking into account the purpose of the AI system and, where applicable, the number of persons affected and the level of harm suffered;
  • whether administrative fines have already been imposed by other market surveillance authorities of one or more Member States on the same operator for the same infringement;
  • whether administrative fines have already been imposed by other authorities on the same operator for violations of other EU or national legislation, if such violations are the result of the same activity or omission that constitutes the relevant violation of this Law;
  • the size, annual turnover and market share of the operator that committed the violation;
  • any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits received or losses avoided, directly or indirectly, from the violation;
  • the degree of cooperation with the national competent authorities in order to eliminate the violation and mitigate possible negative consequences of the violation;
  • the degree of responsibility of the operator, taking into account the technical and organizational measures taken by it;
  • the manner in which the breach became known to the national competent authorities, in particular, whether the operator reported the breach and, if so, to what extent;
  • the intentional or reckless nature of the breach;
  • any actions taken by the operator to mitigate the harm caused to the affected persons.

CAUTIONS AND RISKS OF USE

First of all, it should be understood that AI systems must meet certain transparency requirements, including compliance with European Union copyright law and publication of detailed information about the content used to train models. For example, the Law requires that AI systems designed to interact directly with humans be clearly labeled as such, unless this is obvious from the circumstances. Models that may pose systemic risks are subject to stricter requirements: high-impact general-purpose AI (GPAI) models must assess and mitigate systemic risks, perform model evaluations, and report incidents. Additionally, artificial or manipulated content ("deepfakes"), including images, audio and video, must be clearly labeled as artificially created or manipulated by an AI system.
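
The Law does not prescribe a technical format for such labels. A minimal sketch, assuming a hypothetical JSON disclosure schema, might look like this:

```python
import json
from datetime import datetime, timezone

def label_ai_content(content_id: str, generator: str) -> str:
    """Attach a machine-readable 'AI-generated' disclosure to a media item.

    The field names are illustrative; the AI Law requires that artificial
    or manipulated content be detectable as such, but does not mandate
    this particular schema.
    """
    disclosure = {
        "content_id": content_id,
        "ai_generated": True,
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(disclosure)

print(label_ai_content("img-0001", "example-image-model"))
```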

The Law also stipulates that regulatory "sandboxes" for real-world testing should be established at the national level, so that new AI tools can be developed and trained before being placed on the market.

Companies should conduct a gap analysis of their AI systems to identify areas that need work in order to remain compliant with the AI Law.

Organizations should also conduct due diligence on their systems to ensure they rely on quality datasets and, in particular, to verify that all necessary safeguards related to data protection and security are properly deployed and meet the basic requirements of EU data protection law.

Organizations should also allocate responsibilities related to compliance with the AI Law and designate a dedicated function responsible for AI activities, governance, and accountability.

As for developers, their obligations, which depend on the risk level of the AI system, are to comply with and implement the procedure defined in the Law. In particular, developers must register in the EU's centralized database and obtain the status of an AI developer; maintain relevant documentation and keep AI data logs; undergo conformity assessment before and after placing a system on the market, including where they market an AI product under their own name or trademark; comply with restrictions on the use of high-risk AI during development and under post-market surveillance; and ensure compliance with regulatory requirements, being ready to demonstrate such compliance upon request.

INTERACTION IN PRACTICE

Under the AI Law, an AI Office will be established to ensure compliance, implementation, and enforcement of its provisions. EU citizens will be able to file complaints against AI systems if they believe they have been affected by their actions, and to receive explanations of the decisions made by those systems.

In addition, the law provides for the possibility of transferring the authority to national courts of member states to hear cases of violation of the rights of EU citizens.

The same applies to administrative cases concerning penalties in case of violation of statutory obligations by entities. Such cases are also authorized to be heard by the national courts of the Member States, which will help to ensure that the security and user rights protection system works more effectively.

On the other hand, companies are obliged to mark any AI-generated content, including deepfakes, chatbot output, and other products, as content created with the help of AI, and to notify human users who interact with a chatbot or other AI system. In addition, when creating media files of any type, value, or format, companies must mark them in a machine-detectable way. This will help distinguish authentic information from content that could misinform users.

In addition, by making their AI systems transparent, companies can be exempted from some of the basic obligations set forth in the Law regarding security, transparency, non-discrimination, etc., for example, by releasing an AI system under a free and open-source license.

STAGES OF ENTRY INTO FORCE

As noted by the press service of the European Parliament, the Law enters into force twenty days after its publication and will apply in full 24 months later, with the following exceptions:

  • prohibited practices (6 months after the date of entry into force).

This means that AI manufacturers must prepare their risk assessments and determine by this time whether their AI systems pose an unacceptable risk and whether an exemption from the ban applies;

  • codes of practice (9 months after entry into force).

These documents will provide the market with additional guidance on the implementation of the AI Law. Their drafting will involve invited representatives of stakeholders, academia, civil society and industry;

  • general purpose AI rules, including governance (12 months after entry into force).

Providers of general-purpose AI systems must fulfill several obligations, including preparing technical documentation for the AI system, establishing copyright policies, and publishing summaries of the content used to train the system. There are also regulatory obligations that apply one year after entry into force, as this is the deadline for Member States to designate competent authorities under the AI Law and to apply the rules on sanctions. The Commission is obliged to review the list of prohibited AI systems annually and amend it if necessary;

  • obligations for high-risk systems, post-market monitoring (18 months after entry into force).

The Commission will adopt an implementing act on post-market monitoring of high-risk AI systems. Post-market monitoring will be conducted mainly by AI providers on the basis of a monitoring plan, which consists of the systematic collection, documentation and analysis of information from various sources to ensure continuous oversight of AI systems.
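
To make the staged timeline concrete, here is a minimal sketch that computes the milestone dates from a hypothetical publication date. The helper function and the chosen date are illustrative; only the 20-day delay and the month offsets come from the text above:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to stay valid)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

# Hypothetical publication date; the real date is whenever the Law
# appears in the Official Journal of the European Union.
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)

milestones = {
    "prohibited practices": add_months(entry_into_force, 6),
    "codes of practice": add_months(entry_into_force, 9),
    "general-purpose AI rules": add_months(entry_into_force, 12),
    "high-risk post-market monitoring act": add_months(entry_into_force, 18),
    "full application": add_months(entry_into_force, 24),
}
for name, day in milestones.items():
    print(f"{name}: {day.isoformat()}")
```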

As the AI law has an extraterritorial scope, it applies not only to the 27 EU Member States, but also to any AI provider worldwide whose AI systems are placed on the market or put into operation in the EU.