L&M Finance Group

Digital transformation: vector development of the European Union

In recent years, the EU market has changed significantly in light of the sweeping digitalization of the financial sector and the emergence of new types of financial services. As part of the European strategy to improve the EU legislative framework, the main objectives set were to strengthen the mechanisms for effective regulation of market processes and to increase their accessibility, transparency, and security.

For the EU, this is a huge leap forward. The pandemic accelerated the digital transformation towards a new kind of economic space, the digital economy, prompting new AML/CFT standards; the proliferation of virtual means of payment spurred the creation of a dedicated regulatory framework for the crypto-asset market (MiCA); and, against the backdrop of this rapid digitalization, the EU adopted the Data Act on November 27, 2023.

Each of the above-mentioned regulations can be briefly summarized as follows:

The AML/CFT standards, that is, the rules on combating money laundering and terrorist financing, comprise three new pieces of legislation adopted on March 28, 2023. Their purpose is to identify transactions that are potentially risky and pose a direct threat, graded according to the level of risk within the EU. Each EU Member State must establish a financial intelligence unit (FIU) to prevent, report, and combat money laundering and terrorist financing. In addition, organizations such as banks, asset and crypto-asset managers, real estate and virtual real estate agents, and other high-risk entities will be required to verify the identity of their customers, what they own, and who controls the legal entity. Organizations will also be required to identify the specific types of money laundering and terrorist financing risks in their field of activity and submit the relevant information to the Central Register of Ultimate Beneficial Owners (CRUBO).

This is one of the steps the EU is taking to improve the fight against crime and terrorism and to ensure the stability and security of the financial system in the Member States; more broadly, it signals the EU's current attitude towards the digital asset space. Although the final discussion on the Markets in Crypto-Assets Regulation (MiCA), as well as the finalization of the AML/CFT package by the European Parliament, is still pending, it is safe to say that both are already at the final stage of harmonization.

The Markets in Crypto-Assets Regulation (MiCA) is the first comprehensive regulation of the crypto market. It is built around supervision of the activities of virtual asset providers, the definition of the basic conditions and requirements for conducting such activities, and liability for non-compliance, monitored by the competent national authorities of the Member States and reported to the European Banking Authority (EBA) and the European Securities and Markets Authority (ESMA). Thanks to the unified licensing of crypto-asset providers, which makes them fully vetted by the competent authorities, businesses will be able to increase their credibility with financial institutions, in particular banks, by making their economic activities transparent. The regulation is expected to take effect at the end of 2024 and to be fully implemented, with all amendments, in 2025.

The Data Act is the next step in the regulation of online data, driven by data sharing obligations imposed on data holders, covering business-to-consumer (B2C), business-to-business (B2B), and business-to-government (B2G) relationships. Following the adoption of the Data Governance Act, the Data Act will facilitate the voluntary exchange of data between businesses, individuals, and the public sector. In this context, the Data Act is the second key piece of legislation aimed at making generated data more accessible for reuse. It seeks to increase the competitiveness of businesses, develop the innovations they generate, and attract more participants to the data market. Although the Act applies to non-personal as well as personal data, it in no way overrides the GDPR (or other national and EU laws on personal data protection and privacy), including the powers and competencies of supervisory authorities and the rights of data subjects. If personal data is generated through connected products or related services, both the Data Act and the GDPR must be complied with. Most of the Act's obligations will become applicable from September 2025.

In addition, the Artificial Intelligence (AI) Act was recently adopted. It prioritizes ensuring that all AI systems developed, deployed, and used in the EU are safe, transparent, traceable, and non-discriminatory. Regulating AI and ensuring human oversight of AI systems is part of the EU's digital strategy. The term "artificial intelligence system" is central to the European legal framework. Its legal definition, discussed in our previous article on the new AI law proposed by the European Commission, draws on the well-known definition of the Organisation for Economic Co-operation and Development (OECD): "a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Accordingly, the AI Act aims to balance innovation with the protection of the rights and security of EU citizens. To this end, it sorts AI systems into risk categories and sets requirements proportionate to each category's risk level, prohibits certain AI practices that could pose a direct threat to the rights and safety of citizens (users), and establishes regulatory oversight with penalties for non-compliance. The AI Act is, in part, a form of product safety legislation. Such legislation aims to prevent and reduce risks by setting safety standards; one of its main goals is to minimize the risks associated with AI systems before they are placed on the market or put into service. Product safety legislation is usually accompanied by liability legislation: not all risks arising from the use of a product can be avoided, so liability legislation aims to ensure that those affected can obtain appropriate compensation.

The AI Act is not intended to be the only regulatory framework for AI. In its approach to ensuring effective regulation of new technologies, the European Commission proposed to complement the Act with two directives, having identified a potential gap in the liability regime: the lack of any effective way for victims to obtain compensation for damage caused by AI. Both directives aim to harmonize and strengthen the product liability regime so that persons suffering harm caused by the use of AI systems are adequately compensated. This matters because end users of AI systems will be more confident in using the technology, knowing that they are entitled to certain basic remedies in case of harm. It will also provide greater certainty for companies involved in supplying or deploying AI systems, as they will know what liability risks they face.

AI Liability Directive (AILD)

While the AI Act is aimed at preventing harm caused by AI, the AILD aims to regulate compensation for such harm through the application of liability law.

Through the AILD, the EU Commission seeks to regulate compensation for damage caused intentionally or negligently by AI systems. As the Commission notes, even though the risks to people's safety and fundamental rights will decrease once the AI Act enters into force, AI systems will continue to use and process the information they receive, so a residual risk of harm (direct or indirect) caused by AI will persist. The Commission therefore seeks to establish common rules on civil liability for damage caused by the use of AI systems.

The AILD has extraterritorial effect, as it applies to suppliers and/or users of AI systems available or operating on the EU market. Its liability rules will serve as a lever to promote trustworthy AI and the full realization of its benefits in the internal market, ensuring that harm caused by AI enjoys the same level of protection as harm caused by any other technology. The AILD will create a rebuttable presumption of causation, thereby reducing the burden of proof for injured persons.

In addition, the AILD will regulate the powers of national courts to order the disclosure of evidence about high-risk AI systems, in effect opening their "black box". The directive is limited to civil claims for damages: if damage is caused by an AI system, or by an AI system's failure to produce a certain result, the injured party files a claim. To this end, the AILD introduces two key procedural devices.

The first concerns the ability of aggrieved parties to access relevant evidence. Claimants may ask national courts to order the disclosure or preservation of evidence from the relevant parties where a high-risk AI system is suspected of causing harm. Parties that fail to comply are subject to a presumption of non-compliance, which simplifies the procedure for the claimant and encourages the relevant parties to follow the orders.

The second procedural device introduced by the AILD will make it easier to prove the causal link between the fault of the relevant party and the output of the AI system, by introducing a number of rebuttable presumptions.

These rules give injured parties a procedural advantage in proving their case. However, they will still have to prove all the essential elements of their claim in accordance with the laws of each Member State.

Once adopted, the AI Act and the AILD will become the world's first comprehensive AI regulations. Once they enter into force, organizations using and producing AI systems will have 24 months to comply with their requirements.

Product Liability Directive (PLD)

The PLD, like the AILD discussed above, is an integral part of the regulatory framework for AI, operating in combination with the AI Act.

Unlike the new AILD, the PLD was adopted in 1985 and introduced a strict liability regime for material damage caused by defective products. Time has passed, however, and the directive needs updating for the digital era. Accordingly, in December last year a reform of the directive was proposed that goes beyond a simple update to take account of AI technology. The changes include a general expansion of the directive's scope, with broader definitions of products and responsible parties. Among the reforms, the following stand out:

- explicit recognition that AI systems are subject to the PLD, through the inclusion of "software" in the definition of "product." Under this expanded scope, AI system providers can potentially be held liable for any defective AI system placed on the market. The directive also covers AI systems integrated into products, blurring the traditional distinction between tangible and intangible products;

- although the PLD will not apply to free open-source software supplied outside of commercial activities, manufacturers who integrate free open-source software into their products can potentially be held liable for any defects that result;

- the PLD provides for compensation for any material damage caused by a product defect, while compensation for non-material damage is left to the laws of each Member State;

- the concept of "damage" in the PLD has been expanded in several ways. Loss or corruption of data can now be restored, but not if the data is used for professional purposes. The destruction or corruption of data does not automatically result in pecuniary loss if the injured party can still obtain the data free of charge. Damage to any property that is not used for professional purposes is still not recoverable;

- the concept of defectiveness in the PLD has been updated to account for AI systems. Products can now also be considered "defective" due to cybersecurity vulnerabilities, which will be particularly relevant to the use and deployment of AI systems. The updated text states that manufacturers of products capable of developing unexpected behavior remain liable where that behavior causes harm; in the context of AI systems, the system's capacity for autonomous action would not, by itself, relieve the developer of liability. Nor do the actions or omissions of third parties relieve AI system vendors of liability for a product defect, for example where a third party exploits a cybersecurity vulnerability in an AI system and thereby causes damage. Conversely, the liability of AI vendors may be reduced or eliminated if the injured party contributed to the damage, for example by failing to install an update to the application or the AI system;

- the traditional PLD defense for defects arising after a product is placed on the market now has an exception reflecting the fact that products may remain under the manufacturer's control after that point. This means AI system vendors will not be able to rely on this defense if the defect stems from some aspect of the product that remains under their control, including the provision of software updates or upgrades or any material modifications.

The reform provides for a 24-month transposition period, as under the previous directive, for Member States to implement it into national law. Companies and individuals should therefore expect local implementing laws around 2026 (assuming the PLD enters into force this year). This timeline would be in line with the AI Act, which also applies 24 months after its entry into force.

It should be noted that, once approved, the directives prepared to accompany the AI Act will have to be transposed into the national law of each Member State, whereas the AI Act, as a regulation, will apply directly once it takes effect.

Recently, AI has become increasingly important in the European Union as the technology continues to develop at a rapid pace. The EU has recognized the potential benefits and challenges that AI can bring to various sectors of society. From healthcare to transportation, from finance to agriculture, AI is increasingly being integrated into everyday activities, driving innovation, increasing efficiency and competitiveness.

In general, the text of the Artificial Intelligence Act identifies the following areas in which AI systems may be used:

1) Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

One of the key areas where AI has a significant impact in the EU is healthcare. AI-based tools and applications are being developed to help healthcare professionals diagnose diseases, predict treatment outcomes, and personalize treatment plans. Using AI, healthcare professionals can improve patient care, reduce medical errors, and optimize resource allocation.

In the transportation sector, AI is used to improve safety, efficiency, and sustainability. Autonomous vehicles powered by AI algorithms are being tested and deployed in various EU countries to improve road safety and reduce traffic congestion. In addition, artificial intelligence is used to optimize public transportation systems, predict routes, and implement smart traffic management solutions.

2) Education and training: AI systems used to determine access, admission, or assignment to educational institutions at all levels; to evaluate learning outcomes, including using them to steer the student's learning process; to determine a person's level of education; and to monitor and detect prohibited student behavior during tests.

3) Employment: AI systems used to recruit and select employees, especially for targeted job ads, analyzing and filtering applications, and evaluating candidates. They are also used for promotion and termination decisions, for assigning tasks based on personal traits, characteristics, or behavior, and for monitoring and evaluating employee performance.

4) Public services: AI systems used by public authorities to determine eligibility for benefits and services, including their allocation, reduction, cancellation, or reinstatement; to assess creditworthiness, except for financial fraud detection; to analyze and classify emergency calls, including setting priorities for police, fire, medical, and emergency triage dispatchers; for risk assessment and pricing in health and life insurance; to assess irregular migration or health risks; to review asylum, visa, and residence permit applications and related eligibility appeals; and to detect, recognize, or identify persons, except for the verification of travel documents.

5) Law enforcement agencies: AI systems used to assess the risk of a person becoming a victim of crime; polygraphs and similar tools used to assess the reliability of evidence during a criminal investigation or prosecution; and systems assessing the risk of a person committing or re-committing an offense, where the assessment is not based solely on profiling or on an evaluation of personality traits or past criminal behavior. Profiling is also covered when carried out in the course of detecting, investigating, or prosecuting crimes. Non-prohibited biometrics refers to remote biometric identification systems that establish a person's identity, subject to an exception for biometric verification, which confirms that a person is who he or she claims to be. It also covers biometric categorization systems that infer sensitive or protected attributes or characteristics, as well as emotion recognition systems.

6) Justice: the application of AI to justice and democratic processes covers systems used to analyze and interpret facts, apply the law to specific situations, and find alternative ways to resolve disputes. It also covers systems that may influence the outcome of elections and referendums or voting behavior, except tools that do not interact directly with people, such as those used to optimize political campaigns.

7) Finance: in the financial industry, AI is transforming the way banking and investment services are delivered. AI-powered chatbots and virtual assistants provide personalized customer service, while machine learning algorithms are applied to detect fraud, assess credit risk, and automate trading strategies. By leveraging AI, financial institutions in the EU can optimize operations, improve decision-making, and enhance the customer experience.

8) Agriculture: AI is enabling precise and efficient farming. By analyzing data from sensors, drones, and satellites, AI algorithms can provide farmers with valuable information about crops, soil, and weather conditions. This data-driven approach allows farmers to optimize yields, reduce resource waste, and minimize environmental impact.

The adoption of AI in the EU offers numerous opportunities for economic growth and social development, but also raises concerns about ethics, privacy, and regulation. The EU is actively working to address these issues by developing guidelines and regulations to ensure the ethical and responsible use of AI technologies. The European Commission has issued Ethical Guidelines for Trustworthy AI to promote the development of human-centered AI that respects fundamental rights, values diversity, and ensures transparency and accountability.

The EU rules on the protection of personal data, privacy, and confidentiality of communications will continue to apply to any processing of personal data in connection with the AI Act, which does not directly affect the GDPR. One of the main differences between the AI Act and the GDPR is their scope of application. The AI Act applies to providers, users, and other participants in the AI value chain (e.g., importers and distributors) whose systems are placed on or used in the EU market, regardless of where those actors are located. The GDPR, by contrast, applies to controllers and processors who process personal data in the context of an establishment in the EU, or who offer goods or services to, or monitor the behavior of, data subjects in the EU. This means an AI system may be subject to the AI Act but not the GDPR, for example where it processes no personal data or only the data of non-EU data subjects, and vice versa.

In addition, the GDPR is grounded in the fundamental right to privacy: data subjects can exercise their rights against parties that process their personal data. The AI Act, on the other hand, focuses on AI as a product and seeks to implement a "human-centered approach" by regulating AI through the lens of product regulation. Humans are thus protected indirectly, from faulty AI systems, and do not have a clearly defined role in the AI Act. In other words, stopping an unlawful AI system that uses personal data is done under the AI Act, while the exercise of data subjects' rights over their personal data is done under the GDPR.

Another difference is the AI Act's requirement that providers implement human-oversight interface tools and that measures be taken to ensure human oversight. However, the Act does not specify what those measures should be, and there is no additional guidance on this point. Decision makers should receive clear instructions and training covering, for example, how the AI system works, what data will be used as input, what outcome to expect, and how to evaluate the AI system's recommendations. Notably, providers are obliged to supply users with instructions on how to operate AI systems, and users in turn are obliged to inform and train their decision makers on the elements of the AI system. This is necessary to ensure that decision makers can make meaningful, informed decisions. If human oversight lacks real substance because decision makers have not been properly trained, decisions may no longer count as only partially automated, in which case the GDPR rules on automated decision-making and the related obligations will apply.

Overall, AI in the EU is poised to transform various sectors of society by stimulating innovation, improving efficiency, and increasing competitiveness.

According to the European Parliament's press release, the Artificial Intelligence (AI) Act was approved with 523 votes in favor. Final approval followed the provisional agreement reached by the European Parliament and the Council in December 2023. Under this political agreement, cooperation focused on ensuring legal certainty and transparency in opening up the latest innovative AI-based technologies, taking into account all possible risks. The agreement also introduced an adaptation period, through the Pact, for all those willing and interested in an accelerated path to the compliant use of AI. Under the Pact, it was agreed to support startups developing AI and to allocate more than €1 billion of investment in AI research and innovation from the Horizon Europe and Digital Europe programmes.

By adopting AI technologies and adhering to ethical standards and regulatory frameworks, the EU can harness the full potential of AI to create a more sustainable and inclusive future for its citizens.