AI Act: Five Key Insights for Navigating AI Regulation in Europe

Karin Tafur is dedicated to advancing responsible AI practices, providing legal and ethical advice with a focus on risk assessments within the Innovation/Portfolio Lab of the Inter-American Development Bank (USA). She also serves as an expert assisting the European Research Council on ethical reviews, particularly in AI, for the Horizon Europe programs (Europe). With affiliations at Milan's ISLC (Italy) and Z-Inspection® (Europe), and previous teaching experience at EM Lyon Business School (France), her expertise spans a range of business applications. She also lectures on ethics and governance, with a specific focus on European AI regulation, at institutions such as the IA Institut by Epita & ISG (France), and her AI-related work has been published in both scientific and non-scientific journals across Europe and the UK.

Finally, Ms. Tafur also provides AI training for companies and startups, emphasizing AI Act compliance and promoting AI literacy (for providers and deployers in Europe).

 

In March, the AI Act was officially approved in Europe. Who must comply with this regulation, and does it extend beyond Europe?

 

The European AI Act applies to all companies, whether they are providers, deployers, importers, distributors, or any other involved party. No matter where these companies are based, if they place on the market or put into service AI systems or general-purpose AI models within the European Union, they are subject to this regulation. In other words, the regulation does not stop at Europe's borders; its reach extends beyond them.

 

What are the key requirements of the European AI Act impacting companies' operations, and how can companies ensure compliance with these mandates?

 

Now, when it comes to compliance, one of the crucial aspects is understanding how AI systems are classified under the Act. There are four classifications: “Unacceptable risk”, “High risk”, “Limited risk”, and “Minimal risk”. A system's classification determines the level of compliance required.

It is kind of like a sliding scale: some systems have more obligations to meet, while others have fewer. And, in some cases, certain AI systems may even be prohibited from being used or sold in the European market. It is worth noting that even systems previously considered “Minimal risk”, which used to carry no obligations, now have obligations to fulfill due to the latest updates in the regulation: the AI Act requires providers and deployers of AI systems to ensure that their staff and others involved in using AI are well informed about how AI works.

Now, let us talk about the types of systems that most companies might use. Many of these could be categorized as either “High risk” or “Limited risk”.

“High risk” systems, such as those that evaluate eligibility for credit, health or life insurance, or public benefits; analyze job applications or evaluate candidates; or serve as product safety components, among others, come with obligations both for developers and for the companies using them. Providers of these systems must conduct thorough conformity assessments, ensure transparency in their AI operations, and implement effective human oversight, among other measures.

In the “Limited risk” category, we find chatbots, emotion-recognition systems, and biometric-categorization systems, which must adhere to transparency obligations. However, some AI systems or models, while not classified as high-risk, still carry risks such as impersonation or deception. For instance, consider generative AI systems like OpenAI’s GPT-4 and Google’s Gemini, categorized as general-purpose AI (GPAI) by the AI Act. Developers of these systems must provide technical documentation to both the European Commission’s AI Office and national regulators. Additionally, if a GPAI model is particularly powerful, it may pose a systemic risk, requiring risk mitigation by the developer. An exception exists for fully open-sourced GPAI models, such as Meta’s Llama, unless they are exceptionally powerful.

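To make this tiered structure concrete, here is a minimal, purely illustrative Python sketch of how a team might triage its AI use cases against the Act's four categories. The tier names come from the Act itself, but the example use-case mapping and the `classify_system` helper are hypothetical simplifications, not a legal assessment tool.

```python
# Illustrative only: a toy triage of AI use cases against the AI Act's
# four risk tiers. Real classification requires legal analysis of the
# Act's annexes; the mappings below are simplified examples.

RISK_TIERS = {
    "unacceptable": "Prohibited practices (Article 5): may not be placed on the EU market.",
    "high": "Conformity assessment, transparency, human oversight, documentation.",
    "limited": "Transparency obligations (e.g., disclose that users interact with AI).",
    "minimal": "AI-literacy duties for staff; otherwise few obligations.",
}

# Hypothetical mapping of example use cases to tiers, based on the
# examples discussed above.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": "unacceptable",
    "credit eligibility evaluation": "high",
    "job application screening": "high",
    "customer service chatbot": "limited",
    "spam filter": "minimal",
}

def classify_system(use_case: str) -> str:
    """Return the (illustrative) risk tier and its headline obligations."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return f"'{use_case}': not in this toy mapping; seek legal review."
    return f"'{use_case}' -> {tier.upper()} risk: {RISK_TIERS[tier]}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(classify_system(case))
```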
 

Based on the details discussed, what steps should companies consider to ensure they comply with the European AI Act, particularly concerning high-risk AI systems?

 

Based on what we have discussed, companies need to be proactive in ensuring compliance with the European AI Act, especially when it comes to high-risk AI systems. This means implementing solid risk management processes, conducting thorough impact assessments, and meticulously documenting compliance efforts. Transparency and accountability are key here, allowing for clear explanations of AI-driven decisions and helping to prevent bias and discrimination. It is also essential for companies to establish clear procedures and allocate resources effectively to meet these regulatory requirements. By setting up sound internal procedures and keeping thorough records, companies can make sure they are following the rules and avoid potential penalties. It is all about staying ahead and staying organized.
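As one way to stay organized, a team might track these obligations per system in a structured record. Below is a minimal sketch in Python; the `ComplianceRecord` class and its checklist items are illustrative, drawn from the steps just described rather than from the Act's text.

```python
from dataclasses import dataclass, field

# Illustrative compliance record for one high-risk AI system.
# Field names are hypothetical; they mirror the steps discussed above.

@dataclass
class ComplianceRecord:
    system_name: str
    risk_tier: str
    checklist: dict = field(default_factory=lambda: {
        "risk_management_process": False,
        "impact_assessment": False,
        "conformity_assessment": False,
        "human_oversight_plan": False,
        "documentation_archived": False,
    })

    def outstanding(self) -> list:
        """List the obligations not yet evidenced."""
        return [item for item, done in self.checklist.items() if not done]

# Example usage: mark one step done, then list what remains.
record = ComplianceRecord("credit-scoring-model", "high")
record.checklist["impact_assessment"] = True
print(record.outstanding())
```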

 

What are the potential consequences of non-compliance with the European AI Act, both in terms of financial penalties and reputational damage, based on the AI Act and your experience?

 

Non-compliance with the European AI Act carries significant risks for companies, encompassing hefty fines and damage to their reputation. Regulators can levy penalties equivalent to a percentage of a company's annual revenue for violations. For instance, breaching the prohibitions outlined in Article 5, known as Prohibited AI Practices, can lead to administrative fines of up to 35 million euros or 7% of a company's total worldwide annual turnover, whichever is higher. Similarly, infractions related to operator obligations may result in fines of up to 15 million euros or 3% of the company's total worldwide annual turnover. The consequences also extend beyond monetary penalties: a non-compliant company risks substantial damage to its brand, potential legal action, and public scrutiny. This can ultimately erode consumer and stakeholder confidence, making it even more challenging for the company to maintain its competitive position in the market.
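The "whichever is higher" rule matters in practice: for large companies, the percentage cap rather than the fixed amount sets the ceiling. Here is a short sketch of that arithmetic; the fine amounts come from the Act as described above, while the `max_fine` function and the sample turnover figure are illustrative.

```python
# Maximum administrative fines under the AI Act: the higher of a fixed
# amount or a percentage of total worldwide annual turnover.

PENALTY_BANDS = {
    "prohibited_practices": (35_000_000, 0.07),  # Article 5 violations
    "operator_obligations": (15_000_000, 0.03),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the fine ceiling for a violation type and turnover."""
    fixed_cap, pct_cap = PENALTY_BANDS[violation]
    return max(fixed_cap, pct_cap * annual_turnover_eur)

# Example: for a company with EUR 2 billion turnover, the 7% cap
# (EUR 140 million) exceeds the EUR 35 million fixed amount.
print(f"{max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # 140,000,000
```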

 

From your perspective, how can companies leverage the European AI Act as an opportunity to build trust and confidence among consumers and stakeholders in the AI field?

 

I understand that complying with the AI Act can be challenging, especially for companies developing and using high-risk AI systems. However, the European AI Act offers companies the opportunity to showcase their dedication to ethical and trustworthy AI development, which, in turn, helps build trust and confidence among consumers and stakeholders. By proactively adhering to the Act's provisions, companies can position themselves as leaders in ethical AI practices and bolster their reputation in the market. Transparency, accountability, and a strong commitment to ethical principles are essential for building trust in AI technologies. Companies that prioritize these values can therefore benefit from the European AI Act: by following its guidelines, they not only improve their reputation but also build stronger relationships with consumers and stakeholders.

 

 

Further reading

 

European Commission. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, COM(2021) 206 final, 2021/0106(COD). 21 April 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

Reid Blackman and Ingrid Vasiliu-Feltes. The EU’s AI Act and How Companies Can Achieve Compliance. Harvard Business Review, February 22, 2024. https://hbr.org/2024/02/the-eus-ai-act-and-how-companies-can-achieve-compliance

Jure Globocnik. The European AI Act. Active Mind Legal, 8 January 2024. https://www.activemind.legal/guides/ai-act/

Daniel Castro. The EU’s AI Act Creates Regulatory Complexity for Open-Source AI. Center for Data Innovation, March 4, 2024. https://datainnovation.org/2024/03/the-eus-ai-act-creates-regulatory-complexity-for-open-source-ai/
