Responsible AI: European and North American Perspectives on the new EU draft AI regulation

21st June 2021

In this cross-Atlantic discussion, Charles Morgan, National Co-Lead of the Cyber/Data Group at McCarthy Tétrault LLP, moderated a conversation on the key themes of the draft European Union regulation on responsible artificial intelligence (AI). Introduced by Anne-Marie Hubert, Chair of the Institut de la technologie pour l’humain, the dialogue was grounded in the organization’s purpose: to serve as a platform for discussion around disruptive technology that respects human rights and benefits the greater good.


Panellists:

Kilian Gross (Head of Unit, Artificial Intelligence Policy Development and Coordination, DG CONNECT, European Commission)

Alpesh Shah (Sr. Director, IEEE Standards Association)

Nye Thomas (Executive Director, Law Commission of Ontario)

Quick snapshot of the draft regulation

As an architect of the draft regulation, Kilian emphasized the overarching objective: position AI as an opportunity—not a problem—and create a solid legal framework in which AI can flourish, in turn cultivating trust in the market. 

That meant first defining AI clearly, but also broadly enough to cover all existing—and potential future—systems. Leveraging the OECD definition, they created a risk-based approach to regulation. It’s grounded in the principle that the vast majority of AI systems don’t constitute a significant risk requiring regulation, controls or legal constraints. Recognizing this helps ensure that the draft regulation doesn’t discourage anyone from voluntarily going beyond its requirements. The draft regulation outlines a five-step process for providers to follow (sketched in code after the list):

  1. Determine whether the AI system is classified as high-risk under the new AI regulation
  2. Ensure that the system’s design, development and quality management system comply with the AI regulation
  3. Carry out a conformity assessment procedure, aimed at assessing and documenting compliance
  4. Affix the CE marking to the system and sign a declaration of conformity
  5. Place it on the market or put it into service
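
As a rough editorial illustration only, the flow above can be read as a simple gate: each step must succeed before the next begins. The type and function names below are hypothetical, invented for this sketch; the draft regulation prescribes no code or API.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    high_risk: bool            # outcome of step 1
    qms_compliant: bool        # outcome of step 2
    conformity_assessed: bool  # outcome of step 3
    ce_marked: bool            # outcome of step 4

def may_enter_market(system: AISystem) -> bool:
    """Hypothetical sketch of the five-step gate for providers."""
    if not system.high_risk:
        # The vast majority of systems face no extra constraints.
        return True
    # Steps 2-4 must all be satisfied before step 5 (market entry).
    return all([system.qms_compliant,
                system.conformity_assessed,
                system.ce_marked])
```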

That first step—determining whether the system itself is high-risk—is critical in unlocking the benefits of the overall framework. The regulation groups AI use cases into a colour-coded risk pyramid, from green (lowest risk) through red. As use cases move up through the yellow and orange tiers, users should be made aware that they’re interacting with AI rather than a human being, and systems should comply with key requirements before entering the market. This zone represents a minority of AI use cases, but the core of the draft regulation.

At the very top of the pyramid, red-tier uses are deemed inherently unacceptable—said to create no social benefit—and are prohibited outright. Any AI that contradicts EU values (e.g. subliminal manipulation, exploitation of children or persons with disabilities, general-purpose social scoring) falls under this prohibition. Below it, the draft regulation classifies as high-risk the AI systems used in areas including:

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to, and enjoyment of, essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Providers that comply with the relevant harmonised industry standards can then benefit from a presumption of conformity with the regulation.
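
To make the pyramid concrete, here is a minimal, purely illustrative sketch of the tiering logic described above. The tier names mirror the colour scheme, the sets paraphrase the categories discussed, and none of this is official text or an official algorithm.

```python
from enum import Enum

class RiskTier(Enum):
    GREEN = "minimal risk: no new obligations"
    YELLOW = "limited risk: transparency duties (disclose that it's AI)"
    ORANGE = "high risk: mandatory requirements and conformity assessment"
    RED = "unacceptable risk: prohibited"

# Paraphrased, non-official shorthand for the practices and areas above.
PROHIBITED_PRACTICES = {
    "subliminal manipulation",
    "exploitation of vulnerable groups",
    "general-purpose social scoring",
}
HIGH_RISK_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "administration of justice",
}

def classify(practice: str, area: str,
             interacts_with_people: bool = False) -> RiskTier:
    """Illustrative tier lookup; the regulation, not code, is authoritative."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.RED
    if area in HIGH_RISK_AREAS:
        return RiskTier.ORANGE
    if interacts_with_people:  # e.g. a chatbot must disclose that it's AI
        return RiskTier.YELLOW
    return RiskTier.GREEN
```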

What did we ask? Highlights of the Q&A

What are the advantages and disadvantages of this comprehensive, fit-for-purpose regulation of AI as opposed to a sector-specific approach to regulation?

Kilian: The underlying problems or issues are the same, notwithstanding where and how AI is used. Whether talking about an autonomous car or a medical device, the specific problems that AI brings are similar, regardless of the individual use case. If the problems are largely identical because of the basic features of AI, then it makes sense to regulate horizontally. It also lends flexibility to add other sectors or use cases if new challenges emerge, as opposed to a purely sectoral approach, which could take years.

Alpesh: Harmonization is key with these principles. Beginning with a comprehensive, fit-for-purpose approach provides the ability to set ground rules and provide risk assessment that cuts across multiple scenarios and use cases. If effort were spent on specific sectors, it might help one space, but others would continue to innovate in a different way.

Should public and private sectors be subjected to distinct regulatory regimes in relation to AI, or does this unified approach make sense?

Nye: They should be consistent, but different. The issue arises because, as we know, the European Commission proposal regulates both private and public sector AI systems across the EU. That wouldn’t be possible in Canada or the United States because of jurisdictional boundaries and constitutional rules. Should there be uniform rules? Think about the scope, type and effect of potential public sector AI applications. Consider some of the things the public sector includes (policing, national security, border and asylum, benefits decisions, etc.). There are no private sector analogs to these kinds of activities, and different rules apply, especially in North America. To be thoughtful, regulation has to account for these factors. A simple unified rule would apply to the public sector to its detriment. There’s also the question of what rules should apply when public sector actors use private sector technology. The answer to that hasn’t been settled in law yet.

Kilian: These are valid concerns, and our regulation addresses this in a way. A single set of requirements applies to all high-risk AI systems, and the risk category depends on the intended use. A public authority has particular prerogatives, and therefore it’s justified to make certain of its uses high-risk—for example, using AI to detect deep fakes in a court procedure, compared to a private operator who simply wants to filter out deep fakes. By looking at the intended purpose of the individual AI, we make a distinction between private and public use. Can you then use the same device for public and private purposes if the public use would be high-risk? This depends on the provider’s instructions. If the instructions don’t cover use by public administration, the administration can’t use the AI unless it undertakes a conformity assessment itself.

Not everything needs to be regulated or have a huge statutory fine. How do you feel about the interplay between industry standards, best practices and actual regulation with sanctions?

Alpesh: Standards help improve market access by increasing competitiveness, efficiency and quality, and by simplifying trading costs and contractual agreements. Market-driven standardization provides mechanisms to help accelerate regulatory implementations and, in certain cases, inspire new regulation as well. For some time, the IEEE has been working on a variety of technical and socio-technical standards, training and an AI ethics systems certification program (ECPAIS). For example, the recently approved IEEE 7000 standard, Model Process for Addressing Ethical Concerns During System Design, establishes a set of processes by which organizations can include consideration of human ethical values throughout the stages of concept exploration and development. The IEEE’s ECPAIS program extends this standards effort with a focus on certifying AI systems around the criteria of privacy, accountability, transparency and algorithmic bias. In these cases, the work directly links to the draft regulation where it outlines ‘human in the loop’ processes, as well as in its emphasis on ethics by design. When there’s a standard focused on a set of processes for all these factors, it makes sense.

The additional benefit of industry-based standards, when they come prior to the draft regulation, is the opportunity to tap into those experts—to understand the pace, the desire, how far along things are—and to engage them in the system. It provides an ideal example of interplay between regulation and industry standards. There are times when regulation has to be very strong and times when it doesn’t, and voluntary standards can play a role. I see a very strong, complementary approach between what the international standardization organizations have been doing and what the draft regulation has outlined.


How important were industry standards to the EU regulation drafting process, and why did you feel it was necessary to add some pretty big ‘sticks’ on top of that industry standard approach through the regulatory framework?

Kilian: There are certain uses where AI has a significant impact on human life or on safety, and there we must make sure that it functions. Diligent developers would always check AI before using it in a sensitive area, like a medical device or an autonomous car. What we’re proposing isn’t very revolutionary; in reality, it’s rather obvious for those who are diligent and behaving as a responsible market operator should. We really want to rely on industry standards because our regulation is rather high-level. We define the benchmark and the areas of requirements, but the technical details will have to come with standards, and the framework can only work with these standards. We want to work with the international standardization organizations, and we’re optimistic it will be possible to develop standards that will become legally binding. We think our regulation should boost and incentivize the development of the needed standards in this area, and in the end the standards will be the deciding element for operators.

Some have described the draft as the ‘GDPR of AI regulation’ from a policy implementation perspective. What are the differences in approach between the two regulatory texts that you’d highlight?

Kilian: In certain ways, both texts will be complementary. If you use and process personal data, you have to comply with GDPR. Where they diverge is that we take a product-based approach, largely inspired by modern product legislation. We treat AI as a product, and we want to make this product safe before it hits the market. Once safety is ensured and the product is enabled, it should be rather easy to apply, as long as you follow the instructions. That’s the difference from GDPR.

We’ve spoken before about a risk-based approach to AI regulation. There are advantages and disadvantages to this approach. What concerns would you flag in relation to a risk-based approach to AI regulation?

Nye: The primary advantage is you can tailor regulations to circumstances. Not every AI system is the same, and a risk-based model allows you to adapt your mitigation strategies to the risks that are present. In that respect, it is fairer. Another advantage is you also have public and transparent risk criteria, which bring a degree of public accountability and consistency to a regulatory system. 

The disadvantages are really unanswered questions at this point. How do you assess risk? Who assesses it? What is the result of that risk assessment, and what obligations come out of it? There are a lot of different models for assessing risk and no international standard or convention for how you do this. In Canada, the federal government’s Algorithmic Impact Assessment runs to some 60 questions for the risk assessment. That can get very complicated very quickly. Finally, regulators need to consider the compliance burden this imposes.
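
For a sense of how questionnaire-based risk assessment can work in practice, here is a toy scorer loosely inspired by the federal Algorithmic Impact Assessment. The questions, weights and thresholds are invented for illustration only; the real AIA is far more detailed, though it does sort systems into impact levels I through IV.

```python
# Toy questionnaire-based risk scorer, loosely inspired by Canada's
# Algorithmic Impact Assessment. All questions, weights and thresholds
# below are invented for this sketch.
QUESTIONS = {
    "affects_legal_rights": 3,
    "fully_automated_decision": 2,
    "uses_personal_data": 2,
    "outcome_is_reversible": -1,  # mitigations can lower the score
}

def impact_level(answers: dict) -> str:
    """Sum the weights of every 'yes' answer and map to an impact level."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    if score >= 5:
        return "Level IV (very high impact)"
    if score >= 3:
        return "Level III"
    if score >= 1:
        return "Level II"
    return "Level I (little to no impact)"

print(impact_level({"affects_legal_rights": True, "uses_personal_data": True}))
```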

To what extent is it necessary to ensure that humans are always in the loop when you’re talking about automated decision systems?

Alpesh: Human-centred design processes help to enhance confidence in systems. That increases trust for users and the accountability of the system. It also provides further opportunities to explain the AI system and its behaviours. Right now, many AI systems may be considered opaque and not readily understood, for a number of reasons. Using human-centred design allows us to better understand how these systems are being applied to us, and for us. Knowing that lets us understand what rights and needs we have for seeking clarification, explanation and the rationale for behaviours, and the proper mechanisms to pursue justice. This gives us a path to preserving agency and identity. In addition to the aforementioned IEEE 7000 standard, IEEE 7010, which addresses the well-being implications of autonomous and intelligent systems, provides a framework that focuses on human well-being in the context of responsible autonomous and intelligent system innovations.

Kilian: The underlying idea is that AI should be at the service of the citizen, and not the other way around. People fear that decisions will be made over their heads when they’re exposed to decisions made by a machine, which contradicts human dignity. The worry is that no one will be held responsible for what the device has decided. There must always be a human checking this and making the final decision.


Next up? 

The Institute’s next webinar will explore the industry and private aspects of this draft AI EU regulation. Watch for that event in Fall 2021.

