In conversation with... Kilian Gross, Alpesh Shah & Nye Thomas


Kilian Gross, a leader with the European Commission and an architect of the EU’s draft regulation on responsible artificial intelligence (AI), joins Alpesh Shah (IEEE Standards Association) and Nye Thomas (Law Commission of Ontario) to discuss the ins and outs of the proposal and its broader applicability across the public and private sectors.


There’s no silver bullet strategy for regulating artificial intelligence (AI). Getting it right means working backwards from the challenge the regulation itself is designed to solve. That’s something Kilian Gross can attest to.

As Head of the Unit for Artificial Intelligence Policy Development and Coordination at the European Commission, Gross played a central role in creating the EU’s new draft AI regulation. That process began by setting an overarching objective: position AI as an opportunity, not a problem, and create a solid legal framework that allows it to flourish while cultivating trust in the market.

“We want to create a market for AI,” explains Gross. “But this should be a market for trustworthy AI.”

They set out by defining AI clearly, but also broadly enough to cover all existing and potential future systems. Building on the OECD definition of AI, the Commission created a risk-based approach to regulation. That’s key to ensuring the regulation doesn’t discourage anyone from voluntarily going beyond its requirements.

What are the brass tacks of the draft in its current form?

The draft regulation outlines a five-step process for providers to follow:

  1. Determine whether their AI system is classified as high-risk under the new AI regulation
  2. Ensure the system’s design, development, and quality management comply with the AI regulation
  3. Carry out a conformity assessment procedure, aimed at assessing and documenting compliance
  4. Affix the CE marking to the system and sign a declaration of conformity
  5. Place the system on the market or put it into service
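
For illustration only, here is a minimal Python sketch of how a provider might walk through these steps. The tier labels, function names, and checks are hypothetical, not terms from the draft regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical labels for the draft regulation's colour-coded risk pyramid."""
    MINIMAL = "green"
    LIMITED = "yellow"
    HIGH = "orange"
    UNACCEPTABLE = "red"

def ensure_quality_management(name: str) -> None:
    print(f"[{name}] design, development, and quality management verified")

def run_conformity_assessment(name: str) -> None:
    print(f"[{name}] conformity assessment carried out and documented")

def affix_ce_marking(name: str) -> None:
    print(f"[{name}] CE marking affixed, declaration of conformity signed")

def bring_to_market(system_name: str, tier: RiskTier) -> str:
    """Sketch of the five-step provider process (names are illustrative)."""
    # Step 1: determine whether the system is classified as high-risk.
    if tier is RiskTier.UNACCEPTABLE:
        return f"{system_name}: prohibited -- contradicts EU values"
    if tier is not RiskTier.HIGH:
        return f"{system_name}: no ex-ante compliance check required"
    # Steps 2-4 apply only to high-risk systems.
    ensure_quality_management(system_name)
    run_conformity_assessment(system_name)
    affix_ce_marking(system_name)
    # Step 5: place the system on the market or put it into service.
    return f"{system_name}: placed on the market"

print(bring_to_market("triage-assistant", RiskTier.HIGH))
```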

That first step is critical to unlocking the benefits of the overall framework. The regulation groups AI use cases into a colour-coded pyramid of risk, from green (lowest risk) through red. The higher an AI system climbs the pyramid, the more important it is for users to understand they’re not interacting with another human.

“High-risk cases are, of course, the core of this regulation. [They] are nevertheless a minority of the AI use cases, which are either embedded in a product, like a medical device, or on a stand-alone basis,” says Gross. “We basically require an ex-ante compliance check. You must make sure before you enter the market, or before you put it into use, that they comply with our requirements.”

At the very top of the pyramid, red risks are deemed inherently unacceptable and without social benefit, and are flagged for prohibition. The draft regulation explicitly prohibits any AI that contradicts EU values.

How will the regulations work in practice? 

Whether talking about an autonomous car or a medical device, the problems that AI brings are similar, regardless of use case. Gross says this principle bolsters the Commission’s strategy of approaching the draft regulation from a horizontal perspective, as opposed to a sector lens.

That’s something that Alpesh Shah, Senior Director of Global Business Strategy and Intelligence at the IEEE Standards Association, can understand.

“Beginning with a comprehensive fit-for-purpose approach provides the ability to set ground rules and manage risk associated with AI systems designed and developed in manners that cut across multiple industries and use cases,” explains Shah. “This is reflected in the draft regulation, which calls for best-practice elements of accountability, transparency and explainability, and human-in-the-loop design.”

Shah points out that if the initiative had focused on sector-specific regulations, it might help one space while others continued to innovate in a different way. “Harmonization is key with these principles.”

True, too, for building in flexibility that allows other sectors or use cases to fall seamlessly under the regulations as new challenges emerge. Even so, questions abound; for instance, whether public and private sectors should be subject to distinct regulatory regimes for AI.

Gross says the draft regulation addresses the public/private nuance deliberately. The EU’s single set of requirements applies regardless of who deploys a system; what changes is the actor’s role. If a public administration uses an AI system beyond the provider’s instructions, it becomes a provider itself and must carry out the conformity assessment before putting the system on the market or into service.
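
A rough sketch of how that role-switch rule could be expressed, again with invented types and field names purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    actor: str                  # e.g. "public administration"
    within_instructions: bool   # used as the original provider intended?

def responsible_party(d: Deployment) -> str:
    # If an actor uses the system beyond the provider's instructions,
    # it takes on the provider's obligations itself, including the
    # conformity assessment, before the system can be placed on the market.
    if not d.within_instructions:
        return f"{d.actor} becomes the provider: conformity assessment required"
    return "original provider remains responsible"

print(responsible_party(Deployment("public administration", within_instructions=False)))
```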

Nye Thomas, Executive Director of the Law Commission of Ontario, is quick to point out that jurisdictional boundaries and constitutional rules in Canada and the United States would make a similar approach difficult to implement in North America. 

“Should there be uniform rules for private sector and public sector applications? My answer is that they should be consistent, but different,” Thomas explains. “In the public sector, you’ve got policing, public safety, border or asylum cases. You have benefits determination. You have judicial decision making. There are no private sector analogs to these kinds of activities and one of the consequences is that there are actually different legal rules applying to them as well.”

To be thoughtful, he says, AI regulation must account for these complexities. 

How can regulation strike the right balance between carrot and stick?

The interplay between industry standards, best practices, and actual regulation backed by sanctions introduces considerable complexity.

“There are times when regulation has to be very strong and there are times when it doesn’t,” adds Shah. “Voluntary standards can play a role. In this case, I see a very strong, complementary approach between what the international standardization organizations have been doing and what the draft regulation has outlined.”

That collaborative spirit is music to Gross’ ears. He points out that whenever AI has a significant impact on human life or safety, regulators must be especially vigilant in ensuring the technology functions as promised. He sees complementary industry standards as an opportunity to take the relatively high-level regulations the Commission has drafted down to a granular level.

“We want to work with international standardization organizations. They have already done a lot of groundwork,” Gross says. “We think that our regulation should boost and incentivize the development of the needed standards in this area, and in the end the standards will be the deciding element for the operators.”

Where does AI regulation go from here? 

A risk-based approach to regulating AI offers various advantages. Chief among them: the ability to tailor regulations to specific circumstances. Not every AI system is the same, and a risk-based model allows you to adapt your mitigation strategies to the risks that are present.

Thomas says that’s inherently fairer. “It allows you to play your roles discretely, which I think is a good thing. Another advantage is that you have public risk criteria, which brings a degree of public accountability and consistency to a regulatory system.”

The downside lies more in the unknowns. For instance, there are a lot of different models for assessing risk, and no international standard or convention for who should do it, or how. In the face of those evolving factors, Gross, Shah and Thomas agree that human-centred design is absolutely critical to the evolution of the regulations going forward.

Keeping humans in the loop allows regulators to better understand how AI systems are being applied to us and for us. It also clarifies users’ rights and needs, the rationale for certain behaviours, and the proper mechanisms for pursuing justice. “This provides a path to preserving agency, and identity,” affirms Shah.

And that comes back to the overarching objective that will continue to guide the EU’s approach to regulating AI. Pairing the right regulatory framework with the right human touchpoints will continue to be essential as regulations, and AI itself, change and evolve.

“The underlying idea is that AI should be at the service of the citizen, and not the other way around,” says Gross. 

