Intelligence on Trial: Interview with Yannick Meneceur

Yannick Meneceur, a specialist in digital transformation and artificial intelligence at the Council of Europe, builds bridges between the worlds of justice and information technology. In the wake of his latest book, L'intelligence en procès, Plaidoyer pour un cadre juridique international et européen de l'intelligence artificielle, he analyses in this interview how AI endangers democracy and how to remedy it.


- Several States have put it this way: we are at war against Covid-19. Exceptional circumstances would justify exceptional measures. Is anything allowed?

The French authorities, for example, have adopted this vocabulary to make an impression on people's minds: we are living through a major crisis. In addition to the health stakes, the maintenance of the essential functions of States (police, justice, education) must be guaranteed, and the social and economic consequences of the epidemic must be managed. The first challenge is to ensure that the rule of law continues to prevail in the different countries: exceptional powers for governments, yes, but limited in time and with continued parliamentary scrutiny. Secondly, democracy, even if it is "under quarantine" (postponement of elections or reforms), must be preserved: this means providing transparent, clear and objective information, and taking steps to ensure that misinformation does not undermine collective efforts. Finally, human rights remain essential; I am thinking in particular of vulnerable groups: migrants, impoverished populations, detainees. Faced with this major crisis, all societal actors, including private ones, have a role to play in supporting the collective effort. We have seen tools for tracking the population appear, in support of containment policies, which raise many questions about the trivialisation of means of surveillance, anonymised or not. In this effervescence, experts, whether from research or the private sector, must adopt an ethical approach and not give in to the panic of the emergency by supporting liberticidal technological solutions. At the same time, national and supranational legal frameworks must play their part so that the measures taken continue to fully respect fundamental rights, which are necessary to maintain confidence in institutions.

- Is there a homogeneous approach to these issues in Europe and in the world?

On 10 April 2020, the Secretary General of the Council of Europe, Marija Pejčinović Burić, stressed how important it is that the 47 member states of the Council rely on enhanced cooperation and on the legal frameworks of the Conventions they have ratified, in particular the European Convention on Human Rights (ECHR). A "toolkit" has been published to guide European governments so that derogations from the Convention (authorised by its Article 15 in times of crisis) do not cross a red line that would destroy the fundamental values the Council of Europe stands for. That said, the early days of the pandemic instead led states to close their borders and take measures without any real coordination with neighbouring States, sometimes even requisitioning material intended for other States. The health crisis is putting the European project to the test, whether it is the economic integration provided by the European Union or the enhanced legal and political cooperation of the Council of Europe. In the midst of the crisis, these organisations unfortunately do not seem to be in a position to play the role of metronome for a coordination that is nevertheless necessary. Some countries will come out of lockdown sooner than others, perhaps rightly so, but this could also create new outbreaks of infection.

- And what about the technological solutions envisaged?

In this respect, coordination came first from research in Europe, not from the intergovernmental organisations. The Fraunhofer Heinrich Hertz Institute in Berlin (in partnership with other players such as Inria in France) has thus developed the PEPP-PT "proximity tracing" protocol, based on the anonymised logging of encounters detected through a well-established short-range communication protocol: Bluetooth. Even if I am personally sceptical about the effectiveness of this particular type of technological solution (the proximity of two mobile phones does not in itself allow one to deduce contamination by Covid), it is interesting to see that civil society has the capacity to mobilise and initiate European coordination. These initiatives should, however, be part of an overall multidisciplinary policy for exiting lockdown.
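
To make the principle of such "proximity tracing" more concrete, here is a minimal, purely illustrative Python sketch of anonymised Bluetooth encounter logging, assuming rotating random identifiers and local-only storage. The class, constants and method names are hypothetical; this is a sketch of the general idea, not the actual PEPP-PT specification.

```python
import os
import time
import hashlib

# Illustrative parameters (hypothetical, not taken from the PEPP-PT specification)
ROTATION_SECONDS = 15 * 60   # rotate the broadcast identifier every 15 minutes
RETENTION_DAYS = 14          # keep the local encounter history for two weeks


class ProximityLogger:
    """Toy model of anonymised Bluetooth proximity logging."""

    def __init__(self):
        self.current_id = os.urandom(16)   # ephemeral random identifier to broadcast
        self.id_created_at = time.time()
        self.encounters = []               # list of (hash of heard identifier, timestamp)

    def broadcast_id(self):
        """Return the identifier this phone would advertise over Bluetooth."""
        if time.time() - self.id_created_at > ROTATION_SECONDS:
            # Rotate the identifier so the phone cannot be tracked over time.
            self.current_id = os.urandom(16)
            self.id_created_at = time.time()
        return self.current_id

    def record_encounter(self, heard_id):
        """Store only a hash of an identifier heard from a nearby phone."""
        self.encounters.append((hashlib.sha256(heard_id).hexdigest(), time.time()))

    def purge_old_encounters(self):
        """Drop encounters older than the retention window."""
        cutoff = time.time() - RETENTION_DAYS * 24 * 3600
        self.encounters = [(h, t) for h, t in self.encounters if t >= cutoff]

    def check_exposure(self, reported_ids):
        """Compare the local history against identifiers published for confirmed cases."""
        reported_hashes = {hashlib.sha256(i).hexdigest() for i in reported_ids}
        return any(h in reported_hashes for h, _ in self.encounters)


# Two phones come within Bluetooth range of each other.
alice, bob = ProximityLogger(), ProximityLogger()
bob_id = bob.broadcast_id()
alice.record_encounter(bob_id)

# Later, Bob is reported as a confirmed case and his identifiers are published.
print(alice.check_exposure([bob_id]))   # True: Alice was exposed
```

The design choice illustrated here, keeping only hashes of short-lived identifiers on the device and comparing them locally against identifiers published for confirmed cases, is what allows this family of protocols to claim a degree of anonymity.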

- What does your work at the Council of Europe consist of, and what is the role of the Council of Europe? Does the organisation wish to play a role in disruptive technologies?

I first worked on assessing the impact of information technologies on the functioning of the courts. It is up to us to understand their unintended effects and to ensure that they guarantee a fair trial: using videoconferencing, for example, is not insignificant, and the framing of the image or the difficulties sometimes encountered in hearing one another can change the whole economy of a trial. Since 2018, I have been a policy advisor on digital transformation, especially concerning AI (or, more precisely, statistical learning). These particular algorithmic systems have certainly opened doors, with statistical learning delivering excellent results in the recognition of sounds or images, but their use for functions of general interest (such as justice or health) is not trivial. Rather than robotisation, it is solutionism that is to be feared: the statistical analysis of a mass of case law does not necessarily produce meaning, or may produce meaning with biases that designers are not necessarily able to minimise. My essential role is to popularise technological concepts that may put off my legal colleagues, and to problematise them so that they can ask themselves the right questions in their own sectors. I also organise conferences on a regular basis. This is in the logical continuity of the Council of Europe's positioning with regard to technologies: supporting their development, in compliance with its standards of human rights, democracy and the rule of law. Let us not forget that the first international convention on the protection of personal data (Convention 108) was drafted under its auspices, as were the prohibition of human cloning and legal cooperation in the fight against cybercrime. With regard to AI, an ad hoc committee on artificial intelligence (CAHAI) was commissioned in September 2019 to carry out a feasibility study for a legal instrument regulating AI applications, to ensure that they respect what is at the heart of the Council's mandate: fundamental rights.

- Why is it necessary to build an "international and European legal framework for artificial intelligence", as you argue in your book?

First of all, I believe that AI presents us with a fundamental technical problem. This technology is often overestimated, particularly by certain economic players with purely mercantile aims. The exploits of AI are staged (AlphaGo Zero, for example) in order to construct a narrative (AI surpasses man in areas that were thought to be reserved for him), whereas the precise use case lends itself rather well to such a feat: a closed environment with fixed rules. Nothing to do with an open road, or with medical or judicial decision-making. And yet entrepreneurs have to market products that are often not very mature, even though their limits are well known: confusion between correlation and causality, the "black box" effect of machine learning, adversarial examples that weaken image recognition, the undecidability of systems.

Secondly, I believe that AI poses a democratic problem and a problem of governmentality. This technology has, in my opinion, been seized upon by a strange project mixing Silicon Valley's libertarian ideology with neo-liberal policies, to serve as a tool for the transformation of society. In this way, without understanding why, we are always trying to adapt, to be more efficient, to reduce costs in all things, with a vague feeling of widespread delay. The rule of law itself comes to be challenged by what I call a State of the Algorithms. I refer here directly to Antoinette Rouvroy's work on algorithmic governmentality: the primacy of the rule of law is, in the minds of some, fully replaceable by a simple calculation of interests. The idea of "predictive justice" fits exactly into this ideology. Ethics itself has too often been convened not to serve as a compass for developments, but rather to "whitewash" behaviours that were not necessarily ethical and to slow down the production of binding standards backed by sanctions.

A legal response to these problems should therefore be found, by imposing an international framework to guarantee the fundamental principles necessary for our times. This should go beyond AI and apply to algorithmic systems in general. In practice, we could adopt a risk-based approach (imposing constraints on the most high-risk applications), return this technology to its role as a tool so that it does not impose its own truth, encourage innovation but defer immature applications by transferring the precautionary principle to algorithms (when we do not know the scope of the consequences on a large scale, we first test on a small scale), implement a certification of algorithms and, why not, an organisation of the professions, in particular data scientists, with an oath and an order ensuring continuous training and discipline.
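
To illustrate just one of the limits mentioned above, the confusion between correlation and causality, here is a small self-contained Python sketch using synthetic data, purely for illustration: two series that have no causal link whatsoever can still display a strong correlation, which a naively "data-driven" system could mistake for meaning.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Two independent random walks: neither series influences the other.
series_a = np.cumsum(rng.normal(size=1000))
series_b = np.cumsum(rng.normal(size=1000))

# Pearson correlation between the two unrelated series.
correlation = np.corrcoef(series_a, series_b)[0, 1]
print(f"Correlation between two causally unrelated series: {correlation:.2f}")

# Trended series like these frequently show correlations far from zero
# (a classic "spurious correlation"): a high coefficient alone says
# nothing about causality, which is why a purely statistical reading of
# case law or health data can "produce meaning" that is not really there.
```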

I am afraid that with some AI applications we unfortunately find ourselves with a means that has somehow appropriated the ends. The problem, moreover, does not come from the technology itself but from those who have taken it over to make a profit, sometimes dressing it up as a desire to make the world a better place. This double discourse, these double aims, must be refuted if we really want to create trust.

Interview by Lauriane Gorce

Yannick Meneceur is an associate researcher at the IHEJ, a member of the scientific board of the PRESAJE Institute, and a specialist in digital transformation and artificial intelligence at the Council of Europe.
