GENERATIVE AI AND ETHICS: From principles to practice

Over the past year, through tools such as ChatGPT, Bard, Llama 2 and Midjourney, generative AI has made its way into the economy and society. The speed with which these tools are being adopted reflects our contemporaries' appetite for a technology that is becoming democratized, and that holds great potential for value creation, but also for negative effects.

With the arrival of Google BERT in 2018, DALL-E and LaMDA in 2021, Midjourney, and above all ChatGPT in 2022, generative artificial intelligence systems (GeneAIS), often grouped under the reductive term "generative AI", have in just a few years established themselves as an unavoidable subject for artificial intelligence experts, businesses and the public alike. Businesses have taken note, and are now committed to adopting these systems, which are widely regarded as potential productivity levers.

With 1,310% growth in the number of companies using LLMs via SaaS APIs (Large Language Models accessed as Software-as-a-Service Application Programming Interfaces) between the end of November 2022 and the beginning of May 2023 [1], and a growing desire "to adopt LLMs and generative AI" [2], GeneAIS are taking center stage.

Combined with the broader dynamics of AI use, this increasing integration of GeneAIS, particularly into core business functions, is bound both to amplify existing concerns and to generate new ones.

However, reflection on the impact of these systems is stalling, and identification of the real issues remains unsatisfactory, raising the risk of ineffective governance processes.

There are several reasons for this impasse.

Ethical noise

As with artificial intelligence systems (AIS) in general, the advent of generative AI systems is accompanied by normative (legal and ethical) questions concerning the potential impact of this new AI family.

While ethics is widely considered an essential element in the framing of these technologies, and despite the mass of information and reflection available on ethical issues applied to AI, we are forced to note that, owing to:

  1. a vast amount of information and analysis, some of it contradictory,
  2. its very uneven quality,
  3. a tendency to recycle existing thinking without adding new insights,
  4. and the presence of many non-expert players relaying them,

discussions surrounding generative AI often amount to ethical noise rather than the rigorous, constructive debate that is needed.

And yet, if we are to arrive at effective, sustainable solutions, it is essential to extract ethical issues from this noise, so that ethics can regain its rigor, make a useful contribution to informing decision-making, and propose a mediation between humans and technology, ensuring that the latter is developed, deployed and used within a framework of moral acceptability.

The polarization of the debate

This ethical noise rests in particular on an oversimplification of the debate surrounding ethics applied to artificial intelligence (EA2AI), manifested in a polarization of discussions that pits technophiles, promoters of technological utopias sometimes carrying excessive hopes, against technophobes who see AI as the source of existential threats.

The existential-threat discourse is highly visible in the debate, all the more so as it is conveyed by prominent AI figures.

This artificial polarization hinders debate more than it contributes to it. By pitting technophobic and technophile discourses against each other, the various players arbitrarily and artificially set the terms of the debate and the limits within which it must be conducted.

Yet, to encourage consideration of intermediate and/or divergent perspectives, to reinvigorate the debate, to enrich and deepen analysis, and to address concerns effectively, the debate needs to be depolarized.

The intention-action gap

The polarization of the EA2AI discourse is facilitated by a fact highlighted by the World Economic Forum [3]: there is a gap between organizations' good intentions in adopting ethical principles and their actual implementation of those principles. This intention-action gap can be explained by a lack of reflexive depth in the application of ethics to AI.

IBM, in partnership with Oxford Economics, conducted a study comparing the level of adoption and operationalization of the requirements of the Independent High-Level Expert Group on AI (HLEG AI) [4], which helped to formalize this gap [5]. According to the study, between 50% and 59% of the companies surveyed had adopted the seven European requirements, but only 13% to 26% had operationalized them.

The fundamental problem here is less a lack of will on the part of organizations than the difficulty of transforming highly abstract requirements into concrete solutions.

Yet this inconsistency in the level of abstraction runs counter to the aim of the HLEG AI recommendations, which is to create trust: the gap between intention and action breeds mistrust of companies that appear to claim the principles without actually applying them.

In any case, operationalizing ethical principles or requirements inevitably requires adjusting them to the right level of abstraction. This adjustment helps to reduce the intention-action gap and thus to establish trust in AI systems.

What role for ethics?

Ethical questioning is the first victim of ethical noise, polarized debate and inconsistent levels of abstraction.

By presenting AI as an existential threat, certain players are influencing the perceptions of consumers, who are increasingly worried about the dangers posed by these technologies.

A more tempered discourse, supported by ethics used both as a mediator between humans and their environment and as a method of analysis to moderate excesses, would open the way to a balance that facilitates constructive debate. A return to practical rationality in support of decision-making is needed today to free ourselves from the ethical noise generated and reinforced by this polarization of the debate around overly general concepts.

In its mediating dimension, ethics enables us to move away from the polarization of the EA2AI debate and refine our understanding of the ethical issues involved in AI.

Ethical reflection also helps to bring AIs back to what they are: objects and processes. The existential-threat discourse often tends to anthropomorphize AIs to the point where they are perceived as agents endowed with reason and with an intent to harm humanity.

This “rethingification” needs to be accompanied by a refocusing on the human, which both nuances the technophobe/technophile polarization and repositions the question of individual and collective responsibility. As Eric Salobir points out, "responsibility is therefore in our hands" [6]: it is up to us to choose, individually and collectively, whether we want to set AIs up as demiurges whose mere vassals we would be, or whether we wish to keep them as tools in the service of humans.

The essential contribution of ethics lies in the fact that it is a method of mediating our relationships with our environments, based on a process of deliberation. Ethics helps us to make the right decisions, so that people can live in harmony with their environment.

It is through this exercise in deliberation that ethics plays its full role as a compass, illuminating the decision by bringing it back to practical considerations as close as possible to the needs of organizations. This mediation also encourages reflection on our relationship with technology, and consequently on our responsibilities and their distribution.


10/16/2023
