Artificial intelligence is disrupting risk assessment. This innovation has already transformed sectors built on risk assessment, such as private insurance. But what about the public sector?
China has implemented a social credit system, and the United States has long relied on credit scoring. What is the way forward for the European Union (EU)?
To shed light on these questions, Jean-Marc Vittori, lead editorial writer at Les Echos and member of the Conseil national du numérique (CNNum), and Bruno Deffains, professor of economics at Université Panthéon-Assas and member of the Commission nationale consultative des droits de l'homme (CNCDH), share their thoughts with us.
This issue can be considered through the philosophical notion of the social contract. Jean-Jacques Rousseau's definition of the social contract, together with John Rawls's 1971 theory of justice, lays the foundations of a social and liberal democracy built on equal liberty, the difference principle and fair equality of opportunity. These principles form the social contract within a specific framework: the veil of ignorance. In other words, individuals must disregard their particular interests in order to establish the rules of social organization. One way or another, individuals are bound to impartiality.
With the emergence of new IT-based tools, tension arises between the veil of ignorance and the mass processing of data, which makes individuals far more transparent. Big data is likely to create a risk-selection problem that is incompatible with the inclusive logic of the social contract, since that logic rests on the very principle of impartiality on which social insurance is based. This issue was raised in a report entitled "Artificial Intelligence, Insurance & Solidarity", published in January 2020 by the Human Technology Foundation.
Should health risks be taken into account? If so, which ones? The answer is not obvious, especially given how much emphasis is placed on behaviour. Heavy drinking and physical inactivity are personal choices that impose significant costs on society. Why? How? Can one be forced to exercise? How would that work? Today, legal answers to these questions exist. For example, in 2011, in the name of the fight against discrimination, the Court of Justice of the European Union prohibited insurers from offering cheaper premiums to female drivers, even though they are involved in fewer accidents.
In some ways, however, digital technology takes us back to the state of nature by reintroducing the principle of "might makes right". For example, the big digital companies are today worth five to ten times more on the stock market than other companies.
Addressing these questions requires economic and political answers, not just legal ones. Digital technology must become a political issue.
Different models exist: on one hand, data is readily appropriated by very large companies in the United States; on the other, data is monopolized by large companies and the State in China. Indeed, just last year China adopted legislation limiting the use of data by private actors. No such limit applies to public actors. Data therefore fuels social control experiments, which are very powerful in a totalitarian political system.
In this context, the EU affirmed the right of individuals to control their data through the General Data Protection Regulation (GDPR) and its latest texts, the Digital Services Act (DSA) and the Digital Markets Act (DMA).
However, the question of data serving the public interest is not sufficiently considered. This brings us back to Jean-Jacques Rousseau's social contract, under which each person must surrender certain individual rights to obtain the equality of rights on which society is built. It is this dimension that the European Union must reinforce. (See the Human Technology Foundation's report on Data Altruism, published in February 2022.)
In the health sector, one aspect of the social contract is the use of data to serve the public interest. Why should I keep my data private if it can contribute to the progress of public health? In a sector where many services are free, why not share my anonymized data in the public interest? This is another conversation that must be had collectively.
"Data requires the reorganization of how we engage with one another".
Big data is both a decisive asset for anticipating socioeconomic changes and a source of heightened risk of subtle manipulation to guide behaviour (i.e., nudging). There is a fine line between serving the public interest and the dangers of risk selection. Only appropriate institutional mechanisms, backed by economic mechanisms such as incentives, can address this issue. It is also a matter of trial and error, given that we are still in the early stages.
Trust is central to the mass use of data. It can be eroded or reinforced, depending on institutional arrangements. Digital technology evolves at an impressive pace, and AI is a significant step. Since nothing is decided by the tool itself, it is we who will decide what it becomes. Its limitations and possibilities are very important to debate and discuss. Its scope is massive, so we must explore our options together.
"It's a pressing matter, but the political agenda is overloaded".
Digital technology and data are transforming the way skills and knowledge are distributed. The political system can react defensively when faced with something that challenges the way it has functioned for centuries. Rousseau's social contract dates from a different era in terms of the role information plays. If we want a chance to enter into this social contract, each of us must be able to grasp the ideas and concepts on which it was founded, which requires considerable education and training. There is nothing better for our social institutions than the veil of ignorance, because it sets private interests aside and allows for a logic of impartiality. The difficulty lies in the fact that a logic of impartiality is not intrinsic to AI or big data.