Bruno Deffains and Jean-Marc Vittori on "Is this the end of the principle of solidarity?"

Today's social contract is based on the veil of ignorance, which finds expression in the motto: liberty, equality, fraternity. Is it time for these fundamentals to evolve?

Artificial intelligence is disrupting risk assessment. This innovation has already transformed sectors such as private insurance, which is built on assessing risk. But what about the public sector?

China has implemented social credit and the United States has long been using credit scoring. What is the way forward for the European Union (EU)? 

To clarify these questions, Jean-Marc Vittori, lead editorial writer at Les Echos and member of the Conseil national du numérique (CNNum), and Bruno Deffains, professor of economics at Université Panthéon-Assas and member of the Commission nationale consultative des droits de l'homme (CNCDH), share their thoughts with us.

Is this the end of the principle of solidarity? 

Bruno Deffains: 

This issue can be considered through the philosophical notion of the social contract. Jean-Jacques Rousseau's conception of the social contract, together with John Rawls's 1971 theory of justice, lays the foundation for a social and liberal democracy built on equal liberty, the difference principle and equality of opportunity. These principles are adopted within a specific framework: the veil of ignorance. In other words, individuals must set aside their particular interests in order to establish the rules of social organization. One way or another, individuals are bound to impartiality.

Beyond philosophical circles, this construct legitimizes collective rights, with particular attention to the most vulnerable. It is precisely on this basis that most of our social safety nets were developed in the 20th century. The veil of ignorance puts impartiality at the heart of the reasoning used to calculate individual and collective rights. Retirement, however, cannot be treated in the same way as health.

With the emergence of new IT-based tools, tension arises between the veil of ignorance and the mass processing of data, which allows for far greater transparency. Big data is likely to create a risk-selection problem that is incompatible with the social contract's inclusive logic, which rests on the same principle of impartiality that underpins social insurance. This issue was raised in a report entitled "Artificial Intelligence, Insurance & Solidarity" published by the Foundation in January 2020.

In this context, the real question is: does the Welfare State, as defined in the 20th century, still have a role to play? My answer is: yes, but it has to be adapted.  

Jean-Marc Vittori: 

In The Social Contract, Jean-Jacques Rousseau provides for the transition from the state of nature to civil society, all in the service of the public interest. In some ways, however, digital technology takes us back to the state of nature by reintroducing the principle that might makes right. Big digital companies today, for example, are worth five to ten times more on the stock market than other companies. A new social contract that takes this reality into account is needed, because some actors are now far more powerful than they were at the time of Rousseau's reflections.

The veil of ignorance is torn, and it is difficult to repair. Should health risks be taken into account? If so, which ones? The answer is not obvious, especially given how much emphasis is placed on behaviour. Heavy drinking and too little physical activity are personal choices that pose great risks to society. Why? How? Can one be forced to exercise? How would that work? Today, there are legal answers to some of these questions. In 2011, for example, in the name of the fight against discrimination, the Court of Justice of the European Union prohibited insurers from offering cheaper premiums to female drivers, even though they are involved in fewer accidents.

To address these questions, economic and political answers are needed, not just legal ones. Digital must be politicized. 

Which principle should replace solidarity? 

Bruno Deffains: 

While the Welfare State was built on a transcendental idea of social justice, today we must reason with full knowledge of the facts. This evolution calls society's organization into question. Should we move towards a more participative democracy? Should citizens reflect together on the conditions for implementing a social contract?

Given that we are witnessing the tearing of the veil of ignorance, the solidarity built on it no longer stands. That said, care for the poorest in society must still be provided. Are we reverting to a system built on charity? Before the institutions of the 20th century were established, addressing poverty rested on the principle of charity. That principle, however, has a major flaw: it depends on voluntary giving. We decide whom we want to help. Are we therefore risking a regression?

Jean-Marc Vittori: 

Solidarity was first expressed at the local level. As cities developed, bonds weakened and charitable organizations were founded. The hospital was also created to care for society's most vulnerable. Those links weakened further in the 20th century with the establishment of the Welfare State and of social protection to care for the poor. Solidarity thus acquired a national basis. Today, digital technology is cracking that national framework. The sense of belonging to the national community has weakened in favour of other forms of belonging, whether local or international. A sense of multiple belonging now exists, and in this context solidarity is much harder to express.

Should the application of AI be limited to certain areas? 

Jean-Marc Vittori: 

First, data must be distinguished from artificial intelligence (AI). AI combines data with computing power and forms of machine learning, creating the possibility for public decisions to be made by AI systems. This raises the question of bias, which has been debated in the context of sexist or racist recruitment software. AI learns from the data it is fed and from the way that data is processed.

Humans are also biased in their decisions. For example, stock market prices rise around the new moon. Why does this happen? Should AI be used in areas where it would be less biased than human intelligence? How could such a decision be made collectively?

Bruno Deffains: 

The potential of digital technology in the health and transportation sectors is undeniable. From the standpoint of the general interest, the value of data is beyond dispute.

However, the politicization of digital is indispensable; it must be taken up by the political sphere. Powerful companies have pulled ahead, and market authorities now struggle to catch up with them. The Digital Markets Act, for example, is supposed to tighten the regulation of their behaviour.

Given the Chinese and American models, what is the way forward for Europe? 

Jean-Marc Vittori: 

There are different models: on one hand, in the United States, data is readily appropriated by very large companies; on the other, in China, data is monopolized by large companies and the State. In fact, just last year China passed legislation limiting the use of data by private actors. No such limit applies to public actors. Data therefore fuels social-control experiments, which are very powerful in a totalitarian political system.

In this context, the EU affirmed individuals' ownership of their data with the General Data Protection Regulation (GDPR) and the latest texts (the Digital Services Act (DSA) and the Digital Markets Act (DMA)).

However, the question of data serving the public interest is not sufficiently considered. This takes us back to Jean-Jacques Rousseau's social contract, under which each person surrenders individual rights in order to obtain the equality of rights on which society is built. The European Union must reinforce this dimension. (See the Human Technology Foundation's report on Data Altruism, published in February 2022.)

In the health sector, one aspect of the social contract is the use of data to serve the public interest. Why should I keep my data private if it can contribute to progress in public health? In a sector in which many services are free, why not share my anonymized data to serve the public interest? This is another conversation that must be held collectively.

"Data requires the reorganization of how we engage with one another". 

Bruno Deffains: 

The European Union favours co-construction between the players and its member states; the Digital Markets Act reflects that. Nevertheless, risk is inherent in the mass collection and processing of data. It is both a decisive asset for territorial intelligence, including a territory's ability to anticipate socioeconomic change, and a source of risk of subtle manipulation to guide behaviour (nudging). There is a fine line between serving the public interest and the dangers of risk selection. Only appropriate institutional mechanisms, backed by economic incentives, can address this issue. It is also a question of trial and error, given that we are still in the early stages.

Does AI in public decision-making reinforce or erode confidence?

Bruno Deffains: 

Confidence is at the core of the mass use of data. It can be eroded or reinforced, depending on institutional arrangements. Digital technology is evolving at an impressive pace, and AI is a significant step. Nothing is decided by the tool itself: we will decide what it becomes. Its limitations and possibilities are very important to debate and discuss. Its scope is massive, so we must explore our options together.

In the current electoral climate, politics is dominated by short-term issues, given the succession of crises, which prevents it from addressing long-term ones. What room is left for issues as important as the viability of our social security system, which is rooted in a principle that is now being undermined?

Private insurance is an example of what might happen with risk selection. It raises the fundamental question of the alignment between individual and collective interests, about which public actors are somewhat naive. This alignment calls for a minimum of co-construction, and it requires regulation suited both to the constraints of the private sector and to the demands of the public interest.

"It's a pressing matter, but the political agenda is overloaded".

Jean-Marc Vittori: 

Representative democracy is based on the following relationship: power is delegated to people with knowledge by people without knowledge. In this way, parliamentary democracy is justified by a strong pyramid of skills. Digital and data are transforming the skills and knowledge allocation system. The political system can react defensively when faced with something that challenges the way it has functioned for centuries.

Rousseau's social contract is from a different era in terms of the role that information plays. If we want a chance to enter into this social contract, we must each be able to grasp the ideas and concepts it was founded on, which requires much education and training.

There is nothing better for our social institutions than the veil of ignorance, because it sets private interests aside and allows for a logic of impartiality. The difficulty lies in the fact that a logic of impartiality is not intrinsic to AI or big data.

How can we preserve the notion of impartiality when dealing with predictive logic? What can it be replaced by? And in terms of trusted third parties, how can we guarantee their reliability?
