In conversation with... Brian Green

Being human in the age of AI: the anthropological issue

What does it mean to be human in the age of artificial intelligence? How is this powerful and versatile technology transforming the way we live, relate to others, view ourselves and express our humanity? On November 30, Jordan Joseph Wales, Kuczmarski Professor of Theology at Hillsdale College in Michigan, spoke with Brian Patrick Green, Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University (California).

J. J. Wales: How might AI affect our relationship with others and with work?

B. P. Green: When we produce technology, it changes us in return and makes us much more powerful. But the further we go, the more important it is to define good ways of applying these technologies. In the future, many tasks will be automated, allowing us to spend more time with family and friends. However, that poses a problem, because the structure of our economy does not allow for the circulation of wealth if there is no work. That’s why we talk about universal basic income. But it’s not just about money: for many people, occupation is also an important part of identity. That being said, AI gives us the opportunity to become better people by allowing us the time to do more human “work”: caring for children, the elderly, the disabled, and so on.

Certain essays from the 1950s saw the cities of the future as a paradise where people could devote themselves to art, philosophy, family. But today, people are more overwhelmed than happy...

Indeed! Things have not evolved at all in that direction, but rather towards entertainment. We should remember that life is not just about entertainment. More interaction with technology can lead to more interaction with other people. But there is something fundamentally different between interacting on social networks and interacting face to face. Face to face, mirror neurons allow us to empathize, which brings us back to the issue of human nature. We’re creatures of love. We need to express love and to experience that love ourselves. And we need to ensure that technology facilitates the ethical development of this human nature. Yet many technology business models instead rely on vice, playing on our dopamine and other neurotransmitters.

When we talk about automation, we often think of the work environment. But from an anthropological perspective, what does it mean to be human in the age of AI?

For most of human history, we worked to survive. And this is still the case for the vast majority of people in the world. If you take away the basic reasons for working, what’s left? Striving to become a good human being. We have the capacity to apply mental and physical skills to change the world. When we build a house, we change our environment. And the techniques we use when we create technological products themselves fundamentally change the world, for better or for worse. Our energy system changed the composition of our atmosphere. Was it a mistake to expend effort on this? In the past, 99% of the population produced food. Today, that figure is less than 2% in the US, and the decline will continue. Automation will create unemployment, but also very high demand for certain technological capabilities. So the technologically literate will have control over food, energy, everything… while everyone else will be excluded from the economy, which will create inequality.

It is often said that thanks to social networks we have the possibility to form more relationships not constrained by distance. Is there a risk of losing something?

Humans evolved in an environment where they were used to interacting with a small number of individuals, estimated at around 150 people, according to several studies. But other research in the United States in recent decades has raised the issue of the number of strong relationships. Sixty years ago, people said they had five strong friendships, but now that number has dropped to two, one, or even zero. While the number of strong friendships has decreased, access to many weak relationships has increased. We can exchange more easily with people on different continents. This openness is a gift. But it cannot come at the expense of those who are close to us, as that would damage our human nature.

AI can also be considered as an interlocutor. How would you characterize these relationships with technology today?

AI is constantly feeding us content by recommending media it thinks is relevant to us, which makes us addicted to the technology. In the future, we could imagine a virtual coach or school that would push us towards a better actualization of our abilities. But if that were geared towards our worst faults, it would create laziness and a loss of bearings. When we interact with intelligent systems, we need them to push us towards the best, not the worst.

If AI were used to fulfil all our needs, could we lose some of our humanity?

Life is not just about comfort. We are here to make the world a better place. The situations of deep despair that we experience in the United States — suicides and drug addictions, for example — are rooted in very negative life experience. Something has gone wrong in our societies. If we pursue pleasure directly, it will always escape us, said St. Augustine. Happiness must be achieved by pursuing something greater, through love and care for others.

Is it better to have a real child that complains of a stomach ache than an artificial child that never has a stomach ache?

The deliberate contrast in this question speaks to the issue of the true nature of AI. There is a tendency to anthropomorphize AI because we want it to be like us. However, there are problems with thinking that AI has a purpose in itself. All human beings lead their own existence, based on their biology, their psychology, their mind... So having a child is a wonderful thing, because a child will grow up and become a full human being. AI can only simulate that. The artificial intelligence community talks about consciousness or artificial evolution, which could happen one day. But I think it’s a huge step to take. Because there are good reasons to think that technology is a mirror of ourselves, rather than an entity with a fundamental existence.

If AI technology is geared towards maximizing profit, how can it include human considerations?

Do we want to live in a world where compassion is ignored? I think the answer is clearly no. Many people have realized that we are moving away from utopia and towards dystopia. This is a first step. For the World Economic Forum, I wrote a few articles in which I talked about the actions of Microsoft and IBM in terms of ethics. They are producing more tools to ensure that AI is used in a positive way. I think every company should have ethics and social committees, so that they don’t just address these issues in reaction to an ethical disaster. We need to think more deeply about how technology transforms us, right from the outset of developing it.

Beyond efficiency, can we use AI to increase resilience and the human dimension?

If we have a centralized system, damage can easily spread everywhere. This has been the case over the last year with the global supply chain problems. But we can also talk about resilience in terms of human relations. You don’t have to lay people off when you start transitioning to automation. You can keep employees, so they can check that the automation is working properly and make sure that the technology is being used for the common good.

AI opens us up to the world and at the same time restricts the way we see it. How can we keep a broad view while using technology?

There is a strong tendency to use AI to make us passive observers of society. But it can also empower us. I’m thinking in particular of applications that help us manage our time, choose what we eat, or plan our sports activities. I think the next step will be to help us make moral decisions. With the risk, of course, that we’ll become addicted to those technologies... Education aims to create autonomous individuals, capable of thinking for themselves and acting in different situations. AI has many problems with those aspects of diversity and context. It’s very complicated to ask very broad ethical questions in a very concrete situation. Maybe AI can help in the field of education. But as far as decision-makers are concerned, it shouldn’t always be listened to. There are many discussions about the use of AI by experts. If you’re a surgeon, and AI tells you to cut in one place and not another, there’s a real incentive to follow its advice. But when faced with moral situations, such as those a judge confronts, we’re designed to make decisions ourselves. It’s in our DNA.
