Philosophy, Neuroscience and AI: The New Frontiers

Susanna Schellenberg, Professor of Philosophy and Cognitive Science at Rutgers University, analyses how our perception of the world around us is based on discriminatory capacities, a topic that also falls within the purview of neuroscience. Hence the need to develop a philosophy of perception. An interview at the frontier of science and phenomenology.

- What is the basis of perception, and how, on this basis, do we gain knowledge of our environment?

I would argue that perception is based on the use of discriminatory capacities, which are abilities to discriminate objects, properties and events in our environment. We have the perceptual ability to distinguish red spots from blue spots, high notes from low notes, loud noises from soft noises, the smell of cheese from the smell of coffee, and millions of other things. To perceive a red coffee cup on a wooden desk, you have to distinguish the red colour of the cup from the brown colour of the desk, the shape of the cup from the shapes that surround it, and make many other such discriminations. In employing such abilities, we see the coffee cup.

In neuroscience, it is standard to analyse perception fundamentally as a matter of discrimination. But in philosophy, this key aspect of perception has hardly been taken into account. I am trying to change that.

Seeing a red cup, for example, is a matter of employing discriminatory abilities: the ability to distinguish red from other colours, and the ability to distinguish the shape of the cup from other shapes.

However, experience can lead us astray: our perception is riddled with biases of all kinds, and we can suffer from hallucinations and illusions. Given this unreliability of perception, it is important to ask how perception could give us knowledge at all. On the other hand, it is clear that if anything can give us knowledge, it is perception.

Perceptual abilities enable us to distinguish and isolate objects, properties and events of a specific type. The capacity to discriminate and isolate red fulfils its function if it is used to discriminate and isolate something red in the environment.

I would argue that perception is based on the use of such capacities. For example, if I see a red cup using my perceptual ability to discriminate red, then the function of this ability is fulfilled. In that case, I gain knowledge of the cup. In addition, my perceptual state gives me justification for believing that there is a red cup in front of me.

What about cases of hallucination and illusion? From the point of view of the subject having the experience, they may be indistinguishable from cases of perception. Take the case of a person hallucinating a red cup. It seems to her that there is a red cup in front of her, even though there is no red cup at that location. To make the case extreme, suppose that for her this hallucination is indistinguishable from a perception. I contend that in such a case, she employs the same perceptual capacities she would use if she were perceiving. But since there is no red cup, she cannot discriminate a red cup. Thus the capacities she employs do not fulfil their function. Because of this failure, the hallucination does not give her knowledge of her environment. And she may not even know that she does not know.

In cases of perception, hallucination and illusion, the same mechanism is activated in our minds: the use of discriminatory capacities. This explains why, from our point of view, these experiences may seem exactly the same. However, there are differences. In perception, the capacities employed fulfil their function: we discriminate what seems to us to be present. In cases of hallucination and illusion, they do not: we fail to discriminate what seems to be present. Perceptual capacities are therefore the basis of a unified theory of perception.

- In this case, can we also consider artificial intelligence combined with sensors as a perceptual system?

I think that artificial intelligence systems combined with sensors are a kind of perceptual system. Like our perceptual systems, they take in information from their environment through mechanisms of discrimination. While there are significant differences in the physical implementation of these mechanisms between AI systems and humans, the underlying mechanism of discrimination is the same.

We all agree that a human being with a hearing implant is still a human being. Indeed, as far as their mechanism of discrimination is concerned, hearing implants are similar to our auditory system.

If this is true, then at least as far as perception is concerned, we are not so special. That shouldn't bother us too much. After all, science tells us that we are no different in nature from other animals with regard to perception. Perception is a basic mechanism that we share with other animals and, as I argue, with AI systems. There are many respects in which we are categorically different from AI systems, for example our capacity for creativity and our ability to experience emotions, our own and those of others.

To highlight the differences and similarities between AI systems and humans, let us now imagine a human with ever more implants. Is there a point at which it becomes more robot than human? Consider an extreme case in which every cell in the body has been replaced by an implant. We could then suppose that this being is now physically a replica of a robot created in a computer science laboratory. These are profound problems. But all I want to say here is that, in terms of perception, we are not so different from other animals and AI systems.

- Would an artificial consciousness be possible?

If you accept that perceptual consciousness is constituted by the use of perceptual capacities and that, as discussed in the previous question, AI systems with sensors can be considered perceptual systems, then yes, AI could have perceptual consciousness.

I've always been a bit surprised by the intense interest in AI consciousness. I am more interested in how our mind accomplishes the amazing things it does, regardless of whether the process is conscious or unconscious. Our perceptual systems process an astonishing amount of information in a fraction of a second. About 50% of the human brain is dedicated to visual processing. This allocation of our resources is an indication of the complexity of the task at hand. Only a tiny amount of the information processed rises to the conscious level. Thus, a large part of the information is used to guide our action and is available to our cognitive system without ever reaching the conscious level.

One thing is certain: we are far from developing conscious AI systems. It has recently been argued that machine learning is stagnating and that, despite enormous efforts, we are struggling to develop AI systems with the language skills of pre-school children. We should not worry so much about the singularity, AI becoming conscious, or killer robots. What should worry us are biased algorithms. They are here now. They have huge implications for our lives.

- Speaking of biased algorithms: with ever-better sensors and computational algorithms, how is it that there are still biases?

First of all, it is important to note that all complex recognition systems are riddled with bias. The human mind and AI both process huge amounts of data, and whenever there is a mismatch between the quantity of input and the processing power available, there is a need to simplify, and so some of the information is lost. This process generates bias.

It is important to recognise that some biases are not a problem; they make these systems more effective. For example, the human perceptual system has a bias to assume that light comes from above and that moving objects are solid: we sometimes duck when a moving shadow approaches, even if it is only the wind.

However, certain biases are deeply problematic and can be very detrimental to marginalised and disenfranchised groups. Since all complex recognition systems have biases, and since at least some of these biases are not a problem, a great challenge is to differentiate between problematic and non-problematic biases. One possible approach is to look at the outcome. If a bias is detrimental to a group of people, then the algorithm must be corrected to eliminate it. If it is not harmful, it can be left alone.
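To make the outcome-oriented approach concrete, here is a minimal sketch of an audit: measure a model's positive-decision rate for each group and flag large disparities. The 80% ("four-fifths") ratio used as a threshold is one common heuristic from fairness auditing, not a rule stated in this interview, and the data are invented.

```python
def selection_rates(decisions):
    """Positive-decision rate per group; decisions is a list of (group, approved)."""
    rates = {}
    for g in {g for g, _ in decisions}:
        outcomes = [ok for grp, ok in decisions if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def flag_disparity(decisions, threshold=0.8):
    """Flag if the worst-off group's rate falls below `threshold` times the best-off group's."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()) < threshold

# Toy audit data: group A approved 70% of the time, group B only 40%.
decisions = ([("A", True)] * 70 + [("A", False)] * 30
             + [("B", True)] * 40 + [("B", False)] * 60)
print(selection_rates(decisions), flag_disparity(decisions))
```

An audit like this only detects a harmful outcome after the fact, which is exactly the "too little, too late" worry raised below.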

This is the current approach to dealing with algorithmic biases. Although it has its place, I think this approach does too little, too late. I am studying ways to eliminate biases further upstream, that is, ways of developing algorithms so that harmful biases do not appear in the first place.

Another element is that, contrary to what is generally assumed, most algorithmic biases are not "top-down" biases: they do not derive from the beliefs and views of the programmer and the way those beliefs and views affect the choices he or she made when designing the algorithm. Such top-down biases certainly exist, and they are a big problem. However, "bottom-up" biases, not only in AI but also in the human mind, pose an equally important problem. These biases come from the data and from its processing at the lowest level: the patterns the system detects in the data, the classifications and correlations it establishes.
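A minimal sketch of how such a bottom-up bias can arise: the toy model below never sees the group attribute, only a correlated proxy (a zip code), yet the pattern it detects in the data reproduces the group disparity. All names and numbers here are invented for illustration; nothing beyond the interview's description is assumed.

```python
import random

random.seed(0)

# Synthetic records: the "model" will only ever see zip_code, never group.
rows = []
for _ in range(10_000):
    group = random.choice("AB")
    # Residential segregation: group strongly predicts zip code (90/10 split).
    zip_code = "Z1" if (random.random() < 0.9) == (group == "A") else "Z2"
    # Historically skewed outcomes: group A hired more often than group B.
    hired = random.random() < (0.7 if group == "A" else 0.4)
    rows.append((group, zip_code, hired))

# "Training": estimate P(hired | zip_code) from the data.
p_hire = {
    z: sum(h for _, zc, h in rows if zc == z) / sum(1 for _, zc, _ in rows if zc == z)
    for z in ("Z1", "Z2")
}

# Predict "hire" whenever P(hired | zip_code) > 0.5, then measure the
# positive-prediction rate per (hidden) group.
pred_rate = {}
for g in "AB":
    sub = [r for r in rows if r[0] == g]
    pred_rate[g] = sum(p_hire[zc] > 0.5 for _, zc, _ in sub) / len(sub)

print(p_hire, pred_rate)
```

The disparity in `pred_rate` between groups A and B appears even though the group attribute was never an input: the zip-code correlation alone carries it, which is why bottom-up biases are hard to spot by inspecting an algorithm's explicit features.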

- How can the philosophy of perception and the so-called "hard" sciences engage in a constructive dialogue?

Philosophy has always been closely linked to other sciences.

Historically, three major questions have motivated research on perception. First, how does perception justify beliefs and allow us to gain knowledge of our environment? This question has been addressed almost exclusively in philosophy. Second, how does perception give rise to conscious mental states? This question has also been approached mainly from a philosophical point of view. Indeed, even neuroscientists say that, with some exceptions, we are a long way from understanding what it means to be conscious in terms of brain mechanisms. Third, how does a perceptual system manage to convert variable information into mental representations of invariant features of our environment? This last question has been addressed mainly in neuroscience, cognitive psychology and psychophysics.

A central hypothesis of my research is that the answers to these three questions are not independent of one another, and that to make progress in understanding the nature of perception, we must study it in a more integrated way. Hence the title of my latest book, "The Unity of Perception", in which I develop a unified set of tools to answer the three questions within a single theory that is conceptually disciplined and empirically constrained.

On the other hand, neuroscience is a very young field that can benefit from what philosophers do best: analysing and articulating the new concepts that neuroscience generates.

- Will this be the subject of your next book?

I am working to lay out all the different ways in which AI systems, as well as the human perceptual and cognitive system, can be biased. Here are some essential distinctions:

A system can be biased because the incoming data are skewed. A second source of bias lies in the way the characteristics of the incoming data are linked and classified at the processing stage. A third lies in the way the output is interpreted. Another key distinction is the one I mentioned earlier between top-down and bottom-up biases: bottom-up biases are under-explored and should be better understood. A further essential distinction is that between biases due to the training sample and biases related to the way the characteristics of the data are linked together.

Training-sample bias is due to biases in the data with which an algorithm is trained. Google's voice recognition software is an example. Initially, it worked much better on male voices than on female voices. It turned out that this was because it had mainly been trained on male voices. It is therefore not surprising that it worked much better in the pitch range of a typical male voice. This type of bias is easy to correct: it is enough to expose the algorithm to a large number of female voices. It is also easy to avoid: simply choose unbiased training samples. But of course, there will always be difficult choices to make when selecting training samples. Google's voice recognition works very poorly for Scottish accents. Since few English speakers have Scottish accents, and since there are many different types of Scottish accent, difficult choices have to be made about how much effort should go into making voice recognition systems work for Scottish speakers.
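The training-sample problem, and why rebalancing fixes it, can be shown with a toy sketch. It assumes a one-dimensional "pitch" feature and a simple midpoint-threshold classifier; the numbers and the group labels L (low-pitched) and H (high-pitched) are invented, and this is of course not how a real speech recogniser works.

```python
import random

random.seed(1)

# Mean feature value per (group, word-class); group shifts the whole scale.
MEANS = {("L", 0): 100, ("L", 1): 180, ("H", 0): 140, ("H", 1): 220}

def draw(group, label, n):
    """Sample n labelled feature values for one group."""
    return [(random.gauss(MEANS[(group, label)], 10), label) for _ in range(n)]

def train(samples):
    """Fit a threshold halfway between the two class means."""
    c0 = [x for x, y in samples if y == 0]
    c1 = [x for x, y in samples if y == 1]
    return (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2

def accuracy(threshold, samples):
    return sum((x > threshold) == y for x, y in samples) / len(samples)

test_sets = {g: draw(g, 0, 1000) + draw(g, 1, 1000) for g in "LH"}

# Skewed training set: 98% group L, mirroring "mainly trained on male voices".
skewed = draw("L", 0, 980) + draw("L", 1, 980) + draw("H", 0, 20) + draw("H", 1, 20)
acc_skewed = {g: accuracy(train(skewed), test_sets[g]) for g in "LH"}

# Rebalanced training set: 50/50 between the groups.
balanced = draw("L", 0, 500) + draw("L", 1, 500) + draw("H", 0, 500) + draw("H", 1, 500)
acc_balanced = {g: accuracy(train(balanced), test_sets[g]) for g in "LH"}

print(acc_skewed, acc_balanced)
```

With the skewed sample, the learned threshold sits in the wrong place for group H, so accuracy drops sharply for that group; after rebalancing, both groups are classified well, which is the sense in which this type of bias is "easy to correct".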

However this particular problem is solved, the stakes are obviously not at the same level as the negative impacts of bias in the algorithms used for criminal sentencing, parole decisions, job applications, loan applications, health care and advertising-generation systems. The biases in these applications are mainly biases related to the way the characteristics of the data are linked together. They are much more difficult to correct.

Here is an example of this type of bias: if you google an African-American-sounding name, you are more likely to be shown an advertisement for a criminal background check than if you google a name generally given to European babies. The reason is that AdSense, Google's advertising algorithm, detected a pattern whereby people were more likely to search for a criminal background check after googling a name if that name was typical of the African-American community. It then generated advertisements accordingly. This is a very detrimental bias: not only does it perpetuate the existing biases in our society, it amplifies them.

Given these algorithmic biases, there are good reasons to be sceptical about the supposedly greater objectivity of computer-generated decisions compared with human decisions! I think there is still a lot of work to be done on all these questions.

Interview by Lauriane Gorce
