Long Shot: Anthropologists in the Land of Algorithms

Many Artificial Intelligence (AI) processes today are "black boxes" that do not allow users, or even the designers themselves, to see how the algorithms arrived at the given results. Hence the interest in algorithmic ethnography and other holistic methods that try to see more clearly into these black boxes.

What if ethologists were to look at the behaviour of online algorithms as they do with animal populations? According to Iyad Rahwan, who now heads the Center for Humans and Machines at the Max Planck Institute for Human Development, one should be able to observe machines displaying behaviour potentially different from that assigned to them by their creators. The questions posed here have staggering implications, although Rahwan makes it clear that attributing "behaviour" to machines does not mean assuming free will...

The Swiss anthropologist Bogomil Kohlbrenner, for his part, argues for a social and cultural analysis of black boxes, which is what most algorithms are.

The more aware we become of the presence and effects of machine learning algorithms in our lives, the more emotion takes over the debates. The feeling of losing control intensifies with each new scandal. The many problems already identified lead us to question the nature of these processes and where responsibility for them lies.

Algorithmic technologies have an impact on our lives that sometimes seems harmless [1], as with recommendation, targeted advertising, or driving assistance. Yet the same algorithmic processes can be applied to other areas, such as autonomous cars [2] or the administration of justice [3]. There is then nothing harmless about a biased algorithm that predominantly targets certain profiles, suggesting greater dangerousness and leading to more severe prison sentences.

This last example highlights the importance of understanding how algorithms work, that is, of understanding how the given results were arrived at, and of assuming the responsibility that follows. Understanding helps to answer such questions as: why were you targeted? To what extent have our choices been directed? How are priorities allocated when an autonomous car perceives dangers? And so on. The difficulty of understanding the process, what can be called the auditability of algorithms, is not only due to the trade secrets of the companies or government entities that design them. It also stems from the complexity and permanent transformation of algorithms such as neural networks, whose complexity exceeds their designers' grasp once they are actively used. In this sense, studying AI amounts to studying liquid, indeterminate and constantly changing objects. Algorithms are objects that interact, and that are understood differently depending on the actors with whom they come into contact. In short, they are objects of an unstable and multiple nature [1], which depend on the gaze of, and the interaction with, the observer.

By defining algorithms in this way, looking beyond their quantitative component to reveal their fluid, dynamic nature, interwoven with our society, their similarities with any "cultural" object of study become clear. As a result, the anthropologist, considered an expert in the cultural field [1], is armed with the tools to understand AI systems and how they function in our society, and equipped with the insight needed to grasp the uses and the sometimes imperceptible implications of algorithms in daily life.

The first and not the least of the problems facing the anthropologist is the "black box" effect of complex AI algorithms, such as neural networks or heuristic algorithms. A "black box" is a system in which we know the input supplied and the result obtained, but the intermediate processing of the information remains too unintelligible to reconstruct the equation. The "black box" may be a deliberate choice of those developing the system, for instance to protect, with an additional layer of opacity, trade secrets and copyright. Finally, the "black box" effect is almost inherent to the algorithm itself, which exists in different forms according to the actors in contact with it and which is difficult to locate, integrated as it is into human practices and flows. In this sense, the algorithm cannot be reduced to a single technical object; it is not a single, easily located entity.
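To make this input/output framing concrete, here is a minimal sketch in Python (the scoring function, its weights and its field names are all invented for illustration) of what an external observer, an auditor or an ethnographer of machine behaviour, can actually do with a black box: vary the inputs one at a time and record how the output moves, without ever reading the internal logic.

import math

# Stand-in for an opaque scoring system. In a real audit the observer could
# not read this code; they could only call it. (Weights are hypothetical.)
def black_box_score(profile: dict) -> float:
    weights = {"age": -0.02, "prior_contacts": 0.4, "neighbourhood_b": 0.8}
    z = sum(weights.get(k, 0.0) * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-z))  # a probability-like "risk" score

# Behavioural probing: vary one attribute at a time and watch the output,
# the only lever an external observer has over a true black box.
base = {"age": 30, "prior_contacts": 1, "neighbourhood_b": 0}
for field in base:
    probe = dict(base, **{field: base[field] + 1})
    delta = black_box_score(probe) - black_box_score(base)
    print(f"+1 on {field!r} shifts the risk score by {delta:+.3f}")

Such behavioural probing is precisely the ethological stance evoked above: the system is studied through what it does, not through what its designers say it is.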

Machine learning, which consists in feeding an algorithm with data and leaving it to work out how to reach the result, crystallises this question of explicability and responsibility. The task is to understand how it works by identifying the biases that could have a detrimental effect on part of the population or on institutions and, in the long run, to neutralise them or even promote their positive effects. From a social point of view, we note a tendency to assume that a greater quantity of data de facto means better quality. Yet having large amounts of correlated data does not necessarily increase their relevance: more corrupted or incomplete data will not improve the result.
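A minimal sketch of this last point, with invented numbers: adding more observations drives out random noise, but a systematic bias in how the data was collected survives any volume.

import random

random.seed(0)
TRUE_VALUE = 10.0
BIAS = 2.0  # a systematic error built into the collection process

def biased_estimate(n: int) -> float:
    """Average of n measurements that are noisy AND share the same bias."""
    return sum(TRUE_VALUE + BIAS + random.gauss(0, 3) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(f"n = {n:>7}: estimate = {biased_estimate(n):6.2f} (true value: {TRUE_VALUE})")

The estimate stabilises as n grows, but around 12, not 10: volume reduces noise, never the bias baked into how the measurements were made.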

Let's take one of the most problematic cases: algorithmic justice and its biases, now proven against part of the population depending on socio-professional background or community affiliation. This targeting most probably reflects human biases, except that these are presented as objective under the guise of an automated process devoid of human intervention, at least in appearance. Decision-making built on fundamentally biased data, which then shapes the new data being introduced, creates a dangerous feedback loop. You end up finding what you are looking for, because you are looking for it. Inequalities are not only reinforced but come to justify themselves, while other problems are made invisible by the lack of attention paid to discovering them.
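This loop can be sketched in a few lines of simulation (a toy model with invented figures, not a description of any real system): two districts with identical underlying offence rates, where what gets recorded depends on where attention is directed, and where next year's attention depends on the record.

import random

random.seed(1)
TRUE_RATE = 0.05                     # identical underlying offence rate in both districts
POP = 10_000                         # residents per district
patrol_share = {"A": 0.6, "B": 0.4}  # an arbitrary initial skew in attention

for year in range(1, 6):
    # Only what happens under a patrol's eyes enters the record: detections
    # scale with presence, not with the true rate alone.
    recorded = {d: sum(random.random() < TRUE_RATE for _ in range(int(POP * share)))
                for d, share in patrol_share.items()}
    total = sum(recorded.values())
    # Next year's patrols are allocated in proportion to recorded incidents.
    patrol_share = {d: r / total for d, r in recorded.items()}
    shares = {d: round(s, 2) for d, s in patrol_share.items()}
    print(f"year {year}: recorded={recorded}, next patrol shares={shares}")

The two districts are statistically identical, yet district A's record stays roughly 50% higher year after year: the data produced by the allocation appears to justify the allocation.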

The anthropologist is interested in the black box not only in terms of the data supplied and produced (input/output), but also in the complexity of its interactions with users, with the population serving as data and results, with designers, and with the critics in charge of its audit. More importantly, the question is how to correct the internal process despite its invisibility: not by intervening on the data itself through tinkering with the algorithm's weightings, but by addressing the origins of such biases from the initial design.

The problem to which the anthropologist attends thus lies at the crossroads of complex systems: the networks of social interactions, whether those of designers, managers or users; the AI algorithms and the reductionist nature of the models underlying them; and the corporate interests that these actors potentially seek to preserve or develop.

Data first came into being in the field of basic scientific research. It is extracted from reality, of which it represents a simplification designed to make it treatable. The collection of data is temporal and qualitative, and its processing can likewise be biased. A predominantly human hand is needed for sorting, categorisation, weighting, adjustments along the way, and so on. It is worth remembering the limits this entails: only part of the "real" is captured. Technical improvements can shift these limits, but not remove them. While the tone of scientific discourse advocates mentioning limitations, commercial discourse willingly abstracts them away. Users who come into contact with a data-processing system will rarely be aware of the structure and dynamics of these boundaries, and will only rarely inquire into them proactively. When black boxes support decisions with real implications, with feedback loops that move the algorithm in tangible ways and with unexpected or even negative dynamics that reinforce the harms for certain communities, this lack of information becomes problematic.

Studying algorithms without their sociotechnical implications [5] would be like closing our eyes to an essential aspect. Algorithms resemble social phenomena in their complexity and in their dynamic nature in use, and can be studied as such in order to get past the apparent simplicity of the data and the results obtained [1]. If we consider that the behaviour of machines and of humans influence each other, and that we are dealing with sociotechnical meta-objects that cannot escape a constant evolution linked to their own feedback loops, then they cannot be separated from the social system in which they are embedded.

As non-material, situated, singular and anything but immutable "objects", algorithms are dynamic entities. On the definition itself, the computer scientist and sociologist Paul Dourish observes that "algorithm" is a suitcase word, fully understood in a context shared between members rather than by its technical or material delimitation. Several possible realities lie behind the word "algorithm", and the only way to locate them is through the social assemblages to which they relate [6]. In this sense, the notion is very close to that of "culture", which everyone understands and yet which is difficult to define without context.

The reductionist, data-oriented approach that gave rise to algorithms does not easily lend itself to this kind of study. More holistic and systemic approaches can help to deconstruct these systems and the aspects of them that escape explicability. They allow us to better integrate the socio-political framework, as well as the perspectives and blind spots that certain actors may have, voluntarily or involuntarily. Take, for example, a self-portrait shot on an Apple phone and compare it with the shot taken on another device, knowing that the photos are automatically processed by "enhancement" algorithms before they are displayed. The hardware technology of the camera and the algorithmic engineering of the system are incomplete explanations if factors such as the impact of regionally and socially constructed criteria of beauty are not taken into account. Other holistic approaches, such as ecology, to understand algorithms within their environments, or ethology, which would attribute not consciousness but an evolutionary "nature" to algorithms, are fluid and systemic scientific approaches well suited to this type of analysis [7].

Approaching algorithms through social and cultural analysis allows analytical comparisons of interest not only for understanding AI, but for understanding ourselves. Algorithms are as much a tool invented by humans as a "total social fact" in the sense given to it by one of the fathers of anthropology:

"In other words, in some cases they set the whole of society in motion, its institutions (potlatch, clans fighting, visiting tribes, etc.) and in other cases only a very large number of institutions, in particular when these exchanges and contracts concern rather individuals". M. Mauss, Essai sur le don

Bogomil Kohlbrenner, University of Geneva

1) Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society. https://doi.org/10.1177/2053951717738104

2) Mayer, N. (2019). Les IA des voitures autonomes apprennent la peur. https://www.futura-sciences.com/tech/actualites/intelligence-artificielle-ia-voituresautonomes-apprennent-peur-76053/

3) Grand, H. (2019). En Estonie, une intelligence artificielle va rendre des décisions de justice. http://www.lefigaro.fr/secteur/high-tech/en-estonie-une-intelligenceartificielle-va-rendre-des-decisions-de-justice-20190401

4) Théry, M. (2019). Hôtellerie : les Chinois friands d'intelligence artificielle. https://www.bilan.ch/techno/hotellerie-les-chinois-friands-dintelligence-artificielle

5) Lent, J. (2017). The Patterning Instinct: A Cultural History of Humanity's Search for Meaning. https://www.jeremylent.com/the-patterning-instinct.html

6) Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society, p. 3. https://doi.org/10.1177/2053951716665128

7) Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., … & Jennings, N. R. (2019). Machine behaviour. Nature, 568(7753), 477. https://www.nature.com/articles/s41586-019-1138-y
