Promises, Real Usage and Tensions: AI at Work Put to the Test

While promises of phenomenal AI-driven time savings within organizations abound, their realization is slow to materialize. It is time to reflect collectively on how we approach innovation and the transformation it brings.

Jean-Baptiste Manenti, Sébastien Louradour

Many Promises, Few Results

It has been difficult to miss, in recent years, the publications and statements praising the impressive productivity gains that introducing generative AI would bring to organizations.

For example, a 2023 study by the Nielsen Norman Group concluded that AI tools could generate productivity gains in areas as varied as customer relations, sales and support functions, and computer programming, for an average productivity increase of 66% (equivalent to 88 years of "natural" productivity growth in the European Union).

Two years later, the general trend suggests a return to greater caution among companies when it comes to investing in AI tools. Citing a Deloitte study, The Economist notes that the share of leaders with high or very high interest in generative AI is declining. Several factors explain this tempered enthusiasm, starting with the difficulty of materializing gains that have been promised for several years.

But even when companies do deploy experiments, they struggle to convince: Forbes estimates that 90% of top-down pilots fail, and The Economist points out that only 8% of companies have deployed more than half of their experiments. The causes include restricted access to quality data, obsolete information systems, skill shortages, and persistent regulatory concerns, not to mention the fear of reputational risks, all of which reinforce a still-strong mistrust. The result is a degree of reluctance among executive and financial leaders.

Yet Very Present Usage

However, AI adoption is progressing, and it is difficult to deny that usage is developing within organizations, often following a bottom-up logic. In France, even though it remains low, the share of people declaring that they use AI in a professional context is rising sharply (from 12% in 2023 to 22% in 2024, according to the 2025 edition of the Digital Barometer from ARCEP, the French telecoms regulator). This sometimes disorganized development of practices testifies to the growing gap between companies' fears and the speed at which employees are adopting these tools.

This usage often takes place outside official frameworks: this is the phenomenon of Shadow AI, the unregulated use of AI tools in a professional context, driven by the prohibition or absence of tools offered by the employer, but also by the lower quality or ergonomics of internal tools compared to external solutions. A study conducted by the University of Melbourne indicates that 70% of employees using AI at work rely on free, public tools, while only 42% use solutions offered by their employers (47% also report using AI in ways that could be considered inappropriate). While these practices testify to real interest from employees and highlight the tasks that are most easily automated, they can also be sources of security flaws or data leaks, and they further justify the need to integrate AI into an organization-wide reflection.

The Impact of AI at Work: A Still Embryonic Study Object

Other questions remain about the effects of introducing AI tools on performance inequalities within teams, and here again there is no consensus: while a 2023 Nielsen Norman Group study indicated that these tools reduced productivity gaps between employees, a 2025 article in The Economist states that "AI will separate the best from the rest," particularly for complex tasks such as research or management, areas where AI would amplify performance inequalities.

This intense media coverage of work automation and impressive performance gains naturally ends up shaping how employees perceive AI tools and increasing their mistrust. An Odoxa survey published in 2024 indicates that 44% of employees fear seeing their job replaced by a robot or AI, and the dial-ia project reminds us that these concerns call for renewed social dialogue, mobilizing all stakeholders, to understand practices, objectify usage, determine support needs, and build clear rules and safeguards for everyone's benefit.

Thinking About Organizational Transformation

Introducing generative AI within an organization calls for a holistic reflection: rethinking how tasks are segmented and jobs defined, how teams are trained and collaborate, how the information system is architected, and how quality data is produced and structured. More broadly, it also sparks debates on the distribution of the value created or the reorganization of working time. For these reasons, it cannot be left unexamined and requires genuine organizational reflection.

This reflection first concerns the nature of work itself: it is not jobs that are automated, but rather the tasks that compose them (and certainly not all of them). The Nielsen Norman Group study cited above thus mentions UX design professionals, who could see certain tasks, such as questionnaire analysis, automated, while others, such as field observation, retain a strong human component. This nuance, which establishes a logic of transformation and redefinition of certain job scopes rather than their "replacement," necessarily implies collective work to identify room for evolution within each job, each department, and the organization as a whole.

The question of how to use the time potentially freed up then becomes central. That time could be mobilized to increase production volumes (although alarms are being raised about the risk of always demanding more, in a context of global competition and, for listed companies, shareholder expectations of AI-generated productivity gains), or to improve the quality of the work produced (which can go hand in hand with an intensification of high-value intellectual work, but requires increased vigilance about the risk of cognitive exhaustion). Beyond these two obvious directions, there are many ways to reallocate freed-up time: consolidating customer relationships, stimulating collaboration and connections between teams, fostering creativity, or redirecting time toward individual training and peer-to-peer knowledge sharing. A fair balance must obviously be found, but it is clear that if these subjects are left unaddressed during AI tool deployment, they will become sources of tension once gains begin to materialize.

Training is obviously an essential element of this overall strategy: a BCG study published in January 2024 noted that only 6% of companies had managed to train more than 25% of their workforce, while the Odoxa study on AI at work found that more than half of employees would personally like to be trained. Training is useful not only for ensuring that tools are handled well and exploited to their full capacity, but also for answering concerns, particularly about replacement, and for keeping usage within a secure framework that respects companies' regulatory obligations.

Technical and technological limits also require long-term work. On one hand, the lack of quality data often remains an obstacle to making full use of AI within an organization, so projects must be launched to structure quality databases and to build employee literacy on the subject. On the other hand, reducing the technological debt accumulated by organizations whose information systems remain obsolete is an expensive but hardly dispensable investment, even beyond AI. Here again, lifting these obstacles cannot simply be decreed; it must be planned.

Engaging in these reflections therefore requires accepting one fact: while medium- and long-term gains can be expected from adopting AI within organizations (and it seems clear that those who miss this opportunity will struggle to catch up within a few years), those gains are highly unlikely to materialize in the short term, especially since a degree of investment (financial, human, intellectual) is essential for successful deployment.

The very nature of this project, its transversality and its capacity to profoundly change how an organization functions, implies conducting it collectively, making it a forum for dialogue with social partners, so that AI deployment within the organization is as fair and smooth as possible, and so that individual gains translate into collective value creation.
