Dreamforce 2025: Will the human spirit survive the agentic era?

On 14 October 2025, amid the torrent of announcements at Salesforce's flagship conference, a conversation in San Francisco asked the essential question: how do we preserve the human spirit when decisions are increasingly automated?

In dialogue with Matthew McConaughey, Éric Salobir gave voice to a tension felt across every sector: in the race to innovate, who sets the limits and who holds the line when speed and competition raise the stakes?

The tech world has often made the mistake of reducing people to a user experience. Yet curiosity, creativity and moral judgement are the foundation of every technological advance; they must continue to light the way, clear-eyed about our anthropological history, if that progress is to have meaning.

Ultimately, the choice is simple: technology must express the best of us, never the worst.

Human governance: a compass for AI

This isn't about slowing innovation. It's about steering it responsibly. If agentic systems aim to elevate people, then the creativity of language models must come with genuine human oversight, clear choices and regulatory guardrails.

In practice, that means putting people back at the centre:

  • Design organisations to protect judgement and purpose at work.
  • Define red lines and hold them: decide where AI will not be used, even under pressure, and embed that in policy and practice.
  • Audit data and enforce cross-functional governance to limit bias and keep usage traceable and auditable.

Operationally, limits must be built into governance:

  • Establish a board-approved policy, integrated into go-live criteria, with a single accountable owner empowered to halt a deployment.
  • Link guidelines to decision gates, assign clear responsibility for a kill switch, publish model cards.
  • Run an ethical pre-mortem to stress-test robustness: if something goes wrong, what protects dignity, safety and trust? If the answer is unclear, don't deploy.
  • Track exceptions transparently, with a deadline and an identified executive sponsor.
  • Finally, measure human outcomes: time given back, complaint rates, escalation quality and team wellbeing, alongside accuracy and ROI, and align incentives accordingly.

Human principles must guide technology, not the other way around

Our humanity isn't measured by the "human functions" we're able to code, but by those we refuse to trade away: the limits we're able to set, the lines we refuse to cross, the values we won't compromise.

The real benefit of AI lies less in technical prowess than in the rigour of our judgment. Choosing to use AI to give people time back, then reinvesting that time in attention, care and creativity: that's what distinguishes true AI leadership. The kind that builds trust and performance together, and secures adoption.
