Rethinking your concept of proof to defend your truth.
In legal, managerial, and journalistic tradition, proof rests on three pillars: documents (photos, texts), testimonies, and recordings (videos, audio). Artificial intelligence is making all three contestable. Worse, what is real can now be challenged by tangible fakes.
This is not a cybersecurity crisis. It is a crisis of organizational epistemology, with trust at its core. It directly affects the way you lead your organization — with your employees, clients, partners, suppliers, authorities, and the media. How do you protect yourself against this exploding threat? What impact does it have on how you manage your organization? This article addresses these questions.
But let’s start with a couple of concrete cases – the most striking ones.
- The Incident that changed everything in our notion of proof
- Far from an isolated case… numbers are exploding
- The liar’s dividend: when the truth can be disputed despite the evidence
- The impact on how you lead
- Possible defenses to protect the integrity of proof
- The truth as a strategic asset
The Incident That Changed Everything in Our Notion of Proof
January 2024. A finance employee at Arup — the British architecture and engineering firm with 20,000 staff — received an email from his CFO based in London. The message was unusual: execute a confidential transaction urgently.
The employee, properly trained, was suspicious. He requested confirmation via video conference.
The meeting was arranged. The CFO was there, on video. Several colleagues too. Familiar faces. Exact voices. Facial expressions matching the conversation. Management was represented. Everything looked normal.
The employee executed 15 wire transfers to five bank accounts in Hong Kong, totalling HK$200 million — approximately US$25 million. It was only when contacting headquarters a week later that he discovered the truth: there had been no colleague in that meeting.
Every participant was a synthetic reconstruction generated by artificial intelligence. The CFO had never taken part in that call.
US$25 million lost by Arup, across 15 transfers to fraudulent accounts
This case, revealed by the Hong Kong police and confirmed by Arup to the international press, is not a technology news item. It is a rupture signal. For the first time, verification by visual and audio means — the natural reflex of any seasoned professional — was not enough.
“It turned out that all the people in that video conference were fake.” — Superintendent Baron Chan Shun-ching, Hong Kong Police
Far From an Isolated Case: Numbers Are Exploding
Deepfake fraud grew by 3,000% between 2023 and 2024, according to McAfee. In Q1 2025 alone, there were more deepfake incidents than in the entire year of 2024, according to Resemble AI, Keepnet Labs, and Ceartas in their respective December 2025 reports — a 2,100% increase over three years. Deepfakes have become the most common type of fraud.
The most striking recent documented cases:
January 2026. A Swiss entrepreneur in the canton of Schwyz transferred several million Swiss francs to Asia after two weeks of calls from an AI-cloned “business partner”.
March 2025. A Singapore CFO wired $499,000 after a video conference with fake executives.
Early 2025. A coordinated wave targeted Italy’s business elite — including Giorgio Armani — to extort €1 million.
2024. Ferrari, WPP, and LastPass were already targeted.
The phenomenon is not slowing down; it is accelerating.
The first documented attack by voice cloning dates to 2019: a British energy company wired €220,000 after an employee received a call from the “CEO of the German parent company” whose voice, accent, and communication style had been perfectly reproduced. The real CEO had never made that call.
According to Deloitte, losses from generative AI-augmented fraud could reach $40 billion in the United States alone by 2027.
A Gartner survey of cybersecurity leaders in 2025 found that 62% of organisations had already experienced a deepfake attack in the preceding 12 months.
→ Read our article on vishing (what these attacks are, who produces them, how they work, etc.)
The Liar’s Dividend: When Reality Can Be Disputed Despite the Evidence
Researcher Nina Schick, author of Deep Fakes and the Infocalypse (2020), was among the first to articulate what constitutes the most insidious threat: the danger is not only that fakes are believed. It is that the mere existence of the technology is enough to make any authentic proof contestable.
Legal scholars Chesney and Citron theorised this phenomenon in 2019 under the name “Liar’s Dividend”. A compromising video recording can now be dismissed by invoking technological doubt. A video of a meeting, an audio of a negotiation, a digital signature — everything becomes potentially contestable, not because it is false, but because doubt is now legitimate.
It is no longer just the ability to create fakes that is the problem. It is the ability to deny what is real. And that changes everything.
For your organisation, the implications are immediate. An orally authorised transaction can be disputed. A recorded agreement can be denied. Internal testimony can be called into question. Governance processes that relied on sensory trust — “I heard the executive give approval” — have become structurally vulnerable.
And this vulnerability does not only concern external attacks. It also concerns internal cohesion, managerial responsibility, and an organisation’s ability to defend itself in legal or regulatory proceedings when its own communications may be challenged.
The Impact on How You Lead
The instinctive response to this threat is technical: invest in detection tools, update authentication systems, train teams. These actions are necessary but not sufficient.
The proof crisis is first and foremost a governance crisis. It questions how you structure authorisations, define responsibilities, and manage internal and external communications. It requires a revision of several fundamental organisational assumptions.
The End of “Seeing Is Believing” as a Management Principle
Many organisations rely on authorisation processes that value relational trust: a confirmation call from an executive, a verbal instruction from a known manager, a video conference validation. These practices, reasonable in an analogue world, have become attack vectors in a world where identities can be synthesised.
With AI, doubting Thomas’s test — seeing in order to believe — no longer holds.
Gartner predicts that by 2026, 30% of companies will consider their identity verification solutions unreliable when used alone. By 2028, 50% will have adopted a zero-trust data governance posture.
Decoupling Channel from Authority
One of the key lessons from the Arup case is that the attack vector was the communication channel itself — the video conference, the ultimate legitimising tool in the post-pandemic world. The fact that communication was live, with image and sound, had become a guarantee of reliability. That is precisely what was exploited.
The managerial response is to decouple authority from channel. No communication channel, however familiar, constitutes sufficient proof of authorisation for high-stakes decisions. Authorisation must rest on protocols independent of the channel used to transmit it.
A Culture of Verification as an Organisational Competency
Cognitive research on decision-making shows that urgency and perceived authority are the two factors that most effectively suspend critical thinking — and these are precisely the two factors that deepfake attacks systematically exploit. The Arup employee had doubts. He overcame them because the video conference seemed to dispel them.
Training your staff in verification is not a cybersecurity issue: it is a management culture issue. The most resilient organisations will be those that have institutionalised the right and the duty to question — including in the face of apparent authority — and that have put protocols in place allowing this doubt to be exercised without friction.
Possible Defenses to Protect the Integrity of Proof
Truth governance is not an abstract concept. It is operationalised through concrete processes, dedicated teams, and appropriate tools.
On the process side, the priority is to revise authorisation chains for high-stakes decisions: financial transfers, disclosure of sensitive information, contractual commitments. The guiding principle is multi-channel independence: any authorisation transmitted through one channel must be confirmed via a separate channel that the attacker cannot predict — a callback to a number already on file, a confirmation through an authenticated internal system, or a cross-check with the executive through a separate route.
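The multi-channel independence principle can be sketched in a few lines of code. Everything here is hypothetical — the class, the channel names, and the two-confirmation threshold are illustrative assumptions, not a prescribed implementation — but the core rule is the one described above: the channel that carried the instruction can never vouch for it.

```python
# Minimal sketch of a multi-channel authorisation check (all names hypothetical).
# A high-stakes request is executed only once enough confirmations arrive
# through channels independent of the one that carried the instruction.

from dataclasses import dataclass, field

@dataclass
class AuthorisationRequest:
    request_id: str
    origin_channel: str                      # channel the instruction arrived on
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        self.confirmations.add(channel)

    def is_authorised(self, required: int = 2) -> bool:
        # Confirmations on the origin channel do not count: the channel
        # that transmitted the instruction cannot also vouch for it.
        independent = self.confirmations - {self.origin_channel}
        return len(independent) >= required

req = AuthorisationRequest("TX-001", origin_channel="video_call")
req.confirm("video_call")               # attacker-controlled channel: ignored
assert not req.is_authorised()
req.confirm("callback_known_number")    # out-of-band confirmation 1
req.confirm("internal_approval_system") # out-of-band confirmation 2
assert req.is_authorised()
```

In the Arup scenario, this rule would have blocked the transfers: the video conference — the origin channel — would have contributed nothing toward authorisation, however convincing it looked.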
On the tools side, exposed organisations — family offices, trading firms, international organisation directorates — should evaluate synthetic media detection solutions, behavioural authentication systems, and verifiable digital signature protocols. The EU, with the AI Act that entered into force in August 2024, already mandates marking of AI-generated content: a regulatory signal that proactive organisations can anticipate.
On the team side, truth governance no longer falls solely on the CISO or the legal department. It requires coordination between finance, communications, risk management, and senior leadership. Organisations that treat this as a purely technical silo miss its strategic dimension entirely.
Truth as a Strategic Asset
Leaders reading these lines operate in environments where reputation, trust, and rapid decision-making are competitive advantages. These are precisely the dimensions that the proliferation of deepfakes seeks to erode.
Resilience to this threat is not a defensive posture. It is a competitive advantage. Organisations able to certify the authenticity of their communications, guarantee traceability of their decisions, and maintain stakeholder trust in an environment of growing epistemic uncertainty will be better positioned — with their investors, partners, and regulators.
Gartner frames it this way: organisations can no longer afford to implicitly trust data or assume it is of human origin. Truth governance is the structural response to this new imperative.
The question is no longer “Have we been attacked?” but “How will we function on the day we are?”
This paradigm shift — from sensory trust to truth governance — is one of the most urgent strategic priorities for exposed organisations. It combines operational dimensions (processes, tools, teams) and posture dimensions (culture, management, decision-making). It cannot be treated as an ordinary cybersecurity project.
To Go Further
We support business leaders, international organisation directors, family offices, and investment structures in implementing truth governance: exposure audits, revision of authorisation processes, team training, and adapted managerial posture.
Learn more about our Truth Governance offering.
Sources and references
Arup / Hong Kong Police (February 2024): CNN, Financial Times, CFO Dive
Biometric Update, “Deepfake voice fraud dupes Swiss businessman into transferring millions”, 21 January 2026
Gartner Security & Risk Management Summit 2025 (September 2025), survey of 302 cybersecurity leaders
Deloitte Center for Financial Services: GenAI fraud projections 2027
World Economic Forum: Global Cybersecurity Outlook 2025
Chesney & Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security”, 107 California Law Review (2019)
Group-IB: Voice Deepfake Vishing Report (2025)
McAfee Deepfake Guide (2024)
