Taming Cyber-AI Crises: Flying Blind Is a Weapon You Forge


Today’s leaders share the same feeling: constant exposure to the risk of crises. Ransomware attacks are on the rise, deepfakes are exploding, and AI models risk being compromised in ways that distort analyses and predictions.

Added to this are the media pressure, the mandatory regulatory notifications (e.g. GDPR, NIS2), and the increasing interdependence of systems, where a local failure can trigger cascading effects — all of which contribute to this impression of perpetual instability, like walking on quicksand.

How can you be sure you’re making the right decision in this situation?

Executive education programmes — from London Business School to INSEAD — assume reliable data, comparable scenarios, and time for deliberation.

Admittedly, the dot-com bubble at the turn of the 2000s had already forced organisations to decide faster and to adapt to shorter cycles. But today’s technological acceleration, combined with growing geopolitical instability and the increasing sophistication of a discreet yet powerful cyber-criminal ecosystem, is changing the very nature and tempo of decision-making.

These days, uncertainty and confusion have become the norm. Making decisions in the age of AI is a bit like playing Russian roulette. You have to act without knowing all the ins and outs of the situation. Right… but how?

That is the question we will try to answer in this article.

Traditional leadership training needs recalibrating for cyber-AI crises

Most traditional leadership programmes teach strategic analysis, change management and negotiation — skills suited to stable hierarchical organisations where information eventually converges, and waiting yields clarity.

Even frameworks like VUCA (volatility, uncertainty, complexity, ambiguity) name the problem without equipping leaders to act in it. Some newer programmes — such as Leadership Decision Making at Harvard Kennedy School — explicitly address uncertainty and cognitive bias, but these remain exceptions rather than the norm.

Complex adaptive leadership approaches acknowledge that mental frameworks different from those inherited from stable, hierarchical organisations are required. But these developments remain marginal. Most leaders still reach senior positions equipped with a toolbox designed for environments where information eventually stabilises, facts ultimately emerge, and waiting a little longer is assumed to lead to better decisions.

In the cyber-AI domain, the reflex to wait for stabilised information is dangerous.


Cyber-AI crises: structural and chronic uncertainty

What distinguishes cyber-AI crises is not merely their severity but their fundamentally unstable nature.

Whilst AI offers a genuine competitive advantage, it also poses a danger: the manipulation of reality, whether via a deepfake, data poisoning of an AI model, or adversarial machine learning.
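To make the danger concrete, here is what data poisoning can look like in its simplest form, label flipping. This is a minimal, purely illustrative sketch (synthetic data, hypothetical scenario, nothing drawn from a real incident), assuming a Python environment with NumPy and scikit-learn installed:

# Illustrative sketch: label-flipping data poisoning on synthetic data.
# Assumes: pip install numpy scikit-learn
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A synthetic two-class dataset standing in for real business data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# An attacker silently flips 20% of the training labels.
rng = np.random.default_rng(42)
flip = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Both models run without errors and answer confidently; only the
# side-by-side comparison exposes the degradation.
print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))

The unsettling part for a decision-maker: the poisoned model raises no alarm and keeps producing answers, and without an uncorrupted baseline to compare against, the distortion stays invisible.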

Manipulation, an age-old ploy, now amplified a thousandfold by AI

Today, the manipulator is anonymous, hidden behind a screen somewhere unknown. They may be a man or a woman, young or old, tech-savvy or not. They could be our neighbour or live thousands of miles away, speaking a different language, living in a different time zone, in a different part of the world. With technology, these barriers are breaking down, becoming blurred.

One thing is certain: they know us well. They have taken the time to study us. We publish so much information about ourselves publicly – both as a company and as individuals. Faced with this imbalance of power, there is a high risk of becoming a puppet in their hands.

Profound and far-reaching consequences

Three major potential losses:

  • our bearings
  • our confidence
  • our credibility

And, like wildfire, the organisation falls apart against a backdrop of paranoia. For three aspects are shifting:

  1. Attribution becomes uncertain — who is responsible? A criminal gang? A state actor disguised as opportunistic hackers? Attribution techniques lag behind obfuscation tactics.
  2. Chronology becomes blurred — is the attack over or merely paused? Are backdoors still present? Detection often happens months after the breach.
  3. Evidence becomes unstable — synthetic voices, videos and deepfakes erode trust in proof. This is recognised as a systemic risk by the World Economic Forum.

And sometimes these attacks take on a political dimension: the police manage to track down the hackers but are unable to arrest them, because the hackers are exploited by a political power seeking to destabilise an industry or a country.

Take LockBit, for example: a pair of hackers (one Russian, the other Israeli) passed on their hacking ‘system’ to other small-time operators – forming a very active gang of hackers who were told they would never get caught – until the day the founders were infiltrated and unmasked, though they could not be arrested.
Watch the documentary about this manhunt (French video; the interviews with French speakers are subtitled in English).

In this tense climate, what strategy can be adopted?


Strategic error: waiting for clarity

Faced with this instability, the classic managerial reflex is to suspend the decision until the facts become clear. First an expert report is requested, then a crisis committee is convened, and communication is delayed until certainty emerges.

This caution, which has long been a virtue, is becoming a mistake. At least three mechanisms are triggered:

Decision paralysis sets in.

Committees multiply, analyses come thick and fast, but no clear direction emerges. The organisation waits for a signal from management. Management waits for certainty from the experts. The experts wait for stable data. No one moves.

Organisational silence spreads.

In the absence of instructions, everyone interprets things in their own way. Employees fill the void with their own assumptions, often the most anxiety-inducing ones. Internal rumours become more influential than the official communication that never comes.

Decisions are finally made, but late and under deteriorated conditions.

By then, the impact has worsened, internal and external confidence has eroded, and options have diminished. The cost of inaction now exceeds that of imperfect action.

This aligns with Herbert Simon’s insight into bounded rationality, formalised back in the 1950s: human rationality is always limited, and humans never decide with complete information. But in a cyber-AI crisis, this limitation becomes even more visible and brutal. Waiting for the facts to stabilise is no longer prudence; it is resignation.


What “acting without waiting” actually demands

If waiting for stabilisation is a mistake, what should you do instead? Not improvise. Not react on impulse. But have already built, in advance, the conditions that allow you to act with clarity in the fog.

Three workstreams are essential.

1. Control what you expose: the informational attack surface

Even before a crisis occurs, every piece of published content, every position taken, every official statement becomes material for an adversary.

It is not about staying silent — it is about deciding what you publish, where, how, with what supporting material and what associated proof. Truth is no longer self-evident in the age of deepfakes and synthetic narratives: it must be governed, proven, traceable.

☝️ A few concrete questions to ask yourself:
  • What data/sources was this message or publication built on? Are they verifiable, traceable, defensible?
  • What narrative or manipulative risks are we taking versus the reputational benefits? In other words, is it really worth the risk?
  • Who internally validated this content before publication? Is there a double-verification protocol for sensitive or critical content?
  • Once published, could this content be used against us in a different context — geopolitical, competitive, media-related?
  • Which trusted third parties could authenticate or relay this message to give it weight in the face of an attack?
  • In the event of a challenge, do we have the evidence needed to defend the original version before the public, strategic stakeholders, and authorities?

This is what Truth Governance is about — putting in place the mechanisms that ensure the information you put out is verifiable and defensible, before an adversary turns it against you.
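What such mechanisms can look like in practice: the sketch below, assuming a Python environment with the cryptography package installed, hashes and signs each official publication at release time so that the original version can later be proven against a manipulated copy. The function names are illustrative, not an established ‘Truth Governance’ API:

# Minimal sketch: sign official content at publication time so the
# original can later be proven against a tampered or synthetic copy.
# Assumes: pip install cryptography (function names are illustrative).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def publish(content: bytes, key: Ed25519PrivateKey) -> tuple[str, bytes]:
    """Return (digest, signature) to archive alongside the content."""
    digest = hashlib.sha256(content).hexdigest()
    return digest, key.sign(content)

def verify(content: bytes, signature: bytes,
           public_key: Ed25519PublicKey) -> bool:
    """Check that a circulating copy matches what was really published."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

# Usage: the communications team signs a press release before it goes out.
key = Ed25519PrivateKey.generate()  # in practice, kept in an HSM or vault
release = b"Official statement: our systems were not affected."
digest, sig = publish(release, key)

tampered = release.replace(b"not ", b"")
print(verify(release, sig, key.public_key()))   # True: authentic version
print(verify(tampered, sig, key.public_key()))  # False: altered version

The specific algorithm matters less than the discipline: hash, sign and archive at publication time, so that the evidence needed to defend the original version exists before anyone challenges it.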

2. Reduce organisational vulnerabilities to manipulation

Manipulation, as you know, is the vector used to attack technical systems. It exploits human and organisational blind spots, such as:

  • management that ignores or downplays weak signals
  • disengaged teams that no longer report anything
  • pressure that produces mechanical decisions
  • ambiguous rules
  • a siloed organisation that fragments communication and cohesion
  • etc.

>>> Read our article on weak signals

Before experiencing a crisis, you need to diagnose your own vulnerability across at least four angles with specific questions 👇

1 — The flow of information and alerts
  • Do your teams know what weak signals are?
  • If so, do they actually surface them upward, or are they filtered, softened and normalised along the way?
  • Is there a formal channel to flag an anomaly, an inconsistency or an unusual behaviour without going through the direct line of management?
2 — Internal culture and its vulnerabilities
  • Are there areas of silent tension — between teams and/or hierarchical levels — that could be exploited from the outside?
  • Is internal language clear and consistent, or vague and open to manipulation? Do words mean the same thing to everyone?
  • How much conformism exists in decision-making? Are disagreements expressed, or do people self-censor?
3 — Dependencies and blind spots
  • Which suppliers, partners, or service providers have access to sensitive information — and with what integrity guarantees?
  • Are there key individuals whose departure or compromise would destabilise the entire system?
  • Which decisions rely on a single source, a single tool, a single AI model — with no possible cross-verification?
4 — Engagement and loyalty
  • Do teams fully understand the cyber-AI risks and challenges the organisation faces, or does that information remain confined to the top?
  • In a crisis, who can be counted on to act quickly and clearly, without waiting for an explicit directive?
  • What is the level of resistance to AI adoption? From whom? Why?

And the most uncomfortable question of all: if an adversary wanted to destabilise our organisation from the inside, where would they start?

This is precisely what the Resilience Culture Lab is designed to identify and correct — not after the incident, but well before it. It is the very foundation of organisational resilience.

3. Revisit your leadership posture

You already know this: only the example you set counts. You can deliver the most eloquent speeches in the world — if they are not followed by actions and decisions aligned with your words, it all becomes disruptive noise.

This gap is the least often named vulnerability, and yet the most exploitable one.

Beyond ethics, coherence is a shield. A leader whose actions and words are aligned is infinitely harder to discredit. This work on posture — what teams perceive, what stakeholders read, what adversaries look for to exploit — sits at the heart of Resilient & Ethical Leadership.

A few questions to ask yourself honestly, in your own private reflection:

1 — On personal credibility under attack
  • If I bend reality to suit my needs — on results, commitments, facts — what attack surface am I creating for an adversary who might use a deepfake to put words in my mouth that I never said?
  • In the event of a deepfake, will my word carry enough credibility to be believed — or will doubt settle in too easily because “it sounds just like something they would say”?
2 — On tolerance for toxic behaviour
  • If I allow high performers to act in ways that contradict the values I publicly espouse, what real message am I sending the organisation? And if tomorrow I need to embody ethics in the face of a crisis, who will believe me?
  • The gap between what I tolerate internally and what I advocate publicly is exactly what an adversary will seek to expose or amplify.

3 — On decisional isolation

  • Am I leaving my managers to decide alone, without a safety net, with the feeling of being abandoned when things get hard? Hackers and manipulators exploit the feeling of isolation — an isolated person is a vulnerable person, permeable to pressure, blackmail, and social manipulation.
  • Resilience is built in the daily relationship between a leader and their teams — especially in the moments when nothing is going right.

The central question that frames all three: What I do when no one is watching will always end up defining who I am when everyone is watching.
