Leaders today share a common feeling: they are continuously exposed to crisis-level risks. The objective rise in cyber incidents and the new forms of AI-powered attacks fuel this sense of instability and transform the conditions for decision-making.
Immediate media coverage of every incident, mandatory regulatory notifications (e.g. GDPR, NIS2) that make visible what was once managed privately, and the increasing interdependence of systems, where a local failure can trigger cascading effects — all contribute to this impression of perpetual instability.
This perception isn’t mere bias. It reflects a profound change in the decision-making context. Executive education programmes — from London Business School to INSEAD — assume reliable data, comparable scenarios, and time for deliberation.
Admittedly, the dot-com bubble at the turn of the 2000s had already forced organisations to decide faster and to adapt to shorter cycles. But today’s technological acceleration, combined with growing geopolitical instability and the increasing sophistication of a discreet yet powerful cyber-criminal ecosystem, is changing the very nature and tempo of decision-making.
Uncertainty is no longer a phase to go through before deciding. It has become the permanent condition in which action must take place. Decisions now have to be made without full visibility, without knowing all the variables at play, and without a complete understanding of every issue involved.
In this article:
- Traditional leadership training needs recalibration
- Cyber-AI crises: structural and chronic uncertainty
- Strategic error: waiting for clarity
- From adjudicator to navigator-leader
- Rehabilitating human judgement
- Practical implications for leaders
- 5 Takeaways
Traditional leadership training needs recalibration
Most traditional leadership programmes teach strategic analysis, change management and negotiation — skills suited to stable hierarchical organisations where information eventually converges and waiting yields clarity.
Even frameworks like VUCA (volatility, uncertainty, complexity, ambiguity) name the problem without equipping leaders to act in it. Some newer programmes — such as Leadership Decision Making at Harvard Kennedy School — explicitly address uncertainty and cognitive bias, but these remain exceptions rather than the norm.
Complex adaptive leadership approaches acknowledge that mental frameworks different from those inherited from stable, hierarchical organisations are required. But these developments remain marginal. Most leaders still reach senior positions equipped with a toolbox designed for environments where information eventually stabilises, facts ultimately emerge, and waiting a little longer is assumed to lead to better decisions.
In the cyber-AI domain, the reflex to wait for stabilised information is dangerous.
Cyber-AI crises: structural and chronic uncertainty
What distinguishes cyber-AI crises is not merely severity — other crises can be equally damaging — but their fundamentally unstable nature.
AI challenges our perception of reality and intensifies the competitive battle for control. Cyber attacks aim to destabilise economies and decision-making processes. Political tensions are rising in many parts of the world, including places we trade with.
Uncertainty is no longer temporary but part of the crisis itself. Verifying all facts before deciding has become practically impossible. Three dimensions in particular have shifted:
- Attribution becomes uncertain — who is responsible? A criminal gang? A state actor disguised as opportunistic hackers? Attribution techniques lag behind obfuscation tactics.
- Chronology becomes blurred — is the attack over or merely paused? Are backdoors still present? Detection often happens months after the breach.
- Evidence becomes unstable — synthetic voices, videos and deepfakes erode trust in proof. This is recognised as a systemic risk by the World Economic Forum.
Strategic error: waiting for clarity
Faced with this instability, the classic managerial reflex is to suspend the decision until the facts become clear. First an expert report is requested, then a crisis committee is convened, and communication is delayed until certainty emerges.
This caution, which has long been a virtue, is becoming a mistake. At least three mechanisms are triggered:
Decision paralysis sets in.
Committees multiply, analyses come thick and fast, but no clear direction emerges. The organisation waits for a signal from management. Management waits for certainty from the experts. The experts wait for stable data. No one moves.
Organisational silence spreads.
In the absence of instructions, everyone interprets the situation in their own way. Employees fill the void with their own assumptions, often the most anxiety-provoking ones. Internal rumours become more influential than the official communication that never comes.
Late decisions are finally made, but under deteriorating conditions.
The impact has worsened, internal and external confidence has eroded, and options have diminished. The cost of inaction exceeds that of imperfect action.
This aligns with Herbert Simon’s concept of bounded rationality, formalised back in the 1950s: human rationality is always limited, and humans never decide with complete information. In a cyber-AI crisis, this limitation becomes even more visible and brutal. Waiting for the facts to stabilise is no longer prudence; it is resignation.
From adjudicator to navigator-leader
Leaders must shift from the traditional figure of an adjudicator — who compares well-documented options — to that of a navigator who sets a course despite incomplete data.
The leader-adjudicator operates in sequences.
They receive analyses, compare scenarios, delegate uncertainty to experts, and communicate once they are sure. This model assumes that information will eventually converge, that the facts will eventually speak for themselves, and that the decision can wait until it is ripe.
The leader-navigator acts quickly, navigating by sight.
They set a clear course despite incomplete information, accept that it will remain partial, communicate intentions rather than certainties, and adjust along the way without losing coherence. Decision-making remains disciplined, not random.
This distinction redistributes responsibility: the navigator cannot hide behind ‘we are waiting to find out more’. They take responsibility for deciding on the basis of the information they have, knowing that it is incomplete.
Essential clarification: deciding without certainty does not mean deciding at random.
The cyber-AI crisis makes uncertainty visible, immediate, and brutal.
Rehabilitating human judgement
Management orthodoxy has long prioritised processes and algorithms over human judgement, viewing humans as weak links prone to bias.
The literature on cognitive biases — notably the work of Daniel Kahneman — has reinforced this mistrust, with a clear implicit message: the less room we leave for human judgement, the better our decisions will be.
But when facts are unstable, this logic reverses. Machines cannot:
Prioritise conflicting signals where all are partially true.
Two experts contradict each other. Three technical indicators point in different directions. No algorithm can decide which to favour when all are partly true and partly false.
Weigh heterogeneous risks that aren’t measurable on a single scale.
Should priority be given to technical security, operational continuity, reputation preservation or personal data protection? These risks cannot be measured on the same scale. Their weighting is a matter of judgement, not calculation.
Decide without waiting for more information.
Knowing that we know enough to act, even if it is not enough to be certain, is a human skill. No system automatically determines when waiting becomes more costly than acting.
Contextualise the event.
When faced with a crisis, understanding what is at stake requires placing the event in several contexts that technology cannot grasp: economic, social, historical.
This ability makes it possible to respond appropriately, avoiding both overreaction and underestimation, and to maintain strategic consistency beyond the incident itself. No AI, however sophisticated, can reconstruct the contextual depth that results from lived, transmitted and shared experience.
Perceive the intangible through instinct, experience, a sixth sense.
A security manager who ‘senses’ that an apparently minor incident hides something more serious. A financial director who detects an inconsistency in a perfectly documented transfer. A manager who perceives a change in tone in their team’s exchanges.
These subtle signals, impossible to quantify, are just as important as statistics and tangible facts, which they complement. In an environment where evidence can be synthetic and data manipulated, this ability becomes a strategic asset once again.
Build cohesion between teams and individuals in decision-making.
In cyber-AI crises, early detection relies on distributed vigilance: technical anomalies, administrative inconsistencies and unusual behaviour first manifest themselves at the periphery of the organisation, where transactions, exchanges and daily flows take place.
If only senior management is authorised to judge and decide, a double risk arises: on the one hand, weak signals are not reported up the chain because those who notice them feel they lack the legitimacy to express them; on the other, when they are reported, they are gradually filtered, reformulated and standardised by middle management to meet the supposed expectations of senior leadership. In both cases, the loss transforms an early warning into a late diagnosis.
Rehabilitating human judgement therefore involves:
- Training people to exercise judgement in uncertain situations, not just in processes and tools.
- Legitimising decision-making at all levels by giving employees the freedom to act on weak signals without waiting for full hierarchical validation.
- Creating the conditions for unfiltered feedback, where expressing doubt, intuition or discomfort is not perceived as a lack of professionalism.
Organisational science shows that judgement is not a flaw to be corrected but a skill to be cultivated. It can be trained, calibrated and shared. It is based on accumulated experience, on the confrontation of perspectives, on the ability to hold together incompatible elements.
The human is not the weak link. The human is the decision-making organ.
Practical implications for leaders
Leaders must cultivate competencies rarely taught in traditional programmes:
- Tolerance for ambiguity — accept not knowing, without abdicating decisiveness.
- Active listening for weak signals — value frontline insights even if incomplete.
- Decision-making “on the move” — set direction, observe effects, refine continuously.
- Ethical clarity of intent — when facts are unstable, purpose matters as much as method.
These skills are not innate. They are cultivated through exposure to real-life situations, through confrontation with choices that have no right answer, and through reflection on one’s actions afterwards.
They also require humility: admitting that one does not know everything is not disqualifying; it lends credibility.
Finally, we must bear in mind that technology, however advanced it may be, will never replace:
- the responsibility for a decision made in doubt
- the credibility gained through consistency over time
- the trust built through transparency about what we do not know.
