
Crisis Leadership — When Everything Goes Wrong

The 36-Hour Outage That Made Me a Leader

Many know me from transformations and turnarounds. Fewer know my real executive journey started with a crisis.

A 36-hour nationwide outage in Maputo. Network down. Customers angry. Regulators watching. The kind that breaks careers—or makes them.

I wasn't supposed to be in charge that day. But as things escalated, I realised: In a crisis, everyone looks at you—not for perfect answers, but for calm, direction, and ownership.

So I:

  • Kept the team together—no blame, just focus

  • Prioritised services clearly—what comes back first and why

  • Communicated honestly—internally and externally

  • Stayed present until the last light came back on

That outage taught me two things:

  1. You don't wait for certainty to act. Leadership means stepping in before it's easy.

  2. People remember who showed up in the fire, not who showed up for the photo afterwards.


[Image: the war room during a crisis]

Crisis Priorities: Who Comes First?


In critical infrastructure, not all services are equal, especially in a crisis. Your responsibilities come in this order:


1. Critical and emergency services: defined by your license; restoring them first limits national damage.

2. Safety of your employees: no network is worth a life.

3. Your commercial services.

4. Enterprise services: ranked by contractual SLA.


This last one triggers debates. "Shouldn't we prioritise by economic importance?"

No. Prioritise by contractual responsibility.


Example: A bank buys 90% uptime with 2-hour recovery (lower price). A factory buys 99% uptime with 1-hour recovery (higher price).


In a crisis, my priority is the factory. Why? The bank's resilience is its own responsibility; my network is only one part of it. The factory bought all of its resilience from me.
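To make the rule concrete, here is a minimal sketch in Python of how a restoration queue could be ordered: first by the license-defined class above, then, within enterprise services, by the contractual recovery commitment. The class names, fields, and figures are illustrative assumptions, not a description of any real system, and employee safety is deliberately left out because it is a people decision, not a queue entry.

```python
# Minimal sketch: restore by contractual responsibility, not by perceived economic importance.
# All names, classes, and SLA figures below are illustrative assumptions.
from dataclasses import dataclass

# Lower number = restored earlier, mirroring the ordering described above.
CLASS_PRIORITY = {
    "critical_emergency": 0,  # defined by the license; limits national damage
    "commercial": 1,          # our own retail services
    "enterprise": 2,          # ranked among themselves by contractual SLA
}

@dataclass
class Service:
    name: str
    service_class: str
    sla_uptime_pct: float      # what the customer contracted (and paid) for
    sla_recovery_hours: float  # contractual recovery-time commitment

def restoration_order(services: list[Service]) -> list[Service]:
    # Within a class, the tighter the contracted recovery time, the earlier it comes back.
    return sorted(services, key=lambda s: (CLASS_PRIORITY[s.service_class], s.sla_recovery_hours))

queue = restoration_order([
    Service("bank",    "enterprise", sla_uptime_pct=90.0, sla_recovery_hours=2.0),
    Service("factory", "enterprise", sla_uptime_pct=99.0, sla_recovery_hours=1.0),
])
print([s.name for s in queue])  # ['factory', 'bank']: the factory bought its resilience from us
```

The point of the sort key is governance, not code: the order is fixed before the crisis, by license and by contract, so nobody has to renegotiate it in the war room.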


Don't let politics rewrite priorities mid-crisis. Keep your eye on the ball.


Root Cause, Not Scapegoats


Why do crises repeat? Because once the heat dies down, politics override root-cause analysis.

The network collapses because of a provider. First reaction: "It's the vendor's fault." Fines, legal battles, a press release... then we move on. Until the next outage.

The real root cause isn't just the bug—it's the design, vendor strategy, and risk appetite.


Questions rarely answered:

  • Who decided on a single point of failure?

  • Who accepted "good enough" resilience?

  • Who chose cost savings over redundancy?


Root cause analysis is a mirror. If, after an incident, you announce the vendor's name and issue a tough statement, but don't redesign the architecture or update the risk register, nothing was learned.

Real leaders go past blame into design, governance, and culture.


Crisis Leadership Framework

If you lead in tech, crises aren't a question of "if", only of "when". Here's my three-phase approach:


Before: Build Risk Culture


  • Take corporate risk seriously

  • Maintain a living risk register (a minimal sketch follows after this list)

  • Share past crisis stories regularly

  • Keep memory alive without creating panic
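As a sketch only: one way to keep a risk register "living" is to treat each entry as structured data with a named owner and a review cadence, and to flag anything that has not been looked at recently. The fields below are illustrative assumptions, not a standard template.

```python
# Illustrative sketch of a "living" risk register entry; field names are assumptions, not a standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str        # e.g. "single point of failure at the core routing site"
    owner: str              # one named accountable person, not a committee
    likelihood: int         # 1 (rare) to 5 (almost certain)
    impact: int             # 1 (minor) to 5 (national-level damage)
    mitigation: str         # what is actually being done, not what is hoped
    last_reviewed: date
    review_every_days: int = 90

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

    def is_stale(self, today: date) -> bool:
        # A register is only "living" if entries are re-reviewed on a cadence.
        return (today - self.last_reviewed).days > self.review_every_days
```

Sorting entries by severity and flagging the stale ones is enough to keep the register on the leadership agenda without creating panic.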


During: Keep the Team Together


Your mission: sail out of the storm.

  • Set clear priorities (what recovers first and why)

  • Keep communication tight

  • Don't let politics enter the command room

  • Have a single trusted voice giving regular updates

People look for calm, not perfection.


After: Ruthless Learning, Respectful Accountability


The worst thing: moving on as if nothing happened.

The day after service returns:

  • Be at your desk asking "Why? How? What's the systemic cause?"

  • Do lessons learned within a week, not a year

  • Write a clear incident report

  • Hold people accountable even if facts aren't 100% clear

  • Bring in external views if needed

  • Freeze unnecessary changes temporarily


Crises reveal your architecture, culture, and leadership—all at once.



How CrowdStrike Recovered From Disaster


My son recently told me: "I like CrowdStrike. Falcon AI is powerful."

This is the company that triggered one of the biggest global IT outages recently—grounded planes, frozen banking, ~$10B+ impact.

I watched CEO George Kurtz explain in real time. Not pretty, not polished—but present.


Then something interesting happened:

  • Day 2: Clear public apology, no hiding

  • 24 hours: ~97% recovery

  • Communication: continuous, transparent, focused on recovery

  • Following months: Product roadmap accelerated, massive AI emphasis


Stock crashed. Lawsuits came. Brand took a hit.

But less than a year later, a new generation of investors says "I like what they're doing."


Lesson: You will fail at scale. What matters is how fast you admit it, how clearly you communicate, and how relentlessly you execute recovery.


That's how you turn disaster into a case study.


Conclusion

Crises don’t create leaders — they reveal them.


They expose your architecture, your risk culture, and your ability to stay calm when certainty is gone. The real test isn’t the moment service returns; it’s what you change the next morning.


Own the moment, protect the team, communicate clearly, and fix the system — not the story.

 
 
 


