
The Big Questions: A Few Big Questions About AI

Flying From New York to Tokyo in 45 Minutes


A thought experiment that still makes me smile:


First commercial flight: 1914. Roughly a century later:

  • We're talking about commercial space trips

  • We're watching artists like Katy Perry "go to space" within 110 years of humans boarding flying boats over water


Things that used to take:

  • Centuries → now take decades

  • Decades → now take years

  • Years → now take months


Same with software: the time from the first mainstream apps to AI that writes code, rather than merely being written as code? Incredibly short.


Now ask yourself: "Am I designing my organisation for a world that flies New York to Tokyo in 45 minutes—or for a world that still travels by boat?"


Because right now:

  • Many companies are plugging agentic AI in alongside their employees

  • But keeping org charts, processes, and skills planning as if nothing has changed


If your organisation has AI writing reports, analysing logs, drafting code... but your structure, KPIs and processes are still built for a slower, manual world—your problem isn't AI adoption. Your problem is org design denial.


You don't have to go full sci-fi. But you do need to ask:

  • What work disappears?

  • What new work appears?

  • How do I redesign roles, teams, and culture around this?


Because the risk is simple: The world moves to space flights, while your company is still perfecting the steam engine.


Some little Big Questions


A Map as Big as the City—And Other AI Lessons


One of my favourite lines: "The map is not the territory."

Or as Borges wrote, a map as big as the city is useless.


That's exactly how I feel about AI models today. Everyone wants:

  • A perfect model

  • A flawless simulation

  • An "intelligence" that predicts everything with zero error


But that's not how the world—or science—works.

What I love about science is simple:

  • Scientists are wrong until they are not

  • Models are always incomplete

  • Facts change

  • Debate is allowed

  • Humility is built-in

AI needs the same humility.


A "perfect" model can be as useless as a map the size of the city. What we need are good enough models, grounded in reality, that:

  • Guide, not dictate

  • Help navigate, not paralyse

  • Inform decisions, not replace thinking


Remember: AI models are simplifications, not magic.

Use AI as a tool. Keep science free to debate. And keep your eyes on the actual territory—customers, teams, infrastructure—not just the beautiful digital map your data team built.


Deepfakes, Misinformation & Why Source Matters More Than Content


This week I did something scary: I used a free tool, recorded my voice for three minutes—and generated a convincing deepfake of myself.

Three minutes. Free tool. No special skills.


Now imagine what a "pro" setup can do.


We're entering a world where:

  • You can't trust your eyes

  • You can't trust your ears

  • You can't trust a video just because "it looks real"


Which means one thing: Validation of the source is now more important than the content itself.

We're going to be surrounded by:

  • Fake speeches

  • Fake statements

  • Fake scandals

  • Fake "evidence"


And not just for politicians and celebrities—but for normal people too.

Now add another layer: We're building AI agents that will consume this content, make decisions, and act on our behalf. If those agents are fed garbage, they will generate even more garbage—at scale, at speed.


The line between freedom of expression and the management of misinformation is getting thinner every day.


I don't have all the answers (yet). But I know this: we must invest as much in authenticity, verification, and trust as we do in generating content.
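The "validate the source, not the content" principle has a direct technical analogue: cryptographic provenance. A minimal sketch, assuming a publisher and consumer share a secret key out of band (real provenance standards such as C2PA use public-key signatures instead; the key and statements below are hypothetical):

```python
import hmac
import hashlib

# Hypothetical shared secret, distributed out of band.
SECRET = b"publisher-secret-key"

def sign(content: bytes) -> str:
    """Return a hex signature binding the content to its source."""
    return hmac.new(SECRET, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Accept content only if its signature checks out.
    A perfect-looking fake without a valid signature fails here,
    no matter how convincing it looks or sounds."""
    return hmac.compare_digest(sign(content), signature)

statement = b"I never said that."
sig = sign(statement)

print(verify(statement, sig))                    # True: authentic copy
print(verify(b"I definitely said that.", sig))   # False: tampered copy
```

The point is not this particular scheme; it's that trust moves from "does this look real?" to "can I verify where it came from?" — a question mathematics can answer even when eyes and ears cannot.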

Prediction, Prevention... or Arresting on Intent?

In one of our podcast segments, we discussed prediction models—in tech, in security, in behaviour.


When is prediction about being ready? When is it about preventing harm? And when does it become punishing intent, not action?

Predictive policing. Risk scoring. AI models that "know" who will default, who will get sick, who will churn, who will protest.


If we're not careful, we move from:

"I want to know what might happen so I can prepare."

to

"I will treat you as if you've already done it."


There's a fine line between:

  • Using models for readiness

  • And using models for prejudice


Same with internal systems: If your system requires 80% of users to significantly change their natural behaviour just to make it work—either the system is wrong, or you picked the wrong crowd.


Controls should guide, not suffocate. Prediction should empower, not criminalise.

Technology is getting very good at guessing patterns. Our job as leaders is to decide what we do with those guesses—and where we draw the line.



Innovation That Makes Things Worse


Not every innovation is an upgrade. Sometimes "innovation" is just:

  • Adding extra steps

  • Breaking what used to work

  • Making people miserable—in a more modern interface


We've all seen it:

  • A chatbot added "in front of" the call centre—not to help, but as an extra obstacle before talking to a human

  • Government digital portals that add six extra clicks to end up with the same physical visit to a grumpy employee

  • "Smart" apps that simply wrap an old broken process in a shiny UI


Here's the core problem: Too many leaders start with:

❌ "What's our AI / digital / innovation strategy?"

Instead of:

✅ "What problem are we solving, and for whom?"

✅ "Does this make life easier or harder for the user?"


I once had a CEO who wanted to launch a huge "digital transformation programme" that would cost more than the company's annual revenue... to solve what was essentially an internal communication issue between teams.


No business case. No customer value. No operating model change. Just a very expensive way to avoid a candid conversation.

Innovation should:

  • Improve the operating model

  • Improve the business model

  • Or radically improve customer experience

If it doesn't do any of the three, it's not innovation. It's a UX downgrade with a fancy budget.


Hi,
I'm Amir

Leaders must be both human and tough.

My style is direct, fair and transparent. People follow leaders who tell them the truth, protect them from nonsense, and still demand their best work.

How I work

My operating system is simple:

  • EPIC culture: Efficient, Precise, Intelligent, Credible

  • Root cause first: We diagnose before we prescribe

  • Little big wins: Fast, visible progress that builds trust and momentum

  • No theatre: Clear language, direct conversations, honest status

If this resonates with how you like to work, we'll get along well.

© 2025 by Amir Abdelazim.

Amir Abdelazim

Innovatics Partners GmbH & Co. KG

71 Urbanstr.

Berlin, 10967

Germany
