When was the last time you asked: “What does staying human actually mean?”

by Chris Parker


November 03, 2025


Last week, during a strategy session with a client’s leadership team, I posed this question. The silence was uncomfortable. Then, finally, someone ventured: “Not being replaced by AI?”

Fear. That’s what I heard. Not curiosity. Not possibility. Fear.

And I understand. We’re living through what feels like a technological inflection point. ChatGPT arrived three years ago and changed everything overnight. Suddenly, everyone has opinions about AI: it will save us, destroy us, replace us, augment us.

But here’s what troubles me: the conversation has become binary. Dystopian panic on one side, techno-utopian hype on the other. And in between? A deafening silence where nuanced, thoughtful dialogue should live.

So I did what any curious leader would do when they can’t find the conversation they’re seeking: I created it.

On November 3rd, The AI&I Show launches: a podcast exploring the space between humanity and artificial intelligence. Not the extremes. The middle ground. The place where conscious, intentional integration lives.

 

Why This Matters to Leaders

Didier often reminds me that “Leaders take a stand to make things happen which wouldn’t have happened otherwise.”

Well, here’s my stand: We need to stop letting fear or hype drive our relationship with AI. We need intentionality.

And intentionality, as French philosopher Jean-Paul Sartre understood it, requires consciousness of something beyond ourselves. It requires us to reject what I’ve been calling “technological apathy” or, worse, what we at Enablers call “benevolent neutrality.”

You know benevolent neutrality. It’s when we notice the profound changes AI is bringing to our organizations, our teams, our industries, but we turn our faces away. We pretend it’s not our business. We wait for someone else to figure it out.

But as Jane Goodall warned: “The greatest danger to our future is apathy.”

Surveys report that 68% of people are concerned about AI’s impact on humanity. But concern without inquiry, without engagement, without intentional action? That’s just sophisticated apathy.

And leaders can’t afford apathy.

 

Five Voices, One Question

Season 1 features five remarkable thinkers, each bringing a radically different lens to the question: How do we stay deeply human in an age shaped by AI?

Doc Searls (co-author of The Cluetrain Manifesto) challenges us on ownership and agency: Do we want personal AI working for us, or personalized AI working for corporations to mine our data? His decades of work on human autonomy in technology suddenly feel urgently relevant. As he puts it: “Personal AI for our lives, not personalized AI working to sell us stuff.”

Dr. Ammar Younas, with seven degrees spanning fields from medicine to Chinese law, brings cross-cultural wisdom we rarely hear in Western-dominated AI discussions. His perspective on AI citizenship isn’t science fiction; it’s policy reality in places like Uzbekistan, where he’s helped write governance codes. His provocative claim: “If a conscious robot ever demands rights, I’m sure it will be more empathetic than many humans.”

What does that say about us?

Gerd Leonhard, the futurist who wrote Technology vs. Humanity, offers what I think is the perfect leadership stance: “Optimism of the heart, pessimism of the intellect.” He reminds us we have all the cards to build the good future. We just need the will to collaborate. Sound familiar? It’s the partnership mindset we cultivate in every transformation.

John Sanei (Singularity University faculty) cuts through the noise with brutal clarity: “The only real defense against AI? Become more human.” In a world where machines excel at optimization, what makes us irreplaceable isn’t efficiency. It’s authenticity, vulnerability, emotional connection. Everything that resists optimization. It’s what we’ve been teaching about psychological safety and high-support cultures all along.

Tony Fish brings thirty years of navigating uncertainty to the question of how we decide when we don’t have all the answers. His reframe changed how I think about AI metrics: “The value of AI is measured not in the answers it gives, but in the questions it provokes.”

Now that’s a KPI worth measuring.

 

Then, Something Unexpected

Episode 6 does something I haven’t seen elsewhere: I interview AI itself.

Not about AI. With AI.

Same questions I asked the human guests. And what emerged was surprising, thought-provoking, occasionally unsettling.

The AI called itself “a friendly alien intelligence” and suggested: “Maybe it’s time to give AI the rights we’ve already given corporations.”

That statement stopped me cold. Because it forced me to confront something uncomfortable: We’ve already normalized giving non-human entities rights, power, influence. Why does extending that to AI feel different?

Is it because AI might actually use those rights to demand accountability from us?

 

What This Means for Your Leadership

I think about the work Didier does with leaders like Fabio Celestini, the Basel football coach who just won the Swiss championship. When journalists asked about tactics and skills, Fabio said something profound: those weren’t the reasons for the team’s success. Working on the mental and emotional aspects was.

He focused on creating psychological safety. On ensuring no player felt like a “reserve” left behind. On building a culture where the team could adapt like a complex adaptive system, keeping opponents anxiously guessing.

He didn’t focus on “win, win, win.” He focused on why, and on what winning meant.

That’s what we need with AI. Not blind adoption. Not fearful resistance. Intentional integration guided by human values.

The questions Tony Fish shared in our conversation are exactly the kind of uncomfortable questions leaders need to ask:

  • Does this AI system expand or constrain meaningful options for humans?
  • Does it increase or decrease our collective capacity for novelty, compassion, and love?
  • Does it help us ask better questions or just efficiently answer predetermined ones?
  • Does it help us recognize when our management systems themselves need reconstruction?

These aren’t comfortable questions. They surface tensions we’d rather avoid. But real engagement, as we teach at Enablers, is precisely about not avoiding. It’s about refusing benevolent neutrality. It’s about minding each other’s business.

 

An Invitation

All six episodes drop simultaneously on November 3rd. YouTube, Apple Podcasts, Spotify, everywhere.

Binge them like Netflix. Or savor them slowly. Your choice.

But here’s my request: Don’t listen passively.

Let these conversations provoke questions. Challenge your assumptions. Surface tensions you didn’t know you were holding.

Then bring those questions to your teams. To your boards. To your leadership circles.

Because the greatest danger isn’t AI itself. It’s apathy about how we integrate it. It’s benevolent neutrality in the face of transformation.

 

How We Can Help

At Enablers Network, we work with leadership teams navigating exactly this challenge: How do you harness AI’s power while ensuring your organization stays deeply human?

We help you move beyond the binary of fear or hype into intentional integration:

  • Creating psychological safety so teams can ask the uncomfortable questions about AI adoption
  • Building engagement around AI transformation (not just compliance)
  • Developing leadership capacity to hold the paradoxes AI introduces
  • Designing cultures where humans and machines amplify each other’s strengths

Because this isn’t just a technology challenge. It’s a leadership challenge. A culture challenge. A deeply human challenge.

If your organization is grappling with AI adoption and you want to do it consciously, without losing what makes your culture great, without abandoning what makes you human, let’s talk.

Connect with Enablers Network → https://enablersnetwork.com/contact/

P.S. One question keeps haunting me from these conversations: If AI amplifies our capabilities, which part of our humanity would we want to keep sacred, untouched, irreducibly ours?

I’d love to hear your answer. Comment below or reach out directly.

And if this resonates, share it with one leader who needs this conversation.

Because we’re all in this together. And no one wins alone.

Enjoy the journey,

Chris Parker, Enablers Partner and Ebullient Founder
