The Real Risk Isn’t AI

Most of us would agree: it’s been a big five years.

But if we look ahead to 2030, the words being used are different: tectonic, monstrous, unrelenting. The pace of change isn’t just fast – it’s exponential.

One former OpenAI researcher predicted that by the next Summer Olympics, we could see 100 years’ worth of progress!

This isn’t just about new tools. It’s a seismic shift in how we work, live and lead. Bigger than the internet. More embedded than smartphones.

But the real transformation won’t come from the tech itself. It will come from how we choose to use the capacity it creates.

Could this be our once-in-a-generation opportunity to redesign work and workplaces for the better?

Done well, this shift could lift satisfaction, engagement, and performance, while reversing the rising tide of burnout and disconnection draining so many teams.

But to get there, we need to start with one thing: rethinking the risks of standing still.

What We’re Getting Wrong

In my last post, I shared that AI conversations are still too narrow, focused on tools and efficiency rather than strategy and purpose.

That’s especially true here in Australia (and other advanced economies).

A global study by the University of Melbourne and KPMG found Australians are the most sceptical of AI worldwide. Only 30% believe the benefits outweigh the risks, the lowest confidence rate on the planet.  

So it’s no surprise that only half of Australians report using AI tools. Now compare that to countries like India, Nigeria, the UAE and Egypt, where over 90% are actively using AI. And while we wait, we risk losing market position, top talent, clients and momentum.

This isn’t a technology gap; we all have access. It’s a gap in mindset and confidence.

And even when AI is used, it’s often just surface-level. People are dabbling without direction or a strategy. They’re adopting small shortcuts instead of bold redesigns of how work gets done – and what roles are really for.  

Yet the teams forging ahead aren’t more comfortable, they’re just more curious. They’re experimenting despite the uncertainty, and they’re asking the better questions (some of which will be shared at the upcoming AI Advantage Masterclass).

Standing still might feel safe. But in a time of rapid change, inaction is the bigger risk.

The Real Risks People Are Talking About

  • Surveillance and Scrutiny

In 2024, Woolworths rolled out an AI-driven performance system in its warehouses. It tracked every move, set strict productivity targets, and left workers feeling monitored like robots, causing significant stress, illness and burnout. Over 1,500 workers went on strike, not against AI, but against how it was being used, costing the business $140M.

The lesson? Be clear and transparent. Use AI with your people, not on them.

  • Misinformation at Scale

AI doesn’t distinguish between fact and fiction unless it’s trained (and constrained) to do so. It can surface biased, outdated, or flat-out false content with absolute confidence. For example, over 80% of mental health advice on TikTok is inaccurate or misleading, and that same flawed content is influencing generative tools.

The solution? Have strong content filters. Use human oversight and a bias-aware mindset.

  • Hallucinations and False Authority

AI has confidently invented legal precedents, fake scientific studies, and fictitious news articles. These “hallucinations” are not rare glitches; they’re systemic vulnerabilities. And in high-stakes contexts, they’re dangerous – or at the very least, they can leave you far more confident than you should be.

The safeguard? Teach teams to verify, not just trust. Promote a human-in-the-loop model for all critical outputs.

  • Job Anxiety

People fear being replaced. They’re unsure what their role will look like in six months or if it will even exist. But here’s the truth: AI can’t replicate your humanness. It can’t shape culture, build trust, spark empathy, or connect in meaningful ways. It doesn’t create, innovate or lead with purpose – the very things that drive performance.

The work? Redefine value. Rethink responsibilities. Develop the human skills that truly matter.

  • Leadership Erosion and Avoidance

AI is creating uncertainty, and many leaders don’t feel equipped to guide their teams through it. So they avoid the topic. Or they delay decisions. Or they focus on tools without talking about what this change means. That silence can be more damaging than the technology itself. Because in moments of rapid change, teams don’t need all the answers, they just need their leaders to show up. And when they don’t get that, some are already turning to AI for clarity, direction, and even emotional safety.

The move? Lead the conversation. Ask better questions. Model curiosity over control.

So, What Do We Do?

  • We don’t panic or freeze, and we definitely don’t wait. Instead,
  • We build guardrails: ethical use, responsible frameworks, and clear boundaries around AI use.
  • We equip people: with curiosity, training, and space to experiment safely and meaningfully.
  • We develop muscle: teaching people to think critically, challenge biases, and ask better questions.
  • We lead the conversation: with teams about what’s changing, what it means, and what’s possible.

I’m not blind to AI’s risks. I don’t believe AI is a panacea for all our workplace problems. But I’m still the forever optimist, because I see the opportunity. And how could you not?

Because I believe in a better future, and as Noam Chomsky put it, optimism is a strategy for making that future happen – it’s what helps people step up and take responsibility for making it so.

So, what might shift if you got a little more AI curious?
