2025 was the year AI moved from "wow" to "what now?" The noise didn't disappear, but something shifted. Organisations stopped asking whether they should adopt AI and started wrestling with how to do it well.
In my first blog, I talked about the AI bus already leaving the station. Well, that bus is moving - but the real question now is who's steering it, who's just along for the ride, and who's still standing at the bus stop wondering if they've missed their chance.
Looking back at the organisations I've worked with this year, three patterns emerged. Not from research reports or analyst predictions, but from watching real teams navigate real implementation challenges. Here's what 2025 taught us - and what it means as we look forward to 2026.
I worked with an HR team this year who were being asked to use Microsoft Copilot but were paralysed by fear. They had the business case, the budget and the licences. But they couldn't pull the trigger. They were terrified - of data breaches, of intellectual property leaking, of employees creating poor-quality work at scale.
Here's the thing: their fear wasn't irrational resistance. It was intelligence. They understood the stakes in a way that organisations who rush in often don't. The problem wasn't the anxiety itself - it was that they needed help to turn that anxiety into action.
The key lesson? Stop treating fear as something to overcome with reassurance and "don't worry, it's easy" messaging. Treat it as valuable data. Where there's fear, there's usually a genuine gap in governance, training, or clarity. Address the gap, and the fear transforms into informed confidence.
2026 needs less "AI is simple, just try it" and more "yes, this matters - here's how we'll navigate it together."
Some organisations are genuinely experimenting. I've watched one tech-forward software company pilot agentic AI across their legal, HR and IT departments and beyond - not because it was trendy, but because they'd identified clear use cases where AI could genuinely help.
What’s making it effective isn’t just the technology. It’s that leaders are trying the tools themselves first. They’re creating psychological safety for teams to experiment and fail. They’re tracking what’s working and what’s safe, and killing what isn’t. That’s real experimentation, not AI tourism.
Contrast that with the organisations buying tools nobody uses, running pilots that never scale, or worse - implementing AI systems and then wondering why adoption is sluggish. The difference isn't budget or technical sophistication. It's clarity of purpose and cultural readiness.
2026's challenge will be moving from ‘proof of concept’ to ‘proof of practice’. That means fewer shiny pilots and more honest conversations about what's actually working, what's flopping, and why.
The NHS has been quietly implementing ambient voice technology (AVT) in some settings - AI scribes that record clinician-patient conversations and automatically generate clinical notes.
But here's what makes this responsible implementation rather than reckless adoption: it's gone through rigorous due diligence for the specific clinical settings where it's used. Patients give explicit permission before any recording starts. And crucially, clinicians review and approve every AI-generated note before it becomes part of the medical record - the human stays firmly in the loop.
The result? Overstretched doctors and nurses get precious time back. Less paperwork, more actual patient care. No fuss, no fanfare, just a genuine problem being solved thoughtfully.
This is what good AI implementation looks like:
A clear pain point
Proper governance
End users involved in design
Patient consent
Human oversight
And, crucially, measurable value at the end of it.
The pattern across every successful implementation I've seen? It's not about what AI can do. It's about what people actually need. When those two things align, magic happens.
The opportunity in 2026 is to stop asking "what's the coolest thing AI can do?" and start asking "what's the most annoying problem our people face?"
So what needs to mature as we head into 2026? Three things stand out.
First, we need governance frameworks that protect people and data without requiring a committee meeting every time someone wants to try a new prompt.
Second, tick-box training won't cut it. People need ongoing, embedded AI fluency - the kind that comes from practice, experimentation, and permission to learn by doing.
Third, there is often a chasm between senior executives who think AI is brilliant and employees who aren't so sure - and it's not closing by itself. Both HR and Learning & Development have a critical role here: not just training people on tools, but building confidence, critical thinking, and the human judgement to know when to trust AI and when to override it.
The organisations that'll thrive in 2026 won’t be the ones with the fanciest AI tools. They'll be the ones treating this as a people transformation, not a technology deployment.
As we roll into 2026, here's my suggestion: don't start the year by buying more AI tools. Start by asking better questions:
Where's the fear in our organisation - and what's it telling us?
Where's genuine experimentation happening - and what can we learn from it?
Where is AI actually helping people do their jobs better, not just faster?
So what’s my biggest lesson from 2025? AI readiness isn't a destination you arrive at. It's a practice you build, refine, and sustain. Check your rear-view mirror, learn from what you see, and keep your eyes firmly on the road ahead.
In this webinar, Lindsey Coode and Amira Kohler reveal how forward-thinking organisations are closing the critical gaps between investment and maturity, leaders and employees, and technology and trust.
The AI bus is already moving. Make sure you’re the one driving it - and that everyone knows where it’s heading.
Watch the webinar on demand now.