I've run about 40 AI training sessions over the past couple of years. Different organisations, different sectors. Public sector, higher education, charities, insurance, media. The rooms look different. The dynamics don't.

Every time, the room splits into three groups.

Junior staff get it quickly. They're building things by lunch. No existential crisis, no defensiveness about the way they've always done things. AI is another tool to them, like email or a shared drive. They pick it up and move on. Sometimes too quickly — they'll publish something that clearly came out of ChatGPT without reading it properly, and nobody catches it because the workflow hasn't been designed to catch it. But they're not the problem.

Senior staff ask the most interesting questions. Not "will this take my job?" but "what does this mean for how we're structured?" and "should we be organised this way if this tool can do that?" They see the technology and immediately see the organisational implications. They're playing chess two moves ahead, which is useful, though it sometimes means they skip the practical bit entirely.

The middle layer — the people who've been doing the work for eight, ten, twelve years — is where it stalls.

And I don't think most organisations understand why.


A training coordinator at a public body told me, after a session: "I spent eight years becoming the fastest person in my office at putting together event packs. I could do it with my eyes closed. Now you're telling me a machine can do it in ten minutes."

She wasn't angry. She wasn't being difficult. She was doing the maths. Eight years of compounding skill, and the thing she'd compounded was now worth less. Not worthless — she still had judgement about what goes in the pack, what the audience needs, what the format should be. But the speed and the process knowledge, the thing that made her visibly good at her job, had been commoditised overnight.

A project manager at a different organisation said something similar: "I've built my whole career on keeping track of all the moving parts. If the software can do that, what's my role?"

These aren't resistant people. They're rational people doing a cost-benefit analysis that nobody in leadership has bothered to address.


I ran a session for about 50 people at a large public body recently. Big room, mixed seniority, everyone on the same tool. The organisation had rolled out Copilot. But here's the thing: they had a policy for Copilot and nothing for anything else. No guidance on ChatGPT. No guidance on Claude. No clear rules about what data could go where. No support channel for when someone got confused about what their licence actually gave them access to.

The junior staff didn't care. They just used whatever worked. The senior staff nodded at the policy and moved on to strategy. The mid-career people froze. They're the ones who own the operational risk. They're the ones who'll be asked to explain it if something goes wrong. To them, an unclear policy didn't read as "figure it out." It read as "here's a trap."

Half the questions in that session weren't about capability. They were about permission. "Am I allowed to put this data in?" "What happens if I use the wrong tool?" "Who do I ask?" These are governance questions dressed up as technology questions, and they land hardest on the people who've spent a decade learning to stay within the rules.


At another organisation — a charity — we completed an entire engagement. Delivered recommendations. Built a framework. Identified the tools. Mapped the workflows. Then nothing happened.

Eight follow-up emails. No response.

The project lead wasn't opposed. She was overloaded. She sat between a board that wanted AI progress and a team that needed her for everything else. AI adoption landed on top of a full workload with no space carved out to actually do it. Nobody had scoped the implementation as a real project with real time allocated. It was an add-on. And add-ons die in the middle layer, because the middle layer is where everything competes for the same limited hours.

This is the pattern that doesn't show up in adoption surveys. The organisation ticks the box: "We've done AI training." The team ticks the box: "We attended." But the actual behaviour change requires someone in the middle to have time, permission, and motivation to rewire their workflows. When all three are missing, you get compliance without adoption.


A public sector team told me their staff had ideological objections to AI tools at work. Tools built by big tech, political concerns about the companies behind them, values-based resistance. In that sector, these aren't trivial objections. They're connected to real concerns about procurement ethics, data sovereignty, and institutional independence.

But the same staff were using ChatGPT daily at home. The ideology didn't stop personal use. It stopped organisational endorsement. The conversation at work got stuck in abstract debate about whether the organisation should be seen to endorse these tools. Meanwhile, people were quietly using them in their personal capacity to do the same work faster.

This is particularly acute in the public sector, cultural organisations, and charities. The mid-career professionals in those sectors often take the institutional values most seriously. They're the ones who'll flag the ethics. That's valuable. But when it becomes a reason not to act rather than a reason to act carefully, the organisation stalls.

The junior staff aren't thinking about institutional positioning. The senior staff can make a strategic call and own it. The middle layer is caught between their own values, the institutional reputation, and the practical reality that everyone's already using these tools anyway.


One organisation reported 85% AI adoption after a training programme. That's a good number. The kind of number that goes in a board update.

When I asked what that looked like in practice, the answer was: "Esoteric conversations rather than practical implementation."

People were talking about AI. Discussing it in team meetings. Sharing articles. Having opinions. Nobody was building workflows. Nobody had changed how they actually did their job. Awareness had been mistaken for adoption, and the metric didn't distinguish between the two.

This is common. It's particularly common when the middle layer intellectualises the change instead of experimenting with it. Discussion feels like progress. It feels productive. It's also safe — you can discuss indefinitely without committing to anything. The risk only arrives when you actually try to use the tool on real work with real stakes. And that's the step the middle layer hesitates on, because they understand the stakes better than anyone else in the room.


Another organisation mandated a single AI tool and banned the alternatives. One approved platform, everything else off-limits. The first training session, by their own account, "created more questions than answers."

When adoption stalled, the assumption was: people need more training. The actual problem was different. Nobody had been consulted on the tool choice. Nobody understood the data governance around it. The mandate had bypassed the people who'd have to make it work. They felt done-to rather than involved.

Mid-career professionals expect to have a say in tools and processes that affect their work. Not because they're precious. Because they know things about the work that the people making tool decisions often don't. They know which data is sensitive. They know which workflows are fragile. They know which processes look simple on a diagram and are actually held together by undocumented human judgement.

When you mandate without consulting, you don't get resistance to the technology. You get resistance to being bypassed. Different problem, same symptom.


The pattern across all of these is the same.

Mid-career professionals aren't afraid of AI. They're afraid of making mistakes without clear guidance, being blamed for gaps in policy they didn't write, or watching their hard-built expertise get devalued with no story about what comes next.

Most change management effort goes to leadership buy-in at the top or broad awareness programmes. Almost none goes to the specific needs of the middle layer. And the middle layer is exactly where adoption lives or dies, because senior staff don't implement things and junior staff follow wherever they're pointed.

What the middle layer actually needs is different from what they're getting. They need bounded experiments with clear rules — "try this tool on your next project, here's what counts as success, here's what we'll do with what you learn." They need permission to try things without career risk. They need a credible story about how their role evolves rather than disappears. And they need evidence from people like them — same seniority, same kind of work, same anxieties — who've come through this and landed somewhere solid.

They don't need enthusiasm. They've heard enough enthusiasm.

They're too experienced to be reassured by a slide deck. Too invested not to ask what's in it for them. And too important to keep ignoring when they do.