Before the AI tools, there are people


An organization invests in AI tools. Someone from IT runs a session. Guidelines go out. And then… well, often not a whole lot changes. People keep working the way they always have and the tools sit there.

The easy read is that people need more training. Better prompts. A stronger nudge from leadership. But that’s usually not it.

Ask someone how they feel about AI at work and most will give you the safer answer. Curious. Open to it. Willing to learn. Ask them in a small group, with a bit of time and no agenda, and something else comes out.

“I’m afraid AI is going to take my job.”

“I’m afraid this is going to take my job. I’m afraid I won’t feel like I’m doing work of value anymore.”

We are hearing this across organizations – from capable, committed professionals who’ve spent years getting good at what they do and aren’t quite sure what that means now. It doesn’t announce itself as resistance. It shows up as polite non-use. Tools accessed once or twice then quietly set aside. People who say the right things and go back to doing what they’ve always done.

More training, better content, sharper guidance, stronger nudges from the top – none of that speaks to someone wondering whether the judgment and expertise they’ve built over a career still has a place. That question needs a different response.

When we work with organizations on AI adoption, the first thing we do isn’t open a tool or run a session on prompting. We start with something we call ‘feelings mapping’ – a structured exercise where people name what they’re actually feeling about AI and their work, in a group, and before anything else happens. We do it because what sits unspoken in the room doesn’t just disappear, in fact it gets in the way.

What surfaces is rarely what people expect to hear from their colleagues. Job security comes up almost every time – and once it does, something more specific tends to follow. People aren’t just worried about being replaced. They’re worried about losing the work that requires their judgment: the part that comes from experience, from reading a situation, from knowing what the numbers don’t tell you. Which problems belong to AI and which ones belong to them? It’s the question sitting underneath a lot of AI anxiety – and one that deserves more deliberate attention than most programmes give it. We call that discernment, and we’ve written about why it might be the most important AI skill nobody’s building.

Once fear gets named and taken seriously, people engage differently with AI adoption programmes. They start making it their own, bringing real problems from their actual work, figuring out where AI genuinely serves them and where their own judgment is the thing that matters. The people who were most sceptical at the start are often the ones who later want to bring others through it.

That arc – from fear to fluency to wanting to bring others along – is exactly what organizations need to be building for, at scale. Because the real challenge isn’t getting people to use AI. It’s developing a workforce confident enough in its own value to direct AI, discerning enough to know when not to use it, and adaptable enough to keep pace as the tools keep changing.

Those capabilities – confidence, judgment, resilience, the willingness to keep learning – are all human skills. They take time, practice and the right conditions to build. (We’ve written about what it actually takes to develop them here).

This is why the feelings conversation at the start of an AI adoption programme isn’t a soft warm-up. It’s the foundation. Someone still quietly wondering whether their expertise matters can’t yet exercise the judgment AI actually requires of them.

The conversation about how people feel isn’t a precursor to the real work. It is the real work.

If you want to see what this looks like in practice, our AI adoption case study goes into the detail. Check it out.