At WeLearn, we spend a lot of time inside organizations that have already invested in AI. The tools are there. The guidelines exist. Someone from IT ran a session. And yet when you ask employees how often they’re actually using AI in their day-to-day work, the answer rarely matches the scale of the investment.
What we’ve found consistently is that the gap between AI deployment and AI adoption isn’t a technical problem. It’s a human one. And closing it requires a different starting point.
The program we’re describing here is one example of what that looks like in practice. An L&D leader at a global technology company ran it from the ground up – no formal mandate, no development budget, no architects. What she built became one of the more instructive models we’ve seen for how peer-led, human-first enablement actually changes behavior at scale.
The real barrier isn’t skill. It’s fear.
Before designing anything, there was a diagnostic question worth asking: why weren’t people using tools that were already available to them?
What surfaced wasn’t a capability gap or a technical barrier. It was something harder to address through training alone.
“Sometimes those conversations would get into: I’m here because I am fearful of AI taking my job, or I’m fearful that I’m not going to feel like I’m doing work of value if I use AI.”
Employees were navigating a real shift. AI tools had landed in their workflows without a clear picture of what that meant for their roles, their contribution, or their professional identity. The company had been explicit about its human-first values, but the emotional response to change doesn’t resolve itself through policy.
The answer was to address that directly. Before any tools were introduced, a structured ‘Feelings Mapping’ session opened the program. Participants named their fears, their hopes, and their emotional responses to AI in a facilitated, small-group setting with no judgment attached.
It wasn’t a soft warm-up. It was the foundation. Without it, the rest of the program wouldn’t land.
"AI done right is not where humans are replaced. Humans are elevated to orchestrate and direct what AI is focused on. Getting people to that understanding was the shift that made everything else possible."
The six-week framework
The program ran in small cohorts – eight to ten people – meeting for 30 minutes each week with structured homework between sessions. No architects required. No development resource. Participants used whatever AI tool their IT team had approved. The goal wasn’t tool mastery. It was habit formation.
- Week 1: AI Awareness & Mindset Shift. Feelings Mapping. What AI is. Which tools are approved. What the governance guardrails look like. Meeting people where they are.
- Week 2: Personalising Your AI. Participants name their AI assistant and build a prompt that trains it on their communication style, drawing on six months of their own writing to generate a personal style guide (see the example prompt after this list). For most, this is where AI stops feeling generic and starts feeling useful.
- Week 3: AI in Action. Participants bring real use cases from their own roles and work through them together. The peer dynamic is deliberate – people consistently discover applications they hadn’t considered by hearing what colleagues are tackling.
- Week 4: Workflow Integration. The focus moves from individual tasks to daily habits. Where does AI fit into the way you actually work?
- Week 5: Advanced Strategies. Deeper prompt engineering. Role-play scenarios. Where AI genuinely helps versus where human judgment can’t be replaced.
- Week 6: Measuring Success & Scaling. Participants review their efficiency data, identify their highest-value use cases, and explore opportunities to coach others.
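To make Week 2 concrete, here is a sketch of the kind of prompt participants built. The wording is ours, not the program’s – the structure is what matters: real writing samples in, a reusable style guide out.

```
You are helping me build a personal writing style guide.

Below are samples of my writing from the past six months
(emails, documents, chat messages).

1. Analyse them and describe my tone, typical sentence length,
   level of formality, recurring phrases, and how I open and
   close messages.
2. Turn that analysis into a concise style guide I can paste
   into future prompts so that drafts sound like me.
3. Before you start, ask me any clarifying questions you need.

[paste writing samples here]
```

Step 3 is deliberate. As participants kept telling us, asking the model to ask questions first is the difference between generating prompts and being intentional with them.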
Alongside each cohort, an AI coaching assistant built in Copilot Studio by the L&D team (without any development resource) handled between-session questions on tools, governance, sustainable practices, and prompting basics. Common questions were answered at scale, and cohort sessions stayed focused on peer learning and human coaching.
“Asking for clarifying questions first was the bit that changed everything for me. Not just creating prompts, but being intentional,” said one program participant.
What makes it stick: the coach pipeline
In this particular case, around 20% of completers went on to become volunteer coaches – a natural funnel that emerged from the program’s own momentum. The requirement was straightforward: complete one six-week cohort successfully, earn an AI Coaching badge, and take on a cohort of your own.
The badge was more than a symbolic gesture. It was a visible, internally recognised credential that people actively sought out and displayed as a signal of genuine expertise within a community still taking shape. Cohorts filled quickly and waitlists grew.
What that created was a transferable, repeatable, and scalable program. It continues to run as a pillar of the company’s internal champions network, sustained by the coaches it produced.
"The coach model is what makes it transferable and gives it longevity."
What the data showed
The experiment that started it all – six people, one month, no formal structure – produced 67 hours of productivity gain. That was enough to ask what a structured program could do.
Participants tracked their time savings weekly throughout the six weeks: which tasks they used AI for, how long those tasks took before and after, and the estimated time saved. Around a third of completers reported data consistently enough to include in the analysis. Average time saved was five hours per week – roughly 20 hours per month. Some roles reported gains of 25% or more, depending on how much of their work AI could support.
At program entry, most participants described themselves as occasional or infrequent AI users. By week six, every employee who completed the course was using AI daily.
Across 104 completers, the estimated productivity value is $1.8M, the equivalent of 13 full-time employees in capacity gained.
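The FTE figure is straightforward arithmetic (our derivation from the averages above, not a separate measurement): 104 completers × 5 hours saved per week ≈ 520 hours per week, and 520 ÷ 40 = 13 full-time equivalents of capacity. If the $1.8M is read as an annualised estimate, it implies a loaded cost of roughly $138K per full-time equivalent.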
"It really gives me a true understanding of how much time you are spending on tasks - it's eye-opening." — Program participant
Human+AI
What this program proved is that you don’t need a top-down mandate or a significant budget to move the needle on AI adoption. You need someone willing to start with the harder conversation, and the patience to let the rest follow from there.
When the human infrastructure is right, AI stops being a transformation initiative and starts being part of how work gets done.
The organizations getting the most from AI right now are those that have taken the human side of the equation as seriously as the technical one.
To talk through what an AI adoption program could look like in your organization, contact us.