Most AI adoption programs start with tools. This one started with people.
As a Learning and Development Leader at a global technology company, Diane Gaa ran a one-month experiment with her team of six. Pick any AI tool, she told them, apply it to tasks that are wasting your time, and track what happens.
The team gained 67 hours of productivity in a single month. One question followed: “What if I could do this for the entire organization?”
She designed a peer-to-peer program to find out. What she found first was that the tools weren’t the problem.
The challenge: AI investment without adoption
To understand what she was working with, it helped to look at the broader picture. The company had deployed AI tools, issued guidelines, and pointed its roughly 10,000 employees toward them. But adoption wasn't following. The productivity opportunity was significant – and unrealized.
The approach: human first, efficiency second
The starting point wasn’t a training plan. It was a question: why weren’t people using the tools they already had? And where they were, were they using them to their full capability?
Employees were navigating a shift that no one had quite prepared them for. New tools had landed in their workflows without a clear picture of what that meant for their roles, their output, or their value to the organization. The answer, when it came, was unexpected. The primary barrier wasn’t capability or access. It was fear – fear of being replaced, fear that leaning on AI would make their own contribution feel less valuable – despite the company being clear about its human-first values.
“Sometimes those conversations would get into – I’m here because I am fearful of AI taking my job, or I’m fearful that I’m not going to feel like I’m doing work of value if I use AI,” said Diane. “Those were the real, deep conversations we had to get through before anything else could happen.”
The response was to address that directly. A structured and facilitated ‘feelings mapping’ session was deliberately placed at the start. Participants named their fears, hopes, and emotional responses to AI before any tools were introduced. It was the foundation.
The six-week framework
The program ran in small cohorts – typically eight to ten people – meeting for 30 minutes a week with structured homework between sessions. Participants used whatever AI tool was approved by IT – in this case, Copilot. The focus was on building habits, not on mastering any particular tool.
- Week 1 – AI awareness and mindset shift. The session included a feelings mapping exercise. What AI is, what the approved tools are, what the governance guardrails look like. Meeting people where they are.
- Week 2 – Personalising your AI. Participants name their AI assistant and build a prompt that trains it on their own communication style – drawing on six months of their own writing to generate a personal style guide. For most participants, this is the moment AI starts to feel genuinely useful rather than generic.
- Week 3 – AI in action. Participants brought real use cases from their own roles and worked through them together. The peer dynamic is deliberate – people consistently discover applications they hadn’t considered by hearing what colleagues are tackling.
- Week 4 – Workflow integration. The focus moves from individual tasks to daily habits. Where does AI fit into the way you actually work?
- Week 5 – Advanced strategies. Deeper prompt engineering. Role-playing scenarios. Where AI genuinely helps versus where human judgment is not replaceable.
- Week 6 – Measuring success and scaling. Participants review their efficiency data, identify their highest-value use cases, and learn about opportunities to coach or share their learnings with others.
Alongside every cohort, an internal AI coaching agent – built by the L&D team in Copilot Studio, without any development resource – was available to answer questions between sessions on tools, governance practices, and prompting fundamentals. It handled the common questions and logistics so that the cohort sessions could stay focused on peer learning and human coaching.
At program entry, most participants described themselves as occasional or infrequent AI users. By week six, every employee who completed the course was using AI daily.
The shift went further than frequency. Participants described approaching problems differently – being more deliberate about where AI helps and where it doesn’t.
Across cohorts, participants:
- Reported developing prompts that were later adopted as part of their team’s core practices
- Used the role-plays to work through situations where they didn’t have direct manager support
- Found that the peer sessions surfaced use cases they had never considered
The outcomes went beyond what had been planned.
The coach pipeline (and how the program sustains itself)
Around 20% of completers went on to become volunteer coaches. The requirement was straightforward: complete one six-week cohort successfully and earn an AI Coaching badge – a credential displayed internally, recognised by peers, and increasingly sought after as the program grew.
“I had a wait list of people wanting to get into the sessions.”
– Diane Gaa
Coaches ran their own cohorts. The waitlist never cleared. The program is now embedded as a pillar of the company’s internal champions network and run by the coaches it produced.
“That was always the goal,” says Diane. “The coach model is what makes it transferable and gives it longevity.”
One manager noted that they encouraged their team to join the sessions as well, emphasizing the value of learning together and modelling the behavior they hope to see.
The return
Participants tracked their time savings weekly – which tasks they used AI for, how long those tasks took before and after, and estimated weekly time saved.
Around a third of completers reported data consistently enough to include in the analysis. The average time saved was five hours per week – roughly 20 hours per month. Some roles reported gains of 25% or more, depending on how much of their work AI could optimize.
“It really gives me a true understanding of how much time you are spending on tasks – it’s eye-opening.”
– Program participant
Across 104 completers during the initial pilot, the estimated annualized productivity value is $1.8M – the equivalent of 13 full-time employees in capacity gained. That figure is based on an average rate of £400 (about $535) per day – consistent with the average contingent worker benchmark – applied across five hours saved per week per person.
* Calculated using an average rate of £400 per day (£50 per hour) across 104 completers, averaging 5 hours saved per week over a year. In USD, this is equivalent to $535 per day using March 2026 currency conversion rates.
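For readers who want to check the footnote's arithmetic, a minimal sketch is below. Two inputs are assumptions not stated in the source: a 52-week year and a 40-hour week for the full-time-equivalent comparison.

```python
# Reproduce the case study's ROI arithmetic from its stated inputs.
completers = 104
hours_saved_per_week = 5        # average per participant
rate_gbp_per_hour = 50          # £400/day at an 8-hour day
gbp_to_usd = 535 / 400          # implied conversion rate from the footnote

# Assumptions (not in the source): 52 working weeks/year, 40-hour FTE week.
weekly_hours = completers * hours_saved_per_week           # 520 hours/week
annual_value_gbp = weekly_hours * rate_gbp_per_hour * 52   # £1,352,000
annual_value_usd = annual_value_gbp * gbp_to_usd           # ≈ $1.8M
fte_equivalent = weekly_hours / 40                         # 13 FTEs

print(f"£{annual_value_gbp:,.0f} ≈ ${annual_value_usd / 1e6:.1f}M, "
      f"{fte_equivalent:.0f} FTEs")
```

Run as written, this recovers both headline figures: roughly $1.8M in annualized value and 13 FTEs of capacity.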
Many organizations treat AI as a tool to adopt. The real opportunity is what happens when people genuinely change how they work – building efficiencies into every workflow, reclaiming time, and creating capacity that didn’t exist before. 104 people changing how they work meant $1.8M in productivity value. Applied across the entire c.10,000 workforce, the same per-person gain would be on the order of $170M a year in capacity.
AI Adoption Program Assessment
Book a chat with WeLearn to walk through your current state, your workforce profile, and what an AI adoption program could look like in your organization.