
Discernment: the AI skill no one’s building


By Sean Stowers, CEO @ WeLearn

A few weeks ago I was catching up with a colleague who told me about an interview process he’d been going through. The hiring manager said, “I’m going to use AI to build this case study for you.” And my colleague said, “Great, because I’ll probably use AI to help me respond to it.” They both laughed. It was completely natural and open, no awkwardness, no disclaimers.

That conversation stuck with me because it’s still so rare inside organizations. Most people I talk to are using AI regularly. They use it the way they use Google – to get work done faster, to think through problems, to draft and iterate. But when it comes to admitting that at work, they go quiet.

The research lines up with what I’m seeing. Employees expect to be perceived negatively for using AI – as though relying on it signals a lack of capability, even when the finished work is stronger for it. And it goes beyond perception. As David Chestnut points out in his recent article (here), a significant number of workers “actively conceal their AI use from managers and peers” – not because they doubt the quality, but because they’re unsure how the behavior will be read.

David captures the dynamic well: employees treat AI as “something to benefit from quietly rather than a legitimate part of how work gets done.” Over time, that silence becomes the culture. People stick with what feels safe, even when they know there’s a better way to work. The organization checks the AI-enabled box. The way people actually work doesn’t move. David calls what’s missing discernment – and I think he’s right.

What discernment actually means

A lot of AI enablement right now focuses on tool training and prompt engineering. Those have value, and they answer the question “how do I use this?” The better question is: “should I use AI here at all, and if so, how do I know whether the output is good enough?”

That’s discernment. It’s the ability to decide whether AI belongs in a given task, which tool fits the situation, what good output actually looks like in your specific context, and when the situation calls for your own expertise instead.

David frames this through a set of decision considerations he’s been teaching – whether a task is probabilistic or deterministic, whether precision matters or variability is acceptable, and where AI’s actual strengths (summarization, pattern recognition, ideation, language transformation) line up against the work. What I like about the framework is that it gives people a vocabulary for decisions they’re already making instinctively. Most people I talk to have some sense of when AI fits and when it doesn’t. They just haven’t had a way to articulate it or defend it to their manager.

David’s article goes deeper into this, and I’d recommend reading it (HERE it is!).


What discernment looks like in practice

Here’s how this plays out in my own work.

I use AI tools throughout my week. I use ChatGPT to help write RFP responses. I’ve built a custom GPT that has our brand voice, knows how I think about our services, and understands how we position ourselves. When I draft a response, I’ll feed the GPT the client’s RFP and ask it to tell me what’s strong and where the gaps are. Then I revise.

Sometimes I move across tools depending on the task. I’ll draft in ChatGPT, then switch to Claude for a different kind of analysis, then come back. No single tool does everything well, and part of the skill is knowing which one fits the job you’re doing right now.

The prompt engineering piece is a small part of that. The bigger part is understanding my own work well enough to know where AI genuinely helps, where it introduces risk, and where I need to throw out what it gave me and start over. That came from doing the work, making mistakes, and building up a sense of what fits over time.

A colleague in the industry recently asked me to share the prompt I use for my custom GPT. I shared it. She came back and said, “You use this in such a different way.” Same tool, same prompt structure, completely different application – because the context she brought to it and the decisions she made with it were her own. That’s the part you can’t hand someone in a training course.

Why more training doesn’t close this gap

As David puts it in his article, “understanding how to work with AI isn’t the same thing as choosing to do so when it matters.” I think every L&D leader needs to sit with that line. I see it constantly – someone who’s perfectly capable with AI choosing the manual route because they’re not sure how it’ll be received. The gap isn’t knowledge. It’s confidence in the environment.

This isn’t a new problem. Long before anyone was talking about AI, I’ve watched organizations roll out new systems and processes. Training gets built. People complete it. And then entire teams emerge to manage the workarounds, because the system doesn’t reflect how the work actually gets done.

AI enablement is on the same path. Organizations run the courses, give people time to play with the tools, and point to enrollment numbers as proof it’s working. Those are vanity metrics. People revert because the incentives haven’t moved. Managers still expect the same outputs the same way. Nothing in the system tells people it’s safe to work differently.


Discernment is a human skill

This connects to something we’ve been diving into at WeLearn. Discernment – the ability to evaluate, decide, and apply in context – sits squarely within the human skills category. The same category that includes creative and analytical thinking, self-awareness, resilience, and the capacity to learn and adapt.

These capabilities need sustained, deliberate development. They need mentorship, real-world application, and environments where people can practice and get honest feedback. And the data shows they’re more fragile than most organizations assume. According to the World Economic Forum, human skills eroded broadly during the pandemic and have not yet recovered to pre-2019 levels.

If we expect people to exercise discernment with AI, we need to invest in building that capacity with the same seriousness we’d bring to any business-critical skill – mentoring, real practice with real stakes, and support that lasts longer than a single training cycle.

We’ll be sharing more on this in our upcoming ebook on why human skills matter more than ever in an AI-enabled workplace.

What L&D teams can actually do

I think there are a few things L&D leaders can start on now.

Name the distinction between tool proficiency and discernment openly in your organization. Help leadership understand that teaching people how to use AI tools is necessary but incomplete. Most people are developing the harder skill – the judgment around when and how to apply AI – entirely on their own.

Build practice into your enablement programs that reflects actual work. I don’t mean sandbox exercises where people experiment with prompts in a vacuum. I mean embedding AI into real tasks with real constraints, and creating space to debrief what worked and what didn’t. That’s where discernment grows.

Address the cultural side directly. If people in your organization still feel uncomfortable admitting they used AI, your enablement programs won’t change behavior no matter how well designed they are. Leaders need to use AI visibly. Teams need room to share openly. David has a mantra that we’ve adopted on my teams: we don’t apologize for using AI where it’s appropriate. We should only ever apologize for using it poorly – meaning without critical review, without weighing whether AI was the right call, without bringing our own expertise to bear on the result.

Shift how you measure enablement. Completion rates and badge counts tell you who showed up. They don’t tell you who’s working differently. Track whether people are actually integrating AI into their workflows a month after training, how they’re using it, and what’s changing in how they approach their work.

And connect AI enablement to human skill development. They’re intertwined. Organizations can’t build AI capability on a workforce whose foundational human skills – critical thinking, adaptability, self-awareness, and collaboration – are declining. The capacities that discernment depends on are the same ones your organization needs for everything else.

Where this goes

AI enablement is going to succeed when organizations build environments where people develop real judgment about AI and feel safe using it. That’s the work ahead for L&D. And it’s deeply human work.
