April 27, 2026

Slow AI adoption? Check your learning culture.

POSTED BY:

Deanna Kent

AI has already compressed skill half-lives, rewired job requirements, and accelerated execution across every industry. The question isn't whether your organization needs to adapt. It's whether you'll do it intentionally or chaotically. Most orgs won't fail because the tech is too hard. They'll fail because their people don't feel safe enough to try it openly.

There's an irony here. The tech that’s making people anxious about falling behind might be the clearest signal of whether an org’s performance culture is healthy. AI doesn’t just require us to learn new skills. It’ll also expose old habits — especially the ones that seem productive but don’t hold up under pressure.   
  
78% of employees admit to using AI tools not approved by their employer. People are experimenting anyway, but a lot of them are doing it underground.  

Despite the fears around AI, most employees aren't resistant to building AI fluency. What they're wary of is being seen doing it. Experimenting openly can feel risky at a moment when AI is already reshaping work and, in some cases, eliminating roles. When experimenting feels more like exposure, many people cope by staying quiet.

The foundational work isn’t about getting everyone ‘AI‑ready.’ It’s about creating a culture where people feel ready to try. —Cheryl Yuran, CHRO, Absorb

Effective AI adoption at work requires a healthy culture 

Many orgs don't have a north star for what a healthy organizational culture looks like. In the age of AI, it's more about structure than mindset. Are leaders creating clarity and safety? Are teams able to experiment openly? Do the systems make learning behavior visible enough to facilitate improvement? If the answers are all yes, AI adoption and usage can build on itself. But when there's even one no, experimentation may be performative.
  
Performative usage can look productive on paper, but in practice, it just keeps people busy, compliant, and safely unchanged. The cost of that will show up in other places.  

While adopting AI may not be optional for most, how it's integrated is. In environments that are already performative or psychologically unsafe, AI adoption won't accelerate progress; it'll amplify fear, silence, and surface-level compliance. In healthy organizations, the same tool will support better or different work, not just more work.

A real learning culture is an organizational survival system  

I spent a decade as a teacher, where I became deeply curious about how people learn. Later, I spent another decade at Disney, watching how learning plays out inside collaborative teams.
  
In the classroom, when I asked students to practice metacognition, documenting their thinking processes and not just their answers, participation only happened when the environment was demonstrably safe. When it was, messy attempts and honest failures were validated. Trust grew, and outcomes followed.

The same pattern holds at work. In high-trust, collaborative teams where leaders encouraged experimentation and sharing, adopting new tools and new thinking felt good; often it allowed us to move faster and solve harder problems.
 
One truth I keep coming back to these days, especially with AI raising the stakes, is that a culture of capability isn’t a values poster. It’s how people behave. It’s whether people feel safe enough to ask questions, share unfinished thinking, experiment with new tools, and learn without fear of looking incompetent.  
 
If the culture is surface-level or performative rather than outcomes-based, real people in real working conditions tend to expose its cracks. And with tech like AI stress-testing systems, those cracks widen. The strongest organizations don't do away with uncertainty, but they give people a way to move through it productively.

That's why learning culture functions as a survival system. Survival systems run on role clarity, shared trust, and feedback that reflects reality, not just what's reported. They're not optional when conditions change. They don't rely on motivation or belief; they rely on systems that are actually in place and supported. So, when roles are shifting faster than skills can stabilize, learning has to work reliably, visibly, and at scale.

Can AI help us work faster, better, and differently?  

No matter the emotions that bubble up when we talk about AI (trust me, as a writer with an artist partner, the terror is real), for the first time in decades, the cost of not having a powerful, outcome-focused culture that can tackle new tech like AI may decide the fate of your business. Far from abstract, this is operational. It's measurable, leader-modeled, team-powered, and sustained by systems that turn learning into capability in the flow of work.
  
These days, I hear a lot of debates about whether a culture that’s committed to experimentation and adoption is built top‑down or bottom‑up. That’s the wrong question.  

Leaders set the conditions, teams fuel the engine, and AI can show how it's going.   

Top-down or bottom-up? (That's not the question.)

The strongest cultures don't choose between top-down and bottom-up when it comes to AI experimentation and adoption. They run both: a two-gear system that works simultaneously and relentlessly, in dynamic tension. Leadership sets direction and intent. Teams drive discovery and momentum. And more and more, AI-enabled systems keep both honest by showing what's actually happening, not just interpretations of it.

Kimberly Williams, Absorb CEO, says in the Forbes article Your AI Strategy has a People Problem:

Teams led by executives who model AI use openly consistently outperform teams where leadership waits for certainty. The leaders who go first create cultures that move. The leaders who wait create cultures that drift.

Non-negotiables:   

Leaders go first, naming the skills that matter now, funding the learning, and being open about why experimentation matters and how people are protected while learning in public. Critically, they model the behavior themselves. They use the tools, share what they try, and publish data.
  
Teams bring critical thinking and curiosity, and run micro-experiments in the flow of work. They try out minor changes, share partial wins, and surface patterns. Here, learning becomes messy, social, iterative, and real.

But it all must work on a foundation of trust and transparency.   

I have a friend at a game studio who’s been asked to experiment with AI workflows. She’s smart, experienced, and openly stressed. Her fears are rational. We know AI will change work. Some roles will disappear. But we also know opting out doesn’t slow down the change — it just removes your voice from shaping what comes next. Learning new ways of working is possible when leaders are transparent about intent, when teams feel safe sharing what they don’t yet know, and when fear starts to loosen its grip.   

This is why "top-down vs. bottom-up" is the wrong debate. The orgs that are winning build a high-performing culture on a bidirectional system, not a linear one. Leaders create clarity and safety, teams create momentum and meaning, and AI-enabled systems scale what works and expose what doesn't.

To turn trust into capability, you need good architecture, not just good intent. Without architecture, learning defaults to performance. Sure, courses are completed and boxes are checked, but all the insights will be private.  

What the blended AI adoption model demands  

When it comes to AI at work, the psychological and the operational aren't opposites; they're load-bearing walls for each other. And both are under pressure right now. We know learning is important to employees. We also know fear of this powerful tech is growing, along with fear that it may negatively impact jobs. That fear is real, and it's fatal to a healthy culture if there's no transparency around it.

Here's what the architecture actually looks like in practice. Without all three phases working together, learning sparks and fades, or never catches at all.

Phase 1: Vision — top-down ignition  

Leaders adopt the tools. They publish a 30-day usage log and name the 3–5 outcomes learning is tied to. They're truthful about where the organization is going with AI, what AI experimentation will look like, and what it will be used for. They talk about how people are protected while doing it. Using new tools in public can be risky, but a leader who's visibly uncertain and still experimenting gives everyone else permission to do the same.

Phase 2: Co-design — bottom-up momentum  

Teams prototype in-the-flow learning, but only if phase one creates real safety. When AI fear gets dismissed, ignored, or eye-rolled, curiosity goes private. When people don't feel psychologically safe on a team, especially around anything AI-related that might signal they're replaceable, they'll learn selectively, share strategically, and keep things to themselves. It's self-preservation.

But teams can build trust within learning processes. Try running weekly micro-win demos. Talk about AI intent. Talk about AI fears. Try and fail together. And celebrate experiments that move a KPI, even by 1%. Make space for what didn't work, because openly shared failures are a strong signal that fear is dropping. My game studio friend? She tried a few prompts that made her process easier and shared them with her team. They didn't take away from the integrity of her work. And it was a catalyst for some really important and open discussions around AI ethics, usage, and where the company was going.

Phase 3: Scale — systems and metrics  

Throughout all of this, AI earns its place. As people experiment with using it, it becomes easier to surface patterns to leadership, personalize paths by skill gaps, and route high performers and the highly curious into more experimentation. The irony I mentioned earlier is real: the thing people fear most is also the thing that can make the capability loop honest. For organizations willing to use that data transparently, AI becomes the catalyst rather than the threat.

These three phases are the architecture. But none of it holds without one precondition: people have to believe that not knowing something is a starting point, not a liability.

Build a strong capability culture with AI, using AI   

Earlier, I said that the top-down vs. bottom-up question was the wrong one. Here's the real one: what does a system look like in which everyone can keep learning as fast as the world is changing?

It looks like leaders experimenting in front of their teams. Being open about what they don't know yet. Bringing their own reservations, tests, successes, and failures out in the open. It looks like teams who experiment, share, and bring critical thinking as they work. It's small changes and the messy business of imperfectly learning new ways to work. But all of it is connected to outcomes. It looks like a cohesive system that reveals behavior. When the survival system is working, leaders set conditions, teams generate momentum, and AI makes learning visible enough to improve.

AI is reshaping HR, but not in the way we predicted. The real shift is less about automation and efficiency and more about how willing we are to stay human while everything around us changes. — Cheryl Yuran, CHRO for Absorb Software, AI will reshape HR—but only if we stay human enough to use it well.  

Cultures at work don’t usually break with a dramatic event or spectacular failure. They die when curiosity and psychological safety aren't the default. Organizations that want high AI adoption encourage curiosity as practiced behavior, not a personality trait. And safety can't be a slogan. It needs to be reinforced by what leaders reward, what teams share, and what the data shows.   

As organizations make the AI shift, for many reasons, people will be scared. But at the speed of change, avoiding fear is far riskier than admitting discomfort and uncertainty.   

Nobody I've talked to has any of this figured out. (That's not a disclaimer — but it is the point.) Whether we mean to or not, right now, we're all learning publicly, messily, and hopefully safely within a team that has the bandwidth to create a healthy culture for what comes next. 

That's exactly why the conditions matter more than the answers. Build the environment where people can learn in public — imperfectly, incrementally, together — and the capability follows. 

AI-Powered LMS

Want to see what intentional AI adoption looks like in practice?

Get demo