The Three Tribes of AI Adoption: Why Many Organisations Get It Wrong

This informal CPD article, ‘The Three Tribes of AI Adoption: Why Many Organisations Get It Wrong’, was provided by Andrew Howie, Director of Partnerships at Open eLMS, who focuses on how organisations can leverage technology solutions to enhance rather than replace human capability in workplace learning.

Artificial intelligence has arrived in workplace learning, but organisations are struggling to respond effectively. After extensive conversations with L&D leaders across sectors—from global technology firms to aerospace manufacturers, healthcare organisations to international development agencies—a clear pattern emerges: most organisations fall into one of three distinct tribes when approaching AI adoption. And each approach presents significant challenges.

Tribe One: The Frozen

The first tribe views AI as a potential threat. These organisations have implemented blanket bans, blocked access to AI tools, and created policies that treat any AI experimentation as career-limiting behaviour. Their L&D teams whisper about using ChatGPT at home but would never admit it at work.

Research into AI adoption patterns in corporate learning environments identifies this "resistance through restriction" approach as one of the most common initial responses to generative AI technologies [1]. One global L&D director described encountering this mindset from senior leadership: "The immediate response was 'we don't need these people anymore.' They genuinely believed AI would replace the entire learning function within months" [2].

This tribe's fear isn't entirely unfounded. Studies on workplace automation demonstrate that AI does automate certain routine tasks, particularly in content curation and administrative processes [3]. But their response—avoiding engagement entirely—creates a different problem: their workforce falls behind whilst competitors build AI literacy and integration skills. By the time they're ready to engage, the capability gap has become enormous.

Tribe Two: The Reckless

The second tribe moves rapidly into AI adoption with minimal governance, inadequate training, and unrealistic expectations. They purchase enterprise AI platforms, mandate their use across the organisation, and measure success through adoption dashboards rather than actual outcomes.

Analysis of enterprise AI implementation reveals that tools deployed without adequate training or clear purpose typically achieve sustained adoption rates below 15% within six months [4]. One senior learning technologist observed: "Leadership bought an AI content creation tool, sent one email announcing it, and expected magic. Six months later, usage was at 7% and nobody could articulate what problem it was solving" [2].

This tribe generates what education technology researchers have termed "algorithmic output without pedagogical value"—content that's technically AI-generated but adds no genuine learning benefit [5]. They confuse activity with progress, treating the presence of AI tools as equivalent to successful transformation.

Tribe Three: The Confused

The third tribe sits somewhere between frozen and reckless: they know AI matters, they've been told to "do something about it," but they have significant misconceptions about what the technology actually does.

These organisations treat AI like an enhanced search engine rather than understanding its generative and conversational capabilities [6]. They expect it to provide perfect answers immediately, with no grasp of iterative prompting, contextual refinement, or the need to validate outputs. When the tool doesn't deliver instant perfection, they conclude it's not ready for serious work.

Research into AI literacy in professional contexts highlights this "expectation mismatch" as a significant barrier to effective adoption [7]. One university instructor teaching both corporate L&D and future instructional designers noted: "Students ask whether their degrees still matter. Corporate clients ask whether they still need people. Both questions assume AI is either irrelevant or all-powerful. Neither is true" [2].

What Actually Works: The Fourth Approach

Between these three tribes exists a fourth approach that's far less common but demonstrably more effective. Organisations following this path share five characteristics:

1. They treat AI as a collaborative partner, not a replacement

Successful organisations position AI as an augmentation tool—something that handles initial drafts, generates options, accelerates mundane tasks, but always requires human judgment, validation, and refinement [8]. This "human-in-the-loop" approach has been identified as critical for maintaining quality and accountability in AI-assisted professional work [9].

A global learning leader managing programmes across multiple continents explained: "AI gets us 60-70% of the way there on curriculum frameworks or assessment design. But that final 30-40%—the cultural nuance, the contextual adaptation, the judgment calls—that's where human expertise remains essential" [2].

2. They create structured experimentation spaces

Rather than either banning AI or mandating its use, effective organisations create explicit permission for structured experimentation. Research into innovation adoption demonstrates that psychological safety and dedicated experimentation time significantly increase successful technology integration [10].

One aerospace L&D leader described implementing "AI exploration hours" where team members could experiment with different tools and share findings: "We learned more in those sessions than we would have from any formal training. People discovered use cases we'd never have predicted" [2]. 

AI as an augmentation tool

3. They model from the top

In organisations where AI adoption succeeds, leadership doesn't just permit use—they actively demonstrate it. They show draft outputs and explain their refinement process. They share both successes and failures. They normalise continuous learning rather than pretending expertise. Studies of organisational change highlight that visible leadership modelling of new behaviours accelerates adoption rates by 40-60% compared to policy changes alone [11].

4. They distinguish between efficiency AI and capability AI

Perhaps most importantly, successful organisations understand a critical distinction: some AI applications genuinely reduce work (efficiency AI), whilst others simply create different work or add complexity without value [12].

An instructional designer specialising in bias-aware learning put it bluntly: "AI that automatically generates generic e-learning modules saves time but produces mediocre results. AI that helps me analyse learner performance patterns and adapt content accordingly—that's genuinely valuable. Most organisations can't tell the difference" [2].

5. They maintain the human development pipeline

The most sophisticated organisations recognise a paradox: AI that handles entry-level and mid-level tasks can inadvertently destroy the development experiences people need to reach senior capability [13]. This phenomenon, termed "skill hollowing" in organisational development literature, occurs when automation removes the developmental experiences that build expertise [14].

One L&D strategist working with international organisations warned: "If AI does all the routine sales presentations, where do junior salespeople learn client interaction skills? If AI handles straightforward legal analysis, how do junior lawyers develop judgment? We risk creating what I call 'hollow shell organisations'—AI-efficient but brittle, lacking the human capability to adapt when conditions change" [2].

These organisations deliberately design development experiences that preserve critical learning pathways even when AI could handle the immediate task more efficiently.

The Leadership Question

The difference between these approaches ultimately comes down to leadership behaviour and organisational culture rather than technology choices.

Organisations stuck in tribe one, two, or three typically share certain characteristics: senior leaders who expect instant results without investment in learning; cultures where mistakes are punished rather than analysed; procurement processes focused on tool features rather than implementation readiness; success metrics that measure activity rather than outcomes [15].

Organisations finding the fourth path demonstrate different patterns: leaders who acknowledge their own learning curves; psychological safety that permits experimentation [16]; recognition that capability development requires time and structured support; willingness to say "we don't know yet" without it being career-limiting.

Where This Leaves L&D Professionals

For learning and development professionals navigating AI adoption, the implications are clear:

Your role isn't becoming obsolete—but it is changing. Research into AI's impact on professional work suggests that transactional elements (content curation, basic assessments, scheduling, administrative tracking) can increasingly be automated [17]. What remains essential is what AI cannot do: building relationships, reading emotional context, asking questions the AI wouldn't think to ask, creating psychological safety, adapting to cultural nuance, and exercising judgment where the "right answer" depends on factors no algorithm can fully capture [18].

The question isn't whether AI will transform workplace learning. It already has. The question is which tribe your organisation belongs to—and whether you can help navigate toward the fourth path.

We hope this article was helpful. For more information from Open eLMS, please visit their CPD Member Directory page. Alternatively, you can go to the CPD Industry Hubs for more articles, courses and events relevant to your Continuing Professional Development requirements.

References

[1] Mollick, E.R. and Mollick, L. (2022). "New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments." SSRN Electronic Journal. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4300783

[2] Primary research: Structured interviews with senior L&D leaders and learning technology specialists across sectors including aerospace, technology, healthcare, education, and international development, conducted between August 2025 and January 2026 by the author.

[3] Brynjolfsson, E. and McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W.W. Norton & Company.

[4] Gartner Research (2023). Predicts 2024: Generative AI Adoption in Enterprise Learning and Development. Gartner, Inc.

[5] Selwyn, N. (2019). Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.

[6] Baidoo-Anu, D. and Ansah, L.O. (2023). "Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning." SSRN Electronic Journal. Available at: https://ssrn.com/abstract=4337484

[7] Long, D. and Magerko, B. (2020). "What is AI Literacy? Competencies and Design Considerations." Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-16.

[8] Jarrahi, M.H., Askay, D., Eshraghi, A. and Smith, P. (2023). "Artificial Intelligence and Knowledge Management: A Partnership Between Human and AI." Business Horizons, 66(1), pp. 87-99.

[9] Shneiderman, B. (2020). "Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy." International Journal of Human-Computer Interaction, 36(6), pp. 495-504.

[10] Edmondson, A.C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Hoboken, NJ: John Wiley & Sons.

[11] Kotter, J.P. (2012). Leading Change. Boston: Harvard Business Review Press.

[12] Davenport, T.H. and Mittal, N. (2023). "How Generative AI Is Changing Creative Work." Harvard Business Review, November-December 2023.

[13] Autor, D.H. (2015). "Why Are There Still So Many Jobs? The History and Future of Workplace Automation." Journal of Economic Perspectives, 29(3), pp. 3-30.

[14] Kellogg, K.C., Valentine, M.A. and Christin, A. (2020). "Algorithms at Work: The New Contested Terrain of Control." Academy of Management Annals, 14(1), pp. 366-410.

[15] Rogers, E.M. (2003). Diffusion of Innovations (5th ed.). New York: Free Press.

[16] Edmondson, A.C. and Lei, Z. (2014). "Psychological Safety: The History, Renaissance, and Future of an Interpersonal Construct." Annual Review of Organizational Psychology and Organizational Behavior, 1(1), pp. 23-43.

[17] Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R. and Sanghvi, S. (2017). Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation. McKinsey Global Institute.

[18] Huang, M.H. and Rust, R.T. (2018). "Artificial Intelligence in Service." Journal of Service Research, 21(2), pp. 155-172.