
“I’m OK, You’re OK, It’s OK.” How AI Adoption Breaks Down When the Human Side Is Ignored 

  • Writer: James Russell
  • Apr 20
  • 12 min read

James Russell and Ideja Bajra


When organisations talk about AI adoption, the focus is usually on capability, governance, and change management. 


What receives far less attention is the emotional response the term ‘AI’ provokes and the existential misalignment it can surface between individuals and the organisations they work in. AI may be framed as essential to organisational survival, while simultaneously being experienced by individuals as a threat to their sense of worth, relevance, or place, particularly where identity has been built on expertise, judgement, or experience. 


This article explores the challenges people and organisations encounter beneath the surface of AI adoption, and offers a way of understanding them through a social‑psychological‑existential lens drawn from Transactional Analysis — one that helps create greater ease, agency and engagement with AI. 


1. The Fear Beneath the Fear 


Many organisations invest in AI (as well as other technologies and change) expecting relatively smooth integration, only to encounter unexpected friction. Resistance to change is not new, but AI tends to trigger something deeper than a new process or tool. It reshapes how people work, how value is perceived, and what contribution means. 

This makes AI adoption not just a technology challenge, but a human and organisational one. When the psychological impact is left unaddressed, the potential of AI is constrained and, over time, team confidence, performance, and trust are eroded. 


The Current Reality 

  • Nearly nine out of ten organisations are now using AI in some form, yet most are still struggling to embed it deeply enough to create meaningful enterprise‑level value (McKinsey Global Survey, 2025). 

  • What’s holding adoption back is rarely the technology itself. Increasingly, research shows that AI initiatives stall because of anxiety about relevance, identity, and job security, rather than gaps in capability. Leaders who approach AI adoption as a psychological challenge, not just a technical rollout, consistently outperform those who do not (Harvard Business Review, 2026). 

  • A growing perception gap compounds the problem. Many executives believe employees are broadly enthusiastic about AI, while employees report far more hesitation and ambivalence. In some cases, this disconnect leads to quiet resistance or surface-level compliance rather than genuine engagement (Forbes / MIT / Writer, 2026). 

  • Importantly, this tension exists even where jobs are not actually disappearing. Research suggests that it is the perceived exposure, not confirmed displacement, that drives fear, defensiveness, and loss of psychological safety. As AI researchers at Anthropic have noted, the psychological impact of uneven exposure is often more significant than the labour‑market effects themselves. 


To understand why AI adoption so often stalls, we therefore need to look beyond tools and training and pay closer attention to what is happening inside people and organisations. 


2. What We Currently See in AI Projects (Ideja Bajra, Edvance AI) 


The dynamics described so far (uncertainty, fear, and resistance) show up very clearly inside real AI projects. 


For most of industrial and digital history, the relationship between humans and technology was relatively straightforward. Humans designed the systems, programmed them, and decided when they ran and when they stopped. Even as automation reshaped entire industries, people retained a key assumption: they remained epistemically in charge. The machine executed instructions. 

Contemporary AI challenges this assumption. 


AI is no longer simply a faster tool or a more efficient system. It generates options that users did not explicitly request, improves processes that were not formally identified as problems, and performs tasks not only more quickly but, in some cases, to a higher standard. 


When AI systems make mistakes, these errors often do not feel like operator error. They are experienced as a failure of the underlying source of truth. This produces a qualitatively different kind of uncertainty. 


The impact is particularly significant because of where AI is being deployed. Previous waves of automation primarily transformed manual and routine work. In contrast, current AI systems are entering domains that white‑collar professionals have historically regarded as distinctly human: creative work, professional judgement, diagnosis, and interpretation. 


For many people working in finance, insurance, law, or strategy, this is the first time technology has not only supported their expertise, but directly questioned it. This ultimately becomes an identity challenge. 


Despite this, many organisations continue to approach AI adoption as if it were a conventional IT project. 


Recurring patterns tend to appear across legacy industries and otherwise capable leadership teams, as we have seen at Edvance AI: 


  • AI strategies are developed in boardrooms rather than at the level of day‑to‑day workflows. 

  • Perceived urgency accelerates timelines. 

  • Pilots are launched before employees are adequately prepared. 

  • Training is compressed, inconsistent, unmeasured, or treated as optional.

  • New tools are introduced, employees demonstrate surface‑level compliance, and adoption then plateaus. 

  • Organisations respond by retraining and relaunching, only to see the initiative stall again in a slightly different form. 


The primary issue is that the underlying human and organisational dynamics remain unaddressed. These dynamics are exactly what the Transactional Analysis lens - and the Life Positions / OK‑OK framework in the next section - can help leaders see and work with more clearly. 


3. Why AI Adoption Operates on Three Levels (and Why the Levels Relate) 


Transactional Analysis (TA) offers a useful way of understanding why AI adoption is rarely “just” a tooling or training problem. TA distinguishes between three interrelated levels of human experience and interaction: Social, Psychological, and Existential. What matters most is not simply that these levels exist, but that they influence each other in a clear direction: 


Existential ‘givens’ influence the psychological mind (and therefore what we think, believe and feel).


The psychological mind shapes social behaviour (what we outwardly say and do). 

In other words: what we see people doing around AI (their behaviours) is often the observable result of what they are thinking and feeling, and that inner experience is frequently shaped by deeper existential realities that are difficult to name directly.


This is why AI adoption can become stuck even when the technology works.

 

The Three Levels 


  • Social level (observable): what people say and do — roles, behaviours, processes, language, and ways of working. This is the level most organisations focus on first.

  • Psychological level (internal): what people think and feel — identity, competence, belonging, relevance, self‑worth, and questions like “Do I matter here?” This is where adoption commonly wobbles: people comply on the surface while privately feeling threatened or diminished. 

  • Existential level (the ‘givens’): the unalterable facts of being human — mortality, uncertainty, limited control, and the absence of any built‑in meaning (meaning is something we make at the Psychological level). Most organisations do not talk at this level, but AI can bring these realities and uncertainties closer to the surface, which then amplifies psychological reactions (and potentially social behaviours). 


What this looks like in AI adoption 


At the Social level, you see the visible patterns: 


  • rollouts, policies, training, pilots, ‘yes in the meeting’ behaviours, and plateauing usage. 


At the Psychological level lie the unobservable thoughts and feelings behind what the Social level is expressing: 


  • fear of looking incompetent; quiet avoidance; over‑reliance; cynicism; loss of confidence; a drop from commitment into compliance. 


And beneath that, at the Existential level, AI can act as an amplifier of these thoughts: 


  • It can stir questions about whether human work is becoming less valued, whether the future is less predictable, and for some, what this means for the long‑term viability of human roles, or even humanity itself. These existential pressures are hard to hold directly. Instead, the mind often converts them into psychological positions that are easier to live inside, which then show up as behaviour. 

  • For many people, this existential anxiety is not limited to their own role. It extends to questions about the future of work itself and what opportunities will exist for the next generation. When people worry not just about their relevance, but about whether meaningful human contribution is being eroded over time, psychological threat intensifies. That threat can then surface socially as resistance, disengagement, or over‑compliance. 

  • It is also important to acknowledge that people do not encounter AI in a neutral world. For some groups, early experiences of AI have already included bias, exclusion, or harm, particularly where historic human prejudice has been encoded into data or systems. While this article focuses on workplace adoption, those wider experiences matter. They shape the psychological stance people bring with them into organisational settings, and they help explain why trust, safety, and OKness cannot be assumed.


Key implication: When organisations treat AI adoption only as a social‑level change — tools, training, and process — they end up addressing symptoms. Sustainable adoption comes from working at the Psychological level (how people are experiencing the change), while acknowledging the existential pressures that may be feeding those experiences. 


Why this matters for AI adoption 


AI adoption fails when organisations: 

  • Work only at the social level 

  • Attempt to manage psychological responses with more process 

  • Ignore the deeper psychological threats that nudge towards the question of “Do I still have a place here at all?” 


Successful adoption happens when leaders: 

  • Name what’s happening at a Social level 

  • Are curious about what might be behind behaviours at a Psychological level (for individuals, teams and the wider organisation) 

  • Create psychological safety before pushing capability 

  • Reinforce a sense of human value, agency, and judgement alongside AI capability 

In short: 

AI adoption isn’t blocked by technology. 

Integration progresses by understanding people and the system(s) they are in. 


Seen through this lens, it becomes clear why so many AI initiatives stall despite strong intent and investment. Organisations focus their energy on the social level - tools, governance, workflows - while the real friction sits lower down. The result is often compliance rather than commitment: people do what is asked of them but stop short of fully engaging. Until the psychological impact of AI is recognised and worked with, adoption remains superficial. Transactional Analysis offers a practical way of understanding and responding to what’s happening beneath the surface. 


The question is not “Why are people resisting AI?” It is “What existential pressure is AI surfacing, and what psychological story is the organisation currently living inside because of it?” 


4. Extending OK‑OK: Making Space for AI Without Losing Human Agency 


Transactional Analysis offers a simple but powerful way of understanding how people relate to themselves and others under pressure. At the heart of this is the concept of Life Positions, first articulated by Eric Berne and later adapted by Frank Ernst into the OK‑Not OK Matrix (often called the ‘OK Corral’). 


At its core, the model describes the underlying stance people take towards themselves and others, particularly when they feel challenged or threatened. These stances are not intellectual positions; they are lived, emotional orientations that shape behaviour, trust, learning, and decision‑making. 


The most psychologically healthy and productive stance is: 


“I’m OK, You’re OK.” 


In organisational life, this position underpins: 

  • Psychological safety 

  • Adult‑to‑Adult dialogue 

  • Learning rather than defensiveness 

  • Accountability without blame 


It assumes that I have worth, you have worth, and that differences can be explored rather than defended against. 


Why AI Disrupts the ‘OK‑OK’ Position 


AI introduces something genuinely new into this relational landscape. 


For the first time at scale, people are not just working with other humans, but alongside systems that analyse, generate, recommend, and sometimes outperform them in areas that have historically defined professional value: judgement, expertise, interpretation, creativity. 


This matters psychologically because AI is no longer experienced as a neutral background tool. It increasingly behaves like an active participant in work: 


  • It offers answers unprompted 

  • It suggests decisions rather than executes instructions 

  • It speaks in human‑like language 

  • It appears confident, authoritative, and fast 


Under pressure, humans naturally anthropomorphise powerful non‑human forces — cars, boats, machines — attributing human characteristics or behaviours to them, especially when we depend on them and can’t fully predict them. Given this tendency, AI can become unconsciously treated as a “someone” rather than a “something”. And once that happens, the OK‑OK position is quietly destabilised. 


People can begin to slip into less healthy Life Positions (‘I’m Not OK’), exhibited socially or held psychologically as: 


  • “It knows more than me” 

  • “I’m falling behind” 

  • “I’d better not question this” 

  • “What’s the point of my judgement now?” 


These are not failures of attitude or capability. They are signals of threat to OKness.


Unhealthy Positions in AI‑Enabled Work 


In an unhealthy Life Position, a combination of uncertainty and anthropomorphised AI can pull people out of OK‑OK and into reactive patterns (Figure 1): 


Figure 1: Reactive patterns that emerge when uncertainty and anthropomorphised AI pull people out of the OK‑OK position. 


These behaviours are often mislabelled as “resistance to AI”. In reality, they are attempts to regain a sense of safety when OK‑ness feels under threat. 

Extending the Model: Introducing “It’s OK” 


To work effectively with AI, people need a way to remain grounded without competing with it or submitting to it.


This requires a subtle but important extension to the traditional TA framing: I’m OK — You’re OK — It’s OK 


Where “It” refers to: 

  • The AI system 

  • The model or agent 

  • The technology‑mediated way of working now present 


Crucially, “It’s OK” does not mean “It’s right” or “It’s always correct.” 


It means something more psychologically stabilising: 

  • AI is not a judgement on human worth 

  • AI can be engaged with without submission or resistance 

  • Human judgement remains legitimate 

  • Responsibility stays with people 


At a Psychological level, the stabilising position is: 

“I’m OK in relation to this system.” 

At a Social level, the relational frame with other humans returns: 


“I’m OK, You’re OK - and together we decide what to do with what It produces.” 


AI does not replace relationship or responsibility; it sits within them. In this way, AI can be accepted without being deferred to, and human responsibility remains intact. 


Healthy AI Engagement from an “I’m OK — You’re OK — It’s OK” Position 


This example flow shows how OK‑ness enables agency, judgement, and learning.


Figure 2: An example flow of healthy AI engagement when OK‑ness is preserved. 


When OK‑ness is preserved, AI becomes something to work with, not something to fight, defer to, or hide behind. 


What This Enables in Practice 


When people can hold I’m OK — You’re OK — It’s OK simultaneously: 


  • They don’t need to compete with AI to feel valuable 

  • They don’t need to defer blindly to feel safe 

  • They can challenge outputs without defensiveness 

  • They stay accountable, present, and engaged

 

In organisational terms, this supports: 


  • Commitment rather than compliance 

  • Confidence rather than control 

  • Learning rather than comparison 

  • Adoption without loss of identity 


AI becomes a participant in work, not an authority over it.


Why This Matters for Leaders 


Many AI initiatives implicitly ask people to jump straight to “It’s OK” without first reinforcing “I’m OK.” That gap is where anxiety, resistance, and over‑reliance emerge. 


Leaders who enable sustainable adoption attend to the full equation: 


  1. Affirm human value and judgement (I’m OK) 

  2. Maintain trust and relationship (You’re OK) 

  3. Normalise AI within human accountability (It’s OK) 


Only then can people genuinely operate from: 


I’m OK. You’re OK. It’s OK. And we can do good work together. 


5. Bringing It Together: Adoption That Works with People, Not Against Them 

AI adoption rarely fails because organisations choose the wrong tools. It falters when the human experience of the change is left unattended.


Throughout this article we’ve explored why AI is different from previous technology shifts - not just because of what it can do, but because of what it touches: confidence, identity, relevance, and meaning. When those deeper layers are ignored, organisations see compliance without commitment, usage without trust, and progress that never quite sticks. 


The move from “I’m OK, You’re OK” to “I’m OK, You’re OK, It’s OK” is a practical reframing that helps leaders recognise what people need in order to engage with AI in ways that are both effective and human. It creates space for judgement, challenge, learning, and ownership rather than fear, submission, loss of agency or resistance. 


This is where the technical and the psychological must come together. 


Ideja Bajra explains how Edvance AI is addressing this challenge with organisations and teams: 


Edvance works with organisations to diagnose why there is a gap between AI being implemented and gains in productivity, cost-to-serve and efficiency. The team audits existing tools and pilots against real operational need through its proprietary FAULT framework, five pillars that scan the organisational system for where adoption is breaking down: 


  • Foundations - is the data, infrastructure and governance actually ready for AI?

  • Alignment - is AI connected to real business strategy or just chasing trends?

  • Unity - are teams and functions moving together or creating silos? 

  • Leadership - do leaders have the clarity and confidence to drive this forward?

  • Trust - do people trust the technology, the process, and each other enough to engage? 

Alongside that diagnostic, their work focuses on the human layer - working directly with leaders and teams through one-to-ones, speeches and group sessions to surface the limiting beliefs and stories driving resistance. Personal journeys with AI are drawn on as proof points to help people move forward, and the work maps individual and team road maps: what AI literacy actually means for them, what the technology is for, and what the future looks like with people at the centre of it. 

Conclusion 

Patterns such as hesitation, resistance, over‑reliance, or disengagement are often treated as obstacles to overcome. Seen differently, they can be read as indicators of where psychological safety or OK‑ness is under pressure. 


The task is not to accelerate adoption for its own sake, but to create conditions where people can remain grounded, thoughtful, and responsible in how they engage with AI. 

When AI is approached in this way, it becomes something organisations can live with and learn alongside rather than something people must either submit to or resist. 


About the Authors 

James Russell is an executive coach and organisational change adviser who helps leaders understand the human and psychological dynamics that shape whether change, including AI adoption, genuinely takes hold. www.leadtochange.co.uk

Ideja Bajra is the founder of Edvance AI, an award-winning consultancy helping organisations to get AI adoption right, with a focus on organisational readiness, digital literacy, and the human side of AI transformation. www.edvance-ai.com



Acknowledgments 

With thanks to Rosemary Napper, TSTA‑O, for her contribution to the thinking and application of Transactional Analysis in organisational settings. 
