
The Hidden Cost of Trust Misalignment: How Emotional and Cognitive Dissonance Undermines AI Adoption in Organizations

Abstract: Artificial intelligence adoption in organizations fails at rates approaching 80%, despite substantial investment and strategic priority. This article synthesizes findings from a real-world qualitative study tracking AI implementation in a software development firm to reveal how organizational members develop four distinct trust configurations—full trust, full distrust, uncomfortable trust, and blind trust—each triggering different behavioral responses that fundamentally shape AI performance and adoption outcomes. Unlike previous research assuming use/non-use as the primary behavioral outcome, this analysis demonstrates that organizational members actively detail, confine, withdraw, or manipulate their digital footprints based on trust configurations, creating a vicious cycle where biased or asymmetric data degrades AI performance, further eroding trust and stalling adoption. The article offers evidence-based interventions addressing both cognitive trust (through transparency, training, and realistic expectation-setting) and emotional trust (through psychological safety, ethical governance, and leadership emotional contagion), while highlighting the critical insight that organizational culture alone cannot guarantee AI adoption success. Organizations must develop personalized, trust-configuration-specific strategies that recognize the intricate interplay between rational evaluation and emotional response in technology adoption.

The promise of artificial intelligence has captivated organizational leaders across industries. AI technologies—sophisticated systems that collect, aggregate, and process vast amounts of data to automate or augment human decision-making—represent a potential source of competitive advantage, capable of enhancing decision quality, fostering innovation, facilitating collaboration, and increasing productivity (Kellogg et al., 2020; Raisch & Fomina, 2023; Tong et al., 2021). Yet despite substantial investments and strategic commitment, approximately 80% of AI adoption initiatives fail (Bojinov, 2023).


This paradox has intensified as organizations recognize that technological sophistication alone cannot guarantee successful implementation. The human element—specifically, how organizational members perceive, trust, and respond to AI—has emerged as a critical determinant of adoption success (Burton et al., 2020; Gillespie et al., 2023; Glikson & Woolley, 2020). Trust, defined as the willingness to be vulnerable to another party's actions based on positive expectations (Mayer et al., 1995), plays a particularly vital role in AI adoption because AI technologies often involve greater uncertainty and perceived risk than conventional technologies.


Recent scholarship has illuminated the multidimensional nature of trust in AI, distinguishing between cognitive trust—rational evaluation of AI's competence and usefulness—and emotional trust—affective responses rooted in feelings rather than logic (Glikson & Woolley, 2020; Hengstler et al., 2016). However, research findings remain inconclusive regarding how these two dimensions interact and jointly influence adoption. Some studies suggest emotional trust flows from cognitive assessments (Choung et al., 2023), while others demonstrate their independence and argue for emotional trust's decisive role (Moussawi & Benbunan-Fich, 2021; Seitz et al., 2021). Moreover, most studies assume that organizational members' behavioral responses to AI consist primarily of use or non-use decisions, overlooking the broader range of adaptive behaviors that may emerge.


This article addresses these gaps by examining how different configurations of cognitive and emotional trust shape organizational members' behaviors and, ultimately, AI adoption outcomes. Drawing on a qualitative study that tracked the introduction, implementation, and use of an embedded AI tool in a medium-sized software development firm, the analysis reveals four distinct trust configurations and demonstrates how each triggers specific behavioral responses in the digital environment. These behaviors—detailing, confining, withdrawing, and manipulating digital footprints—directly influence the data quality feeding AI algorithms, creating either virtuous or vicious cycles that determine adoption success or failure.


The implications extend beyond academic theory. Understanding how trust configurations drive behavior offers leaders actionable insights for designing adoption strategies that address both the rational and emotional dimensions of human-AI interaction, recognizing that organizational culture alone cannot overcome trust-related barriers to technology adoption.


The AI Adoption Landscape

Defining AI and Its Organizational Manifestations


Artificial intelligence encompasses technologies designed to collect, aggregate, and process large data volumes from diverse sources to either automate human tasks (replacing human activity to improve efficiency) or augment human capabilities (complementing human judgment through continued interaction) (Raisch & Krakowski, 2021). This distinction matters because automation and augmentation imply fundamentally different relationships between humans and technology, with augmentation requiring sustained human engagement and trust.


AI manifests in organizations through three primary representations (Glikson & Woolley, 2020):


  • Robotic AI: Physical machines capable of movement and interaction, ranging from manufacturing robots to surgical assistants and social robots designed for human engagement

  • Virtual AI: Non-physical systems with distinguished identities—names, avatars, or voices—including virtual assistants like Alexa and Siri, and purpose-specific chatbots deployed in customer service and employee support

  • Embedded AI: Algorithms operating invisibly "behind the scenes" without visual representation or distinguished identity, such as resume-screening systems in human resources, fraud detection applications, or recommendation engines


Each representation presents distinct trust challenges. Robotic and virtual AI benefit from tangibility and anthropomorphic features that may enhance both cognitive and emotional trust through familiar human-like characteristics (Bainbridge et al., 2011; Gkinko & Elbanna, 2023). Embedded AI, however, operates invisibly, potentially creating uncertainty about its existence, operation, and implications—a particularly relevant concern in organizational contexts where data privacy and algorithmic decision-making intersect with employee wellbeing and career outcomes (Hengstler et al., 2016).


State of Practice: Investment, Implementation, and the Adoption Gap


Organizations have invested heavily in AI adoption, recognizing its strategic importance. Research demonstrates that AI can enhance decision-making processes (Cao et al., 2021), facilitate interfirm collaboration (Cepa & Schildt, 2023), foster innovation (Haefner et al., 2021), and increase employee productivity when deployed effectively (Tong et al., 2021). However, the translation from investment to value realization remains elusive for most organizations.


The failure rate is staggering and consistent across industries and geographies. Beyond the headline 80% failure estimate, empirical studies reveal specific manifestations of adoption challenges:


  • Only 24% of middle managers and 14% of front-line managers express willingness to trust AI advice in business decisions (Kolbjørnsrud et al., 2017)

  • Organizational members frequently engage in "foot-dragging"—ignoring AI tools whenever possible—when they perceive AI recommendations as conflicting with professional identity or lacking credibility (Christin, 2017)

  • Even when AI demonstrates superior performance, people exhibit "algorithm aversion," preferring human judgment after witnessing even minor AI errors (Dietvorst et al., 2015)


These patterns suggest that technical capability alone cannot drive adoption. The human experience of AI—shaped by trust, perception, emotion, and organizational context—determines whether sophisticated algorithms deliver organizational value or languish unused.


Organizational and Individual Consequences of Trust Misalignment

Organizational Performance Impacts


When trust configurations undermine AI adoption, organizations forfeit substantial performance benefits while incurring significant costs. The consequences manifest across multiple dimensions:

Diminished decision quality and speed. Organizations implementing AI for decision augmentation expect improved accuracy and velocity. However, when organizational members distrust AI recommendations—whether for cognitive or emotional reasons—they either ignore the technology entirely or engage in time-consuming verification processes that negate efficiency gains. Research on algorithmic decision aids demonstrates that low trust leads to underutilization even when algorithms outperform human judgment (Dietvorst et al., 2015).


Compromised data ecosystems. AI systems depend on comprehensive, accurate data inputs. When organizational members confine, withdraw, or manipulate their digital footprints due to trust concerns, they create incomplete or distorted data environments. The TechCo case revealed that behavioral responses to trust misalignment caused experts to disappear from competence maps and created asymmetric representations of organizational capabilities—outcomes that degraded AI performance and triggered further trust erosion (Vuori et al., 2025).


Wasted investment and opportunity costs. Failed AI implementations represent not only sunk development and deployment costs but also foregone competitive advantages. Organizations that cannot successfully adopt AI lose ground to competitors who leverage AI for innovation, efficiency, and market responsiveness. The consulting firm KPMG estimated that organizations waste billions annually on AI projects that fail to deliver value (Gillespie et al., 2023).


Cultural and political fragmentation. Trust misalignment often creates factions within organizations: early adopters frustrated by colleagues' resistance, skeptics validated by early failures, and leaders struggling to understand why strategic investments fail to gain traction. These divisions can undermine collaboration and create lasting cynicism about technology initiatives.

Individual Wellbeing and Employee Impacts


The consequences of trust misalignment extend beyond organizational metrics to affect individual employees' work experiences, psychological wellbeing, and career trajectories:


Anxiety and emotional labor. Organizational members experiencing low emotional trust—particularly the "uncomfortable trust" configuration combining high cognitive assessment with negative emotional responses—face ongoing psychological tension. The TechCo study documented employees expressing fear about data misuse, worry about surveillance, and discomfort with visibility (Vuori et al., 2025). Managing these emotional responses constitutes additional emotional labor that detracts from core work activities.


Behavioral burden and inefficiency. Employees who confine or withdraw their digital footprints to manage trust concerns often create extra work for themselves and others. Marking calendar entries as private, avoiding certain communication channels, or manually verifying AI outputs requires time and attention. Moreover, these defensive behaviors can undermine legitimate organizational processes—a TechCo manager noted that marking events as private improved personal comfort but made scheduling meetings more difficult (Vuori et al., 2025).


Career implications and inequality. When AI systems influence resource allocation, project assignments, or performance evaluation, trust misalignment can have material career consequences. Employees who withdraw from AI-enabled systems may become invisible to opportunities, while those who manipulate their digital footprints may gain unfair advantages. The TechCo case revealed employees deliberately cultivating digital footprints to appear as experts in "hot" technologies to secure desirable project assignments (Vuori et al., 2025). This dynamic potentially rewards those comfortable with impression management over those with genuine expertise who prefer privacy.


Diminished autonomy and control. Embedded AI often operates without explicit user control, creating particular challenges for employees who value autonomy. Research on algorithmic management of gig workers demonstrates that lack of transparency and control can trigger resistance behaviors ranging from gaming the system to withdrawing entirely (Möhlmann & Zalmanson, 2017). These responses protect individual agency but undermine collective performance.


Evidence-Based Organizational Responses

Table 1: Organizational Strategies and Interventions for Successful AI Adoption


Intervention Strategy: Transparent Communication and System Explainability
Focus Area: Transparency
Target Trust Dimension: Cognitive Trust
Specific Practices: Create non-technical documentation (FAQs, visual diagrams); tailor algorithmic explanations to user expertise levels; explicitly communicate data sources and usage; honestly describe AI limitations and error boundaries; provide visualization tools for AI processes
Key Benefits: Reduces uncertainty; enables rational evaluation of AI competence; increases confidence in AI recommendations; improves human-AI collaboration
Organizational Example: Siemens

Intervention Strategy: Procedural Justice and Participatory Design
Focus Area: Procedural Justice
Target Trust Dimension: Emotional Trust
Specific Practices: Involve end-users in selecting and configuring AI tools; establish employee committees for ethical oversight; use opt-in defaults for participation; create regular feedback channels for concerns; clarify decision rights for AI versus humans
Key Benefits: Enhances feelings of being respected; addresses bias and fairness; increases adoption through ownership; signals respect for autonomy
Organizational Example: Unilever

Intervention Strategy: Psychological Safety and Emotional Legitimacy
Focus Area: Psychological Safety
Target Trust Dimension: Emotional Trust
Specific Practices: Lead with senior leader acknowledgment of uncertainties; host structured forums (town halls, focus groups); train managers in empathetic response protocols; differentiate support based on trust configurations; model positive AI usage by leadership
Key Benefits: Validates emotional responses; increases emotional comfort; influences members through emotional contagion; de-stigmatizes negative emotions
Organizational Example: Microsoft

Intervention Strategy: Capability Building and Expertise Development
Focus Area: Technical and Contextual Expertise
Target Trust Dimension: Cognitive Trust
Specific Practices: Offer tiered training programs (literacy, intermediate, advanced); provide hands-on experimentation environments; create cross-functional learning communities; contextualize AI to specific domain use cases; provide calibration training for AI overrides
Key Benefits: Develops accurate assessments of AI competence; increases informed skepticism; improves understanding of data requirements; builds intuition for AI behavior
Organizational Example: Deloitte

Intervention Strategy: Realistic Expectation Management and Gradual Deployment
Focus Area: System Performance
Target Trust Dimension: Cognitive Trust
Specific Practices: Communicate realistic capabilities; use phased rollouts for limited use cases; engage beta user programs; publicly share performance transparency metrics; establish failure tolerance protocols; implement human backstop mechanisms
Key Benefits: Prevents trust erosion from overpromising; allows trust to develop through competence; calibrates expectations; prevents cycles of degrading performance
Organizational Example: IBM

Intervention Strategy: Ethical Governance and Data Stewardship
Focus Area: Ethics and Privacy
Target Trust Dimension: Emotional Trust
Specific Practices: Document written AI ethics principles; establish independent ethics review bodies; implement data minimization practices; prohibit data usage for punitive purposes; conduct regular algorithmic audits; establish recourse mechanisms for AI decisions
Key Benefits: Ensures AI does not harm members; demonstrates commitment to responsible AI; protects against perceived surveillance; increases comfort via visible governance
Organizational Example: Salesforce


Organizations can address trust misalignment through targeted interventions that build both cognitive and emotional trust. The following approaches synthesize evidence from AI adoption research, organizational psychology, and the TechCo case study.


Transparent Communication and System Explainability


Building cognitive trust begins with helping organizational members understand what AI does, how it works, and why it makes particular recommendations. Transparency reduces uncertainty and enables rational evaluation of AI competence and usefulness (Kizilcec, 2016).


Effective transparency practices include:


  • Comprehensive documentation accessible to non-technical audiences: Creating FAQs, visual diagrams, and narrative explanations of AI logic and data sources that avoid overwhelming users with technical jargon

  • Algorithmic explanations tailored to user expertise: Providing different explanation depths for different audiences—executives may need strategic implications while data scientists want methodological details

  • Data source and usage clarity: Explicitly communicating what data AI collects, from which sources, how it processes information, and for what purposes—with particular attention to sensitive employee data

  • Limitation acknowledgment: Honestly describing AI's boundaries, potential errors, and scenarios where human judgment should override algorithmic recommendations

  • Visualization tools: Offering interfaces that allow users to "see inside" AI processes, understanding how inputs translate to outputs


TechCo initially created extensive documentation explaining their competence-mapping AI tool and held workshops for questions and concerns. However, transparency alone proved insufficient—the organization also needed to address emotional trust and behavioral responses to transparency itself. Some employees found the transparent search log threatening, perceiving it as revealing too much about their thought processes and creating vulnerability (Vuori et al., 2025). This highlights a crucial insight: transparency about AI can paradoxically trigger emotional distrust if it reveals information employees consider private or potentially harmful.


Siemens implemented a "transparent AI" initiative in their industrial automation division, creating visual dashboards that showed employees how predictive maintenance algorithms reached conclusions about equipment failure risks. The dashboards indicated which sensor data carried the most weight, how historical failure patterns influenced predictions, and confidence levels for each recommendation. Engineers reported increased confidence in AI recommendations and more effective collaboration between human expertise and algorithmic analysis.


Procedural Justice and Participatory Design


Emotional trust develops when people feel respected, heard, and involved in decisions affecting them. Procedural justice principles—emphasizing fair processes, voice, and respect—prove particularly valuable in AI adoption contexts (Tyler & Lind, 1992).


Approaches that enhance procedural justice include:


  • Employee participation in design and deployment: Involving end-users in selecting AI tools, defining use cases, and establishing implementation parameters gives them voice and ownership

  • Clear governance structures: Establishing committees or councils with employee representation to oversee AI ethics, data usage, and dispute resolution

  • Opt-in rather than opt-out defaults: Allowing organizational members to choose whether to participate in AI systems, rather than requiring active withdrawal, signals respect for autonomy

  • Regular feedback mechanisms: Creating structured channels for reporting concerns, suggesting improvements, and questioning AI decisions without fear of retaliation

  • Transparent decision rights: Clarifying which decisions AI makes autonomously, which it recommends with human final authority, and which remain entirely human-controlled


The TechCo case demonstrated the limitations of opt-out approaches. While the organization allowed employees to prohibit data collection, the social pressure and professional implications of opting out created uncomfortable dynamics. Some employees felt they needed to participate to remain visible for opportunities, even when uncomfortable with the technology (Vuori et al., 2025).


Unilever transformed its talent acquisition process by involving HR professionals, hiring managers, and employee representatives in selecting and configuring AI-powered resume screening. The participatory process identified concerns about bias, established human review requirements for all AI-flagged candidates, and created a feedback loop where recruiters could challenge AI recommendations. This approach increased adoption rates and improved quality-of-hire metrics while addressing fairness concerns.


Psychological Safety and Emotional Legitimacy


Building emotional trust requires creating environments where negative emotions about AI are legitimate, discussable, and addressable rather than dismissed or stigmatized (Edmondson, 1999).


Practices that cultivate psychological safety around AI include:


  • Leadership acknowledgment of valid concerns: Senior leaders openly discussing their own uncertainties, learning curves, and concerns about AI signals that questioning is acceptable

  • Structured forums for concern expression: Regular town halls, focus groups, or anonymous feedback channels specifically dedicated to AI-related worries create safe spaces for emotional expression

  • Empathetic response protocols: Training managers to respond to AI-related anxiety with empathy and curiosity rather than defensiveness or dismissal

  • Differentiated support for different trust configurations: Recognizing that organizational members with uncomfortable trust (high cognitive, low emotional) need different support than those with blind trust (low cognitive, high emotional)

  • Emotional contagion through positive leadership modeling: Leaders visibly using AI, expressing excitement about possibilities, and demonstrating trust can influence organizational members through emotional contagion (Barsade, 2002)


The TechCo case revealed that even in organizations with strong cultures of trust and transparency, AI introduction can trigger unexpected emotional responses. Employees who generally trusted organizational leadership nonetheless feared data misuse, worried about surveillance, or felt uncomfortable with visibility (Vuori et al., 2025). These emotional responses proved independent of cognitive assessments and required direct attention.


Microsoft implemented "AI Empathy Labs" where employees could express concerns about AI tools in confidential small-group sessions facilitated by organizational psychologists. The labs validated emotional responses, provided education about data protection measures, and created opportunities for employees to influence AI governance policies. Post-participation surveys showed significant increases in emotional comfort with AI tools, even among employees whose cognitive trust remained unchanged.


Capability Building and Expertise Development


Cognitive trust correlates with AI expertise—people who understand how AI works and what it can accomplish develop more accurate assessments of its competence and usefulness (Allen & Choudhury, 2022). However, expertise development must address both technical understanding and contextual judgment.


Effective capability-building approaches include:


  • Tiered training programs: Offering basic AI literacy for all employees, intermediate courses for frequent users, and advanced programs for those making AI-related decisions

  • Hands-on experimentation environments: Providing safe spaces where employees can experiment with AI tools, see how their inputs affect outputs, and develop intuition for AI behavior

  • Cross-functional learning communities: Creating forums where employees share experiences, discuss challenges, and develop collective wisdom about effective AI use

  • Domain-specific contextualization: Connecting AI capabilities to specific use cases relevant to employees' work rather than abstract technical explanations

  • Calibration training: Helping employees develop accurate intuitions about when to trust AI recommendations and when to exercise skepticism based on context and stakes


The TechCo case suggested that expertise influences both trust formation and behavioral responses. Data scientists and AI-literate employees developed more nuanced views of the competence-mapping tool, recognizing both its capabilities and limitations. However, expertise alone did not guarantee adoption—some technically sophisticated employees still experienced emotional distrust due to privacy concerns (Vuori et al., 2025).


Deloitte developed an "AI Apprenticeship" program pairing consultants with data scientists on client projects involving AI. The apprenticeship combined technical training with experiential learning, allowing consultants to see AI development processes, understand data requirements and limitations, and develop informed perspectives on AI capabilities. The program increased both cognitive trust and appropriate skepticism—apprentices learned when to rely on AI and when human judgment remained superior.


Realistic Expectation Management and Gradual Deployment


Cognitive trust depends heavily on AI performance meeting or exceeding expectations. Organizations often inadvertently undermine trust by overselling AI capabilities or deploying immature systems that fail visibly (Dietvorst et al., 2015).


Strategies for managing expectations include:


  • Honest capability communication: Describing what AI can accomplish realistically rather than overpromising to generate initial enthusiasm

  • Phased rollout approaches: Starting with limited use cases where AI can demonstrate clear value before expanding to more complex or sensitive applications

  • Beta user programs: Engaging volunteers willing to tolerate early-stage imperfections in exchange for influence over system development

  • Performance transparency: Publicly sharing AI accuracy metrics, error rates, and improvement trajectories so users can calibrate trust appropriately

  • Failure tolerance protocols: Establishing norms that acknowledge AI errors as learning opportunities rather than reasons to abandon technology

  • Human backstop mechanisms: Implementing human review processes for high-stakes decisions to prevent catastrophic AI errors while the system matures


The TechCo case demonstrated the severe consequences of performance failures. When the competence map generated inaccurate results—showing wrong experts or missing genuine experts—users quickly lost confidence and stopped using the tool. Because AI learns from usage data, the decline in usage created a vicious cycle of degrading performance (Vuori et al., 2025).
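
To see why that cycle is so hard to arrest, consider a toy model of the feedback loop: usage supplies the data that drives accuracy, and perceived accuracy drives next-period usage. The sketch below is a minimal illustration under assumed coefficients and update rules (not parameters estimated in the TechCo study); once a visible failure knocks accuracy down, usage ratchets downward instead of recovering.

```python
# Toy model of the vicious cycle: usage feeds data quality, data quality feeds
# accuracy, and perceived accuracy feeds next-period usage. The coefficients and
# update rules are illustrative assumptions, not estimates from the study.

def simulate_adoption(initial_usage: float, accuracy_shock: float, periods: int = 6) -> list[float]:
    usage = initial_usage              # share of members actively using the tool (0-1)
    accuracy = 0.8 - accuracy_shock    # a visible failure lowers perceived accuracy
    trajectory = []
    for _ in range(periods):
        # Members use the tool more (or less) depending on how accurate it seems.
        usage = min(1.0, max(0.0, usage * (0.5 + 0.6 * accuracy)))
        # With fewer users contributing data, accuracy drifts downward.
        accuracy = min(1.0, max(0.0, 0.4 + 0.5 * usage))
        trajectory.append(round(usage, 2))
    return trajectory

print(simulate_adoption(initial_usage=0.7, accuracy_shock=0.3))
# Usage declines period over period once trust is shaken; the cycle feeds on itself.
```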


IBM implemented a gradual AI deployment strategy in their customer service operations. Rather than immediately replacing human agents with chatbots, they deployed AI to handle simple, repetitive inquiries while routing complex issues to humans. The AI system displayed confidence scores with each response, and human agents could override recommendations. As the AI improved through learning, IBM gradually expanded its scope. This approach allowed both employees and customers to develop trust based on demonstrated competence rather than promised capabilities.
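
A minimal sketch of how such confidence-based routing with a human backstop might look in code appears below. The thresholds, field names, and routing labels are illustrative assumptions, not details of IBM's actual system.

```python
from dataclasses import dataclass

# Hedged sketch of confidence-based routing with a human backstop.
# Thresholds and labels are hypothetical, not taken from IBM's deployment.

@dataclass
class AIReply:
    answer: str
    confidence: float  # model-reported confidence, 0.0-1.0

def route(reply: AIReply, high_stakes: bool,
          auto_threshold: float = 0.90, assist_threshold: float = 0.70) -> str:
    if high_stakes:
        return "human_review"                 # backstop: humans own high-stakes decisions
    if reply.confidence >= auto_threshold:
        return "ai_handles"                   # simple, repetitive inquiries
    if reply.confidence >= assist_threshold:
        return "ai_suggests_agent_confirms"   # agent sees the score and may override
    return "human_handles"                    # complex or uncertain issues go to people

print(route(AIReply("Reset your password via the portal.", 0.62), high_stakes=False))
# -> human_handles
```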


Ethical Governance and Data Stewardship


Emotional trust fundamentally depends on confidence that AI will not be used in ways that harm organizational members' interests. Establishing clear ethical boundaries and demonstrating commitment to responsible AI use proves essential (Gillespie et al., 2023).


Governance practices that build emotional trust include:


  • Written AI ethics principles: Documenting organizational commitments regarding fairness, privacy, transparency, and human oversight—with consequences for violations

  • Independent ethics review: Creating oversight bodies with authority to challenge AI deployments that raise ethical concerns

  • Data minimization practices: Collecting only data necessary for legitimate purposes and regularly purging unnecessary information

  • Usage restriction commitments: Explicitly prohibiting using AI data for punitive purposes like performance evaluation or termination decisions unless clearly disclosed

  • Algorithmic audit protocols: Regular third-party reviews of AI systems for bias, fairness, and alignment with stated principles

  • Recourse mechanisms: Establishing processes through which employees can challenge AI decisions, request explanations, and seek redress for perceived harms


The TechCo case revealed the inadequacy of trust and transparency culture alone. Despite the organization's reputation for openness and its stated commitment to using the competence map only for knowledge sharing, employees feared potential misuse. Some worried that performance data could inform termination decisions or that visible gaps in expertise might harm career prospects (Vuori et al., 2025). These fears persisted despite leadership assurances, suggesting that emotional trust requires more than verbal commitments.


Salesforce established an Office of Ethical and Humane Use of Technology with authority to review AI deployments across the company. The office created public AI ethics principles, conducted impact assessments before major AI implementations, and published regular transparency reports documenting AI usage and ethical compliance. Employees could confidentially report concerns, and the office maintained a public dashboard showing ethical review outcomes. This visible governance structure demonstrably increased employee comfort with AI systems.


Building Long-Term AI Adoption Capability

Successful AI adoption extends beyond addressing immediate trust concerns to building organizational capabilities that sustain effective human-AI collaboration over time. Three strategic pillars support this long-term capability development.


Recalibrating the Psychological Contract


The introduction of AI fundamentally alters the implicit psychological contract between organizations and employees—the set of mutual expectations regarding contributions, rewards, and obligations (Rousseau, 1995). Organizations must explicitly renegotiate these contracts to accommodate AI's presence.


Traditional psychological contracts often assumed that employee expertise, judgment, and information provided competitive advantage and job security. AI challenges these assumptions by automating or augmenting cognitive work previously considered uniquely human. This shift can trigger anxiety about obsolescence, resentment about devaluation of hard-won expertise, and uncertainty about future roles.


Effective psychological contract recalibration includes:


  • Clear articulation of AI's role: Explicitly defining whether AI will automate tasks (replacing humans), augment capabilities (complementing humans), or enable new work (creating novel roles)

  • Skill development commitments: Investing in training that helps employees develop capabilities that complement rather than compete with AI

  • Career pathway clarification: Demonstrating how AI adoption creates new opportunities rather than simply displacing existing work

  • Value proposition evolution: Redefining what the organization values in employees—shifting from routine task execution to judgment, creativity, and complex problem-solving

  • Reciprocal transparency: Just as organizations expect employees to adapt to AI, leaders commit to protecting employee wellbeing and ensuring equitable distribution of AI-generated productivity gains


The TechCo case illustrated psychological contract tensions. Employees who built careers on tacit knowledge networks suddenly faced a technology that made expertise visible and quantifiable. Some responded by withdrawing from the system to preserve information advantages, while others manipulated their digital footprints to inflate perceived expertise—both behaviors reflecting deeper anxieties about changing value propositions (Vuori et al., 2025).


Organizations that successfully recalibrate psychological contracts approach AI as an opportunity to elevate human work rather than replace it. For instance, when AI automates routine analysis, knowledge workers can focus on interpretation, strategy, and creative application—higher-value activities that provide greater job satisfaction and organizational impact. However, this transition requires explicit communication, skill development support, and demonstrated commitment to human flourishing alongside technological advancement.


Distributed Leadership and Trust-Configuration-Specific Management


The TechCo case revealed four distinct trust configurations among organizational members, each requiring different leadership approaches. Organizations building long-term AI adoption capability must move beyond one-size-fits-all strategies to develop adaptive leadership approaches that respond to individual and subgroup trust dynamics (Vuori et al., 2025).


Elements of trust-configuration-specific leadership include:


  • Diagnostic assessment: Regularly evaluating trust configurations across the organization through surveys, interviews, and behavioral observation to understand distribution and evolution

  • Targeted interventions: Designing different support strategies for employees with full trust (who may need advanced capabilities), uncomfortable trust (who need emotional safety), blind trust (who need cognitive development), and full distrust (who may need both)

  • Peer influence networks: Identifying and empowering employees with full trust to serve as champions who can influence others through demonstration and social proof

  • Manager training: Equipping frontline leaders with skills to recognize trust configurations among team members and adapt their approach accordingly

  • Flexibility in participation: Allowing different levels of AI engagement rather than mandating universal adoption, recognizing that some roles may benefit more from AI than others


The uncomfortable trust configuration—high cognitive assessment but low emotional comfort—presents particular leadership challenges. These organizational members recognize AI's capabilities but experience fear or anxiety about its implications. They require psychological safety and ethical governance more than technical training. In contrast, individuals with blind trust—high emotional comfort but low cognitive assessment—need capability development and realistic expectation-setting more than emotional support.
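
To make this configuration-specific logic concrete, the sketch below maps hypothetical cognitive and emotional trust scores (for example, means on a 1 to 7 survey scale) onto the four configurations and the support emphasis each calls for. The scale, cutoff, and intervention labels are illustrative assumptions rather than instruments from the TechCo study.

```python
# Illustrative mapping from survey-style trust scores to the four configurations.
# The 1-7 scale, midpoint cutoff, and intervention labels are assumptions.

def classify_trust(cognitive: float, emotional: float, cutoff: float = 4.0) -> tuple[str, str]:
    high_cog, high_emo = cognitive >= cutoff, emotional >= cutoff
    if high_cog and high_emo:
        return "full trust", "advanced capability building; channel as adoption champions"
    if high_cog and not high_emo:
        return "uncomfortable trust", "psychological safety and ethical governance"
    if not high_cog and high_emo:
        return "blind trust", "AI literacy training and realistic expectation-setting"
    return "full distrust", "combined cognitive and emotional interventions"

# Example: technically convinced but anxious about data misuse.
config, emphasis = classify_trust(cognitive=6.2, emotional=2.8)
print(config, "->", emphasis)
# uncomfortable trust -> psychological safety and ethical governance
```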


Distributed leadership proves essential because frontline managers often have better visibility into team members' trust configurations than senior executives do. Empowering these managers with diagnostic tools, intervention options, and decision rights enables more responsive and personalized support than centralized mandates could achieve.


Continuous Learning Systems and Feedback Loops


AI systems evolve through learning, and organizations must evolve alongside them through structured feedback mechanisms that connect user experience, system performance, and strategic adjustment (Raisch & Fomina, 2023).


Components of effective learning systems include:


  • Behavioral analytics: Monitoring how organizational members actually use (or avoid) AI tools to identify patterns indicating trust issues or usability problems

  • Performance tracking: Measuring AI accuracy, reliability, and impact on outcomes to ensure systems deliver promised value

  • User feedback integration: Creating channels through which organizational members can report problems, suggest improvements, and share insights about AI behavior

  • Iterative refinement: Rapidly incorporating feedback into system improvements, demonstrating responsiveness to user concerns

  • Transparency about learning: Communicating how user feedback shapes AI evolution so organizational members see their input valued and incorporated

  • Balanced metrics: Tracking both adoption rates (quantity) and adoption quality (effective, trust-based use) rather than celebrating superficial usage that masks underlying resistance

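To make the behavioral-analytics and balanced-metrics items above concrete, the sketch below contrasts raw adoption (how many people open the tool) with a crude quality signal (how often shown recommendations are accepted rather than overridden). The log fields and calculations are hypothetical illustrations, not measures used in the study.

```python
# Hedged sketch: separating adoption quantity from adoption quality using
# hypothetical usage-log fields (user_id, opened_tool, recommendation_shown,
# accepted_recommendation). High usage alone can mask systematic overrides.

def adoption_metrics(usage_log: list[dict], headcount: int) -> dict:
    active_users = {row["user_id"] for row in usage_log if row.get("opened_tool")}
    shown = sum(1 for row in usage_log if row.get("recommendation_shown"))
    accepted = sum(1 for row in usage_log if row.get("accepted_recommendation"))
    return {
        "adoption_rate": len(active_users) / headcount,          # quantity
        "acceptance_rate": accepted / shown if shown else 0.0,   # quality proxy
    }

log = [
    {"user_id": "a", "opened_tool": True, "recommendation_shown": True, "accepted_recommendation": True},
    {"user_id": "b", "opened_tool": True, "recommendation_shown": True, "accepted_recommendation": False},
    {"user_id": "c", "opened_tool": True, "recommendation_shown": True, "accepted_recommendation": False},
]
print(adoption_metrics(log, headcount=10))
# {'adoption_rate': 0.3, 'acceptance_rate': 0.3333333333333333}
```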

The TechCo case demonstrated the consequences of inadequate feedback loops. When the competence map generated inaccurate results, users lost trust and stopped using the tool. The decline in usage reduced available data, further degrading performance and accelerating the vicious cycle. Leaders recognized the problem but struggled to reverse the trajectory once trust eroded and behavioral withdrawal became widespread (Vuori et al., 2025).


Organizations with effective learning systems treat early adopters as partners in system development rather than passive recipients of technology. They create structured beta programs, user advisory councils, and rapid response mechanisms that incorporate feedback into tangible improvements. They also develop "trust repair" protocols for recovering from failures—acknowledging errors, explaining corrective actions, and demonstrating improved performance to rebuild confidence.


Continuous learning extends beyond technical system refinement to organizational adaptation. As AI capabilities evolve and organizational members develop greater sophistication, the nature of effective human-AI collaboration changes. Learning systems must therefore track not only system performance but also evolving user needs, expectations, and capabilities, adjusting strategies accordingly.


Conclusion

The paradox of AI adoption—substantial investment yielding limited return—stems not from technological inadequacy but from organizational underestimation of trust's complexity and behavioral consequences. This analysis demonstrates that successful AI adoption requires moving beyond simplistic assumptions about trust as a direct determinant of technology use to recognize the intricate interplay between cognitive assessment, emotional response, behavioral adaptation, and system performance.


Four actionable insights emerge for organizational leaders:


First, neither cognitive nor emotional trust alone predicts AI adoption. Organizations must attend to both dimensions and recognize that trust configurations vary across individuals. An employee may rationally assess AI as competent while experiencing fear about data misuse—a state requiring psychological safety rather than additional technical explanation. Conversely, emotional comfort without cognitive understanding may lead to inappropriate over-reliance on AI recommendations. Effective strategies must address both dimensions through transparency, capability building, ethical governance, and psychological safety.


Second, organizational members are not passive recipients of AI but active agents whose behaviors shape AI performance and ultimately determine adoption success. The digital footprint behaviors identified in the TechCo case—detailing, confining, withdrawing, and manipulating—directly influenced AI accuracy and usefulness, creating feedback loops that either sustained or undermined adoption. Organizations must recognize these behavioral dynamics and create conditions that encourage productive engagement while addressing legitimate concerns driving defensive behaviors.


Third, organizational culture alone cannot guarantee AI adoption success. TechCo maintained strong cultures of trust and transparency for years before AI introduction, yet these cultural foundations proved insufficient to prevent trust erosion and adoption failure. AI introduces specific challenges regarding data privacy, algorithmic decision-making, and work transformation that require explicit attention beyond general organizational trust. Leaders cannot assume cultural strengths will automatically transfer to AI contexts without deliberate bridging efforts.


Fourth, successful AI adoption requires sustained investment in human-centric capabilities, not just technological infrastructure. The interventions outlined—transparent communication, participatory design, psychological safety, capability building, realistic expectations, and ethical governance—demand ongoing attention, resources, and leadership commitment. Organizations pursuing AI adoption must allocate budget and leadership attention to these human dimensions with the same rigor they apply to technical implementation.


The vicious cycle observed at TechCo—trust misalignment leading to defensive behaviors, degrading AI performance, further eroding trust—can be reversed through comprehensive strategies that build both cognitive and emotional trust while creating feedback mechanisms that sustain virtuous cycles of improvement. However, reversing negative trajectories proves far more difficult than preventing them. Early attention to trust configurations, behavioral dynamics, and human experience yields far better returns than late-stage intervention attempts.


As AI capabilities continue advancing and organizational adoption intensifies, the human dimensions of trust, emotion, and behavior will increasingly determine which organizations successfully leverage AI for competitive advantage and which see substantial investments languish unused. The question is not whether AI can deliver value—ample evidence confirms its potential—but whether organizations can create conditions for productive human-AI collaboration. That challenge is fundamentally human, not technological.



References

  1. Allen, R., & Choudhury, P. (2022). Algorithm-augmented work and domain experience: The countervailing forces of ability and aversion. Organization Science, 33(1), 149-169.

  2. Bainbridge, W. A., Hart, J. W., Kim, E. S., & Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3, 41-52.

  3. Barsade, S. G. (2002). The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47, 644-675.

  4. Bojinov, I. (2023). Keep your AI projects on track. Harvard Business Review.

  5. Burton, J. W., Stein, M. K., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33, 220-239.

  6. Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers' attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation, 106, 102312.

  7. Cepa, K., & Schildt, H. (2023). Data-induced rationality and unitary spaces in interfirm collaboration. Organization Science, 34, 129-155.

  8. Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human Computer Interaction, 39, 1727-1739.

  9. Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2), 2053951717718855.

  10. Davenport, T. H., & Short, J. E. (1990). The new industrial engineering: Information technology and business process redesign. Center for Information Systems Research.

  11. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144, 114-126.

  12. Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.

  13. Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in artificial intelligence: A global study. The University of Queensland and KPMG.

  14. Gkinko, L., & Elbanna, A. (2023). Designing trust: The formation of employees' trust in conversational AI in the digital workplace. Journal of Business Research, 158, 113707.

  15. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14, 627-660.

  16. Haefner, N., Wincent, J., Parida, V., & Gassmann, O. (2021). Artificial intelligence and innovation management: A review, framework, and research agenda. Technological Forecasting and Social Change, 162, 120392.

  17. Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust: The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105-120.

  18. Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14, 366-410.

  19. Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.

  20. Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2017). Partnering with AI: How organizations can win over skeptical managers. Strategy & Leadership, 45, 37-43.

  21. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709-734.

  22. Möhlmann, M., & Zalmanson, L. (2017). Hands on the wheel: Navigating algorithmic management and Uber drivers' autonomy. Proceedings of the International Conference on Information Systems, Seoul, South Korea.

  23. Moussawi, S., & Benbunan-Fich, R. (2021). The effect of voice and humour on users' perceptions of personal intelligent agents. Behaviour & Information Technology, 40, 1603-1626.

  24. Raisch, S., & Fomina, K. (2023). Combining human and artificial intelligence: Hybrid problem-solving in organizations. Academy of Management Review.

  25. Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192-210.

  26. Rousseau, D. M. (1995). Psychological contracts in organizations: Understanding written and unwritten agreements. Sage Publications.

  27. Seitz, L., Woronkow, J., Bekmeier-Feuerhahn, S., & Gohil, K. (2021). The advance of diagnosis chatbots: Should we first avoid distrust before we focus on trust? ECIS.

  28. Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42, 1600-1631.

  29. Tyler, T. R., & Lind, E. A. (1992). A relational model of authority in groups. Advances in Experimental Social Psychology, 25, 115-191.

  30. Vuori, N., Burkhard, B., & Pitkäranta, L. (2025). It's amazing—but terrifying! Unveiling the combined effect of emotional and cognitive trust on organizational members' behaviours, AI performance, and adoption. Journal of Management Studies.

Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).

Suggested Citation: Westover, J. H. (2026). The Hidden Cost of Trust Misalignment: How Emotional and Cognitive Dissonance Undermines AI Adoption in Organizations. Human Capital Leadership Review, 32(1). doi.org/10.70175/hclreview.2020.32.1.5

Human Capital Leadership Review

eISSN 2693-9452 (online)
