HCL Review

Navigating AI Displacement Threats: Evidence-Based Strategies for Organizational Resilience and Employee Creativity

Abstract: Artificial intelligence adoption is reshaping workplaces at an unprecedented pace, creating significant concerns about job displacement among employees across industries and skill levels. This article examines recent empirical research demonstrating that perceived AI displacement threats can paradoxically enhance employee creativity under specific organizational conditions. Drawing on a multi-study investigation spanning laboratory experiments and field studies across Chinese organizations, we explore how supervisory support and employees' intrinsic motivation interact with displacement concerns to influence creative performance. The findings reveal that while AI threats can motivate creative problem-solving, this relationship depends critically on supportive leadership and employees' baseline motivation levels. Organizations can leverage these insights through evidence-based interventions including transparent communication, capability development programs, and leadership practices that emphasize psychological safety and autonomy. This analysis provides practical frameworks for leaders navigating technological transitions while maintaining workforce engagement and innovation capacity.

The integration of artificial intelligence into organizational workflows represents one of the most significant workplace transformations in recent history. From automated customer service systems to advanced data analytics platforms, AI technologies are fundamentally altering how work gets done across virtually every sector. This transformation brings substantial productivity gains and operational efficiencies, yet it simultaneously generates profound uncertainty for employees who perceive their roles as potentially redundant or obsolete.


Recent empirical research provides nuanced insights into this dynamic. Contrary to assumptions that AI displacement threats simply demotivate employees, evidence suggests these concerns can actually catalyze creative performance under the right conditions (Sun et al., 2025). This paradoxical finding—that existential workplace threats might enhance rather than diminish creative output—challenges conventional wisdom about organizational change management and opens new pathways for leadership intervention.


The stakes are considerable. Organizations navigating AI integration face dual imperatives: capturing the efficiency and capability benefits of technological advancement while maintaining employee engagement, psychological wellbeing, and innovation capacity. Getting this balance wrong risks not only immediate productivity losses but also long-term erosion of organizational capabilities as talented employees disengage or depart. Conversely, organizations that successfully navigate this transition can build competitive advantage through enhanced human-AI collaboration and sustained innovation.


This article synthesizes recent research findings with practical implementation frameworks to help organizational leaders address AI displacement concerns constructively. We examine the psychological mechanisms through which displacement threats influence employee creativity, identify critical moderating factors that determine whether these effects prove beneficial or harmful, and outline evidence-based interventions for creating supportive environments that channel displacement concerns toward productive outcomes.


The AI Displacement Landscape

Defining AI Displacement Threats in Organizational Contexts


AI displacement threat refers to employees' perception that artificial intelligence technologies could perform their job tasks more effectively, potentially rendering their roles unnecessary or substantially diminished (Sun et al., 2025). This perceived threat operates at both cognitive and emotional levels. Cognitively, employees assess the overlap between their current responsibilities and emerging AI capabilities, evaluating whether their specialized knowledge and skills retain distinctive value. Emotionally, these assessments trigger concerns about economic security, professional identity, and social standing within organizations.


Importantly, displacement threat differs from general job insecurity in its specificity to technological substitution. While traditional job insecurity might stem from economic downturns, organizational restructuring, or performance concerns, AI displacement threat centers specifically on the perceived superiority of algorithmic or automated alternatives to human work. This distinction matters because the psychological responses and appropriate organizational interventions differ meaningfully between these threat types.


The threat manifests differently across occupational categories. Employees in roles involving highly structured, repetitive tasks may perceive immediate displacement risk from automation, while those in knowledge-intensive positions may face concerns about AI systems that can process information, recognize patterns, or generate content more rapidly than humans. Even creative professionals—once considered largely immune to automation—now confront AI systems capable of producing artwork, writing, music, and design outputs that challenge human uniqueness in these domains.


Prevalence and Distribution of AI Displacement Concerns


Displacement concerns appear widespread across organizational hierarchies and functional areas. Research conducted across multiple Chinese organizations revealed that AI displacement threats are not isolated to particular industries or worker demographics but rather represent a pervasive concern among employees with varying job roles and organizational tenures (Sun et al., 2025). These findings suggest that even in contexts where actual job losses to AI remain limited, the psychological experience of threat is prevalent and consequential.


Several factors appear to intensify displacement perceptions. Employees in organizations with active AI implementation initiatives naturally exhibit heightened awareness of technological substitution possibilities. The visibility of AI systems—particularly those that directly augment or replace discrete task components—makes displacement threats more psychologically salient than abstract concerns about future technological change. Additionally, organizational communication about AI adoption, whether explicit or implicit, shapes employee perceptions of threat severity and likelihood.


The sectoral distribution of displacement concerns reflects both actual AI deployment patterns and employees' subjective assessments of their role's vulnerability to automation. While manufacturing and routine transaction processing have historically experienced greater automation, contemporary AI capabilities extend displacement concerns to professional services, creative industries, and knowledge work domains previously considered secure from technological substitution.


Organizational and Individual Consequences of AI Displacement Threats

The Paradoxical Impact on Creative Performance


Recent empirical evidence reveals a complex and sometimes counterintuitive relationship between AI displacement threats and employee creativity. Rather than uniformly suppressing creative performance, displacement threats can actually enhance creative output when employees possess high intrinsic motivation and receive adequate supervisory support (Sun et al., 2025).


This paradoxical effect operates through what researchers term "promotion-focused engagement." When employees perceive AI as a competitive threat, those with strong intrinsic motivation—who find inherent satisfaction and meaning in their work—may respond by intensifying their creative efforts to demonstrate distinctive human value. This response reflects a fundamental psychological drive to differentiate oneself from technological substitutes by emphasizing uniquely human capabilities: novel problem formulation, conceptual flexibility, contextual judgment, and integrative thinking.


The empirical evidence for this relationship emerged from multiple complementary studies. Laboratory experiments demonstrated that participants primed with AI displacement scenarios subsequently generated more creative solutions to open-ended problems, but only when they reported high intrinsic motivation for the tasks. Field studies with employees across various Chinese organizations confirmed these patterns in naturalistic work settings, showing that the positive relationship between displacement threat and supervisor-rated creativity strengthened significantly as employee intrinsic motivation increased.


However, this creative boost is neither automatic nor universal. For employees with low intrinsic motivation—those primarily driven by external rewards or compliance rather than genuine interest in their work—displacement threats appear to have neutral or potentially negative effects on creativity. These employees may lack the psychological resources or inclination to respond to threats with enhanced creative effort, instead potentially withdrawing or focusing on more routine protective strategies.


Individual Wellbeing and Psychological Dynamics


While displacement threats may catalyze creativity under specific conditions, the broader psychological toll warrants careful attention. The experience of existential workplace threat naturally generates stress, anxiety, and uncertainty that can erode psychological wellbeing even when performance remains stable. Employees facing sustained displacement concerns may experience:


Cognitive burden: Constant evaluation of one's skills against advancing technological capabilities consumes mental resources that might otherwise support creative thinking, learning, or collaboration. This evaluative vigilance can become psychologically exhausting over time.


Identity challenges: For employees whose professional identities center on specialized expertise or craft mastery, AI systems that replicate these capabilities threaten fundamental aspects of self-concept and meaning-making. The question shifts from "what do I do?" to "what can I do that machines cannot?"—a more existential concern.


Motivational complexity: The research demonstrates that intrinsic motivation moderates creative responses to displacement threats, but the threats themselves may gradually erode intrinsic motivation for some employees. Constant comparison to AI performance standards could shift motivational orientation from inherent task satisfaction toward instrumental demonstration of human superiority.


Relationship strain: Displacement concerns may alter workplace relationships as employees become more competitive about demonstrating individual value or more cautious about sharing knowledge that might be codified into AI systems.


These psychological dynamics create a delicate organizational challenge: harnessing the potential creative benefits of moderate displacement threat while preventing the erosion of longer-term wellbeing and engagement.


Evidence-Based Organizational Responses

Table 1: Organizational Resilience and AI Displacement Strategies

Strategy 1: Intrinsic Motivation Enhancement Through Work Design
Description: Redesigning job roles to emphasize task significance, variety, and human judgment in the face of AI.
Key implementation actions: Highlighting work impact, preserving autonomy over methods, and positioning employees as "context integrators" or quality arbiters.
Impact on employee creativity: Functions as a crucial moderator; high intrinsic motivation leads to "promotion-focused engagement" in which threats catalyze creative effort.
Required supervisory support: High; requires supervisors to redesign roles and provide feedback on human-superior dimensions such as ethics.
Target psychological outcome: Intrinsic motivation.

Strategy 2: Supervisory Support Framework
Description: A multi-dimensional support system encompassing instrumental assistance and emotional validation to buffer the stress of AI displacement.
Key implementation actions: Active listening and validation of anxieties, provision of tools and resources for experimentation, and protection of employee autonomy.
Impact on employee creativity: Acts as a critical moderator that enables the paradoxical creative boost; without it, creative benefits are minimal regardless of motivation.
Required supervisory support: High; requires supervisors to provide both practical resources and emotional empathy to create psychological scaffolding.
Target psychological outcome: Psychological safety.

Strategy 3: Transparent and Developmentally-Oriented Communication
Description: A communication strategy focusing on honesty, early dialogue, and framing AI as augmentation rather than replacement.
Key implementation actions: Initiating dialogue before implementation, specifying exact task impacts, and establishing bi-directional feedback channels.
Impact on employee creativity: Channels potential anxiety into productive outcomes by reducing abstract uncertainty and allowing for mental preparation.
Required supervisory support: Moderate; requires leaders to be honest about changes and maintain open forums for employee questions.
Target psychological outcome: Reduced uncertainty and trust.

Strategy 4: Capability Development and Learning Infrastructure
Description: Systematic investment in skills that are complementary to AI, focusing on uniquely human strengths.
Key implementation actions: Prioritizing training in ethical judgment and complex problem framing, and providing just-in-time learning platforms.
Impact on employee creativity: Increases capacity to add value in ways AI cannot replicate, thereby fostering creative synthesis.
Required supervisory support: Moderate; supervisors must allocate time and connect development to immediate work contexts.
Target psychological outcome: Creative efficacy and agency.

Strategy 5: Distributed Decision-Making and Employee Voice
Description: Governance structures that include employees in the AI implementation process.
Key implementation actions: Establishing cross-functional implementation working groups and ethical review processes with employee input.
Impact on employee creativity: Transforms employees from passive subjects into active architects, encouraging innovative human-AI collaboration models.
Required supervisory support: Moderate; management must relinquish unilateral control and facilitate participation.
Target psychological outcome: Sense of agency and respect.

Strategy 6: Recalibrated Psychological Contract
Description: Shifting the employer-employee agreement from job security to employability and shared value creation.
Key implementation actions: Committing to continuous skill updating and sharing productivity gains (e.g., via wage increases or reduced hours).
Impact on employee creativity: Reduces the existential threat of displacement by providing a clear path for future value contribution.
Required supervisory support: Low to moderate; primarily requires organizational policy changes and leadership commitment to transparency.
Target psychological outcome: Employability security.

Supervisory Support as a Critical Buffer


Research demonstrates that supervisory support plays a decisive role in determining whether AI displacement threats enhance or undermine employee creativity (Sun et al., 2025). Supervisor support encompasses both instrumental assistance—providing resources, removing obstacles, and offering practical help with work challenges—and emotional support—demonstrating care for employees' wellbeing, listening empathetically, and validating their concerns.


The empirical evidence reveals a three-way interaction among displacement threat, intrinsic motivation, and supervisor support. Employees with high intrinsic motivation and strong supervisor support showed the most pronounced creative gains when facing displacement threats. Conversely, those lacking supervisor support showed minimal creative benefits regardless of their motivation levels, suggesting that supportive leadership provides essential psychological scaffolding for productive threat responses.


Supervisors can provide effective support through several evidence-informed approaches:


Active listening and validation: Rather than dismissing displacement concerns as unfounded or premature, effective supervisors acknowledge the legitimacy of employees' anxieties while helping them develop constructive response strategies. This validation doesn't require agreeing that jobs will disappear, but rather recognizing that the uncertainty itself is genuine and worthy of attention.


Resource provision: Supervisors who allocate time, tools, and information to support employees' skill development and creative experimentation help transform abstract concerns into concrete development pathways. This might include protecting time for learning new capabilities, facilitating access to emerging technologies, or connecting employees with developmental opportunities.


Psychological safety creation: Employees need to feel safe taking creative risks and potentially failing as they explore ways to differentiate their contributions from AI capabilities. Supervisors who explicitly encourage experimentation and frame failures as learning opportunities create environments where displacement threats can motivate rather than paralyze.


Autonomy preservation: Research on intrinsic motivation consistently demonstrates that autonomy—employees' sense of control over their work methods and decisions—is fundamental to sustained engagement (Amabile, 1996). Supervisors who maintain or expand employee autonomy even as AI systems are implemented help preserve the intrinsic motivation that enables creative responses to displacement threats.


Organizations can support supervisors in providing this critical buffering function by ensuring they have adequate time for developmental conversations, training in psychological safety principles, and protection from excessive span-of-control burdens that prevent meaningful employee interaction.


Transparent and Developmentally-Oriented Communication


How organizations communicate about AI implementation fundamentally shapes employees' psychological experience and behavioral responses. Communication strategies that emphasize transparency, developmental framing, and realistic timelines can help channel displacement concerns toward productive outcomes rather than allowing them to fester into anxiety and disengagement.


Effective communication approaches include:


Early and honest dialogue: Organizations that initiate conversations about AI adoption before implementations begin, rather than announcing changes as faits accomplis, give employees time to process implications and develop adaptive strategies. This early engagement respects employees' need to mentally prepare for change and signals organizational respect for their concerns.


Specific rather than abstract information: Vague statements about "digital transformation" or "AI integration" often amplify rather than reduce uncertainty. More effective approaches specify which tasks or processes will involve AI, what human roles will remain or evolve, and what timelines govern these changes. This specificity helps employees assess personal implications more accurately.


Developmental framing: Communication that emphasizes AI as augmentation rather than replacement—highlighting how human-AI collaboration might enhance rather than eliminate human contributions—can shift psychological orientation from threat to opportunity. This requires genuine substantiation rather than empty rhetoric; employees distinguish quickly between authentic augmentation models and rhetorical window-dressing for workforce reduction.


Bi-directional channels: One-way announcements prove less effective than dialogue-based approaches that create space for employee questions, concerns, and input. Organizations might establish forums, working groups, or regular discussion sessions where employees can voice uncertainties and contribute to implementation planning.


Some organizations have implemented AI literacy programs that help employees understand both the capabilities and limitations of AI systems. By demystifying the technology, these programs can reduce the psychological distance between employees and AI, making the technology feel less like an inscrutable competitor and more like a tool that requires human judgment to deploy effectively.

Capability Development and Learning Infrastructure


Perhaps the most substantive organizational response to displacement concerns involves systematic investment in capability development that helps employees build skills complementary to rather than redundant with AI capabilities. This approach addresses displacement threats not merely psychologically but materially, by expanding employees' capacity to add value in ways that AI systems cannot readily replicate.


Organizations can structure capability development initiatives around several principles:


Emphasis on distinctively human capabilities: While technical skills remain valuable, development programs might prioritize capabilities that leverage uniquely human strengths—complex problem framing, ethical judgment, contextual interpretation, empathetic communication, integrative thinking across domains, and creative synthesis. Research on creativity consistently demonstrates that these capabilities involve cognitive processes that current AI systems struggle to replicate (Zhou & George, 2001).


Just-in-time learning opportunities: Rather than front-loading all development into formal training programs, organizations can create infrastructure for continuous, embedded learning—enabling employees to acquire new capabilities as needs emerge. This might include access to online learning platforms, peer learning networks, or structured experimentation time.


Cross-functional exposure: AI displacement often threatens narrow functional specialists whose expertise might be codified into algorithms. Development programs that broaden employees' exposure across functions, industries, or disciplines can help build adaptive capacity and reduce vulnerability to role-specific automation.


Experimentation support: Some organizations allocate specific time or resources for employees to experiment with emerging technologies, develop new approaches, or pursue innovative projects. This protected experimentation space serves multiple purposes: building technical fluency with AI tools, identifying novel human-AI collaboration models, and maintaining employees' sense of agency and creative efficacy.


The evidence suggests that capability development proves most effective when it connects clearly to employees' current roles while expanding their future options. Programs that feel disconnected from immediate work contexts or that seem to prepare employees primarily for jobs elsewhere may inadvertently signal that current roles indeed face displacement, potentially amplifying rather than mitigating threat perceptions.


Intrinsic Motivation Enhancement Through Work Design


Given that intrinsic motivation emerges as a critical moderator of creative responses to displacement threats (Sun et al., 2025), organizations have strong incentives to design work in ways that cultivate and sustain intrinsic motivation. This represents a more fundamental organizational intervention than communication or training programs alone, requiring attention to the structural characteristics of jobs and work environments.


Research on intrinsic motivation identifies several key work design principles:


Task significance and meaning: Employees experience greater intrinsic motivation when they understand how their work contributes to meaningful outcomes—whether serving customers, advancing organizational missions, or contributing to broader social purposes (Amabile, 1996). Organizations can enhance task significance by making these connections more visible and explicit, helping employees see the impact of their contributions.


Autonomy in methods and timing: The freedom to determine how and when to accomplish work objectives consistently predicts intrinsic motivation across diverse contexts. Even when AI systems increasingly specify what tasks need completion, organizations can preserve human autonomy over implementation approaches, sequencing decisions, and quality standards.


Skill variety and challenge: Work that draws on diverse skills and presents appropriate challenges—difficult enough to engage but not so overwhelming as to frustrate—tends to elicit intrinsic motivation. As AI systems assume more routine components of jobs, organizations can redesign remaining human work to emphasize this skill variety and appropriate challenge level.


Feedback and learning: Timely, specific feedback about work quality and impact supports both competence development and intrinsic motivation. Organizations might design feedback systems that highlight dimensions of work quality where human judgment remains superior to algorithmic assessment—contextual appropriateness, stakeholder relationship quality, creative problem-solving, or ethical reasoning.


Some organizations have reconceptualized job roles to emphasize stewardship, curation, and judgment functions that complement AI capabilities rather than competing directly. For instance, rather than viewing employees as production units that AI might eventually replace, these organizations position employees as quality arbiters, context integrators, or exception handlers whose judgment determines when and how AI outputs get utilized.


Distributed Decision-Making and Employee Voice


Beyond individual work design, organizational governance structures influence how employees experience and respond to displacement threats. Decision-making processes that include employee voice in AI implementation choices can transform employees from passive subjects of technological change into active architects of human-AI collaboration models.


Organizations can structure meaningful employee participation through:


Implementation working groups: Rather than relegating AI adoption decisions exclusively to executives or IT specialists, some organizations establish cross-functional working groups that include employees from affected roles. These groups might pilot AI tools, evaluate human-AI workflow designs, or identify implementation challenges before broad deployment.


Ethical review processes: As AI systems assume decision-making functions with ethical implications—hiring recommendations, performance evaluations, resource allocations—organizations can establish review processes that include employee input on appropriate use boundaries and override protocols.


Continuous feedback mechanisms: Beyond one-time consultation, organizations can build ongoing mechanisms for employees to report AI system limitations, propose improvements, or flag contexts where human judgment should supersede algorithmic recommendations. This continuous feedback loop positions employees as essential collaborators in AI system refinement rather than potential victims of replacement.


The psychological benefits of such participatory approaches extend beyond the specific implementation decisions made. Employee involvement signals organizational respect, preserves a sense of agency during technological change, and can help align AI deployment with actual work realities rather than abstract optimization models.


Building Long-Term Human-AI Collaboration Capacity

Recalibrating the Psychological Contract


The traditional psychological contract between employers and employees often featured implicit expectations of job security in exchange for competent performance and organizational loyalty. AI displacement threatens this contract fundamentally, requiring explicit renegotiation of mutual expectations and commitments.


Organizations navigating this transition might consider several elements of a revised psychological contract:


Employability security over job security: Rather than promising specific role permanence—a commitment that technological change renders increasingly untenable—organizations might commit to maintaining employees' broader employability through continuous capability development, skill updating, and career transition support. This shift requires substantive investment, not merely rhetorical reframing.


Transparency as a core commitment: The revised contract might establish mutual expectations of honest, early communication about technological changes, potential role impacts, and transition timelines. This transparency allows employees to make informed decisions about their career investments and reduces the psychological toll of uncertainty.


Collaborative adaptation: The new contract might explicitly frame AI integration as a joint problem-solving challenge rather than a unilateral management decision imposed on employees. This collaborative orientation acknowledges that effective human-AI workflows often emerge through experimentation and iteration rather than top-down design.


Shared value creation: As AI systems generate productivity gains, the psychological contract might address how these benefits get distributed—whether through wage increases, reduced work hours, enhanced development investments, or other mechanisms that allow employees to share in the value created rather than simply experiencing displacement risk.


Making these implicit expectations explicit—through written policies, leadership communication, or structured dialogue—can reduce ambiguity and help employees assess whether they want to invest their creative energies in the organization's future or seek alternative employment.


Cultivating Organizational Learning Systems


Organizations that successfully navigate ongoing AI evolution likely possess robust learning systems that enable continuous adaptation as technologies and competitive landscapes shift. These learning systems extend beyond individual capability development to encompass organizational routines for sensing environmental changes, experimenting with responses, and integrating lessons into practice.


Key characteristics of effective organizational learning systems include:


Psychological safety for experimentation: Learning requires trying approaches that might fail. Organizations need cultural norms and structural protections that encourage bounded experimentation with human-AI collaboration models without imposing severe penalties for unsuccessful attempts.


Knowledge sharing infrastructure: As employees across the organization experiment with AI tools and develop effective collaboration practices, mechanisms for sharing these discoveries—communities of practice, internal case libraries, regular sharing forums—can accelerate collective learning and prevent redundant experimentation.


Reflective practices: Organizations might establish regular reflection sessions where teams assess what's working and what isn't in their human-AI workflows, identify emerging challenges before they become crises, and adjust approaches iteratively. This systematic reflection converts experience into learning more reliably than ad hoc observation alone.


Integration of diverse perspectives: Effective organizational learning draws on insights from multiple levels and functions. Senior leadership perspectives on strategic AI direction, middle management insights on implementation challenges, and frontline employee experiences with actual AI tool usage all contribute essential information. Learning systems need mechanisms for integrating these diverse perspectives rather than privileging any single viewpoint.


Organizations with strong learning systems may find that employees experience AI displacement threats as manageable challenges within a supportive developmental context rather than existential crises within static organizational structures.


Purpose, Identity, and Belonging in AI-Augmented Organizations


Beyond structural and procedural adaptations, organizations facing AI integration must attend to fundamental questions of purpose, identity, and belonging that give work meaning and sustain employee commitment through uncertainty.


Research on workplace motivation demonstrates that employees experience greater wellbeing and performance when they perceive their work as contributing to purposes beyond profit maximization—serving customers, advancing knowledge, improving communities, or solving meaningful problems (Amabile, 1996). AI integration provides opportunities to reinforce these purpose connections by potentially freeing employees from routine tasks to focus on more meaningful, human-centered work.


Organizations might strengthen purpose connections by:


Emphasizing human-centered outcomes: Rather than framing AI primarily through efficiency metrics—tasks automated, costs reduced, processing speeds increased—organizations can emphasize how AI enables better service to human stakeholders. This might mean more personalized customer experiences, more thoughtful problem-solving, or more ethical decision-making that balances efficiency with human values.


Supporting professional identity evolution: As job roles evolve through AI augmentation, employees' professional identities must likewise evolve. Organizations can support this evolution by creating space for employees to explore emerging aspects of their professional selves—perhaps as AI system stewards, human-AI collaboration designers, or ethical arbiters of algorithmic outputs—rather than clinging to identities tied to tasks that AI now performs.


Maintaining relational connections: Belonging within workplace communities provides psychological resources for navigating change. As AI systems mediate more workplace interactions, organizations might intentionally preserve spaces for human connection—whether through team rituals, collaborative problem-solving sessions, or communities of practice that bind people through shared challenges rather than algorithmic assignment.


These meaning-making dimensions may prove as consequential as structural interventions in determining whether employees experience AI integration as enriching or threatening.


Conclusion

The integration of artificial intelligence into organizational workflows presents profound challenges for employee wellbeing, organizational culture, and innovation capacity. Yet recent empirical research demonstrates that these challenges need not inevitably produce negative outcomes. Under specific conditions—particularly when employees possess strong intrinsic motivation and receive adequate supervisory support—AI displacement threats can paradoxically enhance creative performance as employees strive to demonstrate distinctive human value.


Organizations seeking to navigate this transition successfully can draw on several evidence-based insights. First, supervisory support emerges as a critical buffer that helps employees respond productively rather than defensively to displacement concerns. Investing in supervisor capability to provide both instrumental and emotional support represents a high-leverage intervention. Second, the cultivation of intrinsic motivation through thoughtful work design, meaningful purpose, and employee autonomy creates psychological conditions where displacement threats motivate creativity rather than triggering withdrawal. Third, transparent communication, substantial capability development, and inclusive decision-making help transform employees from passive subjects of technological change into active architects of human-AI collaboration.


The path forward requires organizations to move beyond simplistic narratives of either technological optimism or displacement pessimism toward more nuanced approaches that acknowledge both the genuine uncertainties AI creates and the organizational choices that shape how these uncertainties affect employees. AI displacement is neither automatically beneficial nor inevitably harmful; its effects depend fundamentally on the organizational contexts, leadership practices, and cultural norms surrounding its implementation.


Perhaps most importantly, the evidence suggests that organizations have greater agency in shaping these outcomes than often acknowledged. While technological capabilities evolve rapidly and market pressures drive adoption, organizational choices about communication approaches, work design, support systems, and value distribution remain consequential. Leaders who recognize this agency can build organizations where technological advancement and human flourishing prove complementary rather than contradictory—where AI augments rather than replaces human creativity, and where employees experience change as developmental opportunity rather than existential threat.


Research Infographic



References

  1. Amabile, T. M. (1996). Creativity in context: Update to the social psychology of creativity. Westview Press.

  2. Sun, S., Li, Z. A., Foo, M.-D., Zhou, J., & Lu, J. G. (2025). Survival of the creative? How perceived AI displacement threats enhance employee creativity. Journal of Applied Psychology, 110(1), 27–47.

  3. Zhou, J., & George, J. M. (2001). When job dissatisfaction leads to creativity: Encouraging the expression of voice. Academy of Management Journal, 44(4), 682–696.

Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.

Suggested Citation: Westover, J. H. (2026). Navigating AI Displacement Threats: Evidence-Based Strategies for Organizational Resilience and Employee Creativity. Human Capital Leadership Review, 32(2). doi.org/10.70175/hclreview.2020.32.2.7

Human Capital Leadership Review

eISSN 2693-9452 (online)

