HCL Review

AI Displacement Risk in the Labor Market: Evidence, Exposure, and the Imperative for Adaptive Organizational Strategy



Abstract: Artificial intelligence—particularly generative large language models (LLMs)—presents organizations with a transformative technology whose labor market implications remain nascent yet consequential. This article synthesizes emerging empirical research on AI-driven job displacement and augmentation, focusing on the gap between theoretical automation potential and observed real-world implementation. Drawing on recent studies that combine task-level exposure metrics with employment and usage data, it examines which occupations face greatest risk, how demographic characteristics intersect with exposure, and the limited but suggestive early evidence of labor market disruption. The article then proposes evidence-based organizational responses—ranging from transparent workforce planning and skills investment to redesigned roles and adaptive governance—alongside long-term capability-building strategies. By grounding recommendations in validated research, this work offers leaders a framework for navigating AI's labor implications responsibly, mitigating harm, and preparing for an accelerating pace of workplace transformation.

The introduction of ChatGPT in late 2022 marked an inflection point in public awareness of artificial intelligence capabilities. Within months, millions of workers began experimenting with generative AI tools for tasks ranging from code generation to customer service inquiries. Yet beneath this experimentation lies a deeper organizational challenge: determining which roles face meaningful displacement risk, what early warning signs merit attention, and how to balance productivity gains with workforce stability.


The stakes are significant. Unlike prior waves of automation that primarily affected manual or routine-cognitive work, generative AI exhibits capabilities across a broad spectrum of knowledge-based tasks—writing, analysis, summarization, and even creative production (Eloundou et al., 2023). Early forecasts suggested that substantial portions of the workforce could see task-level exposure, raising concerns about accelerated job displacement. However, the track record of past forecasting efforts—from offshoring predictions to employment projections—counsels humility. Nearly two decades ago, prominent analyses identified roughly a quarter of U.S. jobs as vulnerable to offshoring; most of those roles subsequently maintained healthy growth (Blinder et al., 2009; Ozimek, 2019). Government employment forecasts, while directionally informative, have added limited predictive value beyond simple trend extrapolation (Massenkoff, 2025). Even retrospective causal analyses of major disruptions—industrial robots, trade shocks—remain contested (Acemoglu & Restrepo, 2020; Autor et al., 2013).


This article adopts a more grounded approach: it reviews emerging empirical frameworks that combine theoretical AI capability measures with observed real-world usage patterns, then examines early labor market data for signs of displacement. As of early 2026, the evidence suggests limited aggregate impact on unemployment, though suggestive signals appear in hiring patterns for younger workers in highly exposed occupations. This finding underscores the importance of proactive organizational strategies that anticipate gradual, heterogeneous effects rather than sudden, uniform disruption. The path forward demands transparent communication, targeted capability-building, and governance structures that can adapt as AI adoption deepens.


The AI Exposure Landscape


Defining Exposure: Capability vs. Deployment


Understanding AI's labor market impact begins with clarifying what exposure means. Most contemporary frameworks rely on task-based analysis: AI may automate certain components of a job (e.g., drafting emails, generating code snippets, summarizing reports) while leaving others untouched (e.g., in-person negotiation, physical equipment operation, ethical judgment). This granularity matters because partial task automation does not necessarily translate to wholesale job elimination; it may instead reshape job content or amplify productivity.


Two conceptual anchors guide measurement. First, theoretical capability assesses whether current AI technology could automate or significantly accelerate a task. For example, Eloundou et al. (2023) rated O*NET tasks on whether a large language model could reduce task duration by at least 50%, either independently or with additional software tools. Their metric assigned scores of 1.0 for tasks feasible with an LLM alone, 0.5 for tasks requiring LLM-based tools, and 0 otherwise. This approach revealed that a substantial share of tasks—particularly in knowledge work—appeared feasible, at least in principle.


Second, observed deployment tracks whether that theoretical capability manifests in actual usage. Recent analyses have combined Eloundou et al.'s capability ratings with proprietary usage data from AI platforms to create a more refined "observed exposure" measure (Massenkoff & McCrory, 2026). This metric weights tasks not only by their technical feasibility but also by evidence of real-world, work-related, and automated (rather than merely augmentative) use. The rationale: a task may be technically automatable, yet legal constraints, data privacy requirements, verification needs, or other organizational frictions may delay or prevent adoption.


Empirical patterns confirm a sizable gap between capability and deployment. While theoretical exposure metrics suggest that 90% or more of tasks in office administration or computer/math occupations could be affected by LLMs, observed deployment covers only a fraction—often one-third or less—of those tasks (Massenkoff & McCrory, 2026). This gap highlights that diffusion lags capability, a dynamic familiar from prior technology waves. It also suggests that near-term displacement may be more modest than worst-case scenarios imply, though the trajectory remains uncertain.
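The capability-deployment gap can be made concrete with a small numerical sketch. The Python example below uses hypothetical task ratings (not the authors' actual dataset) and a simplified aggregation: it contrasts a theoretical exposure score, built from the 1.0 / 0.5 / 0 capability ratings described above, with an observed exposure score that counts only tasks showing evidence of real-world, automated use.

```python
# Illustrative sketch with hypothetical task data, not the authors' dataset.
# Each task is (capability_rating, observed_in_real_automated_use):
#   1.0 = feasible with an LLM alone, 0.5 = feasible with LLM-based tools,
#   0.0 = not feasible with current LLMs.
tasks = [
    (1.0, True),    # e.g., drafting routine correspondence; real automated use seen
    (1.0, False),   # technically feasible, but no observed work-related use
    (0.5, True),    # feasible with LLM-based tooling; observed in use
    (0.5, False),   # feasible with tooling, but adoption frictions remain
    (0.0, False),   # e.g., in-person negotiation
]

def theoretical_exposure(tasks):
    """Share of tasks rated at least partially automatable (rating > 0)."""
    return sum(1 for rating, _ in tasks if rating > 0) / len(tasks)

def observed_exposure(tasks):
    """Share of tasks with evidence of real-world, automated use."""
    return sum(1 for rating, used in tasks if rating > 0 and used) / len(tasks)

print(theoretical_exposure(tasks))  # 0.8 -> broad theoretical coverage
print(observed_exposure(tasks))     # 0.4 -> deployment covers only half of that
```

In this toy occupation, four of five tasks look automatable in principle, but only two show observed automated use—the same qualitative pattern as the office-administration example above, where deployment covers a third or less of theoretically exposed tasks.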


State of Practice: Occupations and Demographics


Which workers face the highest observed exposure? Recent data identify computer programmers, customer service representatives, data entry keyers, and financial analysts among the most exposed occupations, with task coverage rates ranging from approximately 50% to 75% (Massenkoff & McCrory, 2026). These roles share common features: they involve substantial text-based or analytical work, often follow structured workflows, and lend themselves to automation through API integrations or scripted processes. Conversely, occupations involving physical presence, manual dexterity, or context-specific interpersonal interaction—cooks, bartenders, healthcare aides, construction workers—show minimal or zero observed exposure.


Demographic patterns reveal a striking profile: workers in the most exposed quartile tend to be older, more educated, higher-paid, and disproportionately female and white or Asian compared to those in unexposed roles (Massenkoff & McCrory, 2026). For example, individuals with graduate degrees represent roughly 4.5% of the unexposed group but 17% of the most exposed, an almost fourfold difference. This demographic composition inverts historical automation narratives, which often centered on lower-wage, less-educated workers in manufacturing or routine clerical roles. The current wave places knowledge workers—traditionally considered insulated from technological displacement—at the frontier of risk.


This shift carries implications for organizational strategy. Highly exposed workers often possess institutional knowledge, specialized skills, and social capital within firms. Displacing or destabilizing these roles without robust transition mechanisms could erode organizational capacity, degrade morale, and trigger legal or reputational challenges. Moreover, the concentration of exposure among higher-paid roles may tempt cost-focused executives to pursue aggressive automation, yet research on complementarities and task interdependencies cautions against simplistic substitution logic (Autor & Thompson, 2025; Gans & Goldfarb, 2025).


Organizational and Individual Consequences of AI Exposure


Organizational Performance Impacts


At the firm level, AI-driven task automation cuts both ways: productivity gains on one hand, potential workforce disruption and coordination challenges on the other. Early evidence on aggregate employment effects remains equivocal. Analyses using U.S. Current Population Survey data through early 2025 find no systematic increase in unemployment rates for workers in the most exposed occupations relative to unexposed workers since late 2022 (Massenkoff & McCrory, 2026). This null result might reflect several dynamics: AI adoption remains partial, displaced workers transition to other roles or industries, or augmentation effects (where AI raises productivity without reducing headcount) dominate substitution effects in the short run.


However, absence of aggregate displacement does not imply absence of change. A closer examination of hiring patterns reveals suggestive evidence of slowdowns, particularly for younger workers. Brynjolfsson et al. (2025) documented a 6–16% decline in employment among workers aged 22–25 in exposed occupations, attributing the decrease primarily to reduced hiring rather than increased layoffs. Using panel data to track job transitions, Massenkoff and McCrory (2026) observed that young workers' monthly job-finding rates into highly exposed occupations fell by roughly 14% relative to pre-ChatGPT baselines, while entry into less-exposed roles remained stable. This pattern suggests firms may be pausing or slowing recruitment into roles where AI tools can shoulder additional workload, even if they are not yet laying off incumbents.
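It is worth being precise about what a "roughly 14%" decline means here: it is a relative decline in the monthly job-finding rate, measured against the pre-period baseline, not a drop of 14 percentage points. A minimal sketch with hypothetical rates (illustrative only, not the study's data) shows the calculation:

```python
# Illustrative arithmetic with hypothetical rates, chosen only to show how a
# relative decline in a monthly job-finding rate is computed. These are NOT
# the study's actual figures.
pre_rate = 0.050   # hypothetical: 5.0% of searching young workers per month
post_rate = 0.043  # hypothetical post-ChatGPT rate

relative_change = (post_rate - pre_rate) / pre_rate
print(f"{relative_change:.0%}")  # prints -14%
```

Under these hypothetical numbers, the rate falls by only 0.7 percentage points, yet the relative decline is 14 percent—the form in which such panel estimates are typically reported.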


The organizational implications extend beyond headcount. Reduced hiring into entry-level or junior roles—often critical pipelines for talent development—can erode future leadership capacity and institutional knowledge transfer. If firms lean on AI to replace new hires, they may inadvertently undermine apprenticeship dynamics and on-the-job learning that have historically sustained skill accumulation. Additionally, uneven adoption across firms can create competitive asymmetries: early adopters may achieve cost advantages, pressuring laggards to follow suit, potentially accelerating displacement without corresponding reallocation mechanisms.


Individual Wellbeing and Workforce Impacts


For individual workers, exposure to AI-driven displacement risk introduces psychological, economic, and career development challenges. Even absent immediate layoffs, the perception of precarity can diminish engagement, erode trust in management, and prompt voluntary exits. Hochschild's concept of "emotional labor" (1983) reminds us that knowledge workers invest not only cognitive effort but also affective commitment; threats to job security fracture that psychological contract.


The distributional consequences also merit attention. While highly exposed workers tend to earn above-average wages, displacement or deskilling could compress income and status. Financial analysts or customer service representatives displaced by AI may face retraining costs, credential devaluation, or geographic relocation. Older workers, who comprise a larger share of exposed roles, may confront age-related barriers to re-employment or reskilling, exacerbating midcareer disruption (Autor et al., 2013). Conversely, some workers may experience augmentation benefits—using AI tools to enhance productivity, reduce drudgery, or access higher-value tasks—yet these gains risk being unevenly distributed, favoring those with complementary skills or organizational support.


The equity dimensions extend to race and gender. The overrepresentation of women and certain racial groups in exposed occupations raises concerns about whether AI adoption could exacerbate existing labor market disparities. While some research suggests women may benefit from flexible, remote-enabled work arrangements facilitated by AI (Tomlinson et al., 2025), displacement effects could counteract those gains if firms fail to provide transition support or inclusive reskilling pathways.


Evidence-Based Organizational Responses


Table 1: Evidence-Based Organizational Strategies for Managing AI Labor Transitions

Capability Building and Reskilling Programs

  • Key recommendations: Implement role-specific AI training, apprenticeship and mentorship programs, certification, and tuition assistance.
  • Underlying principle: On-the-job learning research (Autor, 2015) indicates contextualized training outperforms generic courses.
  • Reported example: AT&T (Future Ready initiative investing $1 billion in employee retraining).
  • Intended outcomes: Retain institutional knowledge, maintain morale, and equip workers with new skills such as prompt engineering and AI output validation.
  • Implementation difficulty (inferred): High; requires substantial financial investment, time allocation, and partnerships with educational institutions.

Operating Model Redesign and Work Reorganization

  • Key recommendations: Pursue task recombination, human-AI teaming, expanded scope for augmented workers, and cross-functional mobility.
  • Underlying principle: Job design theory (Hackman & Oldham, 1976) suggests enriched roles enhance motivation and performance.
  • Reported example: Unilever (talent marketplaces and redesigned roles in marketing and supply chain).
  • Intended outcomes: Optimize workflows for human-AI collaboration, improve output quality, and preserve employment through internal transfers.
  • Implementation difficulty (inferred): Very high; demands a fundamental shift in how the organization functions and significant structural experimentation.

Procedural Justice in Transition Decisions

  • Key recommendations: Use inclusive decision-making, objective data-based criteria, advance notice of changes, and grievance mechanisms.
  • Underlying principle: Procedural justice theory (Lind & Tyler, 1988) holds that fairness in process enhances legitimacy and reduces resistance.
  • Reported example: IBM (skills-first workforce management and internal labor markets).
  • Intended outcomes: Increase perceived fairness of outcomes, reduce resistance to change, and maintain organizational legitimacy.
  • Implementation difficulty (inferred): High; involves structural changes to HR policies and potentially complex negotiations with unions or worker representatives.

Transparent Communication and Workforce Planning

  • Key recommendations: Use scenario-based workforce planning, task-level transparency, regular town halls, and leadership visibility in AI usage.
  • Underlying principle: Organizational change research (Kotter, 1996) emphasizes that frequent communication buffers negative reactions to ambiguity.
  • Reported example: Salesforce (publicly articulated AI strategy emphasizing augmentation and reskilling).
  • Intended outcomes: Reduce employee anxiety, build trust, signal preparedness, and normalize experimentation with AI.
  • Implementation difficulty (inferred): Moderate; requires significant coordination among HR, leadership, and operational units but low capital expense.

Financial and Benefit Supports

  • Key recommendations: Provide enhanced severance, income smoothing and phased retirement for older workers, retraining grants, and job placement services.
  • Underlying principle: Social safety net and corporate responsibility frameworks cushion economic shocks and preserve dignity.
  • Reported example: General Motors (2019 UAW negotiations including retraining funds and health coverage extensions).
  • Intended outcomes: Cushion economic shocks for displaced workers, reduce litigation risk, and protect organizational reputation.
  • Implementation difficulty (inferred): Moderate to high; depends primarily on available capital and the depth of the company's financial reserves.

Given the emerging evidence and inherent uncertainties, organizations can adopt several proactive strategies to manage AI-related labor transitions responsibly. The following interventions draw on empirical findings, organizational behavior research, and lessons from prior technological disruptions.


Transparent Communication and Workforce Planning


Ambiguity breeds anxiety. When employees perceive AI as an opaque threat, trust erodes and productivity suffers. Research on organizational change emphasizes that transparent, frequent communication—clarifying strategic intent, acknowledging uncertainties, and soliciting input—can buffer negative reactions (Kotter, 1996). Leaders should articulate why AI is being adopted, which tasks are targeted, and how the organization plans to support affected workers.


Key approaches include:


  • Scenario-based workforce planning: Rather than treating AI adoption as a binary decision, model multiple scenarios (slow diffusion, rapid scaling, regulatory intervention) and develop contingency plans for each. Share high-level findings with employees to signal preparedness without creating alarm.

  • Task-level transparency: Map job roles to constituent tasks, identify which are candidates for automation versus augmentation, and communicate these distinctions. Clarify that partial task automation does not equate to full job elimination.

  • Regular town halls and feedback loops: Establish forums where employees can voice concerns, ask questions, and propose alternatives. Use surveys or anonymous channels to gauge sentiment and adjust strategies accordingly.

  • Leadership visibility: Senior executives should model AI tool usage themselves, demonstrating both benefits and limitations. This reduces perceptions of a "top-down" imposition and normalizes experimentation.


Salesforce has publicly articulated its AI strategy through regular updates to employees and external stakeholders, emphasizing augmentation over replacement and committing to reskilling initiatives. While the company has not published granular labor impact data, its communication approach exemplifies proactive transparency.


Procedural Justice in Transition Decisions


When workforce reductions or role redesigns become unavoidable, how decisions are made matters as much as what is decided. Procedural justice theory (Lind & Tyler, 1988) holds that individuals assess fairness not only by outcomes but by the processes leading to those outcomes. Fair procedures—characterized by voice, consistency, accuracy, and correctability—enhance perceived legitimacy and reduce resistance.


Effective practices include:


  • Inclusive decision-making: Involve affected employees or their representatives in task redesign discussions. Solicit input on which AI tools to pilot, what training is needed, and how roles might evolve.

  • Objective criteria: Base decisions on transparent, task-level data rather than subjective judgments. Use exposure metrics (like observed deployment rates) to identify roles for review, but allow workers to contest or contextualize those assessments.

  • Advance notice and phased implementation: Avoid abrupt layoffs. Provide several months' notice, offer voluntary separation packages, and phase in AI tools to allow adjustment and learning.

  • Grievance mechanisms: Establish clear processes for workers to challenge adverse decisions, whether through internal review boards, ombudspersons, or union negotiation.


IBM has historically emphasized "skills-first" workforce management, linking AI adoption to targeted reskilling rather than blanket reductions. The company's use of internal labor markets and transparent skill assessments offers a procedural justice model, though execution varies by geography and business unit.


Capability Building and Reskilling Programs


If AI automates certain tasks but not entire roles, the resulting jobs may require new skill mixes—such as prompt engineering, AI output validation, or higher-order judgment. Organizations that invest in reskilling can retain institutional knowledge, maintain morale, and avoid the costs of external hiring. Research on training effectiveness indicates that contextualized, on-the-job learning outperforms generic courses (Autor, 2015).


High-impact strategies include:


  • Role-specific AI training: Rather than offering one-size-fits-all modules, tailor training to job families. Teach financial analysts how to validate AI-generated forecasts; train customer service reps on conversational AI oversight.

  • Apprenticeship and mentorship: Pair junior employees with experienced colleagues to navigate AI tool adoption collaboratively. This preserves knowledge transfer while building new competencies.

  • Certification and credentialing: Partner with educational institutions or industry bodies to offer recognized credentials in AI-related skills. Internal certifications can also signal competence and motivate participation.

  • Tuition assistance and time allocation: Provide financial support for external coursework and allocate work hours for learning. Expecting employees to reskill on personal time signals low organizational commitment.

  • Pilot programs with feedback cycles: Launch small-scale reskilling initiatives, measure outcomes (task performance, employee satisfaction), and iterate before scaling.


AT&T's Future Ready initiative, launched in the mid-2010s to address network technology shifts, invested over $1 billion in employee retraining. While predating generative AI, the program's scale and structure—online learning platforms, tuition reimbursement, career counseling—offer a blueprint for reskilling efforts in the AI era.


Operating Model Redesign and Work Reorganization


Automation of specific tasks can enable new organizational structures. Rather than simply reducing headcount, firms might reorganize workflows, redistribute responsibilities, or create hybrid roles that blend human judgment with AI execution. Research on job design suggests that enriched roles—those with autonomy, variety, and feedback—enhance motivation and performance (Hackman & Oldham, 1976).


Design principles include:


  • Task recombination: Identify tasks freed up by automation and bundle them into meaningful roles. For example, if AI handles routine data entry, former data entry workers might transition to quality assurance, exception handling, or customer escalation.

  • Human-AI teaming: Design workflows where AI handles initial drafts or analyses, and humans provide oversight, contextualization, and strategic decisions. This collaborative model can improve output quality while preserving employment.

  • Expanded scope for augmented workers: Enable employees using AI to take on more complex or higher-value tasks. A paralegal using AI for document review might assume more client-facing advisory work.

  • Cross-functional mobility: Facilitate internal transfers to growing roles or departments. Use exposure metrics to identify "safe harbor" occupations and proactively match at-risk workers with opportunities.

  • Experimentation and iteration: Treat work redesign as an ongoing process. Pilot new structures in one unit, gather data on productivity and satisfaction, and adjust before broader rollout.


Unilever has experimented with AI-driven recruitment and workforce analytics, using insights to redesign roles in marketing and supply chain functions. The company emphasizes "talent marketplaces" where employees can signal interest in new projects, potentially smoothing transitions as roles evolve.


Financial and Benefit Supports for Displaced Workers


When displacement occurs despite mitigation efforts, robust safety nets can cushion economic shocks and preserve dignity. Employer-provided transition assistance—beyond statutory minimums—signals good faith and can reduce litigation or reputational risks.


Comprehensive supports include:


  • Enhanced severance packages: Offer several months of salary continuation, extended health benefits, and outplacement services. Severance tied to tenure acknowledges long-term contributions.

  • Income smoothing and phased retirement: For older workers, structured early retirement with bridge income until pension eligibility can reduce hardship. Phased arrangements—reduced hours with prorated pay—allow gradual transitions.

  • Retraining grants: Provide lump-sum grants or tuition vouchers for external education. Partner with local community colleges or online platforms to ensure accessibility.

  • Job placement services: Contract with career counseling firms to assist with resume writing, interview preparation, and job search strategies. Internal referral programs can connect displaced workers with partner firms.

  • Portable benefits and pension enhancements: Where feasible, accelerate vesting or enhance pension contributions for affected workers. In the U.S. context, COBRA extensions and HSA contributions help bridge health coverage gaps.


General Motors' plant closure negotiations with the United Auto Workers in 2019 included retraining funds, extended health coverage, and relocation assistance—demonstrating that large-scale workforce adjustments can incorporate worker protections. While not AI-related, the precedent applies to technology-driven transitions.


Building Long-Term Organizational Resilience and Governance


Beyond immediate responses, organizations must cultivate enduring capabilities to navigate ongoing AI evolution. The following pillars support long-term adaptability.


Recalibrating the Psychological Contract


The traditional employment relationship—stable roles in exchange for loyalty and performance—is shifting. AI-driven automation accelerates this shift by rendering certain skills obsolete while valorizing others. Organizations can proactively renegotiate the psychological contract, emphasizing continuous learning, adaptability, and shared investment in employability (Rousseau, 1995).


Strategic actions include:


  • Articulating a "lifelong learning" compact: Communicate that employment security now hinges on continuous skill development, with the employer committed to providing resources and opportunities.

  • Transparency about role volatility: Normalize the expectation that job content will evolve, and clarify that adaptability—not static expertise—defines success.

  • Mutual investment frameworks: Structure reskilling as a partnership: employees commit time and effort; employers provide funding, time, and recognition.

  • Career lattices vs. ladders: Promote lateral moves and skill diversification rather than purely hierarchical advancement. This prepares workers to pivot as automation reshapes role boundaries.


Distributed Leadership and Governance Structures


AI's labor impacts will vary by function, geography, and occupation. Centralized decision-making risks missing local nuances or alienating frontline workers. Distributed governance—empowering unit leaders, cross-functional committees, and worker representatives—can enhance responsiveness and legitimacy.


Governance models include:


  • AI ethics committees with worker representation: Establish cross-functional bodies that review AI deployment plans, assess labor impacts, and recommend safeguards. Include employee representatives or union officials to ensure workforce voice.

  • Decentralized experimentation with guardrails: Allow business units to pilot AI tools within policy boundaries (e.g., no layoffs without central approval). Share learnings across units to diffuse best practices.

  • Regular impact audits: Conduct quarterly or biannual reviews of AI adoption, measuring metrics like hiring rates, turnover, skill gaps, and employee sentiment. Use findings to adjust strategies.

  • External advisory boards: Engage labor economists, workforce development experts, or community stakeholders to provide independent perspectives and accountability.


Purpose, Belonging, and Organizational Identity


Technological disruption can erode organizational cohesion if employees perceive AI as a threat imposed from above. Leaders can counter this by reinforcing shared purpose, emphasizing human contributions, and fostering inclusive cultures.


Cultural strategies include:


  • Mission reaffirmation: Clarify why the organization exists beyond profit—serving customers, advancing innovation, contributing to society—and position AI as a tool to amplify that mission, not replace people.

  • Celebrating human expertise: Publicly recognize employees' judgment, creativity, and relational skills that AI cannot replicate. Use internal communications to showcase examples of human-AI collaboration.

  • Inclusive forums and rituals: Maintain or expand opportunities for in-person interaction, team-building, and collective problem-solving. Remote work and AI tools can fragment culture; intentional rituals counteract isolation.

  • Equity and belonging initiatives: Ensure that reskilling, promotion, and transition supports are accessible to all demographic groups. Monitor disaggregated data to identify and address disparities.


Conclusion


Artificial intelligence's labor market implications are unfolding in real time, marked by a gap between theoretical automation potential and observed deployment that offers both reassurance and a window for proactive response. Current evidence suggests minimal aggregate unemployment increases among highly exposed workers, yet early signals—particularly slowed hiring of younger employees—warrant attention. Organizations that act now can shape this transition toward outcomes that balance productivity gains with workforce stability and equity.


The path forward requires transparency, procedural fairness, and sustained investment in human capital. Transparent communication demystifies AI and builds trust. Procedural justice ensures that when difficult decisions arise, they are perceived as legitimate. Capability building transforms potential displacement into role evolution. Financial supports cushion shocks for those unable to transition. And long-term governance structures—recalibrated psychological contracts, distributed decision-making, purpose-driven cultures—prepare organizations to navigate ongoing technological change.


No single intervention suffices. The most resilient organizations will deploy integrated strategies, tailored to their industry contexts, workforce demographics, and strategic priorities. They will also recognize that AI's labor impacts are heterogeneous and dynamic, requiring continuous monitoring and adaptation rather than one-time solutions.


As AI capabilities advance and adoption accelerates, the imperatives for responsible management will only intensify. Leaders who treat workforce implications as an afterthought risk morale collapse, talent flight, and reputational damage. Those who engage proactively—grounding decisions in evidence, centering worker voice, and investing in transition supports—can harness AI's potential while preserving the human contributions that remain essential to organizational success. The challenge is formidable, but the opportunity to shape a more equitable and productive future of work is within reach.



References


  1. Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6), 2188–2244.

  2. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.

  3. Autor, D. H., & Thompson, N. (2025). Expertise. NBER Working Paper No. 33941.

  4. Autor, D. H., Dorn, D., & Hanson, G. H. (2013). The China syndrome: Local labor market effects of import competition in the United States. American Economic Review, 103(6), 2121–2168.

  5. Blinder, A. S., Krueger, A. B., & Mas, A. (2009). How many US jobs might be offshorable? World Economics, 10(2), 41–78.

  6. Brynjolfsson, E., Chandar, B., & Chen, R. (2025). Canaries in the coal mine? Six facts about the recent employment effects of artificial intelligence. Digital Economy.

  7. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.

  8. Gans, J. S., & Goldfarb, A. (2025). O-ring automation. NBER Working Paper No. 34639.

  9. Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16(2), 250–279.

  10. Hochschild, A. R. (1983). The managed heart: Commercialization of human feeling. University of California Press.

  11. Kotter, J. P. (1996). Leading change. Harvard Business Review Press.

  12. Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural justice. Plenum Press.

  13. Massenkoff, M. (2025). How predictable is job destruction? Evidence from the Occupational Outlook. Working Paper.

  14. Massenkoff, M., & McCrory, P. (2026). Labor market impacts of AI: A new measure and early evidence. Anthropic.

  15. Ozimek, A. (2019). Overboard on offshore fears. SSRN Working Paper.

  16. Rousseau, D. M. (1995). Psychological contracts in organizations: Understanding written and unwritten agreements. Sage Publications.

  17. Tomlinson, K., Jaffe, S., Wang, W., Counts, S., & Suri, S. (2025). Working with AI: Measuring the applicability of generative AI to occupations. arXiv preprint arXiv:2507.07935.

Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).

Suggested Citation: Westover, J. H. (2026). AI Displacement Risk in the Labor Market: Evidence, Exposure, and the Imperative for Adaptive Organizational Strategy. Human Capital Leadership Review, 33(2). doi.org/10.70175/hclreview.2020.33.2.6

Human Capital Leadership Review

eISSN 2693-9452 (online)
