Making AI Work at Work: How Employee-Centered Implementation Practices Foster Meaningful Work and Performance


Abstract: Artificial intelligence is fundamentally reshaping how work is performed and experienced, raising urgent questions about implementation strategies that support both organizational effectiveness and employee wellbeing. This study examines employee-centered AI implementation (ECAII) practices—characterized by transparent communication, meaningful consultation, and targeted training—as strategic mechanisms for fostering positive outcomes during AI-driven organizational transformation. Structural equation modeling of survey data from 168 Italian knowledge workers actively using AI technologies revealed that ECAII practices directly enhanced job satisfaction and performance while also operating indirectly through work meaningfulness. Moderated mediation analyses further demonstrated that these beneficial effects were significantly stronger among employees with more favorable attitudes toward AI. These findings extend high-involvement management and meaningful work frameworks to AI contexts, highlighting that successful AI adoption depends not merely on technical implementation but on participatory strategies that help employees reconstruct purpose and value in their evolving roles. From a practical standpoint, the research underscores the organizational imperative to treat AI implementation as a human-centered change process rather than a purely technological transition, with clear implications for HR strategy, leadership practices, and workforce development.

The proliferation of artificial intelligence technologies—spanning conversational agents, recommendation engines, machine learning models, and generative AI systems—represents more than incremental technological advancement. Unlike earlier waves of automation that primarily affected routine manual tasks, contemporary AI systems simulate cognitive functions traditionally performed exclusively by humans, fundamentally altering decision-making authority, expertise boundaries, and the interpretive dimensions of knowledge work (Glikson & Woolley, 2020; Tahir et al., 2025). This transformation introduces both opportunities and risks: efficiency gains and augmented decision-making capabilities coexist with potential threats to employee autonomy, professional identity, and perceptions of work significance (Smids et al., 2020; Valtonen et al., 2025).


The nature of AI introduces distinctive challenges beyond those posed by conventional digitalization. AI systems function as "black boxes" whose decision-making processes often remain opaque even to technical experts (Burrell, 2016; Lipton, 2018), creating uncertainty about how outputs are generated and on what basis they should be trusted. This opacity problematizes traditional notions of expertise and authority: when algorithmic recommendations are positioned as more objective or reliable than human judgment, employees may passively defer to automated outputs rather than actively engaging with them (Shin, 2025). Such dynamics can undermine the sense of agency, competence, and contribution that underpin meaningful work experiences.


In this context, human resource practices emerge as critical levers for shaping how AI implementation unfolds within organizations. Research increasingly recognizes that the success of AI adoption depends not solely on technical factors but on organizational strategies that address the human dimensions of technological change (Gama et al., 2022; Vishwakarma & Singh, 2023). González-Romá et al. (2025) recently introduced the construct of Employee-Centered Automation Implementation, conceptualizing organizational practices that actively inform, consult, and involve employees throughout automation adoption processes. This perspective aligns with broader participatory HR approaches emphasizing information sharing, voice mechanisms, and skill development as foundations for positive employee outcomes during organizational transformation (Alqudah et al., 2022; Garmendia et al., 2021).


Building on this framework, the present study examines Employee-Centered AI Implementation (ECAII) practices—defined as organizational strategies encompassing transparent communication, broad consultation, and adequate training during AI adoption—as mechanisms for fostering job satisfaction, performance, and work meaningfulness. Despite growing recognition of organizational practices' importance, empirical research has predominantly focused on individual-level factors such as technology acceptance and personal attitudes (Almeida et al., 2025; Kaya et al., 2024), leaving critical gaps in understanding how organizational strategies interact with individual orientations to shape employee experiences of AI at work.


Addressing these gaps matters for both theoretical and practical reasons. Theoretically, integrating organizational and individual perspectives can advance understanding of the mechanisms and boundary conditions through which ECAII translates into employee outcomes, bridging established HR frameworks with psychological perspectives on AI acceptance. Practically, clarifying how implementation strategies interact with employee attitudes provides actionable guidance for organizations seeking to realize AI's potential while supporting workforce wellbeing and effectiveness. Moreover, this inquiry contributes to distinguishing AI implementation from conventional automation by examining how organizations can help employees reconstruct meaningful work experiences when cognitive tasks, decision authority, and professional identity are being reshaped by algorithmic systems.


The AI Implementation Landscape


Defining AI in Organizational Contexts


Artificial intelligence encompasses computational systems designed to simulate human cognitive functions—including pattern recognition, natural language processing, prediction, and decision-making—through algorithms trained on extensive data (Glikson & Woolley, 2020). In organizational settings, AI manifests through diverse applications: conversational chatbots handling customer inquiries, recommendation systems personalizing user experiences, predictive analytics informing strategic decisions, and generative models producing content ranging from marketing copy to code (Jarrahi, 2018). These technologies differ fundamentally from earlier automation, which primarily replaced manual labor with mechanical processes. AI targets cognitive work, automating or augmenting tasks traditionally requiring human intelligence, judgment, and expertise.


This cognitive dimension introduces distinctive implications for work design and employee experience. When AI systems assume responsibilities previously performed by human judgment—such as credit approval decisions, medical diagnoses, or performance evaluations—they redistribute decision authority and potentially alter perceptions of human contribution (Kellogg et al., 2020). The "black box" nature of many AI systems, particularly deep learning models, compounds these challenges. These algorithms often operate as inscrutable decision-making entities whose internal logic resists human comprehension, creating what Burrell (2016) terms "opacity by design." Employees working alongside such systems may struggle to understand why specific recommendations emerge, limiting their ability to critically evaluate or meaningfully collaborate with AI outputs.


Furthermore, AI reshapes epistemic authority within organizations. When algorithmic recommendations are presented as objective, data-driven insights, they may acquire unwarranted credibility relative to human expertise (Croce, 2018). This shift can lead employees to passively accept automated outputs rather than actively engaging in collaborative sensemaking processes (Shin, 2025). Such dynamics risk undermining employee agency, professional identity, and the sense that one's expertise and judgment remain valuable—factors central to experiencing work as meaningful.


Prevalence and Organizational Drivers


AI adoption has accelerated dramatically across industries in recent years, driven by advances in computational power, data availability, and algorithmic sophistication. Organizations implement AI systems to enhance operational efficiency, improve decision quality, personalize customer experiences, and maintain competitive advantage in increasingly technology-intensive markets (Madanchian et al., 2024). Healthcare providers deploy AI for diagnostic support and treatment recommendations; financial institutions use machine learning for fraud detection and risk assessment; retailers leverage recommendation algorithms to drive sales; and professional services firms experiment with generative AI to augment knowledge work.


However, adoption patterns reveal considerable heterogeneity. Large organizations with substantial technological infrastructure and resources lead implementation efforts, while smaller enterprises face barriers related to cost, technical capacity, and workforce readiness (Olaitan et al., 2021). Knowledge-intensive sectors such as information technology, finance, and professional services exhibit higher AI integration than industries characterized by manual labor or face-to-face service delivery. Even within adopting organizations, AI implementation remains uneven, with technologies typically introduced gradually across functions rather than through comprehensive organizational transformation.


This uneven adoption creates workforce stratification, wherein certain employee segments experience intensive AI integration while others remain relatively unaffected. White-collar knowledge workers—the focus of this study—represent a population experiencing particularly significant AI-related change, as cognitive automation directly impacts their core responsibilities. Understanding how organizational strategies support these employees during AI transitions holds critical importance for workforce wellbeing and organizational effectiveness.


Organizational and Individual Consequences of AI Implementation


Organizational Performance Impacts


AI implementation generates mixed organizational outcomes. On one hand, well-designed AI systems enhance operational efficiency, reduce costs, improve decision accuracy, and enable scalability. Machine learning algorithms process vast datasets to identify patterns invisible to human analysis, recommendation systems personalize customer experiences at scale, and natural language processing automates routine communication tasks (Nurlia et al., 2023). Organizations leveraging AI effectively report productivity gains, improved customer satisfaction, and strengthened competitive positioning.


On the other hand, poorly managed AI implementation can undermine organizational performance through several mechanisms. Technical failures—such as inaccurate predictions, biased outputs, or system brittleness—erode confidence in AI systems and necessitate costly remediation (Beutel et al., 2023). Organizational disruption resulting from inadequate change management generates employee resistance, knowledge loss, and workflow fragmentation. Moreover, overreliance on AI systems may atrophy human capabilities essential for complex judgment, adaptation, and innovation (Jarrahi, 2018).


Emerging evidence suggests that implementation strategy shapes whether AI adoption delivers its promised benefits. Bergquist et al. (2024), examining AI implementation in healthcare radiology, found that stakeholder trust—shaped significantly by organizational transparency, consultation, and support practices—determined whether clinicians embraced AI tools as decision aids or resisted them as threats to professional autonomy. Similarly, Petersson et al. (2022) identified systematic change management, clear leadership, and infrastructure supporting daily practice integration as prerequisites for successful AI adoption in healthcare settings. These findings underscore that technical capability alone proves insufficient; organizational practices shaping how employees experience and engage with AI critically influence implementation success.


Individual Wellbeing and Performance Impacts


At the individual level, AI implementation affects employees through multiple pathways. Positively, AI systems can enhance job quality by automating tedious tasks, providing decision support that augments capabilities, and creating opportunities for more engaging, value-added work (Bankins & Formosa, 2023; Wu et al., 2025). Employees whose roles are complemented rather than replaced by AI may experience greater effectiveness, reduced cognitive load, and enhanced job satisfaction.


However, AI implementation also poses threats to employee wellbeing. Uncertainty regarding job security, anxiety about skill obsolescence, and concerns about algorithmic surveillance generate stress and disengagement (Smids et al., 2020). When AI systems assume responsibilities central to employees' professional identities, individuals may experience diminished self-worth and loss of purpose (Knell & Rüther, 2024). The opacity of AI decision-making can foster feelings of powerlessness, as employees struggle to understand or influence outcomes affecting their work. Research by Valtonen et al. (2025) documented negative associations between AI integration and employee wellbeing indicators, particularly when implementation lacked adequate support structures.


Work meaningfulness emerges as a particularly critical mediating variable. Hackman and Oldham's (1980) Job Characteristics Model identifies meaningfulness as a core psychological state influenced by skill variety, task identity, and task significance. AI implementation can either enrich or impoverish these job characteristics: offloading routine tasks may increase skill variety and task significance if employees redirect efforts toward complex, strategic activities; conversely, AI assumption of cognitively demanding responsibilities may fragment task identity and reduce perceived significance (Bankins & Formosa, 2023). Callari and Puppione (2025) found that employees' perceptions of meaningful work after AI introduction evolved through interplay between organizational arrangements and subjective interpretations of AI's role, highlighting sensemaking as a dynamic process requiring organizational support.
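
For reference, Hackman and Oldham's model combines the characteristics noted above into a single Motivating Potential Score (MPS). The formulation below is the standard one from their work, not an equation estimated in the study reviewed here, but it clarifies why implementation choices matter:

$$\mathrm{MPS} = \frac{\text{skill variety} + \text{task identity} + \text{task significance}}{3} \times \text{autonomy} \times \text{feedback}$$

Because the terms are multiplicative, an AI rollout that erodes autonomy or feedback can sharply reduce motivating potential even when it enriches the averaged characteristics, whereas offloading routine work in ways that preserve autonomy and feedback leaves meaningfulness largely intact.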


Job satisfaction and performance represent downstream outcomes shaped by these psychological experiences. Meta-analytic evidence demonstrates robust associations between meaningful work and job satisfaction (Steger et al., 2012), as well as between meaningfulness and performance-related outcomes including motivation, engagement, and proactive behavior (Allan et al., 2019; Lepisto & Pratt, 2017). When AI implementation threatens meaningfulness—through role ambiguity, autonomy reduction, or contribution devaluation—satisfaction and performance may decline. Conversely, implementation strategies preserving or enhancing meaningfulness likely sustain positive employee outcomes.
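
To make this mediation logic concrete, the sketch below illustrates the kind of bootstrap test of a conditional indirect effect (ECAII to meaningfulness to satisfaction, moderated by attitudes toward AI) that the moderated mediation analyses summarized above describe. It runs on simulated data with hypothetical variable names; it is not the study's code, data, or full structural equation model.

import numpy as np
import statsmodels.api as sm

# Illustrative only: simulated standardized scores with hypothetical names.
# The sample size matches the study's n = 168; nothing else is from the study.
rng = np.random.default_rng(42)
n = 168
ecaii = rng.normal(size=n)                    # employee-centered AI implementation
attitude = rng.normal(size=n)                 # attitudes toward AI (moderator)
meaning = 0.4 * ecaii + 0.2 * ecaii * attitude + rng.normal(size=n)
satisfaction = 0.3 * ecaii + 0.5 * meaning + rng.normal(size=n)

def conditional_indirect(idx, w_values):
    """Conditional indirect effect (a1 + a3*w) * b at each moderator value w."""
    e, a, m, s = ecaii[idx], attitude[idx], meaning[idx], satisfaction[idx]
    # Path a: ECAII and its interaction with attitudes predict meaningfulness
    pa = sm.OLS(m, sm.add_constant(np.column_stack([e, a, e * a]))).fit().params
    # Path b: meaningfulness predicts satisfaction, controlling for ECAII
    b = sm.OLS(s, sm.add_constant(np.column_stack([e, m]))).fit().params[2]
    return [(pa[1] + pa[3] * w) * b for w in w_values]

# Percentile bootstrap of the indirect effect at low and high AI attitudes
boot = np.array([conditional_indirect(rng.integers(0, n, size=n), (-1.0, 1.0))
                 for _ in range(2000)])
for label, draws in zip(("-1 SD", "+1 SD"), boot.T):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"indirect effect at AI attitudes {label}: 95% CI [{lo:.3f}, {hi:.3f}]")

A confidence interval that excludes zero only at higher attitude levels would mirror the pattern reported in the study: ECAII's indirect benefits through meaningfulness are stronger among employees favorably disposed toward AI.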


Evidence-Based Organizational Responses


Table 1: Employee-Centered AI Implementation Practices and Case Study Examples

Transparent Communication
  Key components: Proactive disclosure; technical translation; limitation acknowledgment; role clarification; iterative dialogue
  Research evidence: Shin (2025); Garmendia et al. (2021); Bergquist et al. (2024); LePine et al. (2005)
  Case example: Major technology company (AI-powered coding assistance tools)
  Reported outcomes: Reduced uncertainty; realistic expectations; maintained confidence in human expertise; view of AI as augmentation

Consultation and Voice Mechanisms
  Key components: Design participation; pilot feedback; governance representation; impact assessment; iterative refinement
  Research evidence: Jones et al. (2010); Alqudah et al. (2022); Petersson et al. (2022); Wang and Xu (2017)
  Case example: Healthcare technology manufacturer (AI-assisted medical imaging analysis tools)
  Reported outcomes: Improved system accuracy; clinician trust; mitigated feelings of powerlessness; high satisfaction levels

Capability Building Through Training
  Key components: Technical upskilling; critical AI literacy; workflow integration; peer learning; continuous development
  Research evidence: Manuti et al. (2015); Fletcher (2016); Nurlia et al. (2023); Zhang et al. (2025); Olaitan et al. (2021)
  Case example: Global professional services firm (AI capability building for consultants)
  Reported outcomes: Increased confidence; reduced anxiety; enhanced project efficiency without quality degradation; belief in irreplaceable human judgment

Distributed Leadership and Governance
  Key components: Sensegiving communication; visible engagement; psychological safety; ethical oversight; distributed authority
  Research evidence: Madanchian et al. (2024); Bankins and Formosa (2023); Shweta and Panicker (2025)
  Case example: Multinational consumer products company (insights division)
  Reported outcomes: High adoption rates; sustained employee engagement and satisfaction; constructive role reinterpretation

Psychosocial Support Mechanisms
  Key components: Psychological resources (counseling); peer networks; managerial coaching; job crafting opportunities; career development
  Research evidence: Smids et al. (2020); Zhao et al. (2022); Lysova et al. (2019); Callari and Puppione (2025)
  Case example: Large management consulting firm (comprehensive support infrastructure)
  Reported outcomes: Reduced stress levels; maintained engagement; rediscovery of purpose in strategic work


Transparent Communication Strategies


Information sharing constitutes a foundational element of employee-centered implementation. Organizational communication provides employees with cognitive frameworks for interpreting change, reducing uncertainty, and understanding how their contributions align with strategic objectives (LePine et al., 2005). In AI contexts, transparent communication addresses distinctive challenges: explaining how AI systems function, clarifying intended uses and limitations, specifying roles in human-AI collaboration, and articulating how AI supports rather than replaces human expertise (Shin, 2025).


Research demonstrates communication's pivotal role in technology adoption success. Garmendia et al. (2021), examining retail automation, found that high-quality information disclosure significantly predicted employee adaptation and reduced resistance, with effects strengthening over time as employees internalized change rationales. Bergquist et al. (2024) identified transparency regarding AI capabilities and limitations as critical for building clinician trust in diagnostic support systems. When organizations communicate openly about what AI can and cannot do, employees develop realistic expectations and appropriate calibration of their reliance on algorithmic outputs.


Organizations pursuing transparent AI communication should implement several practices:


  • Proactive disclosure: Share information about AI implementation plans early and continuously rather than limiting communication to post-implementation announcements

  • Technical translation: Explain AI system functioning in accessible language, avoiding both oversimplification and unnecessary jargon

  • Limitation acknowledgment: Explicitly discuss AI system constraints, error risks, and circumstances requiring human judgment

  • Role clarification: Specify how human and AI responsibilities will be distributed, emphasizing complementarity rather than replacement

  • Iterative dialogue: Create channels for ongoing questions and information exchange rather than one-time announcements


A major technology company integrating AI-powered coding assistance tools into developer workflows implemented multi-tiered communication strategies including executive messaging framing AI as augmentation rather than replacement, technical documentation explaining system capabilities, team briefings addressing specific workflow implications, and ongoing office hours where developers could discuss concerns with AI specialists. This comprehensive approach helped developers view AI tools as productivity enhancers while maintaining confidence in their expertise's continued value.


Consultation and Voice Mechanisms


Consultation—actively involving employees in decisions affecting their work—represents a second critical ECAII component. Participatory decision-making yields multiple benefits: leveraging employee insights into workflow realities that managers may overlook (Jones et al., 2010), enhancing perceived fairness through procedural justice (Alqudah et al., 2022), increasing commitment to implementation outcomes through psychological ownership, and providing opportunities for employees to shape change rather than merely react to it.


In technology implementation contexts, consultation proves particularly valuable given the knowledge asymmetries between those designing systems and those using them daily. Employees possess ground-level understanding of task nuances, workflow interdependencies, and contextual factors influencing whether automated solutions will function effectively. Petersson et al. (2022) found that healthcare organizations successfully implementing AI diagnostic tools prioritized clinician input throughout development, testing implementation against clinical workflow realities and adjusting designs based on practitioner feedback.


Consultation also addresses psychological dimensions of technological change. Wang and Xu (2017) demonstrated that ethical leadership practices eliciting employee input significantly enhanced work meaningfulness, with effects strongest among employees valuing self-determination. When organizations solicit perspectives during AI implementation, employees experience greater agency and control, mitigating feelings of powerlessness associated with algorithmic decision-making (Kellogg et al., 2020).


Effective consultation encompasses diverse mechanisms:


  • Design participation: Involve end-users in AI system requirements gathering, design reviews, and usability testing

  • Pilot feedback: Conduct limited rollouts with structured feedback collection before full deployment

  • Governance representation: Include employee representatives in AI oversight committees and policy development

  • Impact assessment: Collaboratively evaluate AI effects on workflows, task allocation, and performance expectations

  • Iterative refinement: Treat AI implementation as ongoing adjustment based on user experience rather than one-time deployment


A healthcare technology manufacturer developing AI-assisted medical imaging analysis tools embedded radiologists and technicians throughout the development lifecycle. Multi-professional working groups reviewed algorithm outputs against clinical cases, identifying situations where AI recommendations proved unhelpful or misleading. This consultation process improved system accuracy while ensuring clinicians trusted AI as a collaborative tool integrated into rather than disruptive to their diagnostic workflows. Post-implementation surveys indicated high satisfaction levels attributable partly to extensive pre-implementation involvement.


Capability Building Through Training


Training initiatives equipping employees with competencies for effective AI collaboration represent the third ECAII pillar. Skill development addresses both technical capabilities—understanding how to interact with AI systems, interpret outputs, and recognize limitations—and adaptive capabilities such as judgment, critical evaluation, and knowing when to override algorithmic recommendations (Manuti et al., 2015).


Meta-analytic evidence demonstrates positive associations between training opportunities and meaningful work perceptions (Fletcher, 2016), suggesting that capability development signals organizational investment in employees' growth and continued value. In AI contexts, training becomes particularly salient given concerns about skill obsolescence and displacement. When organizations provide learning opportunities positioning employees as AI collaborators rather than passive recipients, they foster confidence, reduce anxiety, and enhance effectiveness (Nurlia et al., 2023).


Zhang et al. (2025) found that openness to experience moderated AI's effects on self-efficacy and innovation, with training interventions amplifying benefits among receptive employees. This suggests tailored development programs addressing individual readiness levels and learning preferences may prove more effective than one-size-fits-all approaches. Additionally, training extending beyond technical skills to encompass critical AI literacy—understanding bias risks, questioning outputs, and maintaining human judgment primacy—better prepares employees for thoughtful AI collaboration (Olaitan et al., 2021).


Comprehensive AI capability building incorporates multiple approaches:


  • Technical upskilling: Instruction on AI system interfaces, functionality, and troubleshooting

  • Critical AI literacy: Education on AI limitations, bias risks, appropriate skepticism, and judgment retention

  • Workflow integration: Guidance on incorporating AI tools into existing processes effectively

  • Peer learning: Communities of practice where employees share experiences and strategies

  • Continuous development: Ongoing learning opportunities as AI capabilities evolve rather than one-time training


A global professional services firm implemented a multi-year AI capability building program preparing consultants for augmented advisory work. The curriculum combined technical modules on generative AI tools with critical thinking workshops exploring algorithmic limitations, client communication strategies, and ethical considerations. Mentorship networks connected less experienced consultants with those successfully integrating AI into practice. Employee surveys indicated increased confidence working with AI tools alongside sustained belief in human judgment's irreplaceable value, with performance metrics showing enhanced project efficiency without quality degradation.


Distributed Leadership and Governance


While not explicitly examined in the original study, leadership practices during AI implementation merit attention given their influence on sensemaking processes shaping meaningfulness. Madanchian et al. (2024) argue that AI-driven change requires distributed leadership spanning formal managers and professional experts who guide colleagues through transformation. Ethical leadership emphasizing fairness, transparency, and concern for employee wellbeing proves particularly relevant, as it signals that AI implementation serves human flourishing alongside organizational objectives.


Leadership shapes implementation climate—the shared perceptions regarding whether AI adoption prioritizes efficiency extraction or human capability augmentation (Bankins & Formosa, 2023). When leaders articulate clear visions connecting AI initiatives to organizational mission and values while acknowledging employee concerns, they facilitate sensemaking helping individuals reinterpret their roles constructively. Shweta and Panicker (2025) demonstrated significant interaction effects between transformational leadership and employee AI attitudes on meaningfulness, suggesting leadership amplifies benefits among receptive employees while mitigating concerns among skeptical ones.


AI implementation leadership should incorporate:


  • Sensegiving communication: Leaders articulate compelling narratives explaining why AI matters, how it aligns with organizational purpose, and where human contributions remain essential

  • Visible engagement: Leaders demonstrate personal involvement learning about AI tools, acknowledging challenges, and soliciting employee perspectives

  • Psychological safety: Create environments where employees can voice concerns, report problems, or question AI outputs without repercussion

  • Ethical oversight: Establish governance ensuring AI deployment adheres to fairness, transparency, and accountability principles

  • Distributed authority: Empower subject matter experts and frontline employees to shape AI integration within their domains


A multinational consumer products company's insights division implemented AI-powered market analysis tools through a distributed leadership approach. Division leaders conducted town halls explaining how AI supported rather than replaced human creativity, openly discussing anxiety regarding changing work nature. They established cross-functional AI councils including analysts, data scientists, and business leaders to govern tool selection and deployment. Team leaders received training on fostering psychological safety and facilitating sensemaking conversations. This approach yielded high adoption rates with sustained employee engagement and satisfaction.


Psychosocial Support Mechanisms


Beyond information, consultation, and training, comprehensive ECAII strategies address emotional and psychological dimensions of AI-driven change. Employees experiencing anxiety about job security, competence concerns, or identity threats require support beyond technical preparation (Smids et al., 2020). Research on organizational change emphasizes social support's buffering effects on stress and its facilitation of adaptive coping (Zhao et al., 2022).


Meaningful work emerges partly through relational processes—conversations with colleagues, managers, and mentors that validate one's contributions and help construct purpose narratives (Lysova et al., 2019). During AI implementation, when established role meanings may be destabilized, organizations providing forums for collective sensemaking help employees reconstruct meaningful work identities. Callari and Puppione (2025) documented how employees navigated AI-related changes through both formal organizational support and informal peer exchanges, with greatest success occurring when multiple support sources reinforced adaptive reinterpretations.


Psychosocial support during AI implementation includes:


  • Psychological resources: Counseling services, stress management resources, and wellbeing programs addressing technology-related anxiety

  • Peer networks: Facilitated communities where employees share experiences, concerns, and coping strategies

  • Managerial coaching: Training supervisors to recognize signs of distress and provide empathetic, solution-focused support

  • Job crafting opportunities: Structured processes helping employees redesign tasks emphasizing meaningful aspects as AI handles routine elements

  • Career development: Clear pathways demonstrating long-term viability and growth opportunities in AI-augmented roles


A large management consulting firm integrating AI across its consulting operations established a comprehensive support infrastructure, including confidential counseling for technology transition concerns, employee resource groups focused on AI adoption where consultants exchanged experiences, and job crafting workshops helping individuals identify and expand meaningful work dimensions. Managers received training in recognizing manifestations of technology anxiety and holding supportive conversations. Employee survey data indicated reduced stress levels and maintained engagement despite significant workflow changes, with many consultants reporting that they rediscovered purpose by focusing on the strategic advisory work enabled by AI handling analytical tasks.


Building Long-Term AI-Ready Organizational Capabilities


Cultivating Adaptive Psychological Contracts


Traditional psychological contracts—implicit agreements regarding mutual employee-employer obligations—face disruption as AI reshapes work content and skill requirements. Employees historically exchanged competence and effort for job security and career progression within relatively stable role boundaries. AI implementation destabilizes these arrangements, as today's valuable skills may become obsolete tomorrow, and traditional career paths fragment (Tahir et al., 2025).


Organizations building sustainable AI capabilities must proactively renegotiate psychological contracts emphasizing continuous learning, adaptability, and shared responsibility for skill currency. Rather than guaranteeing specific job continuity, forward-looking contracts promise development opportunities, meaningful work despite technological change, and fair transition support if role transformations occur. This recalibration requires transparent dialogue about how AI affects employment relationships and what commitments organizations can realistically make (Bankins & Formosa, 2023).


Developing adaptive psychological contracts involves:


  • Explicit discussions: Formal conversations during AI initiatives clarifying how employment relationships will evolve

  • Mutual investment frameworks: Agreements where organizations provide learning resources and employees commit to skill development

  • Transparency about change: Honest communication regarding which roles may transform substantially and supporting affected employees proactively

  • Grievance mechanisms: Channels addressing perceived contract violations or unfair AI-related treatment

  • Regular recalibration: Periodic check-ins revisiting and adjusting psychological contract terms as circumstances evolve


Embedding Participatory Governance Structures


Beyond episodic consultation during specific implementations, organizations should embed participatory governance ensuring ongoing employee involvement in AI-related decisions. Permanent structures—such as AI ethics committees with employee representation, technology review boards including end-user perspectives, and regular town halls discussing AI strategy—institutionalize voice mechanisms preventing reversion to top-down approaches (Jones et al., 2010).


Participatory governance provides multiple benefits: surfacing implementation problems early when correction remains feasible, maintaining employee trust through demonstrated commitment to inclusion, generating diverse perspectives improving AI system design and deployment, and fostering collective ownership of AI-driven organizational transformation. Research on high-involvement management demonstrates sustained positive effects when participation becomes embedded in organizational culture rather than isolated to specific initiatives (Boxall & Macky, 2009).


Effective participatory AI governance includes:


  • Cross-functional oversight bodies: Standing committees with diverse stakeholder representation reviewing AI initiatives

  • Employee representation protocols: Defined processes ensuring workforce perspectives inform AI strategy and policy

  • Regular feedback cycles: Scheduled opportunities for employees to evaluate AI system performance and suggest improvements

  • Transparent decision-making: Visible processes showing how employee input influences AI-related choices

  • Appeal mechanisms: Procedures for challenging AI-driven decisions affecting employees


Developing Systematic AI Literacy


While targeted training supports employees directly using AI tools, broader organizational AI literacy proves equally important. As AI pervades business operations, all employees benefit from understanding basic principles: what AI is and is not, how machine learning systems learn from data, what bias risks exist, how to calibrate trust appropriately, and when human judgment should override algorithmic recommendations (Olaitan et al., 2021).


Systematic literacy programs democratize AI knowledge, reducing information asymmetries between technical specialists and other employees. This shared understanding facilitates more productive cross-functional collaboration, enables informed questioning of AI-driven initiatives, and supports collective capacity to identify problematic applications. Organizations treating AI literacy as a universal competency rather than specialist domain demonstrate that all employees' intelligence and judgment remain valuable in AI-augmented environments (Vishwakarma & Singh, 2023).


Comprehensive AI literacy encompasses:


  • Foundational concepts: Accessible explanations of how AI systems function without requiring technical expertise

  • Critical evaluation skills: Training to question AI outputs, recognize limitations, and maintain appropriate skepticism

  • Ethical awareness: Education on bias, fairness, privacy, and other normative considerations in AI deployment

  • Domain-specific applications: Understanding how AI affects one's functional area and industry

  • Continuous learning: Regular updates as AI capabilities and organizational applications evolve


Fostering Innovation-Oriented Cultures


Long-term AI success requires organizational cultures supporting experimentation, learning from failure, and challenging established practices. AI implementation inevitably encounters setbacks—systems producing unexpected outputs, workflows proving incompatible with automation, or users discovering better applications than originally intended. Organizations fostering psychological safety—where employees can report problems, suggest alternatives, or acknowledge confusion without penalty—adapt more successfully than those punishing deviation from plans (Wu et al., 2025).


Innovation-oriented cultures position AI adoption as collective learning rather than deterministic implementation. When organizations encourage employees to experiment with AI tools, share discoveries, and propose novel applications, they tap distributed intelligence throughout the workforce. This approach proves particularly valuable given AI's generative potential: employees closest to specific tasks often identify creative AI uses that central planners never envisioned (Zhang et al., 2025).


Building innovation-oriented AI cultures involves:


  • Celebrating experimentation: Recognizing employees who explore novel AI applications regardless of immediate success

  • Normalizing setbacks: Leadership explicitly framing implementation challenges as expected learning opportunities

  • Transparency about limitations: Open discussion of AI shortcomings without defensive posturing

  • Bottom-up innovation: Resources and permission for employees to pilot AI applications addressing specific needs

  • Knowledge sharing: Structured forums for disseminating insights from local experiments organization-wide


Conclusion


This analysis examined how organizational practices during AI implementation shape employee experiences and outcomes, with particular attention to work meaningfulness as a mediating psychological mechanism and personal attitudes as moderating boundary conditions. The evidence demonstrates that employee-centered AI implementation (ECAII) strategies—encompassing transparent communication, meaningful consultation, and comprehensive training—constitute more than good HR practice; they represent strategic imperatives enabling organizations to realize AI's potential while sustaining workforce wellbeing and effectiveness.


Several actionable insights emerge for practitioners:


  • AI implementation is fundamentally a human challenge: Technical sophistication alone proves insufficient. Success depends on organizational strategies helping employees make sense of changing roles, reconstruct meaningful work narratives, and develop capabilities for effective human-AI collaboration.

  • Information sharing reduces uncertainty: Transparent communication about AI capabilities, limitations, intended uses, and human role clarifications helps employees interpret technological change constructively rather than defaulting to anxiety or resistance.

  • Employee voice enhances implementation quality: Consultation mechanisms leverage workforce insights improving AI system design while fostering psychological ownership and commitment to successful adoption.

  • Capability development signals continued value: Training initiatives positioning employees as AI collaborators demonstrate organizational investment in their growth, reducing skill obsolescence concerns while enhancing effectiveness.

  • Individual orientations moderate strategy effectiveness: Implementation practices resonate differently across employees depending on personal attitudes, suggesting tailored approaches acknowledging individual readiness levels and concerns may prove most effective.

  • Meaningfulness mediates satisfaction and performance: Organizational strategies succeed partly by helping employees maintain or reconstruct sense of purpose and significance during AI-driven role transformations, highlighting sensemaking support as a critical implementation dimension.


As AI capabilities continue advancing, questions regarding how organizations can support human flourishing amid technological transformation will intensify. Future waves of AI development—including increasingly autonomous systems, more sophisticated generative models, and broader cognitive task automation—will test organizational capacity to maintain meaningful work at a time when traditional contributions face algorithmic substitution.


The research presented here suggests grounds for cautious optimism: organizations adopting participatory, human-centered approaches can navigate AI implementation successfully, preserving employee wellbeing while capturing productivity benefits. However, this outcome is not inevitable. Organizations treating AI adoption as purely technical deployment risk workforce disengagement, capability loss, and squandered opportunities for human-AI synergy.


Ultimately, making AI work at work requires intentional organizational strategies treating employees as central stakeholders in technological transformation rather than passive objects of automation. By combining transparent communication, genuine consultation, comprehensive capability building, and sustained attention to work meaningfulness, organizations can foster conditions where AI augments rather than diminishes human potential—benefiting employees, organizations, and the broader societies they serve.



References


  1. Ahmad, M., Shahzad, N., Waheed, A., & Khan, M. (2014). High involvement management and employees' performance: Mediating role of job satisfaction. European Journal of Business and Management, 6(31), 230–243.

  2. Albrecht, S. L., Bakker, A. B., Gruman, J. A., Macey, W. H., & Saks, A. M. (2015). Employee engagement, human resource management practices and competitive advantage: An integrated approach. Journal of Organizational Effectiveness: People and Performance, 2(1), 7–35.

  3. Allan, B. A., Batz-Barbarich, C., Sterling, H. M., & Tay, L. (2019). Outcomes of meaningful work: A meta-analysis. Journal of Management Studies, 56(3), 500–528.

  4. Almeida, F., Junça Silva, A., Lopes, S. L., & Braz, I. (2025). Understanding recruiters' acceptance of artificial intelligence: Insights from the technology acceptance model. Applied Sciences, 15(2), 746.

  5. Alqudah, I. H., Carballo-Penela, A., & Ruzo-Sanmartín, E. (2022). High-performance human resource management practices and readiness for change: An integrative model including affective commitment, employees' performance, and the moderating role of hierarchy culture. European Research on Management and Business Economics, 28(1), 100177.

  6. Bankins, S., & Formosa, P. (2023). The ethical implications of artificial intelligence (AI) for meaningful work. Journal of Business Ethics, 185(4), 725–740.

  7. Bergquist, M., Rolandsson, B., Gryska, E., Laesser, M., Hoefling, N., Heckemann, R., Schneiderman, J. F., & Björkman-Burtscher, I. M. (2024). Trust and stakeholder perspectives on the implementation of AI tools in clinical radiology. European Radiology, 34(1), 338–347.

  8. Beutel, G., Geerits, E., & Kielstein, J. T. (2023). Artificial hallucination: GPT on LSD? Critical Care, 27(1), 148.

  9. Boxall, P., & Macky, K. (2009). Research and theory on high-performance work systems: Progressing the high-involvement stream. Human Resource Management Journal, 19(1), 3–23.

  10. Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12.

  11. Callari, T. C., & Puppione, L. (2025). Meaningful work as shaped by employee work practices in human-AI collaborative environments: A qualitative exploration through ideal types. European Journal of Innovation Management, 28(10), 5001–5027.

  12. Croce, M. (2018). Expert-oriented abilities vs. novice-oriented abilities: An alternative account of epistemic authority. Episteme, 15(4), 476–498.

  13. Fletcher, L. (2016). Training perceptions, engagement, and performance: Comparing work engagement and personal role engagement. Human Resource Development International, 19(1), 4–26.

  14. Gama, F., Tyskbo, D., Nygren, J., Barlow, J., Reed, J., & Svedberg, P. (2022). Implementation frameworks for artificial intelligence translation into health care practice: Scoping review. Journal of Medical Internet Research, 24(1), e32215.

  15. Garmendia, A., Elorza, U., Aritzeta, A., & Madinabeitia-Olabarria, D. (2021). High-involvement HRM, job satisfaction and productivity: A two wave longitudinal study of a Spanish retail company. Human Resource Management Journal, 31(1), 341–357.

  16. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.

  17. González-Romá, V., Le Blanc, P., Ulfert, A.-S., & Peiró, J. M. (2025). Employee-centered automation implementation and job satisfaction: The role of SMARTer jobs. Paper presented at the 22nd Congress of the European Association of Work and Organizational Psychology, Prague, Czech Republic.

  18. Hackman, J. R., & Oldham, G. R. (1980). Work redesign. Addison-Wesley.

  19. Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586.

  20. Jones, D. C., Kalmi, P., & Kauhanen, A. (2010). How does employee involvement stack up? The effects of human resource management policies on performance in a retail firm. Industrial Relations: A Journal of Economy and Society, 49(1), 1–21.

  21. Kaya, F., Aydin, F., Schepman, A., Rodway, P., Yetişensoy, O., & Demir Kaya, M. (2024). The roles of personality traits, AI anxiety, and demographic factors in attitudes toward artificial intelligence. International Journal of Human–Computer Interaction, 40(2), 497–514.

  22. Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410.

  23. Knell, S., & Rüther, M. (2024). Artificial intelligence, superefficiency and the end of work: A humanistic perspective on meaning in life. AI and Ethics, 4(2), 363–373.

  24. LePine, J. A., Podsakoff, N. P., & LePine, M. A. (2005). A meta-analytic test of the challenge stressor–hindrance stressor framework: An explanation for inconsistent relationships among stressors and performance. Academy of Management Journal, 48(5), 764–775.

  25. Lepisto, D. A., & Pratt, M. G. (2017). Meaningful work as realization and justification. Organizational Psychology Review, 7(2), 99–121.

  26. Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43.

  27. Lysova, E. I., Allan, B. A., Dik, B. J., Duffy, R. D., & Steger, M. F. (2019). Fostering meaningful work in organizations: A multi-level review and integration. Journal of Vocational Behavior, 110, 374–389.

  28. Madanchian, M., Taherdoost, H., Vincenti, M., & Mohamed, N. (2024). Transforming leadership practices through artificial intelligence. Procedia Computer Science, 235, 2101–2111.

  29. Manuti, A., Pastore, S., Scardigno, A. F., Giancaspro, M. L., & Morciano, D. (2015). Formal and informal learning in the workplace: A research review. International Journal of Training and Development, 19(1), 1–17.

  30. Nurlia, N., Daud, I., & Rosadi, M. E. (2023). AI implementation impact on workforce productivity: The role of AI training and organizational adaptation. Escalate: Economics and Business Journal, 1(1), 1–13.

  31. Olaitan, O. O., Issah, M., & Wayi, N. (2021). A framework to test South Africa's readiness for the fourth industrial revolution. South African Journal of Information Management, 23(1), a1284.

  32. Petersson, L., Larsson, I., Nygren, J. M., Nilsen, P., Neher, M., Reed, J. E., Tyskbo, D., & Svedberg, P. (2022). Challenges to implementing artificial intelligence in healthcare: A qualitative interview study with healthcare leaders in Sweden. BMC Health Services Research, 22(1), 850.

  33. Shin, D. (2025). Automating epistemology: How AI reconfigures truth, authority, and verification. AI & Society.

  34. Shweta, R., & Panicker, A. (2025). Measuring the influence of transformational leadership on interplay between artificial intelligence, job meaningfulness and turnover intentions: Observations from Indian IT sector. Journal of Strategy & Innovation, 36(1), 200534.

  35. Smids, J., Nyholm, S., & Berkers, H. (2020). Robots in the workplace: A threat to—Or opportunity for—Meaningful work? Philosophy & Technology, 33(3), 503–522.

  36. Steger, M. F., Dik, B. J., & Duffy, R. D. (2012). Measuring meaningful work. Journal of Career Assessment, 20(3), 322–337.

  37. Tahir, M. A., Da, G., Javed, M., Akhtar, M. W., & Wang, X. (2025). Employees' foe or friend: Artificial intelligence and employee outcomes. The Service Industries Journal, 45(11–12), 1100–1131.

  38. Valtonen, A., Saunila, M., Ukko, J., Treves, L., & Ritala, P. (2025). AI and employee wellbeing in the workplace: An empirical study. Journal of Business Research, 199, 115584.

  39. Vishwakarma, L. P., & Singh, R. K. (2023). An analysis of the challenges to human resource in implementing artificial intelligence. In The adoption and effect of artificial intelligence on human resources management, Part B (pp. 81–109). Emerald Publishing Limited.

  40. Wang, Z., & Xu, H. (2017). When and for whom ethical leadership is more effective in eliciting work meaningfulness and positive attitudes: The moderating roles of core self-evaluation and perceived organizational support. Journal of Business Ethics, 156(4), 919–940.

  41. Wu, T., Zhang, R., & Zhang, Z. (2025). Navigating the human-artificial intelligence collaboration landscape: Impact on quality of work life and work engagement. Journal of Hospitality and Tourism Management, 62, 276–283.

  42. Zhang, Q., Liao, G., Ran, X., & Wang, F. (2025). The impact of AI usage on innovation behavior at work: The moderating role of openness and job complexity. Behavioral Sciences, 15(4), 491.

  43. Zhao, Z., Yu, K., Liu, C., & Yan, Y. (2022). High-performance human resource practices and employee well-being: The role of networking and proactive personality. Asia Pacific Journal of Human Resources, 60(4), 721–738.

Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.

Suggested Citation: Westover, J. H. (2026). Making AI Work at Work: How Employee-Centered Implementation Practices Foster Meaningful Work and Performance. Human Capital Leadership Review, 34(1). doi.org/10.70175/hclreview.2020.34.1.3

Human Capital Leadership Review

eISSN 2693-9452 (online)
