HCL Review

Organizational Network Transformation Through Grounded GenAI: Evidence-Based Strategies for Managing Human-Machine Collaboration



Abstract: Organizations are rapidly adopting generative artificial intelligence systems, yet their impact extends beyond individual productivity to fundamentally reshape patterns of collaboration and knowledge sharing within firms. Drawing on a randomized field experiment with 316 employees across 42 teams at a European technology services firm, research by Büchsenschuss, Koch-Bayram, Biemann, and Puranam (2026) demonstrates that grounded GenAI—AI systems customized with organization-specific knowledge—substantially increases employees' centrality in both collaboration and knowledge-sharing networks. The study reveals differential effects across roles: while network connectivity gains accrue broadly across specialists and generalists, specialists experience significantly greater increases in knowledge-sharing centrality, whereas generalists achieve larger productivity improvements. These findings challenge the prevailing individual-centric view of AI adoption, providing causal evidence that AI tools rewire organizational social structures. For practitioners, the research offers a roadmap for implementing AI not merely as productivity software but as collaboration infrastructure that reconfigures how employees interact, share expertise, and coordinate work.

The surge in generative AI adoption has generated intense interest in how these technologies affect workplace performance. Most early research focused on individual-level productivity gains—can an employee write faster code, draft better proposals, or analyze data more effectively with AI assistance? Studies consistently show sizeable but heterogeneous productivity benefits (Noy & Zhang, 2023; Brynjolfsson et al., 2023; Dell'Acqua et al., 2023). Yet this lens misses a fundamental reality: organizations accomplish work through patterned collaboration, not atomistic effort (Kilduff & Brass, 2010).


The practical stakes are substantial. As firms invest billions in AI infrastructure, leaders need to understand not only whether AI makes individual contributors more efficient, but also whether and how it changes the collaborative fabric of the organization—who talks to whom, who becomes more central in knowledge networks, and ultimately whether these network shifts translate into organizational performance gains. A breakthrough study by Büchsenschuss, Koch-Bayram, Biemann, and Puranam (2026) addresses these questions through a rigorous randomized field experiment, shifting the analytical lens from individual productivity to network-level transformation.


The "why now" urgency stems from two converging forces. First, generative AI has moved from experimental technology to enterprise-scale deployment; organizations can no longer treat it as optional. Second, these AI systems—especially when grounded in firm-specific knowledge through techniques like Retrieval-Augmented Generation (RAG)—don't simply replace discrete tasks; they alter the economics of seeking information, coordinating across boundaries, and tapping into colleagues' expertise. Understanding these network effects is critical for managing the human side of AI transformation.


The Organizational AI Landscape

Defining Grounded GenAI in the Enterprise Context


Generative AI refers to systems—such as large language models—that can produce text, code, images, or other outputs in response to prompts. When such systems are grounded, they have been customized using an organization's internal data: documents, email archives, CRM records, meeting transcripts, and domain-specific knowledge bases. Techniques like Retrieval-Augmented Generation retrieve relevant context from these repositories before generating responses, ensuring outputs reflect organizational reality rather than generic internet knowledge (Maher et al., 2023; Microsoft, 2024).
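The retrieval step that distinguishes grounded AI from generic chatbots can be sketched in a few lines. The scoring function, document store, and prompt template below are illustrative stand-ins, not the study's implementation; production RAG systems use trained vector embeddings rather than word overlap.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
# Documents, scoring, and the prompt template are illustrative assumptions.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Prepend retrieved organizational context so the model answers from
    firm-specific knowledge rather than generic training data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Customer Acme renewed its support contract in March.",
    "Product X ships with a 12-month warranty.",
    "The travel reimbursement form is on the intranet.",
]
prompt = build_grounded_prompt("When did Acme renew its contract?", docs)
```

The key design point is the prompt assembly: the model only sees organizational documents that were explicitly retrieved, which is what keeps its answers anchored to firm-specific reality.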


In the Büchsenschuss et al. (2026) study, the treatment involved deploying a grounded GenAI assistant built on GPT-4 and GPT-4o, integrated with the firm's knowledge systems. Employees could query this assistant for customer histories, product specifications, past communications, and procedural guidance—all contextualized to their specific work environment. This contrasts with off-the-shelf ChatGPT, which lacks access to proprietary organizational knowledge.


The distinction matters because grounded AI can serve dual functions: as a translator that bridges language and expertise gaps between specialists and generalists, and as a knowledge catalyst that surfaces relevant internal expertise and makes individuals more valuable as sources of information (Büchsenschuss et al., 2026). These mechanisms position grounded GenAI not as a standalone tool but as collaboration infrastructure embedded in the organization's knowledge flows.


Prevalence, Drivers, and Organizational Adoption Patterns


Enterprise adoption of AI has accelerated sharply. Microsoft's Work Trend Index reports that organizations across industries are deploying AI-powered assistants to augment knowledge work, particularly in roles requiring information synthesis, client interaction, and project coordination (Microsoft, 2024). Drivers include competitive pressure to improve responsiveness, the need to manage exploding information volumes, and the promise of freeing employees from routine retrieval tasks so they can focus on higher-value work (Gupta et al., 2024).


However, adoption remains uneven. Early adopters cluster in technology-intensive sectors and knowledge-work domains where AI can automate information search and drafting. Barriers include data privacy concerns, integration complexity, and cultural resistance to changing established workflows (Bankins et al., 2024). Importantly, most implementations target individual productivity metrics—speed, output volume, quality—with limited attention to how AI reshapes the social structure of work.


The Büchsenschuss et al. (2026) field experiment addresses this gap by examining AI's impact at the network level. Their three-month deployment window, strong leadership endorsement, and comprehensive training ensured high engagement (a survey response rate above 95%), offering a realistic picture of AI's effects under favorable implementation conditions. This contrasts with lab studies or narrow task-based experiments that may not capture emergent social dynamics.


Organizational and Individual Consequences of Grounded GenAI Adoption

Organizational Performance Impacts: Network Reconfiguration and Productivity


The Büchsenschuss et al. (2026) experiment revealed substantial increases in organizational network connectivity. Employees with access to the grounded GenAI assistant exhibited significantly greater growth in both collaboration network degree centrality (an increase of approximately 7.8 connections on average, versus 1.1 in the control group; p < .001) and knowledge network degree centrality (an increase of 5.2 connections versus 0.8 in the control group; p < .001). These changes were visible in both incoming ties (others seeking them out) and outgoing ties (them reaching out to others), indicating bidirectional network expansion.


Quantifying the effect: the treatment group's collaboration network density increased markedly, creating a more interconnected organizational structure. Visual network maps showed the treatment group forming a denser core of interactions post-intervention, while control-group networks remained relatively static. This suggests that grounded GenAI doesn't merely enable isolated productivity gains; it fundamentally rewires who collaborates with whom, distributing knowledge more broadly and increasing the velocity of information flow across the organization.
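The metrics behind these findings are standard network measures. The sketch below computes in-degree, out-degree, and density from an illustrative survey-style edge list using only the standard library; the names and ties are invented for demonstration, not drawn from the study's data.

```python
# Sketch of the degree-centrality metrics discussed above. An edge (a, b)
# means a sought information from b during the survey period. All data here
# is illustrative.
from collections import Counter

edges = [
    ("ana", "ben"), ("carl", "ben"), ("dina", "ben"),
    ("ben", "ana"), ("ana", "carl"),
]

out_degree = Counter(src for src, _ in edges)   # outgoing ties: reaching out
in_degree = Counter(dst for _, dst in edges)    # incoming ties: being sought

nodes = {n for edge in edges for n in edge}
# Density of a directed network: observed ties / possible ties n*(n-1).
density = len(edges) / (len(nodes) * (len(nodes) - 1))
```

Tracking in-degree and out-degree separately is what lets a study (or a people-analytics dashboard) distinguish "others seek this person out" from "this person reaches out to others", the bidirectional expansion the experiment reports.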


Productivity gains accompanied these network shifts. Employees using the GenAI assistant managed more projects over the three-month period. Generalists (customer-facing sales staff) experienced an absolute increase of approximately 3.6 projects on average, while specialists (technical support personnel) saw smaller but still significant increases. These gains likely stem from a virtuous cycle: AI-assisted information retrieval reduces time spent searching for answers, freeing capacity for additional work; simultaneously, enhanced network centrality provides employees with richer access to colleagues' expertise, further accelerating problem-solving.


Individual Wellbeing and Role-Based Impacts


Beyond performance metrics, the study measured employee satisfaction with knowledge access—a proxy for perceived autonomy and empowerment. Treatment-group participants reported significantly higher satisfaction post-intervention, indicating that the AI assistant alleviated frustration associated with information hunting and uncertainty about where to find expertise (Büchsenschuss et al., 2026). This subjective improvement matters: it signals that AI augmentation can enhance the employee experience, countering fears that automation will deskill or isolate workers.


However, effects differed by role. Specialists (deep technical experts) gained the most in knowledge network in-degree centrality—they became significantly more sought-after as sources of expertise relative to generalists (broad integrators with customer-facing responsibilities). The role-by-treatment interaction was significant (p < .05): generalists' knowledge centrality gains were smaller than specialists'. This aligns with theory: AI that retrieves and synthesizes information can substitute for generalists' routine knowledge brokerage, but it complements specialists' deep expertise, which AI cannot easily replicate. Consequently, specialists became more central as knowledge hubs.


Conversely, generalists achieved larger productivity gains. While both groups handled more projects, the absolute increase was greater for generalists (a differential of approximately 2.5 projects; p < .01). This reflects generalists' role in integrating diverse inputs and coordinating across boundaries—activities where GenAI's assistance in drafting communications, summarizing information, and managing task lists provides especially high leverage. Specialists, whose work hinges on deep problem-solving rather than coordination breadth, benefited less on this dimension.


These differential impacts underscore a key insight: AI's benefits are not uniformly distributed. Organizations must anticipate that roles emphasizing integration and coordination will see the largest output gains, while roles emphasizing deep expertise will see the largest gains in social capital and influence within knowledge networks. Managing these dynamics requires attention to equity, recognition systems, and workload balance to avoid overburdening newly central specialists or undervaluing generalists' coordination contributions.


Table 1: Organizational Impacts and Strategies for Grounded GenAI Adoption

| Stakeholder Role | Impact Category | Observed Effects | Key Findings (Metrics) | Organizational Strategy | Illustrative Example |
|---|---|---|---|---|---|
| Specialists (Deep Technical Experts) | Social Capital / Knowledge Sharing | Significant increase in knowledge network in-degree centrality; experts became more frequently sought after as sources of information. | Knowledge network increase of 5.2 connections vs. 0.8 in control (p < .001); specialists gained more than generalists (p < .05). | Specialist Training—Expertise Amplification: leverage AI to document tacit knowledge and generate FAQs to increase expert accessibility. | Healthcare tech specialists used AI for technical documentation, resulting in a 30% increase in cross-functional knowledge exchange. |
| Generalists (Customer-Facing/Integrators) | Productivity | Achieved larger productivity gains than specialists by coordinating diverse inputs and performing boundary-spanning roles. | Absolute increase of 3.6 projects on average; productivity differential of 2.5 projects more than specialists (p < .01). | Generalist Training—Integration and Coordination: equip staff with AI for synthesizing inputs, summarizing threads, and managing workflows. | Healthcare tech generalists used AI for sprint planning and feedback synthesis, reducing time-to-market by 25%. |
| All Employees (Treatment Group) | Productivity / Collaboration | Increased collaboration network degree centrality and higher overall project volume across the organization. | Collaboration network increase of 7.8 connections vs. 1.1 in control (p < .001). | Implement AI as Collaboration Infrastructure: integrate AI with internal knowledge repositories and promote transparent usage. | Junior staff at a professional services firm used AI for client drafts, seeing a 20% increase in collaboration ties with senior partners. |
| Central Network Actors / Specialists | Network Overload / Wellbeing | Increased risk of becoming organizational bottlenecks due to spiked in-degree centrality and excessive interruptions. | Survey response rate above 95%, indicating robust engagement and high visibility of significant network shifts. | Manage Network Centrality: monitor analytics and use AI to triage routine requests to protect central actors from overload. | A financial firm established a rotating advisory board and used AI triage to reduce analyst interruptions by 60%. |


Evidence-Based Organizational Responses

Organizations seeking to harness grounded GenAI for network and performance gains can draw on several evidence-backed strategies. The following interventions reflect both the Büchsenschuss et al. (2026) findings and broader research on human-AI collaboration.


Implement Grounded AI as Collaboration Infrastructure, Not Just Productivity Software


The field experiment's core finding is that grounded GenAI increases employees' network centrality—they become more connected and more valuable as knowledge sources. This effect emerged because the AI acted as a translator (bridging expertise gaps, standardizing communication) and knowledge catalyst (surfacing relevant information and making individuals more informative when consulted). Traditional productivity tools focus on solo task execution; grounded AI enables richer interpersonal exchange (Büchsenschuss et al., 2026; Waardenburg et al., 2022).


Effective Organizational Approaches:


  • Integrate AI with Internal Knowledge Repositories: Deploy RAG-based systems that pull from CRM databases, project documentation, email archives, and meeting transcripts. This grounds AI outputs in organizational reality, ensuring employees receive contextually relevant answers rather than generic information.

  • Position AI as a Collaborative Aid: Frame the AI assistant not as a replacement for colleagues but as a tool that makes it easier to collaborate. For example, an employee preparing for a cross-functional meeting can use the AI to quickly synthesize prior discussions and identify which colleagues have relevant expertise, then reach out with informed questions. This lowers the friction of collaboration.

  • Encourage Transparent Use and Sharing: Create norms where employees share AI-generated drafts or summaries with teammates for refinement. This fosters a culture of collective intelligence rather than secretive AI use. Microsoft's Work Trend Index highlights that teams using AI collaboratively report higher psychological safety and trust (Microsoft, 2024).


A global professional services firm deployed a grounded AI assistant that accessed anonymized client case files and internal best-practice libraries. Junior consultants used the AI to draft initial client recommendations, which senior partners then reviewed and refined. Rather than isolating junior staff, this practice increased their interaction with partners—they asked more targeted questions, and partners spent less time on routine information transfer and more on coaching. Over six months, the firm observed a 20% increase in junior-senior collaboration ties and a 15% improvement in project delivery speed.


Provide Role-Specific Training and Support for Differentiated AI Use


The experiment revealed heterogeneous treatment effects by role. Specialists gained more in knowledge centrality; generalists gained more in output. This heterogeneity suggests that effective AI deployment requires role-tailored guidance. Specialists need support in using AI to articulate and share their expertise more efficiently; generalists need guidance in using AI to coordinate and integrate information across domains (Büchsenschuss et al., 2026; Fahrenkopf et al., 2020).


Effective Organizational Approaches:


  • Specialist Training—Expertise Amplification: Teach technical experts how to use AI to document tacit knowledge, generate explanatory materials, and respond to common queries more efficiently. For instance, a specialist might prompt the AI to draft an FAQ based on recent support tickets, which they then refine and share, becoming a more accessible knowledge source.

  • Generalist Training—Integration and Coordination: Equip generalists with AI techniques for synthesizing diverse inputs, drafting stakeholder updates, and managing project workflows. Training might include using AI to summarize multi-party email threads, extract action items from meeting transcripts, or generate client-ready reports from raw data—all tasks that accelerate coordination.

  • Continuous Learning Communities: Establish role-specific user groups where employees share AI prompts, tips, and use cases. This peer learning accelerates adoption and surfaces creative applications. Research on collaborative intelligence emphasizes that human-AI performance improves when users learn how to prompt effectively and recognize AI's limitations (Wilson & Daugherty, 2018).


A healthcare technology company introduced a grounded AI assistant to its product development teams. Specialists (engineers and data scientists) received training focused on using AI to generate technical documentation and answer recurring engineering questions, freeing time for innovation. Generalists (product managers and designers) received training on using AI to coordinate sprint planning, synthesize user feedback, and draft product specifications. Post-training, the company observed a 30% increase in cross-functional knowledge exchanges and a 25% reduction in time-to-market for new features, demonstrating that role-specific AI use translated into organizational agility.


Manage Network Centrality to Prevent Overload and Ensure Equitable Recognition


As grounded AI increases certain employees' network centrality—particularly specialists who become sought-after knowledge sources—there is a risk of overload. Highly central individuals can become bottlenecks, experiencing excessive interruptions and reduced capacity for deep work (Kilduff & Brass, 2010). Organizations must proactively manage these dynamics to sustain performance gains and employee wellbeing (Büchsenschuss et al., 2026).


Effective Organizational Approaches:


  • Monitor Network Analytics: Use people analytics platforms to track changes in collaboration patterns. Identify employees whose in-degree centrality spikes, indicating they are being consulted frequently. Intervene early by redistributing knowledge-sharing responsibilities or providing administrative support.

  • Formalize Knowledge-Sharing Roles: Designate official knowledge curators or technical leads, recognizing their contributions through compensation, titles, or reduced project loads. This legitimizes the time they spend helping others and prevents burnout. Research on social capital highlights that recognition systems must reflect actual network contributions (Argote et al., 2022).

  • Leverage AI to Offload Routine Requests: Use the grounded AI assistant itself to handle high-frequency, low-complexity queries that would otherwise go to specialists. For example, configure the AI to answer common procedural questions, reserving human specialists for novel, complex issues. This preserves specialists' capacity for high-value work.

  • Promote Distributed Expertise: Encourage multiple employees to develop overlapping expertise areas, creating redundancy. This mitigates the risk of single points of failure and distributes the network load. AI can facilitate this by making knowledge more accessible, enabling more people to learn and contribute.
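
The triage approach in the third bullet can be sketched as a simple router. The keyword rule and rotation schedule below are illustrative assumptions standing in for a real classifier and scheduling system; they show the pattern, not any firm's actual configuration.

```python
# Hedged sketch of AI triage for protecting central specialists: routine,
# high-frequency queries get an automated answer; novel or complex ones go
# to the on-duty specialist. Keywords and the rotation are invented examples.

ROUTINE_KEYWORDS = {"policy", "form", "deadline", "limit", "procedure"}
ON_DUTY_ROTATION = ["analyst_a", "analyst_b", "analyst_c"]

def triage(query, week):
    """Return ('auto', None) for routine queries, otherwise route the query
    to whichever analyst is on duty for the given week."""
    words = set(query.lower().replace("?", "").split())
    if words & ROUTINE_KEYWORDS:
        return ("auto", None)          # answered by the grounded AI assistant
    analyst = ON_DUTY_ROTATION[week % len(ON_DUTY_ROTATION)]
    return ("human", analyst)          # escalated to the rotating specialist
```

In practice the routing decision would come from the grounded model itself (or a lightweight classifier over past tickets), but the structure is the same: an automated first tier plus a rotation that spreads the residual load across several experts.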


A financial services firm using a grounded AI assistant noticed that a small group of risk analysts became increasingly central in the advice network, fielding dozens of queries daily. The firm responded by creating a "risk advisory board" of six analysts, rotating query-response duties weekly. The AI assistant was configured to triage questions—routing simple rule clarifications to automated responses and complex judgment calls to the on-duty analyst. This intervention reduced any single analyst's interruption rate by 60% while maintaining organizational responsiveness, and employee satisfaction scores among risk analysts improved significantly.


Embed AI Adoption within Comprehensive Change Management Programs


The Büchsenschuss et al. (2026) study achieved high participation and usage rates through deliberate change management: leadership endorsement, training sessions, regular communication, and framing AI as an enabler rather than a threat. Organizations that treat AI deployment as purely technical—installing software without cultural and behavioral support—typically see lower adoption and weaker performance gains (Bankins et al., 2024; Gupta et al., 2024).


Effective Organizational Approaches:


  • Secure Visible Leadership Commitment: Senior leaders should use the AI assistant themselves, share their experiences, and publicly endorse its value. When employees see executives relying on AI, it signals legitimacy and reduces resistance.

  • Communicate the "Why" Clearly: Articulate how AI adoption serves employees' interests—reducing drudgery, enabling more meaningful work, enhancing career development—not just organizational efficiency. Transparency about what AI will and won't do builds trust.

  • Phased Rollout with Early Wins: Begin with pilot teams or high-impact use cases to demonstrate success, then expand. Early adopters become internal champions, sharing best practices and normalizing AI use.

  • Provide Ongoing Support: Establish helpdesks, office hours, or AI "concierge" services where employees can ask questions and get troubleshooting assistance. Research on technology adoption shows that sustained support significantly increases utilization and satisfaction (Tarafdar et al., 2019).

  • Monitor and Celebrate Outcomes: Track metrics like network connectivity, project completion rates, and employee satisfaction. Share progress transparently, recognizing teams and individuals who leverage AI effectively. This reinforces positive behaviors and sustains momentum.


A European logistics company rolled out a grounded AI assistant across 50 regional offices over 12 months. The CEO kicked off the initiative with a company-wide video explaining how AI would help dispatchers access real-time route information and coordinate with drivers more effectively, reducing stress and improving delivery performance. Regional managers received dedicated training to become AI champions. The company established a Slack channel where users shared tips and celebrated "AI wins"—instances where the assistant resolved a problem quickly. By month six, adoption had reached 85%, and the company documented a 10% improvement in on-time delivery rates and a significant increase in cross-regional collaboration as dispatchers began sharing insights via the AI-enabled knowledge network.


Design Performance Management and Rewards to Reflect Network Contributions


Traditional performance metrics emphasize individual output—projects completed, sales closed, code shipped. However, the network effects of AI adoption reveal that some employees create value by facilitating others' work—serving as knowledge brokers, mentors, and coordinators. Failure to recognize these contributions can demotivate central network actors and undermine organizational effectiveness (Kilduff & Brass, 2010). The differential effects by role (specialists gaining knowledge centrality, generalists gaining output) underscore the need for multidimensional evaluation (Büchsenschuss et al., 2026).


Effective Organizational Approaches:


  • Incorporate Network Metrics into Performance Reviews: Use social network analysis to measure collaboration centrality, knowledge-sharing frequency, and cross-functional bridging. Recognize employees who score high on these dimensions, even if their individual output metrics are moderate. This signals that collaboration is valued.

  • Distinguish Between Output and Facilitation Roles: Generalists driving high project volumes should be rewarded for execution; specialists enabling others' success should be rewarded for knowledge dissemination. Avoid one-size-fits-all criteria that inadvertently penalize certain roles.

  • Create Collaboration Incentives: Tie bonuses or promotions partly to team or unit performance, encouraging employees to help colleagues. This aligns individual incentives with collective goals and leverages the network gains from AI adoption.

  • Publicize Collaboration Stories: Highlight examples where AI-enabled knowledge sharing led to breakthroughs. Internal communications that showcase "connector" employees reinforce cultural norms around collaboration.


A biotechnology research firm integrated network analysis into its annual review process. Employees received two scores: an individual contribution score (publications, patents, project milestones) and a collaborative impact score (based on peer nominations and network centrality metrics). Promotions required strong performance on both dimensions. After implementing this system, the firm observed increased willingness to share experimental data and methodologies, accelerated research timelines, and higher employee engagement. The grounded AI assistant supported this culture by making it easier to document and share findings, effectively lowering the cost of collaboration and making collaborative behaviors more visible and rewarded.


Building Long-Term Organizational Capabilities Around AI-Augmented Collaboration

Sustaining the network and performance gains from grounded GenAI requires embedding AI-enabled practices into the organization's DNA. Three strategic pillars can support long-term capability development.


Cultivate a Culture of Collaborative Intelligence


AI should not be viewed as a solitary tool but as a catalyst for collective problem-solving. Organizations must foster norms where employees routinely share AI-generated insights, refine them collaboratively, and integrate AI outputs into team workflows. This involves leadership modeling collaborative AI use, training programs emphasizing co-creation with AI, and recognition systems that celebrate team-based AI innovations (Wilson & Daugherty, 2018). Over time, this cultural shift can make collaboration the default mode, with AI serving as connective tissue that reduces friction and amplifies shared intelligence.


Key practices include regular "AI share-outs" where teams present creative AI use cases, cross-functional hackathons to develop new AI-assisted workflows, and policies that encourage open discussion of AI limitations and failures. By normalizing both experimentation and critique, organizations build psychological safety around AI adoption, enabling continuous learning and adaptation (Bankins et al., 2024).


Invest in Continuous AI Literacy and Role-Specific Skill Development


As AI capabilities evolve, so must employees' skills. Organizations should establish ongoing training programs that update users on new AI features, emerging best practices, and domain-specific applications. Differentiated learning paths for specialists and generalists ensure that each role's unique needs are met. For specialists, advanced training might cover using AI for knowledge codification, teaching others, or automating routine expert tasks. For generalists, training might focus on AI-assisted project orchestration, stakeholder engagement, and data synthesis (Fahrenkopf et al., 2020).


Investing in AI literacy also means helping employees understand AI's boundaries—what it can and cannot do, how to interpret its outputs critically, and when to rely on human judgment. This competence builds trust and prevents over-reliance or under-utilization. Organizations can partner with academic institutions, AI vendors, or internal learning and development teams to deliver this training at scale, ensuring it remains current and relevant.


Implement Adaptive Governance and Feedback Mechanisms


Managing AI-augmented networks requires dynamic governance. Organizations should establish cross-functional AI councils or steering committees responsible for monitoring adoption, addressing emerging challenges (e.g., data privacy, algorithmic bias, network overload), and iterating on policies based on feedback. Regular pulse surveys, focus groups, and network analytics provide real-time insights into how AI affects collaboration patterns and employee wellbeing (Polzer, 2022).


Governance should also include mechanisms for equitable access and support. Ensure all employees, regardless of role or seniority, have access to AI tools and training. Monitor adoption disparities by demographic or functional group, and intervene to close gaps. Research on technology diffusion shows that inequitable access can exacerbate organizational inequalities, undermining the collective benefits of AI adoption (Bankins et al., 2024).


Finally, governance frameworks must balance innovation and risk. Encourage experimentation with AI-assisted workflows, but establish guardrails around data usage, transparency, and accountability. Clear policies on when AI outputs require human review, how to cite AI assistance in client deliverables, and how to handle AI errors or biases build confidence and mitigate reputational risks (Cardon et al., 2023).


Conclusion

The Büchsenschuss et al. (2026) field experiment provides compelling causal evidence that grounded GenAI adoption reshapes organizational networks and performance in profound ways. Employees using organization-specific AI assistants became significantly more connected in collaboration and knowledge-sharing networks, experienced greater productivity gains, and reported higher satisfaction with knowledge access. These effects varied by role: specialists became more central as knowledge sources, while generalists achieved larger output increases—underscoring that AI's impact is not uniform but role-dependent.


For organizations, the practical implications are clear. First, treat grounded AI as collaboration infrastructure, not just productivity software—embed it in workflows that facilitate interpersonal exchange, coordinate across boundaries, and amplify collective intelligence. Second, provide role-tailored training and support, recognizing that specialists and generalists use AI differently and benefit in distinct ways. Third, proactively manage network dynamics to prevent overload, ensure equitable recognition, and sustain employee wellbeing as connectivity increases. Fourth, ground AI deployment in comprehensive change management, securing leadership commitment, communicating transparently, and celebrating early wins. Fifth, redesign performance systems to recognize and reward network contributions, not just individual output.


Building long-term capabilities requires cultivating a culture of collaborative intelligence, investing in continuous AI literacy, and implementing adaptive governance structures. Organizations that successfully navigate these challenges will unlock AI's full potential—not merely automating tasks but transforming how people connect, share knowledge, and create value together.


The evidence is clear: grounded GenAI is not a silver bullet for productivity, but a lever for organizational transformation. When implemented thoughtfully, it rewires the social structures that underpin work, creating more interconnected, informed, and agile organizations. The path forward demands attention to human dynamics, role-specific needs, and the interplay between technology and social capital. Organizations that embrace this complexity—viewing AI adoption as an exercise in organizational design, not just software deployment—will realize sustainable competitive advantage through enhanced collaboration and performance.


Research Infographic



References

  1. Argote, L., Guo, J., Park, S. S., & Hahl, O. (2022). The mechanisms and components of knowledge transfer: The virtual special issue on knowledge transfer within organizations. Organization Science, 33(3), 1232–1249.

  2. Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159–182.

  3. Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (Working Paper No. w31161). National Bureau of Economic Research.

  4. Büchsenschuss, R., Koch-Bayram, I., Biemann, T., & Puranam, P. (2026). The impact of generative AI adoption on organizational networks: Evidence from a field experiment. INSEAD Working Paper, 2026/01/STR.

  5. Cardon, P. W., Ma, H., & Fleischmann, C. (2023). Recorded business meetings and AI algorithmic tools: Negotiating privacy concerns, psychological safety, and control. International Journal of Business Communication, 60(4), 1095–1122.

  6. Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality (Working Paper No. 24-013). Harvard Business School.

  7. Fahrenkopf, E., Guo, J., & Argote, L. (2020). Personnel mobility and organizational performance: The effects of specialist vs. generalist experience and organizational work structure. Organization Science, 31(6), 1601–1620.

  8. Gupta, R., Nair, K., Mishra, M., Ibrahim, B., & Bhardwaj, S. (2024). Adoption and impacts of generative artificial intelligence: Theoretical underpinnings and research agenda. International Journal of Information Management Data Insights, 4(1), 100232.

  9. Kilduff, M., & Brass, D. J. (2010). Organizational social network research: Core ideas and key debates. Academy of Management Annals, 4(1), 317–357.

  10. Maher, M. L., Ventura, D., & Magerko, B. (2023). The grounding problem: An approach to the integration of cognitive and generative models.

  11. Microsoft. (2024). The art and science of working with AI. Microsoft Work Trend Index.

  12. Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192.

  13. Polzer, J. T. (2022). The rise of people analytics and the future of organizational research. Research in Organizational Behavior, 42, 100181.

  14. Tarafdar, M., Beath, C. M., & Ross, J. W. (2019). Using AI to enhance business operations. MIT Sloan Management Review, 60(4).

  15. Waardenburg, L., Huysman, M., & Sergeeva, A. V. (2022). In the land of the blind, the one-eyed man is king: Knowledge brokerage in the age of learning algorithms. Organization Science, 33(1), 59–82.

  16. Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123.

Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.

Suggested Citation: Westover, J. H. (2026). Organizational Network Transformation Through Grounded GenAI: Evidence-Based Strategies for Managing Human-Machine Collaboration. Human Capital Leadership Review, 27(4). doi.org/10.70175/hclreview.2020.27.4.3

Human Capital Leadership Review

eISSN 2693-9452 (online)
