HCL Review

The Future of Work with AI: Moving from Individual Gains to Collective Intelligence



Abstract: This report synthesizes recent evidence on how artificial intelligence is reshaping work, drawing from Microsoft's 2025 New Future of Work Report and the broader research literature. While 2024 marked individual productivity gains from generative AI, 2025 signals a critical shift toward collective productivity—how teams, organizations, and communities can improve together with AI. Adoption continues to accelerate globally, with enterprise ChatGPT messages increasing eightfold year-over-year, yet organizational success depends heavily on employee engagement, trust, and participatory design rather than top-down mandates. Evidence reveals meaningful productivity gains and time savings, particularly in knowledge work, but also emerging challenges including AI-generated "workslop," cognitive deskilling risks, and mixed labor market effects concentrated among early-career workers. Human-AI collaboration is evolving from passive tool use to active partnership, requiring new interaction paradigms, robust common ground, and cognitively engaging workflows. Teams face distinct challenges as AI shifts from supporting individuals to enabling group work, demanding new evaluation frameworks, proactive agent behaviors, and careful attention to social dynamics. This report examines adoption patterns, workforce impacts, collaboration design, cognitive implications, and sector-specific transformations while highlighting that AI's ultimate value depends not on technical capabilities alone but on intentional organizational choices that prioritize human agency, skill development, and equitable outcomes.

The fifth annual Microsoft New Future of Work Report arrives at an inflection point. Since 2021, each year has brought transformative change—remote work necessitated by pandemic conditions, the emergence of hybrid arrangements, the introduction of large language models, and their real-world deployment. Yet these are not separate revolutions but chapters in a continuous evolution of how humans collaborate in digital environments.


2024 demonstrated that AI delivers substantial productivity gains at the individual level. Surveyed enterprise users report saving 40–60 minutes daily through AI use, and controlled experiments across writing, coding, customer service, and other domains consistently show time savings and quality improvements (Chatterji et al., 2025; Brynjolfsson et al., 2025). The technology has crossed a capability threshold where meaningful assistance is now routine rather than experimental.


The frontier for 2025 is collective productivity—moving beyond what AI enables for isolated individuals to how it can enhance teams, organizations, and entire ecosystems working together. This shift introduces fundamentally different challenges. Individual assistance requires understanding one person's goals and context; team assistance requires navigating multiple perspectives, managing turn-taking dynamics, building shared understanding, and supporting social processes like trust-building and conflict resolution. Early evidence suggests AI systems trained and evaluated for individual use do not automatically succeed in team settings (Dell'Acqua et al., 2025; Schmutz et al., 2024).


This transition also surfaces important questions about work's social and cognitive dimensions. As AI handles more routine tasks, human contributions increasingly center on judgment, creativity, relationship-building, and the tacit knowledge that comes from lived experience (Autor & Thompson, 2025). Organizations that view AI purely as a cost-reduction tool miss opportunities for augmentation and innovation that could expand possibilities rather than simply automate existing work (Brynjolfsson, 2022).


The stakes extend beyond productivity metrics. How AI gets integrated into work affects skill development, employment patterns, workplace relationships, and whether technological progress broadly benefits workers or concentrates gains among capital owners and high-skill elites. The evidence reviewed here suggests outcomes remain contingent—shaped by organizational choices, policy frameworks, and design decisions that have yet to solidify into dominant patterns.


The AI Adoption and Usage Landscape

Rapid Growth in Enterprise and Consumer Adoption


Generative AI adoption accelerated dramatically in 2024. Enterprise ChatGPT messages increased eightfold over the previous year, while the consumer platform reached over 700 million weekly active users globally by mid-2025 (Chatterji et al., 2025). The gender gap that characterized early adoption has closed—women and men now use consumer AI tools at nearly equal rates, a remarkable shift from early 2023 when over 80% of users were male.


Investment continues to grow alongside usage. Global private investment in generative AI reached $33.9 billion in 2024, an 18.7% increase from 2023, complemented by rising public investment (Maslej et al., 2025). This capital influx reflects widespread conviction that AI represents a fundamental shift in competitive advantage across industries.


Usage patterns reveal which activities see the heaviest AI application. Analysis of ChatGPT conversations found that "Practical Guidance," "Seeking Information," and "Writing" account for approximately 80% of all usage (Chatterji et al., 2025). Research examining Claude usage showed that 37% of tasks were associated with computer and mathematical occupations (Handa et al., 2025). Analysis of Microsoft Bing Copilot logs revealed that learning, communicating, and writing activities dominate both user goals and AI actions, with knowledge workers in sales, computer occupations, media, and administrative roles showing the highest AI applicability (Tomlinson et al., 2025).


Geographic and Demographic Patterns in Adoption


While AI usage remains highest in high-income countries with advanced digital infrastructure, 2024 saw dramatic growth in low- and middle-income nations, narrowing the adoption gap (Microsoft AI Economy Institute, 2025; Chatterji et al., 2025). Attitudes vary considerably by region. Surveys show people in Asia and Latin America are more likely to agree that "products and services using AI have more benefits than drawbacks"—83% in China, 70% in Mexico—while agreement is lower in European and Anglosphere countries such as the United States (39%) and the Netherlands (36%) (Maslej et al., 2025).


Language support significantly influences adoption patterns. Countries with predominant languages well-served by existing models show higher usage rates. In countries where local languages receive limited model support, users sometimes conduct conversations in English at rates disproportionate to the population's English proficiency—a pattern observed in African and Asian countries but not Europe or the Americas (Slaughter & Daepp, in prep.).


Usage purposes also vary geographically. Among early adopters, LLM use for schooling increases with GDP per capita while leisure use decreases, potentially reflecting differences in school-age populations or available leisure time (Slaughter & Daepp, in prep.).


Within the United States, workplace AI adoption varies by occupation, industry, and demographic characteristics. A 2024 survey found men slightly more likely than women to use generative AI for work (29.1% versus 23.5%) (Bick et al., 2024). Industry leaders report highest usage and confidence in IT and Procurement functions, with Technology, Telecom, Professional Services, and Finance industries leading adoption (Korst et al., 2025).


Drivers and Barriers to Organizational Adoption


Successful organizational AI adoption depends as much on employees as on leadership. The intention to use AI is influenced by social norms learned from leaders and peers across industries (Kelly et al., 2023). Workers can be reluctant to adopt top-down mandated AI products that prioritize efficiency over quality and creativity, potentially undermining traditional views of humans as core value drivers. This reluctance limits pilot program success even when technical capabilities are strong (Young et al., 2025; Sharma, 2025).


Leaders facilitate adoption through clear communication supporting AI use, demonstrating their own learning, and setting realistic expectations about AI capabilities (Carter et al., 2024; Tursunbayeva & Chalutz-Ben Gal, 2024). AI products that integrate human thinking, creativity, and expertise while amplifying their value promote adoption without raising replacement concerns (Ali et al., 2025). Organizations that create systems and incentives for employees to share how they use AI with one another—allowing insights to emerge "from the edge, not the center"—see more organic adoption (Winsor, 2024).


Employees are more likely to experiment with AI and share insights when they feel psychologically safe and trust their organizations (Tursunbayeva & Chalutz-Ben Gal, 2024). Many workers, particularly Generation X, resist tools that force conformity to a single way of working, preferring products flexible enough to accommodate personal workflows (Rozsa et al., 2023).


Top management support, customer orientation, and industry social norms drive organizational adoption intensity (Chen & Tajdini, 2024). Organizations best positioned for AI adoption are innovative, experimental, learning-oriented, supportive, and collaborative (de Bellefonds et al., 2024; Sternfels & Atsmon, 2025). However, leaders face difficulties developing top-down AI strategies due to rapid technology diffusion, constant change, the need for alignment communication, competing priorities, and the challenge of reimagining workflows (Leonardi, 2023).


A comparative case study in the Dutch public sector identified organizational inflexibility, risk intolerance, and structural separation between exploration and exploitation teams as key barriers—for example, when data science teams explore AI without operational alignment or frontline support to scale ideas (Selten & Klievink, 2024).


The Critical Role of Worker Voice in AI Design


Centering worker perspectives in AI design yields better productivity, job satisfaction, and skill development outcomes while supporting both business success and worker flourishing. This principle has deep historical roots. Research from the 1930s through the present consistently shows that when workers' expertise and perspectives inform technology design and deployment, organizations achieve more sustainable improvements in productivity and wellbeing (Trist & Bamforth, 1951; Roethlisberger & Dickson, 1939; Hackman & Oldham, 1976).


Ethnographic and human-computer interaction research demonstrates that workers adapt technology in creative ways. Participatory design approaches—where workers serve as co-designers—result in tools that better fit real workflows and foster higher adoption (Suchman, 1987; Orr, 1996; Awumey et al., 2024). Combining technical and social science research methods can create AI systems that improve worker skills and satisfaction alongside accuracy by embedding human-centric metrics, workers' values, and skill-building into design (Bucinca, 2025).


Data-driven workplace monitoring presents mixed outcomes. While telemetry and algorithmic management can boost short-term output, they often increase stress and erode trust unless workers help define what gets measured and how data gets used (Pentland, 2012; Ajunwa, 2023). Governance with worker input produces better results than monitoring imposed from above.


Building Organizational AI Maturity


AI adoption and capability-building require organizational transformation (Kemp, 2024). The Responsible AI Organizational Maturity Model (RAI-OMM) provides a roadmap for organizations advancing their responsible AI strategy and practice (Heger et al., 2025). Based on interviews and co-design sessions with 90 RAI experts and practitioners, the model identifies 24 dimensions across three categories: Organizational Foundations, which require leadership commitment and organization-wide infrastructure investment; Team Approach, which highlights the necessity of cross-discipline collaboration; and RAI Practice, which the first two enable and which is characterized by deep integration into AI development and deployment processes.


RAI maturity requires leadership investment, aligned organizational practices, and holistic change management strategies addressing both technological and human dimensions (Duran, 2025; Wang et al., 2025). The RAI-OMM is forward-looking and best used for planning rather than evaluation, helping organizations map their current state and chart paths toward more mature practice.


Dimensions within the model are interdependent. Organizations cannot achieve mature practice in isolated areas—progress requires coordinated attention across governance, technical processes, workforce development, and cultural elements. This interdependence underscores why successful AI transformation extends far beyond technology implementation to encompass how people work together and how organizations learn.


Organizational and Individual Consequences of AI

Productivity Impacts: Time Savings and Output Quality


Surveyed ChatGPT Enterprise users attribute 40–60 minutes saved per day to AI use, though savings vary significantly by occupation and task (Chatterji et al., 2025). LLM-based estimates of time savings from Claude usage suggested savings of 80–85% for legal and management tasks but only 20% for diagnostic image checking (Tamkin & McCrory, 2025). This heterogeneity reflects differences in how amenable various activities are to AI assistance.


In controlled evaluations, frontier LLMs approached quality parity with human experts across predominantly digital occupations in high-value sectors. OpenAI designed 1,320 tasks mimicking real work products; the top model achieved win-plus-tie shares ranging from 33–56% across industries, with low tie rates indicating clear quality differentiation (Patwardhan et al., 2025).


Analysis of Microsoft Copilot usage reveals how time allocation shifts with AI assistance. Using WorkflowView—an LLM-powered system that categorizes telemetry action sequences into high-level workflow activities—researchers found an average difference of seven minutes per accepted Copilot output among 50,000 enabled Word users over one month. Copilot use was associated with a 10.7-minute difference in time spent editing content and a 0.6-minute difference in applying themes and styles (Verma & Counts, 2025). These variations can guide more effective integration of AI tools into productivity workflows.
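A WorkflowView-style categorization step can be sketched as a mapping from low-level telemetry actions to high-level workflow activities. The action names and lookup table below are illustrative assumptions (the actual system uses an LLM for categorization), not details from the source:

```python
# Minimal sketch of telemetry categorization: map low-level editor
# actions to high-level workflow activities, then tally per session.
# Action and activity names here are hypothetical stand-ins.
ACTIVITY_OF_ACTION = {
    "type_text": "editing content",
    "delete_text": "editing content",
    "apply_style": "applying themes and styles",
    "insert_comment": "reviewing",
}

def summarize_session(actions):
    """Tally action counts per high-level activity for one session."""
    totals = {}
    for action in actions:
        activity = ACTIVITY_OF_ACTION.get(action, "other")
        totals[activity] = totals.get(activity, 0) + 1
    return totals

print(summarize_session(["type_text", "delete_text", "apply_style"]))
# → {'editing content': 2, 'applying themes and styles': 1}
```

Aggregating such per-session summaries across users is what would allow before/after comparisons of time allocation like those reported above.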


The Workslop Problem: When AI Output Undermines Productivity


AI "workslop" refers to AI-generated work content that appears useful but lacks substance, is incomplete, or contains inaccuracies. Such content undermines productivity by forcing recipients to interpret, correct, or redo work (Niederhoffer et al., 2025; Madsen & Puyt, 2025). Workslop may explain why individual productivity gains do not consistently translate to group or organizational levels.


In a survey of 1,150 U.S. employees, 40% reported receiving workslop in the past month, with workslop estimated at 15% of all content received. Most slop flows between peers (40%), but it also moves upward (18%) and downward (16%) in hierarchies (Niederhoffer et al., 2025). This phenomenon is part of broader "slop" dynamics reshaping markets by flooding them with low-cost, low-quality content (Tullis, 2025; Pendergrass et al., 2025).


Technical solutions remain nascent. One approach focuses on judging information utility, information quality, and style quality (Shaib et al., 2025), ideally combined with accuracy verification accessing internal data or document repositories. Employee training on AI limitations and critical evaluation skills can reduce workslop by helping people identify and correct low-value outputs before they enter workflows (Park et al., 2025; Simkute et al., 2024).
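As a minimal sketch, a screening pass over the three dimensions named above (information utility, information quality, style quality) might apply a simple flagging policy before a draft enters a shared workflow. The scores would come from human raters or an LLM judge; the function name, threshold, and 1–5 scale are assumptions for illustration, not details from the cited work:

```python
from dataclasses import dataclass

# Rubric dimensions follow the three qualities discussed above; the
# threshold and 1-5 scale are illustrative assumptions.
RUBRIC = ("information_utility", "information_quality", "style_quality")

@dataclass
class SlopVerdict:
    scores: dict   # per-dimension scores on an assumed 1-5 scale
    flagged: bool  # True if the draft should be revised before sending

def screen_draft(scores: dict, threshold: float = 3.0) -> SlopVerdict:
    """Flag AI-generated drafts whose rubric scores suggest workslop.

    Applies a simple (assumed) policy: any dimension scoring below the
    threshold sends the draft back for revision.
    """
    missing = [d for d in RUBRIC if d not in scores]
    if missing:
        raise ValueError(f"missing rubric dimensions: {missing}")
    flagged = any(scores[d] < threshold for d in RUBRIC)
    return SlopVerdict(scores=scores, flagged=flagged)

# A draft with polished style but weak substance gets flagged.
verdict = screen_draft({"information_utility": 2,
                        "information_quality": 4,
                        "style_quality": 5})
print(verdict.flagged)  # → True
```

The point of such a gate is precisely the one made above: catching low-value output before recipients have to interpret, correct, or redo it.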


Labor Market Effects: Small Aggregate Impacts, Emerging Pressures on Early-Career Workers


Large-scale studies in Denmark and the United States find no significant AI effect on unemployment (Chen et al., 2025), working hours (Humlum & Vestergaard, 2025), or job openings (Hartley et al., 2025). Hiring of AI talent, however, has increased over 300% in the past eight years (LinkedIn, 2025).


Earnings results vary from slightly increased (Hartley et al., 2025)—especially for high AI-exposure occupations (Chen et al., 2025)—to no significant effects (Humlum & Vestergaard, 2025) to reduced salaries in high-wage occupations (Klein Teeselink, 2025). These mixed findings likely reflect heterogeneity across industries, occupations, and time periods.


Evidence of negative effects for younger workers is more consistent. Workers whose roles rely less on tacit experience are more vulnerable to automation and less shielded by firm-specific skills. Payroll data suggests employment for workers aged 22–25 in highly AI-exposed jobs fell approximately 13% compared to less-exposed roles, after controlling for firm-level shocks, remote work, and sector effects (Brynjolfsson et al., 2025). Resume and job posting evidence shows hiring for junior and entry-level roles slowing in exposed occupations after firms adopt AI (Hosseini & Lichtinger, 2025; Klein Teeselink, 2025).


Declines for younger workers may be offset by growth among older workers and in less-exposed occupations, suggesting redistribution rather than net job loss. These patterns align with theory suggesting wages most likely increase when automated tasks require less expertise than other activities within the same occupation (Autor & Thompson, 2025).


Career Paths and Skill Requirements Within Occupations


AI adoption reshapes career decisions and occupational mobility. Workers using AI chatbots are more likely to switch occupations (Humlum & Vestergaard, 2025). Search intensity for apprenticeships in cognitive and language-intensive fields declined after chatbot introduction, signaling shifts in career preferences (Goller et al., 2025).


Worker-level evidence from Germany shows AI exposure changes activity mix and required skills inside occupations. Unlike robots, AI reduces non-routine abstract tasks and increases demand for high-level routine tasks like oversight and evaluation (Engberg et al., 2025; Gathmann et al., 2024). AI adoption increases complexity in augmentation-prone roles while reducing skill requirements in automation-prone roles (Chen et al., 2024).


Roles requiring AI skills are nearly twice as likely to also request analytical thinking, resilience, ethics, or digital literacy. A doubling of AI-specific job postings associates with roughly 5% higher demand for these complementary skills, while demand for easily substitutable tasks like basic data skills or translation declines slightly (Mäkelä & Stephany, 2025). Job postings requiring AI skills are growing over 70% year-over-year, extending beyond technical roles (LinkedIn, 2025a; 2025b).


Workers exposed to AI gain most from retraining focused on broad skills rather than narrow AI-specific roles. Occupations exposed to AI show strong adaptive capacity, suggesting retraining can work if job loss occurs (Hyman et al., 2025; Manning & Aguirre, 2025). However, experimental evidence suggests that while generative AI can enable non-technical workers to perform technical tasks, these gains may be temporary and dependent on continued tool use—workers lose capability to perform those tasks once access ends, indicating no lasting skill development (Wiles et al., 2024).


Online Labor Markets: Declining Demand for Writing and Design, Rising Complexity


After ChatGPT's release, automation-prone clusters on freelance platforms saw larger declines relative to manual-intensive clusters: writing jobs declined approximately 30%, software/app/web approximately 21%, and engineering approximately 10%. Image-generating AI led to approximately 17% fewer posts in graphic design and 3D modeling (Demirci et al., 2025). Some studies report rising demand for web development (Qiao et al., 2024) or complementary clusters including AI-powered chatbot development and machine learning (Teutloff et al., 2025).


The remaining automation-prone openings are more complex and slightly higher paying, but competition intensified as more applicants apply per posting (Demirci et al., 2025; Liu et al., 2025). Freelancers who adopt AI tools or shift toward complementary skills and AI-related work maintain or expand their opportunities (Qiao et al., 2024). Similar to broader labor market findings, online platform data suggests slowing demand for junior and entry-level roles in exposed occupations alongside rising value of advanced skills, human judgment, and adaptability (Teutloff et al., 2025).


Generative AI lowered the cost of producing written content, undermining the signaling value of tailored written applications. Before LLM adoption, employers paid a premium for highly customized proposals—a premium that largely disappeared afterward. Structural estimates indicate top-quintile workers are hired less often while bottom-quintile hires increase, reducing overall matching efficiency (Galdin & Silbert, 2025).


Evidence-Based Organizational Responses

Table 1: Impact and Adoption Patterns of Generative AI in the Workplace

Metric or trend: 8-fold increase in enterprise messages
Affected group or sector: Enterprise ChatGPT users
Key findings: Adoption accelerated dramatically in 2024; the gender gap in usage has effectively closed.
Productivity or labor impact: Not in source
Organizational strategy recommendations: Prioritize employee engagement, trust, and participatory design over top-down mandates.
Barriers to success: Workslop (AI-generated content lacking substance); cognitive deskilling risks.

Metric or trend: 40 to 60 minutes saved daily
Affected group or sector: Knowledge workers (writing, coding, customer service)
Key findings: Individual enterprise users report significant time savings and quality improvements through AI use.
Productivity or labor impact: Substantial individual productivity gains across digital domains.
Organizational strategy recommendations: Shift focus from individual gains to collective productivity; navigate social processes and multiple perspectives.
Barriers to success: AI systems optimized for individuals do not automatically succeed in team settings.

Metric or trend: 40 percent of employees received workslop
Affected group or sector: U.S. employees
Key findings: Workslop accounts for an estimated 15% of received content, mostly flowing between peers.
Productivity or labor impact: Undermines productivity by forcing recipients to interpret, correct, or redo work.
Organizational strategy recommendations: Train employees on AI limitations and critical evaluation skills.
Barriers to success: Low-cost, low-quality content flooding markets; difficulty judging information utility.

Metric or trend: 13 percent employment decline
Affected group or sector: Early-career workers (aged 22–25)
Key findings: Workers in highly AI-exposed roles with less tacit experience face higher vulnerability to automation.
Productivity or labor impact: Reduction in hiring for junior and entry-level roles in exposed occupations.
Organizational strategy recommendations: Commit to reskilling and redeployment rather than layoffs; focus retraining on broad skills.
Barriers to success: Exposure to automation due to lack of firm-specific skills and tacit experience.

Metric or trend: 30 percent decline in writing jobs
Affected group or sector: Online freelance platforms
Key findings: Automation-prone clusters saw larger declines; graphic design posts fell 17%.
Productivity or labor impact: Decreased demand for easily substitutable tasks like basic data skills or translation.
Organizational strategy recommendations: Help workers shift toward complementary skills, such as AI-powered chatbot development and machine learning.
Barriers to success: Erosion of signaling value in written applications due to low-cost AI content.

Metric or trend: Not in source
Affected group or sector: Dutch public sector and organizations broadly
Key findings: Successful adoption is influenced by social norms and psychological safety.
Productivity or labor impact: Not in source
Organizational strategy recommendations: Use participatory design; create "learning zones"; implement responsible AI maturity models.
Barriers to success: Organizational inflexibility; risk intolerance; structural separation between exploration and exploitation teams.

Transparent Communication and Realistic Expectation-Setting


Organizations that clearly communicate AI's role, capabilities, and limitations create foundations for effective adoption. Leaders who demonstrate their own learning journey with AI—including acknowledging uncertainties and mistakes—build psychological safety for experimentation (Carter et al., 2024). Setting realistic expectations prevents the disappointment and disengagement that follow when AI underperforms inflated promises.


Effective approaches include:


  • Regular leadership communications about AI strategy, use cases, and evolving capabilities

  • Public learning sessions where leaders share their AI experimentation, including what worked and what did not

  • Explicit acknowledgment of AI limitations in specific contexts relevant to the organization's work

  • Channels for employee questions and concerns about AI's role in their work


Organizational Example: A professional services firm instituted monthly "AI learning hours" where partners shared their AI experiments across practice areas. Early sessions featured both successes (automating routine client research) and failures (hallucinated citations in legal briefs), creating norms around transparent experimentation and collective learning. Adoption rates increased significantly after these sessions began, particularly among initially skeptical practitioners.


Participatory Design and Worker-Centered Development


Involving workers in AI design and implementation decisions produces tools that better fit real workflows while increasing adoption and satisfaction. Participatory approaches range from structured co-design sessions to grassroots bottom-up sharing of discovered use cases.


Effective approaches include:


  • Worker representation on AI product selection committees to ensure frontline perspectives shape tool choices

  • Structured feedback mechanisms allowing workers to report AI system failures, suggest improvements, and share successful workflows

  • Internal AI use case repositories where employees document and share how they use AI, creating peer learning opportunities

  • Pilot programs with iterative refinement based on worker feedback before organization-wide rollout

  • Cross-functional design teams including workers from affected roles alongside technical staff


Organizational Example: A healthcare system formed interdisciplinary teams including nurses, physicians, administrative staff, and IT personnel to design AI-assisted documentation workflows. Early prototypes that looked efficient to developers proved clunky in actual clinical contexts—for example, requiring too many clicks during patient interactions. Worker feedback led to voice-activated shortcuts and ambient documentation approaches that physicians found far more natural, significantly increasing adoption.


Psychological Safety and Trust-Building


Employees experiment with AI and share insights when they feel safe making mistakes and trust organizational intentions. Organizations that punish AI-related errors or use AI monitoring in punitive ways suppress the experimentation needed to discover valuable applications.


Effective approaches include:


  • Explicit "learning zones" where AI experimentation carries no performance consequences

  • Transparent data governance making clear what AI systems observe, how data gets used, and who has access

  • Worker input on monitoring and evaluation metrics to ensure fairness and relevance

  • Commitment to reskilling and redeployment rather than layoffs when AI changes roles

  • Recognition and rewards for productive AI experimentation, not just successful outcomes


Organizational Example: A financial services company established "AI sandbox" environments where analysts could experiment with AI tools on anonymized data without results affecting performance evaluations. The company committed that no one would lose their job due to AI-driven efficiency gains during a three-year transition period, with displaced workers receiving retraining for higher-value analytical work. This commitment enabled honest conversations about which tasks AI could handle and which required human judgment.


Flexible Tools Accommodating Personal Workflows


Many workers, particularly experienced professionals, have developed personal workflows optimized over years. Rigid AI tools requiring conformity to a single process face resistance. Flexible systems that adapt to varied working styles see higher adoption.


Effective approaches include:


  • Customizable AI interfaces allowing users to adjust interaction patterns, output formats, and level of assistance

  • Multiple interaction modalities (chat, command line, API, GUI) serving different user preferences and contexts

  • Configurable autonomy levels enabling users to adjust how much initiative AI takes versus waiting for explicit direction

  • Integration with existing tools rather than requiring workflow disruption to use AI


Organizational Example: A software development company deployed coding assistants with configurable settings allowing developers to choose between "suggestion mode" (AI offers completion options the developer accepts or rejects), "collaborative mode" (iterative back-and-forth refinement), and "autonomous mode" (AI completes specified tasks end-to-end with human review). Different developers gravitated toward different modes based on task complexity, domain familiarity, and personal preference, with overall adoption higher than earlier one-size-fits-all pilot programs.
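One way such configurable autonomy might be represented is a simple mode setting coupled with a review policy. The mode names mirror the example above; the policy logic and function names are hypothetical illustrations, not the deployed system:

```python
from enum import Enum

class AutonomyMode(Enum):
    SUGGESTION = "suggestion"        # AI offers completions; developer accepts or rejects
    COLLABORATIVE = "collaborative"  # iterative back-and-forth refinement
    AUTONOMOUS = "autonomous"        # AI completes tasks end-to-end with human review

def requires_human_review(mode: AutonomyMode, change_is_large: bool) -> bool:
    """Illustrative policy: the more initiative the AI takes per step,
    the more formal review is needed afterward. Autonomous output always
    gets review; collaborative work gets review only for large changes;
    in suggestion mode, accepting the suggestion is itself the review."""
    if mode is AutonomyMode.AUTONOMOUS:
        return True
    if mode is AutonomyMode.COLLABORATIVE:
        return change_is_large
    return False

print(requires_human_review(AutonomyMode.AUTONOMOUS, False))  # → True
```

Making the autonomy level an explicit, user-adjustable setting is what lets different developers choose different modes per task, rather than being forced into one interaction pattern.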


Responsible AI Governance and Ethical Frameworks


Building organizational capacity for responsible AI development and deployment prevents harms while building stakeholder trust. Mature responsible AI practice requires more than policies—it demands leadership investment, cross-functional collaboration, and integration into product development processes.


Effective approaches include:


  • Dedicated responsible AI teams with authority to require changes before deployment

  • Impact assessments examining potential harms across stakeholder groups before deploying AI systems

  • Regular audits of deployed AI systems checking for drift, bias, and unintended consequences

  • Transparent documentation of AI system capabilities, limitations, and appropriate use cases

  • Mechanisms for recourse when AI systems produce harmful outputs

  • Investment in organizational learning about responsible AI principles and practices


Organizational Example: A technology company established a Responsible AI Office reporting directly to the CEO, with authority to delay product launches pending resolution of identified risks. The office developed streamlined impact assessment templates tailored to different AI application categories, making responsible AI review integral to the product development lifecycle rather than a bureaucratic obstacle. Teams received training in fairness, transparency, privacy, and reliability principles, with responsible AI metrics incorporated into performance evaluation for product managers and engineers.


Building Long-Term AI Capability and Governance

Continuous Learning and Skill Development


Organizations that treat AI capability-building as ongoing learning rather than one-time training create adaptive workforces prepared for rapid technological change. This requires investment in formal training, peer learning opportunities, and time for experimentation.


Effective approaches include:


  • Regular skill assessments identifying AI-related capability gaps across roles and levels

  • Diverse learning modalities including formal courses, peer learning communities, hands-on experimentation, and expert-led workshops

  • Career pathways incorporating AI skills so workers see capability development as advancement rather than just avoiding displacement

  • Dedicated time for learning rather than expecting skill development on top of full workloads

  • Internal expertise sharing platforms where workers document successful AI applications and lessons learned


Organizations must balance building general AI literacy (understanding capabilities, limitations, appropriate use) with role-specific technical skills. A customer service representative needs different AI competencies than a data scientist, but both benefit from understanding how AI systems work and where they struggle.


Distributed Leadership and Decision-Making Authority


Centralized AI strategies imposed from above often miss context-specific opportunities and constraints visible only at operational levels. Distributing decision-making authority while maintaining appropriate guardrails enables innovation "from the edge."


Effective approaches include:


  • Empowering teams to choose AI tools within defined parameters rather than mandating single solutions

  • Budget allocation for team-level AI experimentation without requiring extensive justification

  • Lightweight approval processes balancing autonomy with responsible oversight

  • Cross-team sharing mechanisms propagating successful innovations discovered at operational levels

  • Clear escalation paths for questions exceeding local authority


This approach does not mean abandoning governance. Organizations still need enterprise-wide standards for security, privacy, ethics, and interoperability. The balance lies in defining what requires central control versus what benefits from local discretion.


Purpose-Driven AI Integration Supporting Meaningful Work


Workers engage more deeply with AI when they understand how it serves broader organizational purposes and preserves aspects of work they find meaningful. AI implemented purely for efficiency often meets resistance; AI positioned as enabling workers to focus on higher-value contributions tends to see stronger adoption.


Effective approaches include:


  • Explicit connection between AI initiatives and organizational mission, showing how AI advances core purpose

  • Worker input on which tasks to automate or augment, respecting preferences about maintaining connection to certain activities

  • Recognition systems valuing judgment, creativity, and relational work that AI cannot easily replicate

  • Career development emphasizing uniquely human capabilities like empathy, ethical reasoning, and complex problem-solving

  • Regular dialogue about work's meaning and purpose as AI changes daily activities


Organizations risk eroding motivation and psychological wellbeing when AI removes aspects of work that provided autonomy, recognition, or connection—key elements of meaningful work (Bailey et al., 2019). Thoughtful AI integration preserves or enhances these dimensions rather than optimizing solely for measurable productivity.


Data and Model Stewardship


Organizations deploying AI systems have ongoing responsibilities for data governance, model maintenance, and continuous improvement. These stewardship functions require dedicated resources and clear accountability.


Effective approaches include:


  • Data quality monitoring ensuring training and operational data meet standards for accuracy, representativeness, and timeliness

  • Model performance tracking detecting drift, degradation, or emerging failure modes

  • Version control and change management maintaining clear records of model updates and their rationales

  • Stakeholder feedback loops incorporating user experiences and concerns into model refinement

  • Decommissioning processes for retiring AI systems that no longer serve intended purposes or create unacceptable risks


Organizations must resist treating AI systems as "set and forget" deployments. Continuous stewardship prevents the gradual emergence of problems that undermine trust and effectiveness.
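Model performance tracking of the kind described above often starts with a simple distribution-drift check on incoming data. As a minimal sketch (not any specific vendor's method), the population stability index (PSI) compares the feature distribution seen at deployment time against current operational data; the function name, bin count, and thresholds below are illustrative assumptions, with PSI below roughly 0.1 conventionally read as stable and above roughly 0.25 as a shift worth investigating.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Measure distribution shift of a numeric feature between a
    baseline sample and a current sample using quantile bins."""
    # Bin edges from baseline quantiles; widen the outer edges so
    # out-of-range current values still land in a bin.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid log(0) in empty bins.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # data profile at deployment
stable = rng.normal(0.0, 1.0, 10_000)    # new data, same distribution
shifted = rng.normal(0.8, 1.0, 10_000)   # new data after drift

print(population_stability_index(baseline, stable))   # small: stable
print(population_stability_index(baseline, shifted))  # large: investigate
```

In practice a check like this would run on a schedule for each monitored feature and model score, with alerts feeding the audit and change-management processes described above rather than triggering automatic rollbacks.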

Adaptive Governance Structures


The rapid pace of AI capability advancement demands governance frameworks that can evolve alongside technology. Static policies written for today's capabilities may prove inadequate or overly restrictive as systems improve.


Effective approaches include:


  • Regular policy review cycles triggered by capability milestones or emerging use cases

  • Scenario planning exploring potential implications of advancing AI capabilities

  • External advisory input bringing diverse perspectives on emerging AI governance challenges

  • Flexible frameworks stating principles and goals rather than rigid procedural requirements

  • Experimentation mechanisms allowing controlled trials of new AI applications with defined learning objectives


Organizations should view governance as enabling rather than constraining AI value creation. Well-designed governance clarifies boundaries within which innovation can proceed confidently while preventing harmful applications.


Conclusion

The evidence synthesized in this report reveals AI's transformative potential alongside significant implementation challenges. Individual productivity gains are real and substantial—workers save time, produce higher-quality outputs, and access capabilities previously requiring extensive expertise. These individual benefits do not yet consistently translate to team and organizational levels, where social dynamics, coordination requirements, and collective intelligence present distinct challenges.


Successful organizational AI adoption depends critically on centering worker voice, building trust, enabling flexible workflows, and viewing technology implementation as sociotechnical transformation rather than pure technical deployment. Organizations that mandate AI use from above, ignore worker concerns, or optimize narrowly for efficiency metrics struggle with resistance and limited adoption. Those that invest in participatory design, transparent communication, psychological safety, and continuous learning create conditions for sustainable productivity gains.


Labor market impacts remain modest in aggregate but show concerning patterns of pressure on early-career workers in exposed occupations. Whether these early signals foreshadow broader disruption or represent transitional friction depends substantially on organizational and policy choices not yet made. The path forward is not technologically determined—human decisions about whether to prioritize automation versus augmentation, how to structure work, and how to distribute AI-enabled gains will shape outcomes.


Design challenges around human-AI collaboration demand sustained attention. Establishing common ground, managing proactivity, supporting cognitive engagement rather than passive reliance, and building AI systems that understand team dynamics all require advances beyond current capabilities. The transition from reactive AI assistants to proactive team members raises fundamental questions about turn-taking, shared understanding, long-horizon alignment, and appropriate evaluation metrics.


Critical risks warrant attention: cognitive deskilling when AI handles tasks that previously built expertise; psychological dependence on AI companions; erosion of social connection when AI mediates too much human interaction; concentration of economic gains absent deliberate policy intervention; and the potential for AI-accelerated research to create substitution dynamics that diminish human contributions. These risks are not inevitable—thoughtful design and governance can mitigate them—but require explicit prioritization alongside productivity metrics.


The future of work with AI will be shaped not by technology alone but by conscious choices about how we design systems, structure organizations, develop policy frameworks, and define success. The transition from individual to collective productivity represents not just a technical scaling challenge but an opportunity to reimagine work in ways that expand human capability, preserve meaningful engagement, and distribute benefits broadly. Achieving that future requires moving beyond viewing AI as a tool that replaces human labor toward understanding it as a medium through which human intelligence can be amplified, connected, and applied to challenges previously beyond reach.



References

  1. Abercrombie, G., Cercas Curry, A., Dinkar, T., & Rieser, V. (2023). Mirages: On anthropomorphism in dialogue systems. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 4776–4790.

  2. Abdelnabi, S., Gomaa, A., Sivaprasad, S., Schönherr, L., & Fritz, M. (2024). Cooperation, competition, and maliciousness: LLM-stakeholders interactive negotiation. Advances in Neural Information Processing Systems, 37.

  3. Abdulhamid, N. G., Muchai, M., Singh, N., & O'Neill, J. (2025). Advancing AI to meet needs of the global majority. Microsoft Research Blog.

  4. Acemoglu, D., & Restrepo, P. (2018). Artificial intelligence, automation and work. National Bureau of Economic Research Working Paper 24196.

  5. Agrawal, A., Gans, J., & Goldfarb, A. (2025a). The economics of bicycles for the mind: Artificial intelligence, computers, and the division of labor. SSRN Working Paper.

  6. Agrawal, A., Gans, J., & Goldfarb, A. (2025b). Genius on demand: The value of transformative artificial intelligence. National Bureau of Economic Research Working Paper.

  7. Agrawal, A., Gans, J., & Goldfarb, A. (2025). AI in science. National Bureau of Economic Research Working Paper.

  8. Ajunwa, I. (2023). The quantified worker. Cambridge University Press.

  9. Ali, D., Dennison, M., & Rajpurkar, P. (2025). AI adoption across mission-driven organizations. arXiv preprint arXiv:2501.00000.

  10. Allan, B. A., Batz-Barbarich, C., Sterling, H. M., & Tay, L. (2019). Outcomes of meaningful work: A meta-analysis. Journal of Management Studies, 56(3), 500–528.

  11. Allouah, A., Cremer, D., McInerney, J., & Gunawardana, A. (2025). What is your AI agent buying? Evaluation, implications, and emerging questions for agentic e-commerce. arXiv preprint arXiv:2501.00000.

  12. Alsheiabni, S., Cheung, Y., & Messom, C. (2019). Towards an artificial intelligence maturity model: From science fiction to business facts. Pacific Asia Conference on Information Systems Proceedings.

  13. Alsobay, M., Sarrafzadeh, B., Suh, J., & Jaffe, S. (2025). Bringing everyone to the table: An experimental study of LLM-facilitated group decision making. arXiv preprint arXiv:2501.00000.

  14. Amazon. (2025). Amazon's next-gen AI assistant for shopping is now even smarter, more capable, and more helpful. Amazon Press Release.

  15. Anthis, J. R., Critch, A., & Seshia, S. A. (2025). Position: LLM social simulations are a promising research method. International Conference on Machine Learning.

  16. Appel, R., Quinlan, M., & Smith, J. (2025). The Anthropic economic index report: Uneven geographic and enterprise AI adoption. Anthropic Research.

  17. Atmakuri, S., Weld, D., & Horvitz, E. (2025). Making AI citations count with Asta. AI2 Blog.

  18. Autor, D. (2022). The labor market impacts of technological change: From unbridled enthusiasm to qualified optimism to vast uncertainty. National Bureau of Economic Research Working Paper.

  19. Autor, D. (2024). AI could actually help rebuild the middle class. Noema Magazine.

  20. Autor, D., & Salomons, A. (2018). Is automation labor-displacing? Productivity growth, employment and the labor share. National Bureau of Economic Research Working Paper.

  21. Autor, D., & Thompson, N. (2025). Expertise. National Bureau of Economic Research Working Paper.

  22. Awumey, E., Birchfield, S., & Kamar, E. (2024). A systematic review of biometric monitoring in the workplace: Analyzing sociotechnical harms in development, deployment and use. ACM Conference on Fairness, Accountability, and Transparency.

  23. Baek, J., Kim, Y., & Lee, S. (2025). ResearchAgent: Iterative research idea generation over scientific literature with large language models. Annual Meeting of the Association for Computational Linguistics.

  24. Bailey, C., Madden, A., Alfes, K., & Fletcher, L. (2019). A review of the empirical literature on meaningful work: Progress and research agenda. Human Resource Development Review, 18(1), 83–113.

  25. Baird, M. (2024). Early evidence on the impact of generative AI on software engineer's employment outcomes. LinkedIn Economic Graph Report.

  26. Ball, P. (2025). Is AI leading to a reproducibility crisis in science? Nature News.

  27. Bangerl, M. M., Reinhardt, D., & Döllinger, N. (2025). CreAItive collaboration? Users' misjudgment of AI-creativity affects their collaborative performance. CHI Conference on Human Factors in Computing Systems.

  28. Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2021). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 42(2), 193–210.

  29. Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., & Horvitz, E. (2024). Challenges in human-agent communication. arXiv preprint arXiv:2402.00000.

  30. Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2025). Magentic marketplace: An open-source environment for studying agentic markets. arXiv preprint arXiv:2501.00000.

  31. Barb, A., Weyns, D., & Bencomo, N. (2014). A statistical study of the relevance of lines of code measures in software projects. Innovations in Systems and Software Engineering, 10(4), 243–260.

  32. Bardram, J., Houben, S., Nielsen, S. S., & Eagan, J. R. (2019). Activity-centric computing systems. Communications of the ACM, 62(9), 72–82.

  33. Bell, A., Solatorio, A. V., Robinson, C., Klemmer, K., Kerner, H., Ignatov, A., ... & Dodhia, R. (2025). Earth AI: Unlocking geospatial insights with foundation models and cross-modal reasoning. arXiv preprint arXiv:2501.00000.

  34. Benne, K. D., & Sheats, P. (1948). Functional roles of group members. Journal of Social Issues, 4(2), 41–49.

  35. Benzing, M., Butler, J., & Teevan, J. (2025). 2025 agentic teaming and trust report. Microsoft Corporation.

  36. Bick, A., Blandin, A., & Mertens, K. (2024). The rapid adoption of generative AI. National Bureau of Economic Research Working Paper.

  37. Blau, W., Jaffe, S., & Horvitz, E. (2024). Protecting scientific integrity in an age of generative AI. Proceedings of the National Academy of Sciences, 121(32).

  38. Blondeel, M., Nori, H., & King, N. (2025). Demo: Healthcare agent orchestrator (HAO) for patient summarization in molecular tumor boards. arXiv preprint arXiv:2501.00000.

  39. Bo, X., Chen, A., Zhang, H., Ji, Y., & Sun, L. (2024). Reflective multi-agent collaboration based on large language models. Advances in Neural Information Processing Systems, 37.

  40. Brodeur, P. G., Nori, H., King, N., McKinney, S. M., Wangia, R., Yao, S., ... & Savage, N. (2025). Superhuman performance of a large language model on the reasoning tasks of a physician. arXiv preprint arXiv:2501.00000.

  41. Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66–77.

  42. Brynjolfsson, E. (2022). The Turing trap: The promise and peril of human-like artificial intelligence. Daedalus, 151(2), 272–287.

  43. Brynjolfsson, E., & Hitzig, Z. (2025). AI's use of knowledge in society. National Bureau of Economic Research Working Paper.

  44. Brynjolfsson, E., Horton, J., Ozimek, A., Rock, D., Sharma, G., & TuYe, H. (2025). Canaries in the coal mine? Six facts about the recent employment effects of artificial intelligence. Working Paper.

  45. Brynjolfsson, E., Rock, D., & Syverson, C. (2021). The productivity J-curve: How intangibles complement general purpose technologies. American Economic Journal: Macroeconomics, 13(1), 333–372.

  46. Brynjolfsson, E., Li, D., & Raymond, L. R. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(1), 1–60.

  47. Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., ... & Zhang, Y. (2025). Early science acceleration experiments with GPT-5. Working Paper.

  48. Bucinca, Z. (2025). Worker-centric AI for decision-support. Harvard University Dissertation.

  49. Bucinca, Z., Cheng, M., Gajos, K. Z., & Veselovsky, V. (2025). Contrastive explanations that anticipate human misconceptions can improve human decision-making skills. CHI Conference on Human Factors in Computing Systems.

  50. Budzyń, K., Januszewicz, W., Wieszczy, P., Rupinski, M., Pisera, M., Krawczyk, M., ... & Kaminski, M. F. (2025). Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study. The Lancet Gastroenterology & Hepatology, 10(2), 123–131.

  51. Buijsman, S., De Graaf, M. M. A., & Loi, M. (2025). Autonomy by design: Preserving human autonomy in AI decision-support. Philosophy & Technology, 38(1).

  52. Burton, J. W., Stein, M. K., & Jensen, T. B. (2024). How large language models can reshape collective intelligence. Nature Human Behaviour, 8, 1643–1655.

  53. Butler, J., Cui, Y., Jaffe, S., & Teevan, J. (2025). Product manager practices for delegating work to generative AI. Internal Report.

  54. Butler, J., Drosos, I., Sarkar, A., & Teevan, J. (2026). 8 myths on software engineering and AI. In Preparation.

  55. Cambon, A., & Farach, A. (2025). Human agent teaming: Practice lessons from field experimentation. Internal Microsoft Report.

  56. Cao, R., Lu, Y., Liu, Q., Chilton, L. B., & Bigham, J. P. (2025). Generative and malleable user interfaces with generative and evolving task-driven data model. CHI Conference on Human Factors in Computing Systems.

  57. Carter, J., Edmondson, A., & Pisano, G. (2024). To succeed with AI, adopt a beginner's mindset. Harvard Business Review.

  58. Castañeda, C., Chew, B., Chang, A., & Gajos, K. Z. (2025). Supporting AI-augmented meta-decision making with InDecision. arXiv preprint arXiv:2501.00000.

  59. Celis, L. E., Liang, A., & Shadmehr, M. (2025). A mathematical framework for AI-human integration in work. arXiv preprint arXiv:2501.00000.

  60. Cha, I., & Wong, R. Y. (2025). Understanding socio-technical factors configuring AI non-use in UX work practices. CHI Conference on Human Factors in Computing Systems.

  61. Chandar, B. (2025). Tracking employment changes in AI-exposed jobs. Working Paper.

  62. Chandra, M., Daepp, M., Lfever, K., & Baym, N. (2024). Longitudinal study on social and emotional use of AI conversational agent. Internal Microsoft Report.

  63. Chandra, M., Daepp, M., Linden, S., Passi, S., & Vorvoreanu, M. (2025). From lived experience to insight: Unpacking the psychological risks of using AI conversational agents. ACM Conference on Fairness, Accountability, and Transparency.

  64. Chatterji, A., Li, D., & Raymond, L. (2025). How people use ChatGPT. National Bureau of Economic Research Working Paper.

  65. Chatterji, R., Altman, S., & Brockman, G. (2025). The state of enterprise AI. OpenAI Report.

  66. Chen, D., Saridakis, G., Goel, S., & Kontonikas, A. (2025). The (short-term) effects of large language models on unemployment and earnings. arXiv preprint arXiv:2501.00000.

  67. Chen, J., & Tajdini, S. (2024). A moderated model of artificial intelligence adoption in firms and its effects on their performance. Information Technology & Management, 25, 291–310.

  68. Chen, J., Wang, R., Wang, J., & Liu, Y. (2024). LLMArena: Assessing capabilities of large language models in dynamic multi-agent environments. Annual Meeting of the Association for Computational Linguistics.

  69. Chen, W., Liu, B., Li, Y., Huang, D., & Wang, Z. (2025). EchoMind: Supporting real-time complex problem discussions through human-AI collaborative facilitation. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  70. Chen, W. X., Huang, Z., & Kumar, A. (2024). Displacement or complementarity? The labor market impact of generative AI. Harvard Business School Working Paper.

  71. Chen, X., Teevan, J., Ringel Morris, M., & Fourney, A. (2025a). Are we on track? AI-assisted active and passive goal reflection during meetings. CHI Conference on Human Factors in Computing Systems.

  72. Cheng, M., Durmus, E., & Jurafsky, D. (2025a). Dehumanizing machines: Mitigating anthropomorphic behaviors in text generation systems. Annual Meeting of the Association for Computational Linguistics.

  73. Cheng, M., Durmus, E., & Jurafsky, D. (2025b). "I am the one and only, your cyber BFF": Understanding the impact of GenAI requires understanding the impact of anthropomorphic AI. International Conference on Learning Representations.

  74. Cheng, R., McNutt, A., & Hearst, M. (2024). BISCUIT: Scaffolding LLM-generated code with ephemeral UIs in computational notebooks. arXiv preprint arXiv:2409.00000.

  75. Chiang, C. W., Lu, Z., Li, Z., & Yin, M. (2024). Enhancing AI-assisted group decision making through LLM-powered devil's advocate. ACM International Conference on Intelligent User Interfaces.

  76. Chitale, P., Guduru, M., Muchai, M., Sitaram, S., & O'Neill, J. (2025). The role of synthetic data in multilingual, multi-cultural AI systems: Lessons from Indic languages. arXiv preprint arXiv:2501.00000.

  77. Choudhuri, R., Drosos, I., Butler, J., & Teevan, J. (2025). AI where it matters: Where, why, and how developers want AI support in daily work. arXiv preprint arXiv:2501.00000.

  78. Clark, H. H. (1996). Using language. Cambridge University Press.

  79. Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). American Psychological Association.

  80. Colbert, A., Bono, J., & Purvanova, R. (2016). Flourishing via workplace relationships: Moving beyond instrumental support. Academy of Management Journal, 59(4), 1199–1223.

  81. Colombatto, C., Hernandez, J., Nushi, B., Inkpen, K., & Suh, J. (2025). Metacognition and confidence dynamics in advice taking from generative AI. arXiv preprint arXiv:2501.00000.

  82. Crowston, K., & Bolici, F. (2025). Deskilling and upskilling with AI systems. Information Research, 30(1).

  83. Cui, Y., Che, S., Dillon, E., Edwards, J., Kuehl, K., Tankelevitch, L., & Teevan, J. (2024). The effects of generative AI on high-skilled work: Evidence from three field experiments with software developers. SSRN Working Paper.

  84. Daepp, M., Tomlinson, K., Linden, S., & Hecht, B. (2025). AI and the democratization of knowledge work. Under Review.

  85. Daley, M. (2025). AI-accelerated research and university labor: A simple model of metric-driven substitution. Working Paper.

  86. Dang, H., Goller, S., Lehmann, F., & Buschek, D. (2025). CorpusStudio: Surfacing emergent patterns in a corpus of prior work while writing. CHI Conference on Human Factors in Computing Systems.

  87. Danry, V., Luger, E., Capecci, E., & Mueller, F. (2023). Don't just tell me, ask me: AI systems that intelligently frame explanations as questions improve human logical discernment accuracy over causal AI explanations. CHI Conference on Human Factors in Computing Systems.

  88. Das, D., Sarrafzadeh, B., Das Swain, V., & Teevan, J. (in prep.). Not just my style: Toward contextual personalization in AI-assisted workplace communication.

  89. Das Swain, V., Sellen, A., Lindley, S., Nushi, B., Das, S., Andersen, E., ... & McDuff, D. (2025). AI on my shoulder: Supporting emotional labor in front-office roles with an LLM-based empathetic coworker. CHI Conference on Human Factors in Computing Systems.

  90. de Bellefonds, N., Chin, A., Chinn, D., Hill, A., & Singla, A. (2024). Where's the value in AI? Boston Consulting Group Report.

  91. de Freitas, J. (2025). Why people resist embracing AI. Harvard Business Review.

  92. de Freitas, J., Agrawal, S., Boghrati, R., Kim, J., & DeScioli, P. (Forthcoming). AI companions reduce loneliness. Journal of Consumer Research.

  93. DeepSeek-AI. (2025). DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.00000.

  94. Deineha, O., Yaremenko, O., & Lehky, O. (2025). Human–AI cooperation in pair programming (GPT-4 vs Code Llama). Information Technology and Computer Engineering, 59(1), 45–55.

  95. del Rio-Chanona, R. M., Mealy, P., Beguerisse-Diaz, M., Lafond, F., & Farmer, J. D. (2025). AI and jobs: A review of theory, estimates, and evidence. arXiv preprint arXiv:2501.00000.

  96. Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., ... & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Working Paper.

  97. Dell'Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., ... & Lakhani, K. (2025). The cybernetic teammate: A field experiment on generative AI reshaping teamwork and expertise. Harvard Business School Working Paper.

  98. Demirci, O., Hannane, J., & Zhu, X. (2025). Who is AI replacing? The impact of generative AI on online freelancing platforms. Management Science, Forthcoming.

  99. Demirer, M., Kaur, H., & Brynjolfsson, E. (2025). The economic impacts of generative AI on the structure of work. Working Paper.

  100. DeSimone, M., Alwang, J., & Phiri, M. (2025). From chalkboards to chatbots: Evaluating the impact of generative AI on learning outcomes in Nigeria. World Bank Group Working Paper.

  101. DeVrio, A., Olteanu, A., & Cheng, M. (2025). A taxonomy of linguistic expressions that contribute to anthropomorphism of language technologies. CHI Conference on Human Factors in Computing Systems.

  102. Dilhara, M., Ketkar, A., & Dig, D. (2024). Unprecedented code change automation: The fusion of LLMs and transformation by example. ACM Transactions on Software Engineering and Methodology, 33(8), 1–42.

  103. Dillon, E., Tankelevitch, L., Suh, J., Nand, A., Cheng, S., Bailey, A., ... & Teevan, J. (2025). Early impact of M365 Copilot. arXiv preprint arXiv:2501.00000.

  104. Doblinger, M. (2023). Autonomy and engagement in self-managing organizations: Exploring the relations with job crafting, error orientation and person–environment fit. Frontiers in Psychology, 14.

  105. Doellgast, V., Pulignano, V., Lillie, N., & Watt, A. (2025). Global case studies of social dialogue on AI and algorithmic management. International Labour Organization Report.

  106. Doherty, E., Scott, A., & Teevan, J. (in prep.). AI-assisted support for intentionality in workplace meetings.

  107. Doss, C. J., Schwartz, H. L., Kaufman, J. H., Grant, D., Pane, J. D., Diliberti, M. K., ... & Setodji, C. M. (2025). AI use in schools is quickly increasing but guidance lags behind. RAND Corporation Report.

  108. Drage, E., Stephansen, H. C., Dencik, L., & Eslen-Ziya, H. (2024). Engineers on responsibility: Feminist approaches to who's responsible for ethical AI. Ethics and Information Technology, 26, 24.

  109. Dreyfuss, B., & Raux, R. (2025). Human learning about AI. ACM Conference on Economics and Computation.

  110. Drosos, I., Cheng, A., Guo, E., Sarkar, A., & Teevan, J. (2025). Dynamic prompt middleware: Contextual prompt refinement controls for comprehension tasks. CHI Conference on Human-Computer Interaction with Mobile Devices and Services: Work in Progress.

  111. Duran, J., Wang, A., Kocielnik, R., Nushi, B., Kamar, E., & Andersen, E. (2025). RAI advocacy: Communicative strategies for advancing responsible AI in large technology companies. AAAI/ACM Conference on AI, Ethics, and Society.

  112. Eckhardt, S., & Goldschlag, N. (2025). AI and jobs: The final word (until the next one). Economic Innovation Group Report.

  113. Ehn, P. (1993). Scandinavian design: On participation and skill. In D. Schuler & A. Namioka (Eds.), Participatory design: Principles and practices (pp. 41–77). Lawrence Erlbaum Associates.

  114. Eisikovits, N., Egan, P., & Wicks, P. (2025). Should accountants be afraid of AI? Risks and opportunities of incorporating artificial intelligence into accounting and auditing. Accounting Horizons, 39(1), 1–21.

  115. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2024). GPTs are GPTs: Labor market impact potential of LLMs. Science, 384(6698), 1306–1311.

  116. Emnett, C. Z., Park, H. W., & Howard, A. M. (2024). Using robot social agency theory to understand robots' linguistic anthropomorphism. ACM/IEEE International Conference on Human-Robot Interaction.

  117. Engberg, E., Dengler, K., & Matthes, B. (2025). Artificial intelligence, tasks, skills and wages: Worker-level evidence from Germany. Research Policy, 54(1), 104898.

  118. Ersanlı, C. Y., Sunar, D., & Yavuz, B. (2025). A review of global reskilling and upskilling initiatives in the age of AI. AI and Ethics, 5, 95–112.

  119. Everett, S. S., Nori, H., Goh, E., Welch, B. M., Corsino, L., Butte, A. J., ... & Savage, N. (2025). From tool to teammate: A randomized controlled trial of clinician–AI collaborative workflows for diagnosis. medRxiv Working Paper.

  120. FAIR (Meta Fundamental AI Research Diplomacy Team). (2022). Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624), 1067–1074.

  121. Fan, G., Wu, D., Yu, Z., & Zhao, L. (2025). The impact of AI-assisted pair programming on student motivation, programming anxiety, collaborative learning, and programming performance. International Journal of STEM Education, 12(1), 3.

  122. Fang, C. M., Esposito, N., & Hinds, P. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal randomized controlled study. arXiv preprint arXiv:2503.17473.

  123. Filimonovic, D., Kosec, K., & Niño-Zarazúa, M. (2025). Can GenAI improve academic performance? arXiv preprint arXiv:2501.00000.

  124. Fok, R., Kaur, H., Palani, S., Weir, D., & Hearst, M. (2025). Toward living narrative reviews: An empirical study of the processes and challenges in updating survey articles in computing research. CHI Conference on Human Factors in Computing Systems.

  125. Friis, S., & Riley, J. W. (2025). Performance or principle: Resistance to artificial intelligence in the U.S. labor market. Harvard Business School Working Paper.

  126. Gai, P., Snellman, K., & Pierson, E. (2025). Competence penalty is a barrier to the adoption of new technology. SSRN Working Paper.

  127. Gajos, K. Z., & Mamykina, L. (2022). Do people engage cognitively with AI? Impact of AI assistance on incidental learning. ACM International Conference on Intelligent User Interfaces.

  128. Gal, K., & Grosz, B. J. (2022). Multi-agent systems: Technical and ethical challenges of functioning in a mixed group. In M. Christen, B. Gordijn, & M. Loi (Eds.), Ethics and AI (pp. 127–148). Oxford University Press.

  129. Galdin, J., & Silbert, S. (2025). Making talk cheap: Generative AI and labor market signaling. Working Paper.

  130. Galindez-Acosta, J. S., & Giraldo-Huertas, J. J. (2025). Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance. arXiv preprint arXiv:2501.00000.

  131. Gathmann, C., Helm, I., & Schönberg, U. (2024). AI, task changes in jobs, and worker reallocation. CESifo Working Paper.

  132. Geng, F., Huang, W., Liao, Y., & Arawjo, I. (2025). Exploring student-AI interactions in vibe coding. arXiv preprint arXiv:2501.00000.

  133. Georganta, E., & Ulfert, A. (2024). Would you trust an AI team member? Team trust in human-AI teams. Journal of Occupational and Organizational Psychology, 97(2), 612–635.

  134. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.

  135. Giering, O., & Kirchner, S. (2025). Artificial intelligence and autonomy at work: Empirical insights from Germany. Journal for Labour Market Research, 59, 3.

  136. Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. University of Melbourne/KPMG Report.

  137. Gimbel, M., Pardue, A., & Kearney, M. (2025). Evaluating the impact of AI on the labor market: Current state of affairs. The Budget Lab Report.

  138. GitHub. (2025). Spec-driven development with AI: Get started with a new open source toolkit. The GitHub Blog.

  139. Gmeiner, M., Suh, J., & Lu, Z. (2025). Exploring the potential of metacognitive support agents for human-AI co-creation. ACM Designing Interactive Systems Conference.

  140. Goh, E., Gallo, R., Hom, J., Brown-Johnson, C., Nath, A., Gallo, L. C., ... & Chen, J. H. (2025). GPT-4 assistance for improvement of physician performance on patient care tasks: A randomized controlled trial. Nature Medicine, 31(1), 156–164.

  141. Goller, D., Heim, S., & Kuom, D. (2025). This time it's different – Generative artificial intelligence and occupational choice. Labour Economics, 85, 102574.

  142. Gomez Schieber, E. A., Saridakis, K. M., Koh, A., & Bernstein, M. S. (2025). Attorneys and AI: How lawyers use artificial intelligence and analyze its impacts. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  143. Greenwood, S., Hadfield-Menell, D., & Noti, G. (2025). Designing algorithmic delegates: The role of indistinguishability in human-AI handoff. ACM Conference on Economics and Computation.

  144. Grinschgl, S., & Neubauer, A. C. (2023). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Psychology, 14, 1075786.

  145. Gröpler, R., Agarwal, C., Buchgeher, G., Ehrlinger, L., Fill, H. G., Haselböck, A., ... & Wöß, W. (2025). The future of generative AI in software engineering: A vision from industry and academia in the European GENIUS project. arXiv preprint arXiv:2501.00000.

  146. Guduru, M., Muchai, M., Singh, N., Sitaram, S., & O'Neill, J. (2025). BhashaKritika: Building synthetic pretraining data at scale for Indic languages. arXiv preprint arXiv:2501.00000.

  147. Haase, J., & Pokutta, S. (2024). Human-AI co-creativity: Exploring synergies across levels of creative collaboration. arXiv preprint arXiv:2412.00000.

  148. Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16(2), 250–279.

  149. Hadfield, G. K., & Koh, A. (2025). An economy of AI agents. Communications of the ACM, 68(2), 56–63.

  150. Hadley, C., & Wright, S. (Forthcoming). Using AI to build human connection at work. Harvard Business Review.

  151. Hamna, Muchai, M., Singh, N., Abdulhamid, N., Pandya, S., Sitaram, S., & O'Neill, J. (2025). Building benchmarks from the ground up: Community-centered evaluation of LLMs in healthcare chatbot settings. arXiv preprint arXiv:2501.00000.

  152. Hammond, L., Barnhart, J., Barnhart, K., Barnes, E., Belfield, H., Bengio, Y., ... & Shevlane, T. (2025). Multi-agent risks from advanced AI. Cooperative AI Foundation Report.

  153. Handa, K., Kreps, S., McCain, R., Schaekermann, M., Veselovsky, V., Schoelkopf, H., ... & Tetlock, P. (2025). Which economic tasks are performed with AI? Evidence from millions of Claude conversations. arXiv preprint arXiv:2501.00000.

  154. Hansen, H., Lundberg, L., Sylvestersen, J., Gøtzsche, M., & Gøtzsche, A. (2024). Understanding artificial intelligence diffusion through an AI capability maturity model. Information Systems Frontiers, 26, 1307–1330.

  155. Hartley, J., Huang, K., Kaplan, G., & Pivonka, M. (2025). The labor market effects of generative artificial intelligence. SSRN Working Paper.

  156. He, K., Liang, A., & Zhang, H. (2025). Human misperception of generative-AI alignment: A laboratory experiment. ACM Conference on Economics and Computation.

  157. Heger, A., Wang, A., Duran, J., Andersen, E., Kocielnik, R., Kamar, E., & Nushi, B. (2025). Towards a responsible AI organizational maturity model. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  158. Heigl, R. (2025). Generative artificial intelligence in creative contexts: A systematic review and future research agenda. Management Review Quarterly, 75, 37–74.

  159. Herrmann, T., & Pfeiffer, S. (2023). Keeping the organization in the loop: A socio-technical extension of human-centered artificial intelligence. AI & Society, 38, 1523–1542.

  160. Heyman, J., Ross, M., Schaekermann, M., Bernstein, M., & Malone, T. (2024). Supermind ideator: How scaffolding human-AI collaboration can increase creativity. ACM Collective Intelligence Conference.

  161. Holzinger, A., Dehmer, M., Emmert-Streib, F., & Cucchiara, R. (2024). Is human oversight to AI systems still possible? New Biotechnology, 79, 1–10.

  162. Hosseini, M., Collins, L., & Moshontz, H. (2025). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 21(1), 3–27.

  163. Hosseini, S. M., & Lichtinger, G. (2025). Generative AI as seniority-biased technological change: Evidence from U.S. résumé and job posting data. SSRN Working Paper.

  164. Houtti, M., Chilton, L., & Kim, J. (2025). Observe, ask, intervene: Designing AI agents for more inclusive meetings. CHI Conference on Human Factors in Computing Systems.

  165. Howcroft, A., McCreaddie, M., Carduff, E., Sivakumar, S., & Hanna, K. (2025). AI chatbots versus human healthcare professionals: A systematic review and meta-analysis of empathy in patient care. British Medical Bulletin, 149(1), 54–67.

  166. Huang, S., Wang, H., Sun, Y., Yang, H., & Gao, F. (2024). AI technology panic—Is AI dependence bad for mental health? A cross-lagged panel model and the mediating roles of motivations for AI use among adolescents. Psychology Research and Behavior Management, 17, 2753–2767.

  167. Huang, T., Tankelevitch, L., Singh, A., Williams, J., & Lu, Y. (2025). Teaching language models to gather information proactively. Conference on Empirical Methods in Natural Language Processing.

  168. Humlum, A., & Vestergaard, E. (2025). Large language models, small labor market effects. National Bureau of Economic Research Working Paper.

  169. Hung, J., Niu, X., & Zhao, W. (2024). Socratic mind: Scalable oral assessment powered by AI. ACM Conference on Learning at Scale.

  170. Hyman, B., Martinez, J., & Hadavand, A. (2025). How retrainable are AI-exposed workers? Working Paper.

  171. IBM Institute for Business Value. (2025). 5 mindshifts to supercharge business growth. IBM Report.

  172. Ide, E. (2025). Automation, AI and the intergenerational transmission of knowledge. arXiv preprint arXiv:2501.00000.

  173. Ide, E., & Talamàs, E. (2025). Artificial intelligence in the knowledge economy. Journal of Political Economy, 133(1), 1–54.

  174. Jaffe, S., Daepp, M., Edwards, J., Kuehl, K., & Teevan, J. (2024). Generative AI in real-world workplaces. Microsoft Research Technical Report.

  175. Jain, M., Kumar, P., Bhattacharya, R., & Thies, I. M. (2018). FarmChat: A conversational agent to answer farmer queries. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(4), 1–22.

  176. Jia, N., Luo, X., Fang, Z., & Liao, C. (2024). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5–32.

  177. Jin, L., Duan, R., & Hooi, B. (2025). AI for science. Nature, 627, 49–58.

  178. Johri, S., Ringel Morris, M., McDuff, D., & King, N. (2025). An evaluation framework for clinical use of large language models in patient interaction tasks. Nature Medicine, 31, 138–147.

  179. Joshi, S. (2025). Agentic generative AI and the future US workforce: Advancing innovation and national competitiveness. International Journal of Research and Review, 12(1), 456–469.

  180. Ju, H., & Aral, S. (2025). Collaborating with AI agents: A field experiment on teamwork, productivity, and performance. arXiv preprint arXiv:2501.00000.

  181. Kadoma, K., Mun, J., & Viswanath, B. (2025). Generative AI and perceptual harms: Who's suspected of using LLMs? CHI Conference on Human Factors in Computing Systems.

  182. Kalyan, V., & Andrews, M. (2025). Reinforcement learning for long-horizon multi-turn search agents. arXiv preprint arXiv:2501.00000.

  183. Kang, H. B., Han, H., Ramesh, V., Kim, H., Chilton, L., & Ahn, H. (2025). BioSpark: Beyond analogical inspiration to LLM-augmented transfer. CHI Conference on Human Factors in Computing Systems.

  184. Karunakaran, A., Rahman, H., & Zhao, E. Y. (2025). Artificial intelligence at work: An integrative perspective on the impact of AI on workplace inequality. Academy of Management Annals, 19(1), 1–38.

  185. Kelly, S., Kaye, S. A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925.

  186. Kemp, A. (2024). Competitive advantage through artificial intelligence: Toward a theory of situated AI. Academy of Management Review, 49(3), 441–464.

  187. Kestin, G., Miller, K., McCarty, L., Sun, K., Bracher-Smith, M., Ding, K., ... & Mazur, E. (2025). AI tutoring outperforms in-class active learning: An RCT introducing a novel research-based design in an authentic educational setting. Scientific Reports, 15, 1537.

  188. Klassen, S., & Fiesler, C. (2022). "Run wild a little with your imagination": Ethical speculation in computing education with Black Mirror. ACM Conference on Special Interest Group in Computer Science Education.

  189. Klein Teeselink, B. (2025). Generative AI and labor market outcomes: Evidence from the United Kingdom. SSRN Working Paper.

  190. Korinek, A. (2025). AI agents for economic research. National Bureau of Economic Research Working Paper.

  191. Korst, J., Ladd, T., & Loftus, C. (2025). Accountable acceleration: Gen AI fast-tracks into the enterprise. Wharton Human-AI Research and GBK Collective Report.

  192. Kumar, A., Butler, J., Dillon, E., & Teevan, J. (2025). Why AI agents still need you: Findings from developer-agent collaborations in the wild. IEEE/ACM International Conference on Automated Software Engineering.

  193. Kumar, H., John, A., & Gligoric, M. (2025). Human creativity in the age of LLMs: Randomized experiments on divergent and convergent thinking. CHI Conference on Human Factors in Computing Systems.

  194. Kumar, S., Nagappan, M., Zimmermann, T., & Bird, C. (2025). Time warp: The gap between developers' ideal vs actual workweeks in an AI-driven era. International Conference on Software Engineering: Software Engineering in Practice Track.

  195. Kumar, S., Sethi, G., Arora, M., Mishra, S., Guleria, K., Singh, N., ... & Sitaram, S. (2024). MMCTAgent: Multi-modal critical thinking agent framework for complex visual reasoning. arXiv preprint arXiv:2412.00000.

  196. Kwa, T., Holt, M., Isaksen, N., & Greenblatt, N. (2025). Measuring AI ability to complete long tasks. METR Research Report.

  197. Laban, P., Schnabel, T., Bennett, P., & Hearst, M. (2024). Beyond the chat: Executable and verifiable text-editing with LLMs. ACM Symposium on User Interface Software and Technology.

  198. Laban, P., Wu, T., Chew, E., Goswami, K., Laban, R., Hearst, M., & Wu, C. (2025). LLMs get lost in multi-turn conversation. arXiv preprint arXiv:2501.00000.

  199. Laestadius, L., Bishop, A., Gonzalez, M., Illenčík, D., & Campos-Castillo, C. (2024). Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society, 26(3), 1469–1490.

  200. Lane, J., Girod-Frais, L., & Roche, K. (2024). Narrative AI and the human-AI oversight paradox in evaluating early-stage innovations. Harvard Business School Working Paper.

  201. Lanier, J. (2023). There is no AI. The New Yorker.

  202. Le, H. (2025). The deskilling of software development and the impact of AI chatbots on programmers' skills and roles. In N. Dragoni & M. Mazzara (Eds.), Generative AI in software engineering (pp. 117–135). Springer.

  203. Le, T., Sun, R., Cohen, S., & Hassidim, A. (2024). From evidence to decision: Exploring evaluative AI. arXiv preprint arXiv:2410.00000.

  204. Lee, P. Y. K., Tang, J., & Odom, W. (2023). Speculating on risks of AI clones to selfhood and relationships: Doppelganger-phobia, identity fragmentation, and living memories. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  205. Lee, S., Leong, J., Cheng, S., Tang, J., & Rintel, S. (2025). IRL Dittos: Embodied multimodal AI agent interactions in open spaces. arXiv preprint arXiv:2501.00000.

  206. Lee, S. H., Teevan, J., & Iqbal, S. (2025). Amplifying minority voices: AI-mediated devil's advocate system for inclusive group decision-making. ACM International Conference on Intelligent User Interfaces.

  207. Leonardi, P. (2023). Helping employees succeed with generative AI. Harvard Business Review.

  208. Leong, J., Cheng, S., Rintel, S., Nand, A., & Tang, J. (2024). Dittos: Personalized, embodied agents that participate in meetings when you are unavailable. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  209. Li, H., Nicholas, G., Steeves, V., Irani, L., & Tomlinson, B. (2023). The dimensions of data labor: A road map for researchers, activists, and policymakers to empower data producers. ACM Conference on Fairness, Accountability, and Transparency.

  210. Liang, A. (2025). Artificial intelligence clones. ACM Conference on Economics and Computation.

  211. LinkedIn. (2025). AI labor market update: Tracking AI adoption and skills in the U.S. economy. LinkedIn Economic Graph Report.

  212. LinkedIn. (2025a). Work change report: AI is coming to work. LinkedIn Research.

  213. LinkedIn. (2025b). LinkedIn skills on the rise 2025: The 15 fastest-growing skills in the U.S. LinkedIn Economic Graph.

  214. Liu, J., Yuan, H., Cosley, D., & Guo, Y. J. (2025). "Generate" the future of work through AI: Empirical evidence from online labor markets. Under Review.

  215. Liu, X., Buschek, D., & Linder, P. (2025a). Interacting with thoughtful AI. arXiv preprint arXiv:2501.00000.

  216. Liu, X., Buschek, D., & Linder, P. (2025b). Proactive conversational agents with inner thoughts. CHI Conference on Human Factors in Computing Systems.

  217. Liu, Z., Wang, Y., Alsobay, M., & Lu, Y. (2025). ProMediate: A socio-cognitive framework for evaluating proactive agents in multi-party negotiation. arXiv preprint arXiv:2501.00000.

  218. Lu, Y., Jiang, N., Wang, P., Dang, Y., Pan, S., Dong, Y., & Wang, H. (2024). Proactive agent: Shifting LLM agents from reactive responses to active assistance. International Conference on Learning Representations.

  219. Macnamara, B. N., Keane, A., & Hambrick, D. Z. (2024). Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers' awareness? Cognitive Research: Principles and Implications, 9, 73.

  220. Madsen, D. Ø., & Puyt, R. W. (2025). The 7Vs of AI slop: A typology of generative waste. SSRN Working Paper.

  221. Maeda, A., & Quan-Haase, A. (2024). When human-AI interactions become parasocial: Agency and anthropomorphism in affective design. ACM Conference on Fairness, Accountability, and Transparency.

  222. Mäkelä, E., & Stephany, F. (2025). Complement or substitute? How AI increases the demand for human skills. Working Paper.

  223. Malik, P., & Garg, P. (2017). Learning organization and work engagement: The mediating role of employee resilience. The International Journal of Human Resource Management, 28(10), 1395–1417.

  224. Malone, T. (2018). Superminds: The surprising power of people and computers thinking together. Little, Brown and Company.

  225. Malone, T., & Bernstein, M. (Eds.). (2015). Handbook of collective intelligence. MIT Press.

  226. Manning, B., & Horton, J. (2025). General social agents. arXiv preprint arXiv:2501.00000.

  227. Manning, S., & Aguirre, T. (2025). How adaptable are American workers to AI-induced job displacement? National Bureau of Economic Research Working Paper.

  228. Marriott, H. R., & Pitardi, V. (2024). One is the loneliest number… Two can be as bad as one: The influence of AI friendship apps on users' well‐being and addiction. Psychology & Marketing, 41(3), 732–754.

  229. Marro, S., & Torr, P. (2025). LLM agents are the antidote to walled gardens. arXiv preprint arXiv:2501.00000.

  230. Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., ... & Perrault, R. (2025). The AI Index 2025 annual report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University.

  231. Masson, D., Lafreniere, B., Gutwin, C., & Goguey, A. (2025). Textoshop: Interactions inspired by drawing software to facilitate text editing. CHI Conference on Human Factors in Computing Systems.

  232. McDuff, D., King, N., & Nori, H. (2025). Towards accurate differential diagnosis with large language models. Nature, 633, 337–345.

  233. McIlroy-Young, R., Sen, S., Kleinberg, J., & Anderson, A. (2022). Mimetic models: Ethical implications of AI that acts like you. AAAI/ACM Conference on AI, Ethics, and Society.

  234. McMahon, C., Johnson, I., & Hecht, B. (2017). The substantial interdependence of Wikipedia and Google – A case study on the relationship between peer production communities and information technologies. AAAI International Conference on Web and Social Media.

  235. Medhi-Thies, I., Ferreira, P., Gupta, N., O'Neill, J., & Cutrell, E. (2015). KrishiPustak: A social networking system for low-literate farmers. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  236. Melumad, S., Yoon, C., & Pham, M. T. (2025). Experimental evidence of the effects of large language models versus web search on depth of learning. PNAS Nexus, 4(1), pgae541.

  237. Mengesha, Z., Heldreth, C., Lahav, M., Sublewski, J., & Tuennerman, E. (2021). "I don't think these devices are very culturally sensitive"—Impact of automated speech recognition errors on African Americans. Frontiers in Artificial Intelligence, 4, 725911.

  238. Meta Fundamental AI Research Diplomacy Team (FAIR). (2022). Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624), 1067–1074.

  239. Meyer, A., Barton, L., Murphy, G., Zimmermann, T., & Fritz, T. (2017). The work life of developers: Activities, switches and perceived productivity. IEEE Transactions on Software Engineering, 43(12), 1178–1193.

  240. Meyer, A., Fritz, T., Murphy, G., & Zimmermann, T. (2019). Today was a good day: The daily life of software developers. IEEE Transactions on Software Engineering, 47(5), 863–880.

  241. Microsoft AI Economy Institute. (2025). AI diffusion report: Where AI is most used, developed, and built. Microsoft Corporation.

  242. Microsoft Education/IDC. (2025). 2025 AI in education report survey details. Microsoft Corporation.

  243. Miklian, J., & Hoelscher, K. (2025). A new digital divide? Coder worldviews, the slop economy, and democracy in the age of AI. arXiv preprint arXiv:2501.00000.

  244. Morandini, S., Fraboni, F., De Angelis, M., & Puzzo, G. (2023). The impact of artificial intelligence on workers' skills: Upskilling and reskilling in organisations. Informing Science, 26, 39–68.

  245. Murire, O. T. (2024). Artificial intelligence and its role in shaping organizational work practices and culture. Businesses, 4(4), 565–583.

  246. Mysore, S., Gero, K. I., Nouri, E., Chilton, L. B., & Agrawala, M. (2025). Prototypical human-AI collaboration behaviors from LLM-assisted writing in the wild. Conference on Empirical Methods in Natural Language Processing.

  247. Naddaf, M. (2025). Major AI conference flooded with peer reviews written fully by AI. Nature News.

  248. Natali, C., Federico, F., Mariani, M., & Fosso Wamba, S. (2025). AI-induced deskilling in medicine: A mixed-method review and research agenda for healthcare and beyond. Artificial Intelligence Review, 58, 54.

  249. Nath, A., Alsobay, M., Jaffe, S., Sarrafzadeh, B., & Suh, J. (2025a). Let's roleplay: Examining LLM alignment in collaborative dialogues. CEUR Workshop Proceedings.

  250. Nath, A., Liu, Z., Wang, Y., Lu, Y., Sarrafzadeh, B., & Suh, J. (2025b). Frictional agent alignment framework: Slow down and don't break things. Annual Meeting of the Association for Computational Linguistics.

  251. Neely, M. T., Wu, A. D., & Doellgast, V. (2023). Social inequality in high tech: How gender, race, and ethnicity structure the world's most powerful industry. Annual Review of Sociology, 49, 187–208.

  252. Neural Sage. (2025). Hybrid centralized and decentralized architectures balancing control and autonomy in LLM agents. AI Architecture Blog.

  253. Niederhoffer, K., Luca, M., & Schwartz, J. (2025). AI-generated "workslop" is destroying productivity. Harvard Business Review.

  254. Ning, Y., Lin, T., Qi, Z., Xu, M., Li, J., Cao, B., ... & Huang, M. (2025). MAD-Fact: A multi-agent debate framework for long-form factuality evaluation in LLMs. arXiv preprint arXiv:2501.00000.

  255. Nori, H., King, N., McKinney, S. M., Carignan, D., & Horvitz, E. (2023). Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375.

  256. Noti, G., Bergemann, D., & Morris, S. (2025). AI-assisted decision making with human learning. ACM Conference on Economics and Computation.

  257. OECD. (2023). Empowering young children in the digital age. OECD Publishing.

  258. OECD. (2025). What should teachers teach and students learn in a future of powerful AI? OECD Education Working Papers.

  259. Olteanu, A., Cheng, M., & DeVrio, A. (2025). AI automatons: AI systems intended to imitate humans. arXiv preprint arXiv:2501.00000.

  260. OpenAI. (2025). Introducing Operator: A research preview of an agent that can use its own browser to perform tasks for you. OpenAI Blog.

  261. Orr, J. (1996). Talking about machines: An ethnography of a modern job. Cornell University Press.

  262. Otis, N., Arellano-Bover, J., Breza, E., Chandrasekhar, A., Gandhi, S., Ladhania, M., & Steinberg, P. (2025). The uneven impact of generative AI on entrepreneurial performance: Evidence from a field experiment in Kenya. Harvard Business School Working Paper.

  263. Pang, R. Y., Parrish, A., Joshi, N., Nangia, N., Phang, J., Chen, A., ... & Bowman, S. R. (2025). Interactive reasoning: Visualizing and controlling chain-of-thought reasoning in large language models. arXiv preprint arXiv:2501.00000.

  264. Park, J., Barber, L., Cho, E., & Wang, Y. (2025). Attitudes towards artificial intelligence at work: Scale development and validation. Journal of Occupational and Organizational Psychology, 98(1), 195–218.

  265. Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. ACM Symposium on User Interface Software and Technology.

  266. Park, J. S., Smith, L., Lim, K., Abbeel, P., & Steinhardt, J. (2025). MAPoRL: Multi-agent policy optimization with relative learning. Working Paper.

  267. Passi, S. (2025). Agentic AI has a human oversight problem. SSRN Working Paper.

  268. Passi, S., Vorvoreanu, M., & Simmons, S. (2024). Appropriate reliance on generative AI: Research synthesis. Microsoft Research Technical Report.

  269. Patwardhan, T., Li, Y., Li, Z., Tamkin, A., & Burns, K. (2025). GDPval: Evaluating AI model performance on real world economically valuable tasks. arXiv preprint arXiv:2501.00000.

  270. Pendergrass, W., Adams, T., & Sample, C. (2025). A strategic cycle of slop: Understanding the commodification of AI feculence and its place in the attention economy. Issues in Information Systems, 26(1).

  271. Pentina, I., Hancock, T., & Xie, T. (2023). Consumer–machine relationships in the age of artificial intelligence: Systematic literature review and research directions. Psychology & Marketing, 40(8), 1593–1618.

  272. Pentland, A. (2012). The new science of building great teams. Harvard Business Review, 90(4), 60–70.

  273. Pimenova, V., Sarkar, A., Butler, J., & Teevan, J. (2025). Good vibrations? A qualitative study of co-creation, communication, flow, and trust in vibe coding. Under Review.

  274. Poelitz, C., & McKenna, N. (2025). Synthetic clarification and correction dialogues about data-centric tasks - A teacher-student approach. arXiv preprint arXiv:2501.00000.

  275. Pugh, A. (2024). The last human job: The work of connecting in a disconnected world. Princeton University Press.

  276. Qiao, D., Luo, X., & Nan, G. (2024). AI and freelancers: Has the inflection point arrived? International Conference on Information Systems.

  277. Qin, P., Yan, W., Chilton, L., & Agrawala, M. (2025). Timing matters: How using LLMs at different timings influences writers' perceptions and ideation outcomes in AI-assisted ideation. CHI Conference on Human Factors in Computing Systems.

  278. Rao, A., Jaffe, S., & Suh, J. (in prep.). Measuring and steering emergent collaboration behaviors in LLM agents.

  279. Rao, V. S., Zellou, G., Caliskan, A., & Tatman, R. (2025). Detecting LLM-generated peer reviews. PLOS ONE, 20(1), e0317223.

  280. Ratcliffe, E. (2025). All of my employees are AI agents, and so are my executives. WIRED.

  281. Reif, J., Amira, K., Kim, J., & Schweitzer, M. E. (2025). Evidence of a social evaluation penalty for using AI. Proceedings of the National Academy of Sciences, 122(2), e2413957122.

  282. Reicherts, L., Gero, K. I., Suh, J., Riche, N. H., Edwards, J., & Lu, Y. (2025). AI, help me think—but for myself: Assisting people in complex decision-making by providing different kinds of cognitive support. CHI Conference on Human Factors in Computing Systems.

  283. Restrepo, P. (2025). We won't be missed: Work and growth in the era of AGI. Working Paper.

  284. Reza, M., Kim, J., Oehlberg, L., & Chilton, L. (2025). Co-writing with AI, on human terms: Aligning research with user demands across the writing process. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  285. Roethlisberger, F., & Dickson, W. (1939). Management and the worker. Harvard University Press.

  286. Rost, M. (2025). Reclaiming the computer through LLM-mediated computing. ACM Interactions, 32(1), 18–21.

  287. Rothschild, D. M., Bansal, G., Hadfield, G., Immorlica, N., Koh, A., Munson, S. A., ... & Weld, D. (Forthcoming). The agentic economy. Communications of the ACM.

  288. Rozsa, Z., Formánková, S., & Majerová, J. (2023). Job crafting and sustainable work performance: A systematic literature review. Equilibrium. Quarterly Journal of Economics and Economic Policy, 18(3), 717–750.

  289. Rubin, M., Tellis, W., Frankel, R., Hareli, S., & Hess, U. (2025). Comparing the value of perceived human versus AI-generated empathy. Nature Human Behaviour, 9, 21–32.

  290. Rusak, G., Immorlica, N., & Weinberg, S. M. (2025). AI agents can enable superior market designs. Working Paper.

  291. Sadeghian, S., & Hassenzahl, M. (2022). The "artificial" colleague: Evaluation of work satisfaction in collaboration with non-human coworkers. ACM International Conference on Intelligent User Interfaces.

  292. Salem, P., Madan, N., Tankelevitch, L., & Cole, A. (2025). TinyTroupe: An LLM-powered multiagent persona simulation toolkit. arXiv preprint arXiv:2501.00000.

  293. Samadi, M., Ma, Y., Liang, D., Liu, Z., Chen, Y., Sun, J., ... & Luo, H. (2025). Scaling test-time compute to achieve IOI gold medal with open-weight models. arXiv preprint arXiv:2501.00000.

  294. Zao-Sanders, M. (2025). How people are really using Gen AI in 2025. Harvard Business Review.

  295. Sarkar, A. (2024). AI should challenge, not obey. Communications of the ACM, 67(12), 30–32.

  296. Sarkar, A. (2025). Artificial intelligence as a tool for thought. TEDxAI Vienna.

  297. Sarkar, A., & Drosos, I. (2025). Vibe coding: Programming through conversation with artificial intelligence. Psychology of Programming Interest Group Annual Conference.

  298. Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 186, 104361.

  299. Schmutz, J. B., Knobel, S. E. J., Zahner, N., & Manser, T. (2024). AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology, 59, 101878.

  300. Schömbs, S., Kocielnik, R., Alvarez León, L. F., Kientz, J., Munson, S. A., Hsieh, G., ... & Weld, D. S. (2025). From conversation to orchestration: HCI challenges and opportunities in interactive multi-agentic systems. arXiv preprint arXiv:2501.00000.

  301. Schröder, F., Brawer, J., Tanneberg, P., & Peters, J. (2025). Towards fluid human-agent collaboration: From dynamic collaboration patterns to models of theory of mind reasoning. Frontiers in Robotics and AI, 12, 1518619.

  302. Scott, A., Tankelevitch, L., Teevan, J., Iqbal, S., & Rintel, S. (2024). Mental models of meeting goals: Supporting intentionality in meeting technologies. CHI Conference on Human Factors in Computing Systems.

  303. Scott, A., Tankelevitch, L., Teevan, J., Iqbal, S., & Rintel, S. (2025). What does success look like? Catalyzing meeting intentionality with AI-assisted prospective reflection. CHI Conference on Human-Computer Interaction with Mobile Devices and Services: Work in Progress.

  304. Selten, F., & Klievink, B. (2024). Organizing public sector AI adoption: Navigating between separation and integration. Government Information Quarterly, 41(2), 101909.

  305. Shaer, O., Hong, B., Cuddeback, M., Stolerman, E., Zhao, Y., & Bigham, J. (2024). AI-augmented brainwriting: Investigating the use of LLMs in group ideation. CHI Conference on Human Factors in Computing Systems.

  306. Shahidi, P., Hadfield, G., & Koh, A. (2025). The Coasean singularity? Demand, supply, and market design with AI agents. National Bureau of Economic Research Working Paper.

  307. Shaib, C., de Wynter, A., & Jiang, Z. (2025). Measuring AI "slop" in text. arXiv preprint arXiv:2501.00000.

  308. Shaikh, O., Zhang, H., Held, W., Bernstein, M., & Yang, D. (2024). Grounding gaps in language model generations. arXiv preprint arXiv:2311.09144.

  309. Shaikh, O., Zhang, H., Held, W., Kim, S., Tan, C., Bernstein, M., & Yang, D. (2025). Navigating rifts in human-LLM grounding: Study and benchmark. Annual Meeting of the Association for Computational Linguistics.

  310. Shao, Y., Zhang, M., Brynjolfsson, E., Li, D., & Mihalache, R. (2025). Future of work with AI agents: Auditing automation and augmentation potential across the U.S. workforce. arXiv preprint arXiv:2501.00000.

  311. Sharma, R. (2025). The impact of AI-generated content on human creativity and original thought: A psychological study. American Psychological Association Presentation.

  312. Shavit, Y., Critch, A., Bowman, S., Krueger, D., Hadfield-Menell, D., Siddarth, D., & O'Brien, T. (2023). Practices for governing agentic AI systems. OpenAI Report.

  313. Shekshnia, S. (2025). AI strategy, leadership, talent and workforce, and transformation. In AI leadership for corporate boards (pp. 91–121). Springer.

  314. Shen, J., Sarkar, A., Goguey, A., & Marquardt, N. (2025). Texterial: A text-as-material interaction paradigm for LLM-mediated writing. Under Review.

  315. Shin, J., Dai, Z., Gerber, E., & Rao, A. (2025). No evidence for LLMs being useful in problem reframing. CHI Conference on Human Factors in Computing Systems.

  316. Siddals, S., Seto, E., & Chancellor, S. (2024). "It happened to be the perfect thing": Experiences of generative AI chatbots for mental health. npj Mental Health Research, 3, 34.

  317. Siemon, D. (2022). Elaborating team roles for artificial intelligence-based teammates in human-AI collaboration. Group Decision and Negotiation, 31, 871–912.

  318. Simkute, A., Daepp, M., Edwards, J., Kuehl, K., Vorvoreanu, M., Ballard, S., ... & Teevan, J. (2024). The new calculator? Practices, norms, and implications of generative AI in higher education. arXiv preprint arXiv:2501.00000.

  319. Simkute, A., Daepp, M., Kuehl, K., & Teevan, J. (2024). Ironies of generative AI: Understanding and mitigating productivity loss in human-AI interaction. International Journal of Human-Computer Interaction, 40(19), 4927–4945.

  320. Singh, N., Muchai, M., Abdulhamid, N., Gupta, R., Baral, C., Choudhary, A., ... & Sitaram, S. (2024). Farmer.Chat: Scaling AI-powered agricultural services for smallholder farmers. arXiv preprint arXiv:2408.00000.

  321. Singley, M. K., & Anderson, J. R. (1989). The transfer of cognitive skill. Harvard University Press.

  322. Siu, A., & Fok, R. (2025). Augmenting expert cognition in the age of generative AI: Insights from document-centric knowledge work. arXiv preprint arXiv:2501.00000.

  323. Slaughter, I., & Daepp, M. (in prep.). Second-level digital divides in usage of a generative AI chatbot.

  324. Smith, A., De Cremer, D., & Van Dick, R. (2025). Navigating AI convergence in human–AI teams: A signaling theory approach. Journal of Organizational Behavior, 46(1), 60–77.

  325. Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 151, 108020.

  326. Sternfels, B., & Atsmon, Y. (2025). The learning organization: How to accelerate AI adoption. McKinsey & Company.

  327. Suchman, L. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.

  328. Suh, J., Sarrafzadeh, B., Hernandez, J., Inkpen, K., & Chandra, M. (2025). SENSE-7: Taxonomy and dataset for measuring user perceptions of empathy in sustained human AI conversations. arXiv preprint arXiv:2501.00000.

  329. Tamkin, A., & McCrory, P. (2025). Estimating productivity gains from Claude conversations. Anthropic Research.

  330. Tang, P., Lee, D., Bavik, A., & Tang, Y. (2023). No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. Journal of Applied Psychology, 108(6), 905–923.

  331. Tankelevitch, L., Kocielnik, R., Sarkar, A., Suh, J., Tanprasert, T., Williams, J., ... & Inkpen, K. (2025). Understanding, protecting, and augmenting human cognition with generative AI: A synthesis of the CHI 2025 tools for thought workshop. arXiv preprint arXiv:2501.00000.

  332. Tankelevitch, L., Kocielnik, R., Suh, J., Sellen, A., Bennett, C. L., Simkute, A., & Inkpen, K. (2024). The metacognitive demands and opportunities of generative AI. CHI Conference on Human Factors in Computing Systems.

  333. Tankelevitch, L., Scott, A., Doherty, E., Teevan, J., Iqbal, S., & Rintel, S. (2025). Nudging attention to workplace meeting goals: A large-scale, preregistered field experiment. Under Review.

  334. Tanno, R., Ktena, I., Pawlowski, N., Rudie, J. D., Redd, A., Dalla Torre, A., ... & Alvarez-Valle, J. (2024). Collaboration between clinicians and vision-language models in radiology report generation. Nature Medicine, 30, 3328–3337.

  335. Tanprasert, T., Leong, J., Cheng, S., Tang, J., & Rintel, S. (2025). FamilyDittos: Reimagining intergenerational interaction through mimetic agents. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  336. Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., ... & Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852.

  337. Teutloff, O., Kampkötter, P., & Wilmers, N. (2025). Winners and losers of generative AI: Early evidence of shifts in freelancer demand. Journal of Economic Behavior & Organization, 219, 1–14.

  338. Thieme, A., Abouzied, A., Sellen, A., Wallace, C., Tang, J., Nushi, B., ... & Kaye, J. (2025). Challenges for responsible AI design and workflow integration in healthcare: A case study of automatic feeding tube qualification in radiology. ACM Transactions on Computer-Human Interaction, 32(1), 1–50.

  339. Tolzin, A., & Janson, A. (2025). Uncovering the mechanisms of common ground in human–agent interaction: Review and future directions for conversational agent research. Internet Research, 35(1), 148–186.

  340. Tomlinson, K., Daepp, M., Linden, S., Cheng, S., & Hecht, B. (2025). Working with AI: Measuring the applicability of generative AI to occupations. arXiv preprint arXiv:2501.00000.

  341. Tomitsu, H., Shimada, Y., Fukaya, H., & Ogata, H. (2025). The cognitive mirror: A framework for AI-powered metacognition and self-regulated learning. Frontiers in Education, 10, 1484742.

  342. Trist, E., & Bamforth, K. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38.

  343. Tullis, J. (2025). Sifting through the slop: How generative AI created a market for lemons for text-based works. SSRN Working Paper.

  344. Tursunbayeva, A., & Chalutz-Ben Gal, H. (2024). Adoption of artificial intelligence: A TOP framework-based checklist for digital leaders. Business Horizons, 67(6), 687–701.

  345. Ulloa, M., Butler, J., Dillon, E., & Teevan, J. (2025). Product manager practices for delegating work to generative AI: "Accountability must not be delegated to non-human actors". International Conference on Software Engineering: Software Engineering in Practice Track.

  346. Unell, S., Goldstein, D., Nori, H., Tanno, R., Karthikesalingam, A., King, N., & McKinney, S. M. (2025). CancerGUIDE: Cancer guideline understanding via internal disagreement estimation. Machine Learning for Health Proceedings.

  347. UNESCO. (2023). Guidance for generative AI in education and research. UNESCO Report.

  348. UNICEF. (2021). Policy guidance on AI for children: Version 2.0 | Recommendations for building AI policies and systems that uphold child rights. UNICEF Report.

  349. Urban, M., Schreiner, M., Scherrer, S., & Mayerhofer, A. M. (2024). ChatGPT improves creative problem-solving performance in university students: An experimental study. Computers & Education, 217, 105065.

  350. Vaccaro, M., Malone, T., & Bernstein, M. (2024). When combinations of humans and AI are useful. Nature Human Behaviour, 8, 1640–1642.

  351. Vaish, R., Sankaran, K., Gupta, S., Guttikonda, R., Kumaraguru, D., & Varshney, K. R. (2021). AI maturity framework for enterprise applications. IBM Technical Report.

  352. Vaithilingam, P., Cheng, J., Glassman, E., & Guo, P. J. (2024). DynaVis: Dynamically synthesized UI widgets for visualization editing. CHI Conference on Human Factors in Computing Systems.

  353. Valentine, M., & Bernstein, M. (2025). Flash teams: Leading the future of AI-enhanced, on-demand work. MIT Press.

  354. Vanukuru, R., Teevan, J., & Iqbal, S. (2025). Strengthening the chain of intentionality across meetings: AI-assisted retrospection and prospection for knowledge work. ACM Designing Interactive Systems Conference.

  355. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

  356. Vendraminelli, L., Brynjolfsson, E., & Dell'Acqua, F. (2025). The GenAI wall effect: Examining the limits to horizontal expertise transfer between occupational insiders and outsiders. Harvard Business School Working Paper.

  357. Verma, G., & Counts, S. (2025). WorkflowView: Abstracting activity logs with LLMs for interpretable and actionable insights. Under Review.

  358. Vincent, N. (2022). The paradox of reuse, language models edition. Data Leverage Blog.

  359. Walker, K., & Vorvoreanu, M. (2025). Learning outcomes with GenAI in the classroom: A review of empirical evidence. Microsoft Research Technical Report.

  360. Wan, Q., Huang, C., Chang, M., Ni, Y., Agrawala, M., & Chilton, L. B. (2024). "It felt like having a second mind": Investigating human-AI co-creativity in prewriting with large language models. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  361. Wang, A., Kocielnik, R., Duran, J., Andersen, E., Kamar, E., & Nushi, B. (2024). Strategies for increasing corporate responsible AI prioritization. AAAI/ACM Conference on AI, Ethics, and Society.

  362. Wang, R., Wu, R., Wu, J., Chen, Z., Zhao, Y., Zhang, Q., ... & Li, Z. (2024). SOTOPIA-π: Interactive learning of socially intelligent language agents. Annual Meeting of the Association for Computational Linguistics.

  363. Wang, S., & Chilton, L. B. (2025). Schemex: Discovering design patterns from examples through iterative abstraction and refinement. arXiv preprint arXiv:2501.00000.

  364. Wang, Y., & Lu, Y. (2025). Interaction, process, infrastructure: A unified architecture for human-agent collaboration. arXiv preprint arXiv:2501.00000.

  365. Wells, J., Fuerst, W., & Choobineh, J. (2007). The impact of the perceived purpose of electronic performance monitoring on an array of attitudinal variables. Human Resource Development Quarterly, 10(2), 121–138.

  366. Wiles, E., Zhang, M., & Autor, D. (2024). GenAI as an exoskeleton: Experimental evidence on knowledge workers using GenAI on new skills. SSRN Working Paper.

  367. Winsor, J. (2024). How to be systematic about adopting AI at your company. Harvard Business Review.

  368. Woolley, A. W. (2025). Generative AI and collaboration: Opportunities for cultivating collective intelligence. Journal of Organization Design, 14, 1–9.

  369. Wu, S., Suh, J., Alsobay, M., Tankelevitch, L., Nath, A., Sarrafzadeh, B., ... & Jaffe, S. (2025). COLLABLLM: From passive responders to active collaborators. International Conference on Machine Learning.

  370. Xue, S., & Song, Y. (2025). Unlocking the synergy: Increasing productivity through human-AI collaboration in the industry 5.0 era. Computers & Industrial Engineering, 197, 110553.

  371. Yang, D., Kraut, R., & Dabbish, L. (2025). Socially aware language technologies: Perspectives and practices. Annual Meeting of the Association for Computational Linguistics.

  372. Yang, K. H., Stoddard, G., Drakopoulos, K., & Halaburda, H. (2025). Explaining models. SSRN Working Paper.

  373. Yang, Y., Cheng, K., Liu, V., Chilton, L., Dow, S., Ramos, G., & Agrawala, M. (2025). From overload to insight: Scaffolding creative ideation through structuring inspiration. arXiv preprint arXiv:2501.00000.

  374. Yang, Z., Nabulsi, Z., Oakden-Rayner, L., Wyles, E., Zhang, C., Pfohl, S. R., ... & Irvin, J. A. (2025). Demographic bias of expert-level vision-language foundation models in medical imaging. Science Advances, 11(1), eadq3539.

  375. Yehudai, A., Zhou, X., Yarom, M., Rassin, R., Wang, B., Kim, J., ... & Hajishirzi, H. (2025). Survey on evaluation of LLM-based agents. arXiv preprint arXiv:2501.00000.

  376. Yiğit, B., & Çakmak, B. Y. (2024). Discovering psychological well-being: A bibliometric review. Journal of Happiness Studies, 25, 25.

  377. Yogev Maday, S. (2025). Fast-tracking UX insights: Leveraging AI and research methods to gather feedback in 90 minutes. Internal Microsoft Report.

  378. Young, J., Rozsa, A., & Beers, S. (2025). The future of enterprise. Internal Microsoft Report.

  379. Yu, R., Alì, G., Mohr, D., Zhang, R., & Tamersoy, A. (2024). Whose ChatGPT? Unveiling real-world educational inequalities introduced by large language models. arXiv preprint arXiv:2401.00000.

  380. Yue, D., Choi, E., & Kleinberg, J. (2022). Nailing prediction: Experimental evidence on the value of tools in predictive model development. Harvard Business School Working Paper.

  381. Yukita, D., Yuan, A., & Chilton, L. (2025). Reassessing collaborative writing theories and frameworks in the age of LLMs: What still applies and what we must leave behind. arXiv preprint arXiv:2501.00000.

  382. Yun, B., Chiang, C. W., & Yin, M. (2025). Generative AI in knowledge work: Design implications for data navigation and decision-making. CHI Conference on Human Factors in Computing Systems.

  383. Yunusa, E. (2025). Creating an artificial intelligence-ready organizational culture: Harmonizing human existence with AI strategic decision-making. International Journal of Business Sustainability, 8(1), 1–24.

  384. Zambrano Chaves, J. M., Nabulsi, Z., Zhang, C., Wyles, E., Yang, Z., Oakden-Rayner, L., ... & Irvin, J. A. (2025). A clinically accessible small multimodal radiology model and evaluation metric from chest X-ray findings. Nature Communications, 16, 617.

  385. Zercher, D., Deuschel, J., & Gockel, C. (2025). How can teams benefit from AI team members? Exploring the effect of generative AI on decision-making processes and decision quality in team-AI collaboration. Journal of Organizational Behavior, 46(1), 109–130.

  386. Zhang, C., Yang, L., Zhao, H., Guo, Y., Feng, T., Sun, Y., & Zhao, Y. (2024). College students' literacy, ChatGPT activities, educational outcomes, and trust from a digital divide perspective. New Media & Society, Advance online publication.

  387. Zhang, H., Liu, X., Zhang, J., Zhang, Y., & Zhao, H. (2025). Router-R1: Teaching LLMs multi-round routing and aggregation via reinforcement learning. arXiv preprint arXiv:2501.00000.

  388. Zhang, R., Gelles, M., Shi, S., Chancellor, S., & Gilbert, E. (2025). The dark side of AI companionship: A taxonomy of harmful algorithmic behaviors in human-AI relationships. CHI Conference on Human Factors in Computing Systems.

  389. Zhang, S., Sun, Y., Zheng, J., Li, R., Yang, Y., & Huang, Y. (2025). Exploring collaboration patterns and strategies in human-AI co-creation through the lens of agency: A scoping review of the top-tier HCI literature. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

  390. Zhao, G., Zhao, L., Xu, J., Wu, D., & Yu, Z. (2025). A generative artificial intelligence (AI)-based human-computer collaborative programming learning method. Journal of Educational Computing Research, 63(1), 176–204.

  391. Zhou, J., Hu, Y., Wu, Y., Liang, Z., & Chen, F. (2024). Understanding nonlinear collaboration between human and AI agents: A co-design framework for creative design. CHI Conference on Human Factors in Computing Systems.

  392. Zhou, X., Chen, Y., Ning, X., Lin, J., Le, V., Peng, N., & Jiang, J. (2024). Sotopia: Interactive evaluation for social intelligence in language agents. International Conference on Learning Representations.

  393. Zhou, X., Li, W., Teevan, J., Iqbal, S., Bennett, P., Deng, W., ... & Roemmele, M. (2025). How do coworkers interpret employee AI usage: Coworkers' perceived morality and helping as responses to employee AI usage. Human Resource Management, 64(1), 97–117.

  394. Zhu, K., Shah, D., Kang, S., Wang, H., & Narasimhan, K. (2025). MultiAgentBench: Evaluating the collaboration and competition of LLM agents. Annual Meeting of the Association for Computational Linguistics.

Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.

Suggested Citation: Westover, J. H. (2026). The Future of Work with AI: Moving from Individual Gains to Collective Intelligence. Human Capital Leadership Review, 32(1). doi.org/10.70175/hclreview.2020.32.1.3

Human Capital Leadership Review

eISSN 2693-9452 (online)