HCL Review

When Artificial Intelligence Becomes the Teammate: Rethinking Innovation, Collaboration, and Organizational Design in the GenAI Era

Abstract: Generative artificial intelligence is fundamentally reshaping the collaborative foundations of knowledge work. This article synthesizes findings from a large-scale field experiment involving 776 professionals at Procter & Gamble to examine how GenAI transforms three core pillars of teamwork: performance outcomes, expertise integration, and social engagement. Results demonstrate that AI-enabled individuals achieve solution quality comparable to human teams, effectively replicating traditional collaborative benefits while breaking down functional silos between technical and commercial domains. Contrary to concerns about technology-driven isolation, participants reported significantly more positive emotions when working with AI. These patterns suggest organizations must move beyond viewing AI as merely another productivity tool and instead recognize its role as a "cybernetic teammate" capable of redistributing expertise, accelerating innovation cycles, and fundamentally altering optimal team structures. Evidence-based organizational responses include reimagining team composition, developing sophisticated AI-interaction capabilities, redesigning performance expectations around AI-augmented workflows, and building governance frameworks that balance efficiency gains with sustained human skill development.

The modern organization runs on teamwork. From designing breakthrough products to orchestrating large-scale strategic initiatives, collaboration has long been the default approach for tackling complex challenges. We bring people together—often across functions, geographies, and disciplines—because teams outperform individuals, integrate complementary expertise, and provide the social connection that sustains motivation and engagement (Wuchty et al., 2007; Deming, 2017).


Now, generative artificial intelligence threatens to upend these assumptions.


Early evidence suggests GenAI can boost individual productivity, enhance creativity, and accelerate decision-making (Dell'Acqua et al., 2023b; Noy & Zhang, 2023). Yet most research treats AI as a tool—a sophisticated search engine or text generator that amplifies what individuals already do. But what if AI's impact runs deeper? What if its capacity for natural language interaction, iterative feedback, and knowledge synthesis allows it to replicate not just individual cognitive work but the very benefits we traditionally associate with human collaboration?


This question matters urgently. Organizations invest enormous resources in team-based structures, cross-functional workflows, and collaborative processes because they believe—correctly, according to decades of research—that teams produce superior outcomes (Ancona & Caldwell, 1992; Cohen & Bailey, 1997). If AI can substitute for certain collaborative functions, firms face fundamental choices about team size, composition, workflow design, and skill development. Understanding these dynamics becomes critical for organizational effectiveness.


Recent field research provides the first systematic evidence on these questions. Dell'Acqua and colleagues (2025) conducted a pre-registered experiment with 776 professionals at Procter & Gamble, randomly assigning participants to work individually or in two-person teams, with or without GenAI assistance, on authentic new product development challenges. The findings reveal AI acting less like a passive instrument and more like an active participant—what the researchers term a "cybernetic teammate" that enhances performance, bridges expertise gaps, and shapes emotional experience in ways that mirror human collaboration.


This article examines the organizational implications of these findings and outlines evidence-based responses for practitioners navigating this transition. We address three questions: How does AI reshape performance in collaborative work? How does it affect the distribution and integration of expertise across functional boundaries? And what are the social and emotional consequences for workers? The answers suggest that successful AI adoption requires rethinking not just workflows but the fundamental architecture of teamwork itself.


The Collaborative Work Landscape

Defining Collaboration and Teamwork in Knowledge-Intensive Contexts


Collaboration in knowledge work involves more than physical proximity or shared objectives. Effective teamwork requires integrating diverse perspectives, coordinating specialized skills, providing real-time feedback, and engaging in collective sense-making (Argote, 1999; Nickerson & Zenger, 2004). These functions become especially critical in innovation contexts, where uncertainty demands iterative exploration and the recombination of knowledge across domains (Kogut & Zander, 1992).


Organizations structure collaboration around three core value propositions. First, performance gains: well-designed teams tackle complex problems more effectively than individuals by distributing cognitive load and scrutinizing solutions from multiple angles (Cohen & Bailey, 1997; Csaszar, 2012). Second, expertise integration: teams serve as mechanisms for combining functional knowledge—such as technical R&D capabilities and commercial market understanding—that rarely coexist within single individuals (Kogut & Zander, 1992). Third, social and motivational benefits: collaborative interaction fosters belonging, reduces fear of failure, and sustains engagement through mutual support (Johnson & Johnson, 2005; Deutsch, 1949).


These benefits explain the documented shift toward team-based knowledge production across industries and research fields. Wuchty and colleagues (2007) demonstrated this pattern empirically, analyzing 19.9 million research papers and 2.1 million patents across five decades. They found that teams increasingly dominate solo authors in knowledge production, with the shift occurring across all fields and all levels of impact. As the volume and specialization of available knowledge expand—what Jones (2009) calls the "burden of knowledge"—teams become essential scaffolding for achieving both depth through specialization and breadth through interdisciplinary integration.


State of Practice: Collaboration Patterns and Emerging AI Adoption


Despite widespread reliance on collaborative structures, organizations face persistent friction. Coordinating across functions consumes time and managerial attention. Functional silos create communication barriers, with technical specialists and commercial professionals often operating with different mental models and priorities (Dougherty, 1992). Geographic dispersion and remote work arrangements further complicate real-time coordination.


Against this backdrop, GenAI adoption is accelerating. Organizations increasingly deploy AI in collaborative contexts: virtual assistants participating in meetings, tools that synthesize cross-functional insights, systems that facilitate ideation and decision-making. Yet this shift unfolds unevenly. Some firms experiment cautiously, treating AI as supplementary to existing workflows. Others pursue more aggressive integration, redesigning processes around AI capabilities.


The context at Procter & Gamble illustrates these tensions. As a global consumer packaged goods company with roughly 7,000 R&D professionals worldwide, P&G relies heavily on cross-functional collaboration for innovation. Senior executives emphasized that improving early-stage product development—where R&D and Commercial representatives generate and refine ideas—is crucial for the entire innovation pipeline. However, coordination frictions such as scheduling cross-functional meetings and bridging cultural divides between functions can lower innovation quality. The experimental intervention tested whether AI could reduce these frictions while maintaining or enhancing innovation outcomes (Dell'Acqua et al., 2025).


Organizational and Individual Consequences of AI-Enabled Collaboration

Organizational Performance Impacts


The P&G field experiment reveals substantial and consistent performance gains from AI integration. Individuals working alone without AI produced baseline-quality solutions. Teams of two humans without AI showed modest improvement—0.24 standard deviations above baseline (p < 0.05)—validating traditional collaborative benefits (Dell'Acqua et al., 2025). This finding aligns with established research on team effectiveness in innovation contexts (Ancona & Caldwell, 1992; Cohen & Bailey, 1997).


AI introduction produced larger effects. Individuals with AI demonstrated 0.37 standard deviation quality improvement over baseline (p < 0.01), effectively matching the performance of two-person teams without AI. Teams augmented with AI showed similar gains (0.39 standard deviations, p < 0.01) but did not significantly outperform AI-enabled individuals (Dell'Acqua et al., 2025). This pattern suggests AI can substitute for certain collaborative functions rather than merely complementing them.
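Effect sizes of this kind are standardized mean differences: the gap between treatment and control mean quality, divided by the standard deviation of the quality scores. A minimal illustration of the calculation, with invented numbers (the function and data below are ours, not the study's):

```python
# Illustration of a standardized effect size: difference in group means
# divided by the control group's standard deviation. All values are invented.
import statistics

def standardized_effect(treated: list[float], control: list[float]) -> float:
    """Treatment-minus-control mean difference in control-SD units."""
    diff = statistics.mean(treated) - statistics.mean(control)
    return diff / statistics.stdev(control)

control_quality = [5.0, 5.5, 4.8, 5.2, 5.1]  # hypothetical solution quality scores
treated_quality = [5.4, 5.9, 5.3, 5.6, 5.5]
effect = standardized_effect(treated_quality, control_quality)
```

An `effect` of 0.37 would mean the treated group's average quality sits 0.37 control-group standard deviations above the control mean, which is how the figures above should be read.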


The performance story extends beyond quality. AI-enabled participants completed tasks 12-16% faster than controls while producing substantially longer, more comprehensive solutions. Specifically, individuals with AI spent 16.4% less time than the control group, while teams with AI spent 12.7% less time. Simultaneously, both individuals and teams using AI produced significantly longer solutions—an effect that persisted across all statistical specifications (Dell'Acqua et al., 2025). This combination—higher quality, greater detail, less time—represents a significant departure from traditional productivity trade-offs where speed typically compromises thoroughness.


Importantly, AI's impact on exceptional performance was most pronounced in team settings. Solutions from AI-augmented teams were approximately three times more likely to rank in the top 10% of all submissions compared to controls. Specifically, teams with AI were 9.2 percentage points more likely to produce top-decile solutions compared to the control mean of 5.8%. While individuals with AI showed positive effects, these were not statistically significant for top-tier performance (Dell'Acqua et al., 2025). This finding suggests potential complementarity between AI and human collaboration for breakthrough innovation, even as AI-enabled individuals match team averages. Organizations pursuing incremental improvement face different strategic choices than those optimizing for exceptional outcomes.


Individual and Cross-Functional Impacts


AI's influence on expertise distribution carries profound implications. Without AI, functional specialization constrained solution characteristics: R&D professionals generated predominantly technical approaches, while commercial specialists favored market-oriented concepts. This pattern reflects well-documented challenges in cross-functional collaboration, where domain expertise creates cognitive boundaries that limit perspective-taking (Dougherty, 1992).


AI fundamentally disrupted these boundaries. The researchers measured solution "technicality" on a 1-7 scale, where higher values indicated more technically oriented ideas and lower values indicated commercially oriented, market-focused concepts. Without AI assistance, participants tended to generate ideas closely aligned with their professional backgrounds: commercial participants proposed more market-oriented ideas, while technical participants proposed more technically oriented ones. However, when aided by AI, this distinction largely disappeared. Both commercial and technical participants generated a more balanced mix of ideas spanning the commercial/technical spectrum. Moreover, quality scores did not significantly vary based on a solution's technical orientation, indicating that these effects did not compromise solution effectiveness (Dell'Acqua et al., 2025).


In effect, AI democratized access to cross-functional thinking, allowing individuals to reason beyond their core expertise without requiring direct collaboration with specialists from other domains. This democratization proved especially valuable for employees working outside their core competency areas. The researchers categorized participants based on whether product development was a core job responsibility (employees who regularly engage in new product initiatives) or a non-core role (individuals in the same business unit but involved less frequently). For non-core employees, the effects were striking. Without AI, non-core employees working alone performed relatively poorly. Even when working in teams, non-core employees without AI showed only modest improvements. However, when given access to AI, non-core employees working alone achieved performance levels comparable to teams with at least one core-job employee (Dell'Acqua et al., 2025).


This pattern extends earlier findings on AI reducing performance variance across skill levels (Dell'Acqua et al., 2023b) into the collaborative domain, suggesting AI can substitute for mentorship and peer learning in certain contexts. The implications are profound: organizations may be able to deploy less-experienced employees on complex tasks traditionally requiring senior expertise or multi-person teams, potentially reshaping workforce planning and development pathways.


Social Engagement and Emotional Experience


The social and emotional consequences diverged sharply from concerns about technology-driven isolation. Participants using AI reported significantly elevated positive emotions—increases of 0.46 standard deviations for individuals and 0.64 for teams (p < 0.01)—compared to controls. The positive emotions measure combined participants' reported levels of enthusiasm, energy, and excitement. Simultaneously, AI users experienced fewer negative emotions, with reductions of approximately 0.23 standard deviations (p < 0.05). The negative emotions measure aggregated feelings of anxiety, frustration, and distress (Dell'Acqua et al., 2025).


Critically, individuals working with AI reported emotional responses matching or exceeding those of traditional team members. Without AI assistance, individuals working alone showed lower positive emotional responses compared to those working in teams, reflecting the traditional psychological benefits of human collaboration. However, individuals using AI reported positive emotional responses that matched team members working without AI, suggesting AI can substitute for some of the emotional benefits typically associated with teamwork (Dell'Acqua et al., 2025).


This emotional pattern contrasts with earlier technology waves that often undermined workplace social connectivity. Trist and Bamforth's (1951) classic study of coal mining mechanization documented how technological change disrupted established social systems, leading to decreased satisfaction and coordination problems. More recently, Dell'Acqua and colleagues (2023a) found that AI integration in recruiting contexts led to reduced trust and coordination failures, even when AI outperformed humans on specific tasks.


The positive emotional response in the P&G study appears linked to participants' evolving expectations about AI utility. Participants who reported larger increases in their expected future use of AI also reported more positive and fewer negative emotions during the task. While correlation cannot definitively establish causality, this relationship suggests an interesting dynamic between positive experiences with AI and anticipated future engagement with the technology (Dell'Acqua et al., 2025).


Interestingly, despite objective performance improvements, participants using AI were actually less confident about their solutions. AI-enabled participants were 9.2 percentage points less likely to expect their solutions to rank in the top 10% compared to the control group (p < 0.05), suggesting a disconnect between actual and perceived performance (Dell'Acqua et al., 2025). This calibration gap presents both challenges and opportunities for organizations—challenges in that employees may undervalue their AI-augmented contributions, but opportunities in that addressing this gap through feedback and training could further enhance AI adoption and effectiveness.


Evidence-Based Organizational Responses

Reimagining Team Structure and Composition


The evidence suggests AI enables significant reductions in team size for many collaborative tasks without sacrificing quality. Organizations should systematically evaluate which activities genuinely require multi-person collaboration versus those where AI-augmented individuals achieve comparable outcomes.


Effective approaches include:


  • Selective team deployment: Reserve human teams for contexts requiring sustained relationship-building, high-stakes negotiation, or exceptional breakthrough performance where AI-augmented teams show advantages over AI-enabled individuals

  • Flexible staffing models: Design workflows allowing individuals to toggle between solo AI-assisted work and team collaboration based on task requirements and performance targets

  • Right-sizing innovation funnels: Concentrate human collaboration at critical decision gates while enabling AI-supported individual work for exploration and concept development, mirroring P&G's focus on improving early-stage "seed" quality

  • Cross-functional pairing optimization: Maintain small cross-functional teams (2-3 members plus AI) rather than larger groups where coordination costs may overwhelm AI benefits


Procter & Gamble's innovation process provides a template. The company emphasizes early-stage product development as crucial for the entire pipeline—"better seeds lead to better trees," as one senior leader emphasized. Traditionally, this early stage involves small cross-functional teams comprising Commercial and R&D professionals. The experimental evidence suggests that for certain early-stage tasks, AI-augmented individuals can perform at levels previously requiring these two-person teams, potentially allowing more ideas to be explored in parallel or enabling faster iteration cycles (Dell'Acqua et al., 2025).


However, organizations must weigh this efficiency against the finding that AI-augmented teams produce more top-tier solutions. For firms where breakthrough innovation drives disproportionate value, maintaining team structures while adding AI may prove more strategic than shifting to AI-augmented individuals. The optimal choice depends on whether the organization prioritizes consistent quality across many initiatives or exceptional performance on a smaller number of high-potential projects.


Building Sophisticated AI-Interaction Capabilities


Effective AI utilization requires more than basic prompting skills. Organizations must develop structured capability-building programs that move employees from novice to sophisticated AI interaction.


Effective approaches include:


  • Iterative prompting training: Teach employees to engage AI through multi-turn conversations that progressively refine outputs rather than expecting complete solutions from single prompts

  • Domain-specific prompt libraries: Create and maintain repositories of effective prompts tailored to organizational contexts, business challenges, and functional domains

  • Critical evaluation frameworks: Develop systematic approaches for assessing AI outputs against domain expertise, identifying plausible-sounding but incorrect suggestions, and combining AI-generated content with human judgment

  • Experimentation infrastructure: Establish low-stakes environments where employees can develop AI interaction skills without performance pressure, building confidence and competence through deliberate practice


Procter & Gamble embedded AI capability-building directly into innovation workflows, providing one-hour training sessions focused on prompt engineering for consumer packaged goods challenges before participants engaged in authentic product development tasks. The training included specific prompt templates for ideation, problem framing, and simulated customer interviews—concrete tools participants could immediately adapt. This approach ensured immediate application of learned skills while allowing participants to experience AI's potential in realistic work contexts (Dell'Acqua et al., 2025).


The training drew on established prompting techniques integrated with business methodologies. For instance, the ideation prompts used Chain-of-Thought reasoning, explicitly instructing the AI to articulate its reasoning step-by-step and breaking complex tasks into sequential components. The approach first asked AI to generate numerous ideas, then to refine and narrow them while explaining reasoning at each step. Similarly, the framing prompts established AI personas (such as "innovation specialist") who would guide participants through analyzing problems from multiple perspectives without immediately providing solutions (Dell'Acqua et al., 2025).
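The study's actual prompt templates are not reproduced here, but the two-stage generate-then-refine pattern described above can be sketched as a simple template builder. All wording, function names, and parameters below are illustrative assumptions, not the training materials themselves:

```python
# Hypothetical sketch of a Chain-of-Thought ideation prompt template,
# modeled on the persona + stepwise-reasoning pattern described in the text.
# The wording is invented for illustration, not taken from the P&G training.

def build_ideation_prompt(challenge: str, n_ideas: int = 10, n_keep: int = 3) -> str:
    """Assemble a multi-step prompt that asks the model to reason step by step."""
    return (
        "You are an innovation specialist for a consumer packaged goods company.\n"
        f"Challenge: {challenge}\n\n"
        f"Step 1: Generate {n_ideas} distinct product ideas. For each, briefly "
        "explain the reasoning behind it.\n"
        "Step 2: Evaluate the ideas against technical feasibility and commercial "
        "appeal, explaining your reasoning step by step.\n"
        f"Step 3: Select the {n_keep} strongest ideas and justify each choice."
    )

prompt = build_ideation_prompt("Reduce plastic in laundry detergent packaging")
```

The key design choice is decomposition: rather than asking for a finished answer in one shot, the template forces an explicit generate, evaluate, and narrow sequence, which is the Chain-of-Thought structure the training emphasized.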


Even this minimal one-hour preparation enabled significant performance gains, suggesting most employees currently operate far below their potential AI-augmented productivity. Organizations investing more substantially in AI literacy development—through multi-day training, ongoing coaching, and progressive skill-building—could realize even larger benefits.


Redesigning Performance Expectations and Workflows


AI fundamentally changes what constitutes reasonable expectations for work output. Organizations must update standards, deliverable formats, and timeline assumptions to reflect AI-augmented capabilities.


Effective approaches include:


  • Elevated baseline expectations: Increase minimum standards for deliverable comprehensiveness, depth of analysis, and consideration of alternatives, leveraging AI's capacity to generate extensive content rapidly

  • Accelerated cycle times: Compress timelines for deliverables where AI enables faster initial drafting, while potentially extending time for critical evaluation and refinement

  • Enhanced documentation requirements: Expect more thorough documentation of reasoning, explored alternatives, and rejected options, exploiting AI's ability to track and synthesize decision rationale

  • Transparency protocols: Require clear indication of AI involvement in work products, including which components utilized AI assistance and how outputs were validated against human expertise


The P&G experiment demonstrated these dynamics concretely. AI-enabled participants produced solutions 12-16% faster while generating substantially longer, more comprehensive outputs. This combination suggests organizations can simultaneously accelerate cycle times and increase deliverable depth—a rare productivity improvement that doesn't involve quality trade-offs (Dell'Acqua et al., 2025).


However, organizations must also address the calibration gap where employees underestimate their AI-augmented performance. The finding that AI users were less confident about solution quality despite objective improvements suggests a need for updated feedback mechanisms and performance evaluation approaches. Organizations should provide explicit feedback helping employees recognize when their AI-augmented work meets or exceeds standards, building appropriate confidence in these new working modes.


Developing Cross-Functional Fluency Through AI-Mediated Learning


The evidence that AI breaks down functional silos creates opportunities for accelerated cross-functional skill development. Rather than relying solely on job rotation or cross-functional teams, organizations can use AI to help specialists develop working knowledge of adjacent domains.


Effective approaches include:


  • AI-guided exploration of unfamiliar domains: Encourage specialists to use AI as a learning companion for understanding adjacent functional areas, asking questions and exploring concepts outside their core expertise

  • Simulated cross-functional dialogue: Design AI interactions that expose participants to perspectives from other functions, helping R&D professionals consider commercial viability or marketing specialists engage technical constraints

  • Boundary-spanning assignments: Assign individual contributors tasks outside their core domain with AI support, building confidence and capability in unfamiliar territory

  • AI-mediated translation: Use AI to help specialists translate domain-specific concepts into language accessible to other functions, reducing communication barriers


The P&G results demonstrate AI's capacity to facilitate this learning. Without AI, R&D professionals proposed predominantly technical solutions while Commercial specialists favored market-oriented approaches. With AI, both groups produced balanced solutions spanning technical and commercial dimensions, suggesting they engaged meaningfully with perspectives outside their core expertise. Importantly, this boundary-spanning occurred without requiring direct collaboration with specialists from the other function (Dell'Acqua et al., 2025).


This pattern suggests organizations might accelerate cross-functional skill development by assigning specialists to complete tasks outside their core domain with AI support, then debriefing with domain experts to solidify learning. Over time, this could build more versatile professionals capable of reasoning across functional boundaries even without AI assistance.


Balancing Efficiency With Human Skill Development


The capacity for AI-augmented individuals to match team performance creates risk that organizations over-optimize for efficiency at the expense of skill development, mentorship, and organizational learning.


Effective approaches include:


  • Deliberate skill-building rotations: Maintain some team-based work explicitly for developmental purposes, even where AI-enabled individuals could achieve similar outcomes more efficiently

  • Mentorship preservation: Protect senior-junior pairing opportunities that transfer tacit knowledge and organizational norms that AI cannot fully replicate

  • Capability audits: Regularly assess whether employees maintain core competencies independent of AI assistance, ensuring organizational capability resilience if AI access becomes limited

  • Gradual transition protocols: Phase AI adoption to allow employees time to develop both AI-interaction skills and domain expertise, avoiding scenarios where individuals become dependent on AI before building foundational knowledge


The P&G research highlights this tension. Non-core employees with AI achieved performance levels matching teams with core-job employees, suggesting AI can substitute for experience and mentorship in producing immediate outputs (Dell'Acqua et al., 2025). However, this raises questions about long-term skill development. If organizations consistently deploy AI to bridge experience gaps rather than providing developmental team experiences, they may undermine the mechanisms through which employees build expertise.


Organizations should distinguish between situations where efficiency takes priority versus those where learning matters most. For employees in their first years, maintaining traditional team-based projects may prove essential for building foundational skills and organizational knowledge, even if AI-augmented alternatives would produce better immediate results. For experienced professionals, AI-augmented individual work may appropriately capture efficiency gains without compromising long-term capability.


Some organizations might institute periodic "AI-free" work intervals in which employees complete tasks without AI assistance to maintain baseline capabilities and prevent skill atrophy. While this may seem inefficient, it builds organizational resilience and prevents excessive dependency on AI systems that may experience outages or limitations.


Building Long-Term Organizational Capability in the AI Era

Governance Frameworks for Human-AI Collaboration


As AI transitions from tool to teammate, organizations require new governance structures addressing questions traditional IT frameworks don't contemplate: When should AI participate in decisions? What level of human validation is required? How do we maintain accountability when AI contributes substantively to outputs?


Effective governance balances experimentation with appropriate guardrails. Organizations should establish clear protocols specifying which work products require what level of human validation, particularly for high-stakes decisions affecting customers, employees, or strategy. These protocols must evolve as organizational AI literacy improves and as AI capabilities expand.


The concept of AI as a "cybernetic teammate" draws from Norbert Wiener's foundational work on cybernetics, which describes feedback-regulated systems that dynamically adjust their behavior in response to environmental inputs (Wiener, 1948, 1950). Rather than simply automating tasks, such systems modify their functioning through iterative feedback loops—a property that makes them capable of participating in collaborative processes. This framing emphasizes that AI integration involves more than adding another tool; it requires reconceptualizing the relational fabric of collaboration itself (Dell'Acqua et al., 2025).


Governance should also address the inevitable tension between efficiency and human development. The P&G research demonstrates AI-augmented individuals can match two-person team quality, creating pressure to reduce team sizes that may undermine mentorship, social connection, and organizational learning (Dell'Acqua et al., 2025). Governance frameworks should explicitly protect certain collaborative activities for developmental purposes, even where efficiency metrics suggest alternative structures.


Transparency requirements represent another governance frontier. As AI contributes more substantively to work products, stakeholders—whether clients, regulators, or organizational leaders—may reasonably expect disclosure about AI involvement. The P&G study tracked AI content retention, measuring the percentage of sentences in submitted solutions that were originally produced by AI. Results revealed a polarized distribution: many participants heavily incorporated AI-generated content (with substantial numbers retaining over 75%), while others engaged AI primarily for ideation and validation without directly incorporating generated text (Dell'Acqua et al., 2025).


Importantly, high retention rates don't necessarily indicate passive AI adoption. Among participants retaining AI-generated content, the average was 18.7 prompts per solution, with those showing 100% AI content averaging 23.9 prompts—suggesting extensive iterative interaction rather than simple copy-paste behavior. Organizations should develop standards for documenting AI's role in creating deliverables, both to maintain trust and to enable organizational learning about effective AI integration (Dell'Acqua et al., 2025).
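One plausible way to operationalize the retention metric described above is the share of sentences in a submitted solution that also appear, after normalization, in the AI chat transcript. The study's exact matching procedure is not specified, so the sketch below is an assumption:

```python
# Sketch of a sentence-level AI content retention metric: the fraction of
# solution sentences found verbatim in the AI transcript. This is one
# plausible operationalization; the study's actual procedure may differ.
import re

def sentences(text: str) -> list[str]:
    """Naive sentence split on terminal punctuation; normalized for matching."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p.lower().strip() for p in parts if p.strip()]

def retention_rate(solution: str, ai_transcript: str) -> float:
    """Fraction of the solution's sentences that appear in the AI transcript."""
    sol = sentences(solution)
    ai = set(sentences(ai_transcript))
    if not sol:
        return 0.0
    return sum(s in ai for s in sol) / len(sol)

ai_out = "Use refill pouches. Offer a subscription model. Add a loyalty program."
final = "Use refill pouches. Offer a subscription model. Pilot in two regions."
print(round(retention_rate(final, ai_out), 2))  # prints 0.67
```

A measure like this makes the polarized distribution reported above concrete: a copy-heavy participant scores near 1.0, while one who used AI only for ideation and validation scores near 0.0.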


Continuous Learning Systems for AI-Human Workflows


The rapid evolution of AI capabilities demands learning systems that capture and disseminate emerging best practices for human-AI collaboration. Traditional knowledge management approaches—codifying best practices in static documentation—prove insufficient given the pace of AI advancement and the context-dependent nature of effective AI interaction.


Organizations should implement mechanisms for rapid identification and scaling of effective AI-interaction patterns. This might include systematic analysis of how high-performing employees use AI differently than average performers, creation of internal communities of practice focused on AI-augmented work, and structured experimentation with new AI capabilities as they emerge.


The P&G study provides a template through its analysis of how participants actually used AI. Beyond tracking content retention, researchers examined semantic similarity of solutions across conditions. While human-only solutions showed relatively dispersed distributions, AI-aided solutions demonstrated notably higher semantic similarity—consistent with existing literature on the standardizing effect of large language models. However, when researchers compared AI-aided solutions to outputs from pure AI (generated by prompting GPT-4o to solve the same problems without human iteration), the AI-aided solutions showed much looser clustering. Despite high retention rates of AI-generated content, the semantic fingerprint remained closer to human-only solutions than to pure AI outputs, indicating that humans meaningfully shaped and contextualized AI suggestions rather than merely adopting them wholesale (Dell'Acqua et al., 2025).


This finding suggests organizations should focus learning systems on understanding how skilled employees shape, refine, and integrate AI outputs rather than simply tracking usage metrics. The distinction between passive consumption and active orchestration of AI capabilities may prove crucial for performance outcomes.


Learning systems should also capture negative examples—patterns of AI interaction that produce poor outcomes, create risks, or undermine human skill development. Understanding these failure modes becomes critical organizational knowledge as AI integration accelerates.


Preserving Human Connection and Organizational Culture


The finding that AI-augmented work generates positive emotional responses comparable to or exceeding traditional teamwork represents both opportunity and risk (Dell'Acqua et al., 2025). On one hand, it suggests AI integration need not sacrifice worker satisfaction or engagement. On the other, it raises questions about whether positive emotional responses to AI interaction adequately substitute for the deeper human connections that sustain organizational culture and employee wellbeing over extended periods.


Organizations should distinguish between task-level emotional experience—the satisfaction of completing work successfully—and broader relational needs including belonging, trust-building, and shared purpose. While AI may effectively support the former, the latter likely requires sustained human interaction. Leaders should thoughtfully consider which collaborative experiences to preserve for their relationship-building value even when AI-augmented alternatives might prove more efficient.


The P&G study measured emotional responses at task completion, capturing immediate affective states (enthusiasm, energy, excitement, anxiety, frustration, distress). Results showed AI users experienced more positive and fewer negative emotions than individuals working alone, matching the emotional profile of traditional team members (Dell'Acqua et al., 2025). However, this single-task measurement cannot address longer-term questions about organizational belonging, cultural cohesion, or sustained motivation.


This becomes especially critical for remote and distributed organizations where serendipitous in-person interaction already faces constraints. If AI further reduces occasions for human collaboration, organizations risk weakening the social fabric that enables coordination, knowledge transfer, and cultural cohesion. Intentional design of human connection points—whether periodic in-person gatherings, structured collaboration opportunities, or protected non-AI collaborative work—may prove essential for long-term organizational health.


Organizations should also attend to potential equity implications. If AI democratizes expertise and enables individual contributors to operate more independently, high-performing employees may increasingly opt out of collaborative work, leaving behind those who benefit most from peer learning and collective problem-solving. Governance frameworks should consider how to maintain inclusive access to both AI-augmented efficiency and human mentorship opportunities.


The P&G research also revealed interesting team dynamics. When teams worked without AI, the distribution of solution technical orientation showed clear bimodality (bimodality coefficient = 0.564), suggesting solutions clustered around either technical or commercial orientations—likely reflecting the dominant perspective of the more influential team member. In contrast, AI-enabled teams showed more uniform, unimodal distributions (bimodality coefficient = 0.482) while maintaining similar overall technical depth. This shift from bimodality to unimodality suggests AI may help reduce dominance effects in team collaboration, facilitating more balanced contributions from both technical and commercial perspectives (Dell'Acqua et al., 2025).


This finding suggests AI might improve collaborative dynamics by giving voice to less dominant team members or by providing neutral ground for exploring perspectives that might otherwise be suppressed. Organizations could leverage this capacity intentionally, using AI to surface diverse viewpoints in team settings where power dynamics or status differences might otherwise constrain open dialogue.


Conclusion

The emergence of AI as a "cybernetic teammate" rather than merely another productivity tool demands fundamental rethinking of how organizations structure collaborative work. The evidence from Procter & Gamble demonstrates that AI-augmented individuals can achieve performance levels previously requiring human teams, while simultaneously breaking down functional expertise boundaries and generating positive emotional experiences comparable to traditional collaboration (Dell'Acqua et al., 2025).


These findings suggest four critical imperatives for organizational leaders. First, systematically evaluate which collaborative structures genuinely require multiple humans versus where AI-augmented individuals achieve comparable outcomes. Second, invest substantially in developing sophisticated AI-interaction capabilities across the workforce, moving beyond basic prompt literacy toward advanced orchestration skills. Third, redesign performance expectations, workflow assumptions, and governance frameworks to reflect AI-augmented realities. Fourth, deliberately preserve human collaboration opportunities that serve developmental, relational, and cultural purposes even where efficiency metrics suggest alternatives.


The research also highlights important nuances. While AI-enabled individuals match average team quality, AI-augmented teams show particular advantages for exceptional breakthrough performance—solutions from teams with AI were approximately three times more likely to rank in the top decile (Dell'Acqua et al., 2025). Organizations pursuing innovation leadership face different strategic choices than those optimizing for consistent execution. Context matters—the optimal balance of human collaboration and AI augmentation likely varies across industries, organizational cultures, and specific work domains.


Importantly, participants in the P&G study were relative novices with AI, receiving just one hour of training before engaging in authentic work tasks, suggesting observed effects may represent lower bounds (Dell'Acqua et al., 2025). As employees develop more sophisticated AI-interaction skills and as AI systems evolve to better support collaborative workflows, the magnitude of performance gains and structural implications may increase substantially. Organizations that develop superior capabilities in human-AI collaboration may build competitive advantages that prove difficult for late adopters to replicate.


The transition toward AI as teammate rather than tool will prove neither automatic nor painless. It requires rethinking ingrained assumptions about how work gets done, who needs to be involved, and what constitutes effective collaboration. It demands new skills, revised processes, and governance frameworks addressing novel questions about accountability, transparency, and human development. Yet for organizations willing to engage these challenges thoughtfully, the potential rewards appear substantial: accelerated innovation, democratized expertise, and more efficient workflows without necessarily sacrificing human satisfaction.


The evidence suggests we are witnessing not merely an incremental improvement in collaborative work but a fundamental restructuring of how humans and machines combine capabilities to solve complex problems. Dell'Acqua and colleagues (2025) frame this through the lens of distributed cognition (Hutchins, 1991, 1995) and Actor-Network Theory (Callon, 1984; Latour, 1987, 2007), emphasizing that AI's role transcends that of a mere tool or facilitator, entering the relational fabric of collaboration itself. By treating AI as an active counterpart rather than passive instrument, we gain deeper insight into how GenAI mediates—and is mediated by—the collective processes that form the backbone of modern teamwork.


Organizations that recognize this transformation and respond with appropriate urgency, investment, and governance will shape the future of knowledge work. Those that treat AI as simply another software tool risk falling behind competitors who have learned to harness its collaborative potential. The choice is not whether to integrate AI into collaborative work, but how to do so in ways that enhance organizational capability while preserving the human elements that ultimately sustain innovation, culture, and competitive advantage.


References

  1. Ancona, D. G., & Caldwell, D. F. (1992). Demography and design: Predictors of new product team performance. Organization Science, 3(3), 321–341.

  2. Argote, L. (1999). Organizational learning: Creating, retaining and transferring knowledge. Kluwer Academic Publishers.

  3. Callon, M. (1984). Some elements of a sociology of translation: Domestication of the scallops and the fishermen of St Brieuc Bay. The Sociological Review, 32(1), 196–233.

  4. Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290.

  5. Csaszar, F. A. (2012). Organizational structure as a determinant of performance: Evidence from mutual funds. Strategic Management Journal, 33(6), 611–632.

  6. Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023a). Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters. Harvard Business School Working Paper.

  7. Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023b). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Working Paper 24-013.

  8. Dell'Acqua, F., Ayoubi, C., Lifshitz, H., Sadun, R., Mollick, E., Mollick, L., Han, Y., Goldman, J., Nair, H., Taub, S., & Lakhani, K. R. (2025). The cybernetic teammate: A field experiment on generative AI reshaping teamwork and expertise. Harvard Business School Working Paper 25-043.

  9. Deming, D. J. (2017). The growing importance of social skills in the labor market. Quarterly Journal of Economics, 132(4), 1593–1640.

  10. Deutsch, M. (1949). A theory of cooperation and competition. Human Relations, 2(2), 129–152.

  11. Dougherty, D. (1992). Interpretive barriers to successful product innovation in large firms. Organization Science, 3(2), 179–202.

  12. Hutchins, E. (1991). The social organization of distributed cognition. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 283–307). American Psychological Association.

  13. Hutchins, E. (1995). Cognition in the wild. MIT Press.

  14. Johnson, D. W., & Johnson, R. T. (2005). New developments in social interdependence theory. Genetic, Social, and General Psychology Monographs, 131(4), 285–358.

  15. Jones, B. F. (2009). The burden of knowledge and the death of the renaissance man: Is innovation getting harder? Review of Economic Studies, 76(1), 283–317.

  16. Kogut, B., & Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology. Organization Science, 3(3), 383–397.

  17. Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.

  18. Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.

  19. Nickerson, J. A., & Zenger, T. R. (2004). A knowledge-based theory of the firm: The problem-solving perspective. Organization Science, 15(6), 617–632.

  20. Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192.

  21. Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38.

  22. Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. MIT Press.

  23. Wiener, N. (1950). The human use of human beings: Cybernetics and society. Houghton Mifflin.

  24. Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036–1039.


Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.

Suggested Citation: Westover, J. H. (2025). When Artificial Intelligence Becomes the Teammate: Rethinking Innovation, Collaboration, and Organizational Design in the GenAI Era. Human Capital Leadership Review, 28(1). doi.org/10.70175/hclreview.2020.28.1.2

Human Capital Leadership Review

eISSN 2693-9452 (online)