Restructuring for AI: The Power of Small, High-Agency Teams and the Path to Enterprise-Scale Coordination
- Jonathan H. Westover, PhD
Abstract: Organizations adopting artificial intelligence face a fundamental structural challenge: traditional hierarchies and coordination mechanisms often stifle the experimentation and rapid iteration AI implementation requires. Emerging evidence suggests that small, cross-functional teams with high autonomy—typically comprising senior engineers, domain experts, and experienced product managers—deliver faster time-to-value and stronger early returns on AI investments than centralized, top-down approaches. This article examines the organizational design principles enabling these teams to succeed and addresses the critical gap in enterprise-scale coordination mechanisms. Drawing on organizational theory, agility research, and practitioner accounts from technology, financial services, and healthcare sectors, we propose a dual-operating system model that preserves the benefits of autonomous pods while building connective tissue for resource allocation, knowledge sharing, and strategic alignment. The article concludes with evidence-based recommendations for leaders navigating the transition from experimental AI initiatives to institution-wide capability.
The advent of generative AI and foundation models has compressed innovation cycles from years to months, creating unprecedented pressure on organizational structures designed for stability rather than continuous experimentation. Unlike previous waves of digital transformation—where technology could be deployed through predictable waterfall processes—AI adoption demands tight feedback loops between technical possibility, domain expertise, and user needs. Organizations that maintain rigid functional silos or route decisions through multiple approval layers consistently lose ground to competitors who empower small teams to move quickly.
The emerging playbook is deceptively simple: assemble micro-teams of three to seven people with complementary skills, grant them decision authority over a defined problem space, and remove bureaucratic friction. Early adopters report these units shipping production AI applications in weeks rather than quarters, often achieving positive ROI within the first deployment cycle. Yet this approach raises urgent questions about scalability. How do organizations coordinate dozens or hundreds of autonomous teams without duplicating effort, fragmenting architectural standards, or misallocating scarce AI expertise? What governance structures prevent a thousand flowers from blooming in incompatible directions?
The stakes are substantial. Organizations that solve the coordination problem stand to compound their early wins into sustained competitive advantage. Those that fail risk either descending into chaos as teams proliferate, or reverting to command-and-control structures that kill the very agility that made initial projects successful.
The AI Adoption Landscape
Defining High-Agency Cross-Functional Teams in AI Contexts
High-agency teams operate with what organizational scholars call structural empowerment—the formal authority to make consequential decisions about scope, technology choices, and resource allocation within their domain. In AI implementations, this typically manifests as:
Compositional autonomy: Teams self-organize around three core roles—technical leadership (senior ML/AI engineers), domain expertise (subject matter experts who understand business context and data), and product management (individuals who translate user needs into solvable problems)
Decision rights: Authority to select models, define success metrics, access production data, and deploy to users without external approval gates for routine choices
Protected time: Dedicated allocation rather than part-time matrixed assignments, enabling sustained focus
Resource access: Direct budget control for compute, APIs, and tooling within guardrails
This structure mirrors the "two-pizza team" model pioneered in cloud services development but adapted for AI's unique demands: the need for rapid hypothesis testing with expensive compute, close coupling between model behavior and domain knowledge, and user feedback loops that outpace traditional release cycles.
Critically, high agency does not mean isolation. Effective teams maintain what researchers describe as external activity—deliberate outreach for knowledge exchange, resource negotiation, and alignment—while retaining internal execution authority (Ancona & Caldwell, 1992).
State of Practice: Adoption Patterns and Early Results
Adoption of autonomous AI teams clusters in three organizational archetypes:
Digital natives and technology leaders have moved most aggressively. Cloud platforms, financial technology firms, and AI-first startups routinely staff AI initiatives with sub-ten-person teams empowered to ship directly to production. These organizations report cycle times from concept to deployed model measured in weeks, with some infrastructure-as-code teams shipping updates daily.
Incumbent enterprises in regulated industries adopt hybrid models. Financial services institutions and healthcare systems typically grant teams autonomy for internal tools and decision-support applications while maintaining approval layers for customer-facing or clinical AI. Industry observers note that many large enterprises have established small numbers of autonomous AI teams but have not yet scaled this model across their organizations.
Traditional manufacturing and government organizations lag considerably, often piloting the model in innovation labs or digital units while core operations remain hierarchical.
Where implemented, early performance patterns prove compelling. Organizations using autonomous teams commonly report significantly faster time-to-first-deployment compared to centralized AI centers of excellence. More importantly, these teams demonstrate higher rates of production adoption—the critical transition from proof-of-concept to operational use.
The ROI case rests primarily on opportunity cost: reducing time-to-value from twelve months to three months in a fast-moving market often matters more than optimizing model accuracy by a few percentage points.
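To make the opportunity-cost logic concrete, consider a back-of-the-envelope comparison (a minimal sketch in Python; the dollar figures, horizon, and accuracy uplift are hypothetical illustrations, not benchmarks):

```python
# Back-of-the-envelope comparison of time-to-value versus marginal accuracy.
# All figures are hypothetical and chosen only to illustrate the trade-off.
MONTHLY_VALUE = 100_000        # value captured per month once the model is live
HORIZON_MONTHS = 24            # planning horizon
ACCURACY_UPLIFT = 0.05         # extra per-month value from a few points of accuracy

def cumulative_value(months_to_deploy: int, uplift: float = 0.0) -> float:
    """Total value captured over the horizon for a given deployment delay."""
    live_months = max(HORIZON_MONTHS - months_to_deploy, 0)
    return live_months * MONTHLY_VALUE * (1 + uplift)

fast_but_plain = cumulative_value(months_to_deploy=3)                           # ship in 3 months
slow_but_tuned = cumulative_value(months_to_deploy=12, uplift=ACCURACY_UPLIFT)  # ship in 12 months

print(f"Ship in 3 months, baseline model: ${fast_but_plain:,.0f}")   # $2,100,000
print(f"Ship in 12 months, tuned model:   ${slow_but_tuned:,.0f}")   # $1,260,000
```

Under these assumptions, the team that ships in three months captures roughly two-thirds more value over the horizon despite fielding the less accurate model.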
Organizational and Individual Consequences of Autonomous AI Teams
Organizational Performance Impacts
The performance advantages of high-agency AI teams operate through several mechanisms, each with observable effects:
Accelerated learning cycles constitute the most direct benefit. Traditional AI development routed through functional hierarchies—data science develops a model, engineering productionizes it, business validates it—creates handoff delays. Cross-functional teams collapse these phases into continuous collaboration, enabling weekly or daily iteration. A financial services firm implementing fraud detection AI reported reducing model retraining cycles from quarterly to weekly after shifting to autonomous teams, resulting in substantially improved fraud capture rates within six months.
Reduced coordination costs prove particularly valuable as AI initiatives scale. Transaction cost economics predicts that organizations minimize the sum of production costs and coordination costs (Williamson, 1979). Autonomous teams internalize coordination—technical feasibility, business value, and user needs are negotiated continuously within the team rather than across organizational boundaries. Research on software development teams suggests this structure can significantly reduce coordination overhead compared to matrix organizations.
Higher rates of production deployment emerge from aligned incentives. When a single team owns problem definition through operational deployment, they naturally optimize for the full value chain rather than narrow functional metrics. A multinational retailer restructured its pricing optimization AI from separate analytics, engineering, and merchandising workstreams into integrated teams; deployment rates reportedly increased from approximately one-third of projects reaching production to over three-quarters.
Resource utilization efficiency shows mixed results. Small teams often use compute and engineering talent more efficiently on individual projects by avoiding over-engineering and maintaining tight scope. However, organizations frequently observe duplication across teams—multiple groups independently solving similar problems or building redundant infrastructure. Without coordination mechanisms, these inefficiencies compound rapidly.
Individual and Stakeholder Impacts
For team members, the autonomous model typically enhances several dimensions of work quality:
Skill development accelerates through exposure to full-stack problems. Engineers gain product intuition; product managers develop technical literacy; domain experts learn to think in terms of data and model constraints. This breadth proves increasingly valuable as AI fluency becomes a baseline organizational capability.
Intrinsic motivation strengthens when team members see direct connections between their work and organizational outcomes. Self-determination theory predicts that autonomy, mastery, and purpose drive engagement (Ryan & Deci, 2000). AI practitioners consistently report higher job satisfaction in empowered team structures compared to traditional roles where they "throw models over the wall."
Cognitive load and burnout risks vary by implementation. Well-bounded teams with clear scope and adequate staffing report sustainable pace. Understaffed teams or those with poorly defined decision rights often experience role overload as members feel responsible for everything within their domain but lack resources to address it all.
For end users and customers, autonomous teams typically deliver better user experience when product management rigor accompanies technical capability. Teams that include strong product voices ship AI applications with more intuitive interfaces and clearer explanations of model behavior. Conversely, teams dominated by technical staff sometimes deploy sophisticated models wrapped in poor user experiences that limit adoption.
For executives and governance stakeholders, the model creates new anxieties around visibility and control. Traditional stage-gate processes provide predictable review points; autonomous teams operating in parallel create information asymmetry. Leaders accustomed to reviewing and approving major decisions before implementation must shift to monitoring outcomes and intervening by exception—a transition many find uncomfortable.
Evidence-Based Organizational Responses
Organizations that successfully implement autonomous AI teams while managing coordination challenges employ several complementary strategies. The most effective approaches balance team-level autonomy with organization-level coherence.
Transparent Platform and Standards Strategy
Rather than dictating specific technologies, leading organizations build shared platforms that make the easy path the right path—defaulting teams toward compatible choices while preserving flexibility.
The platform approach rests on providing pre-approved infrastructure that handles undifferentiated heavy lifting: model training environments, deployment pipelines, monitoring tools, and compliance guardrails. Teams maintain full autonomy to select models, engineer features, and design applications, but inherit standards for security, observability, and data governance by using platform services.
A practical implementation pattern:
Curated model library: Pre-vetted foundation models and common architectures with usage documentation, reducing each team's need to evaluate the landscape from scratch
Automated deployment pipelines: Infrastructure-as-code templates that embed security scanning, performance testing, and compliance checks as automatic steps rather than manual gates
Federated data catalog: Self-service discovery of available datasets with embedded access controls, enabling teams to find and use data independently while respecting privacy boundaries
Shared experiment tracking: Common tools for logging model experiments, making it easy for teams to learn from each other's trials without requiring formal knowledge transfer sessions
Technology companies have pioneered this approach in software development through "golden path" philosophies—providing opinionated defaults that handle the majority of use cases well while permitting teams to deviate when justified. Applied to AI, this means most teams use platform-provided MLOps pipelines, but a team building a specialized computer vision application can adopt custom orchestration if needed.
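A minimal sketch of what such a golden-path deployment step might look like, with checks embedded as automatic stages rather than manual gates (the function names, stages, and DeploymentRequest structure are illustrative assumptions, not any particular platform's API):

```python
# Minimal sketch of a "golden path" deployment step: security and compliance
# checks run as automatic pipeline stages. Names and stages are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DeploymentRequest:
    model_name: str
    model_version: str
    risk_tier: str                                  # e.g. "low", "medium", "high"
    checks_passed: List[str] = field(default_factory=list)

def security_scan(req: DeploymentRequest) -> bool:
    # Placeholder: scan model artifact and dependencies for known issues.
    return True

def performance_test(req: DeploymentRequest) -> bool:
    # Placeholder: latency/throughput test against an agreed budget.
    return True

def compliance_check(req: DeploymentRequest) -> bool:
    # Placeholder: documentation, data lineage, and governance metadata present.
    return True

GOLDEN_PATH_STAGES: List[Callable[[DeploymentRequest], bool]] = [
    security_scan, performance_test, compliance_check,
]

def deploy(req: DeploymentRequest) -> bool:
    """Run every embedded stage; deploy only if all pass."""
    for stage in GOLDEN_PATH_STAGES:
        if not stage(req):
            print(f"Blocked at {stage.__name__} for {req.model_name}:{req.model_version}")
            return False
        req.checks_passed.append(stage.__name__)
    print(f"Deploying {req.model_name}:{req.model_version} (checks: {req.checks_passed})")
    return True
```

Teams that stay on the golden path inherit every guardrail automatically; teams that deviate take on the burden of demonstrating equivalent controls.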
Capital One implemented a shared AI platform serving multiple autonomous teams across credit risk, fraud detection, and customer experience domains. The platform handles model governance, deployment automation, and monitoring, while teams retain full control over model selection and feature engineering. The company has reported that platform adoption substantially reduced duplicated infrastructure work while maintaining team velocity.
Distributed Coordination and Guild Structures
To prevent fragmentation without imposing hierarchy, organizations establish horizontal coordination mechanisms that facilitate knowledge exchange and alignment without creating approval bottlenecks.
The guild model—borrowed from craft traditions and adapted by technology companies—creates communities of practice spanning autonomous teams (Wenger, 1998). In AI contexts, guilds typically form around:
Technical domains: ML engineers across teams sharing approaches to model interpretability, addressing data drift, or optimizing inference costs
Application areas: Product managers working on customer-facing AI applications coordinating on interaction patterns and explanation strategies
Governance topics: Data scientists and legal/compliance staff developing shared standards for bias testing and model documentation
Guilds operate through regular forums (often weekly or biweekly), shared documentation repositories, and optional peer review processes. Critically, guild participation is advisory rather than mandatory—teams bring problems and share solutions voluntarily rather than seeking approval.
Effective guild practices include:
Show-and-tell sessions: Teams demonstrate working AI applications, explaining technical decisions and lessons learned in focused presentations that invite questions and suggestions
Problem-solving clinics: Open forums where teams present stuck points—a model that won't converge, a data quality issue, an unexplained performance regression—and draw on collective expertise
Shared documentation: Centralized repositories of architectural decision records, model cards, and incident post-mortems that teams contribute to and learn from
Voluntary peer review: Teams can request (but aren't required to undergo) architecture reviews or code reviews from guild members before major deployments
ING Bank established cross-team AI guilds as their autonomous team count grew. The guilds coordinate on shared challenges like regulatory compliance for ML systems and responsible AI practices without creating formal approval processes. Teams credit the guild structure with preventing significant architectural fragmentation while preserving decision speed.
Pharmaceutical company Novartis uses a similar model across drug discovery AI teams. Regular technical forums allow teams to share breakthroughs in protein folding models or clinical trial optimization approaches. The company has found that this informal coordination reduces duplicated research effort as teams learn about each other's work and avoid redundant exploration.
Clear Decision Rights and Authority Matrices
Ambiguity about who decides what systematically undermines autonomous team performance. Organizations that thrive establish explicit decision rights frameworks that define team authority boundaries and escalation paths.
Decision frameworks typically specify:
Autonomous team decisions: Model architecture selection, feature engineering approaches, experimentation priorities, minor scope adjustments, deployment timing for non-critical applications
Consultative decisions: Technology choices that impact other teams (e.g., introducing a new ML framework), significant budget variance, customer-facing feature launches, changes to shared data pipelines
Escalated decisions: Strategic direction changes, resource allocation across teams, investment in new platform capabilities, responses to significant model failures or compliance incidents
The key is making decision rights context-dependent rather than role-dependent. A team might have full autonomy to deploy a new pricing recommendation model in a test market but need approval to roll it to all markets; the same team might experiment freely with reinforcement learning but need architecture review before adopting a new distributed training framework that would impact platform stability.
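One way to make context-dependent decision rights explicit is to encode them directly, as in this illustrative sketch (the fields, thresholds, and categories are assumptions for demonstration, not a prescribed policy):

```python
# Illustrative encoding of context-dependent decision rights. The thresholds
# and categories are examples, not a recommended standard.
from dataclasses import dataclass

@dataclass
class DeploymentContext:
    affected_user_share: float      # fraction of users affected (0.0 - 1.0)
    revenue_at_risk: float          # bounded revenue impact in dollars
    touches_shared_pipeline: bool   # changes infrastructure other teams rely on
    customer_facing: bool

def decision_path(ctx: DeploymentContext) -> str:
    """Classify a decision as autonomous, consultative, or escalated."""
    if ctx.touches_shared_pipeline or ctx.revenue_at_risk > 1_000_000:
        return "escalated"          # strategic or cross-team impact
    if ctx.customer_facing or ctx.affected_user_share > 0.05:
        return "consultative"       # peer/architecture review, not an approval queue
    return "autonomous"             # team decides and ships

# A test-market rollout stays autonomous; a full customer-facing rollout of the
# same model crosses the thresholds and becomes consultative.
print(decision_path(DeploymentContext(0.01, 50_000, False, False)))   # autonomous
print(decision_path(DeploymentContext(0.40, 50_000, False, True)))    # consultative
```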
Implementation approaches that work:
Written team charters: Documents specifying each team's problem scope, success metrics, resource budget, and decision authority; reviewed quarterly and updated as context changes
Exception-based reporting: Teams operate independently but flag decisions that fall outside defined authority, creating lightweight oversight without approval queues for routine work
Retrospective reviews: After-action analysis of major deployments identifying whether decision rights were appropriate; findings inform charter updates for that team and others
Adobe implemented detailed decision rights frameworks when scaling their AI team count. Each team receives explicit authority for model deployment within their product area up to defined risk thresholds (e.g., affects a limited percentage of users, bounded revenue impact). Decisions exceeding thresholds require peer review from other teams and platform leadership, but the review focuses on risk mitigation rather than approval/rejection. The company reports this approach substantially reduced approval cycle time while maintaining appropriate oversight.
Healthcare technology firm Epic Systems takes a similar approach with clinical AI teams. Teams developing diagnostic support tools have autonomous authority for model updates that maintain or improve accuracy metrics, but must escalate to clinical governance committees for models that change clinical workflows or decision-making processes. This balances the need for rapid iteration on model performance with appropriate caution around patient safety.
Resource Brokering and Allocation Mechanisms
As AI team counts grow, competition for scarce resources—particularly specialized talent, expensive compute infrastructure, and high-quality training data—intensifies. Organizations require mechanisms to allocate resources efficiently without reverting to centralized planning.
Market-inspired approaches show promise. Rather than executive committees deciding which teams get GPU clusters or ML engineering support, some organizations implement internal resource markets where teams bid for resources using allocated budgets, or queue-based systems with transparent prioritization rules.
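A minimal sketch of what a queue-based allocator with a published prioritization rule might look like (the scoring formula and weights are illustrative assumptions):

```python
# Minimal sketch of a queue-based GPU allocator with a transparent priority
# rule. The scoring formula and weights are illustrative assumptions.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ResourceRequest:
    priority: float = field(init=False)               # lower value = served first
    team: str = field(compare=False)
    gpu_hours: int = field(compare=False)
    expected_value: float = field(compare=False)      # team's stated business value
    strategic_weight: float = field(compare=False)    # from quarterly portfolio review

    def __post_init__(self):
        # Published rule: value per GPU-hour, boosted by strategic weight.
        self.priority = -(self.expected_value / self.gpu_hours) * self.strategic_weight

queue = []
heapq.heappush(queue, ResourceRequest("fraud-ml", gpu_hours=200,
                                       expected_value=500_000, strategic_weight=1.2))
heapq.heappush(queue, ResourceRequest("pricing-ai", gpu_hours=400,
                                       expected_value=300_000, strategic_weight=1.0))

capacity = 500  # GPU-hours available this cycle
while queue and capacity > 0:
    req = heapq.heappop(queue)
    granted = min(req.gpu_hours, capacity)
    capacity -= granted
    print(f"{req.team}: granted {granted} GPU-hours (score {-req.priority:.0f})")
```

Because the scoring rule is published, teams can see why another team's request was served first and adjust their own requests accordingly.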
Effective resource allocation patterns:
Transparent capacity dashboards: Real-time visibility into available compute resources, ML engineer bandwidth, and data pipeline capacity, allowing teams to self-coordinate timing of resource-intensive work
Time-boxed resource allocation: Teams receive guaranteed access to specialized resources (e.g., a senior ML researcher, a specific GPU cluster) for defined sprints, with rotation schedules published in advance
Internal service marketplaces: Platform teams offer specialized capabilities (e.g., custom model optimization, advanced feature engineering) that autonomous teams can request; demand signals inform platform investment priorities
Tiered access models: All teams get baseline resource access; teams demonstrating strong ROI or strategic importance receive enhanced allocations through quarterly review processes
Airbnb manages compute resources across multiple AI teams through a combination of reserved capacity (each team gets a baseline allocation) and spot capacity (teams can access additional resources through transparent allocation mechanisms). This approach has helped surface which teams have the highest-value use cases for expensive resources, leading to more efficient allocation than previous committee-based approaches.
Capability Development and Knowledge Circulation
Autonomous teams risk becoming siloed in their expertise, with valuable lessons learned by one team never reaching others facing similar challenges. Organizations counter this through deliberate knowledge circulation mechanisms.
The most effective approaches combine push and pull: making it easy for teams to share knowledge when they choose (push), while creating discoverable repositories that teams can search when encountering problems (pull).
Knowledge circulation practices that scale:
Rotation programs: Team members spend defined periods embedded in different teams, cross-pollinating approaches and building informal networks that persist after rotation ends
Internal tech talks and paper reviews: Regular sessions where teams present either their own work or external research relevant to organizational challenges, recorded and archived
Shared incident databases: Post-mortems of model failures, data quality issues, or deployment problems documented in searchable repositories with lessons learned
Office hours with platform teams: Regular sessions where teams can get help from specialized experts without formal engagement processes
Internal conferences: Quarterly or semi-annual events where teams showcase work, similar to academic conferences but focused on organizational challenges
Google's approach to knowledge sharing across large engineering organizations provides useful precedents adaptable to AI teams. The company maintains searchable repositories of design documents, testing frameworks, and incident post-mortems that teams contribute to and learn from, reducing duplicated learning across the organization.
Pharmaceutical company Roche implemented monthly "AI showcase" sessions where teams present recently deployed models. The sessions include both successes and failures—a team might show a successful drug-drug interaction prediction model and also discuss a failed attempt to predict clinical trial enrollment, explaining what they learned. Attendance is voluntary but draws substantial participation across teams. Teams frequently cite ideas from showcase sessions as influencing their subsequent project directions.
Building Long-Term AI Capability and Coordination
While autonomous teams deliver rapid early wins, sustained competitive advantage requires building institutional capabilities that outlast individual projects and team compositions. Organizations must simultaneously preserve team-level agility while developing enterprise-level coordination mechanisms—what Kotter (2014) calls a "dual operating system."
Federated Governance and Continuous Compliance
The challenge: traditional governance assumes centralized review of discrete decisions; AI systems require continuous monitoring of models that learn and drift over time, making pre-deployment approval insufficient.
Forward-looking organizations shift from gate-based to federated governance—distributing responsibility for compliance across autonomous teams while maintaining centralized oversight of outcomes rather than activities. This involves:
Automated compliance guardrails embedded in deployment infrastructure. Rather than manual review of each model for bias or fairness concerns, organizations build automated testing into CI/CD pipelines. Teams remain responsible for passing tests, but the tests themselves are centrally defined and automatically enforced. Toolkits and frameworks now exist that teams can integrate into their workflows to assess model fairness across demographic groups before deployment.
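A minimal sketch of such a gate, using demographic parity difference as the test (the metric choice, threshold, and function names are illustrative assumptions, not a specific toolkit's API):

```python
# Minimal sketch of an automated fairness gate run inside a deployment
# pipeline. The metric and threshold are illustrative choices.
from collections import defaultdict

MAX_PARITY_GAP = 0.10  # maximum allowed gap in positive-prediction rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = defaultdict(lambda: [0, 0])          # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

def fairness_gate(predictions, groups) -> bool:
    """Return True if deployment may proceed; False blocks the pipeline."""
    gap = demographic_parity_gap(predictions, groups)
    if gap > MAX_PARITY_GAP:
        print(f"Blocked: demographic parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}")
        return False
    return True

# Example: the gate blocks when group A is approved far more often than group B.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(fairness_gate(preds, groups))   # False
```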
Continuous monitoring dashboards that surface model behavior across all deployed systems. Instead of reviewing models at deployment, governance teams monitor aggregate metrics—accuracy drift, prediction distribution shifts, demographic performance gaps—and intervene when thresholds are exceeded. This enables light-touch oversight of many models simultaneously.
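A minimal sketch of the kind of drift check such dashboards might run continuously, here using the population stability index (PSI); the binning scheme and alert threshold are illustrative assumptions:

```python
# Minimal sketch of continuous drift monitoring using the population stability
# index (PSI). The binning and alert threshold are illustrative choices.
import math

PSI_ALERT_THRESHOLD = 0.2   # a commonly cited rule of thumb for "significant shift"

def psi(expected, actual, bins=10):
    """PSI between baseline model scores and current production scores."""
    lo, hi = min(expected), max(expected)

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(values) + bins * 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def check_drift(baseline_scores, production_scores) -> None:
    value = psi(baseline_scores, production_scores)
    if value > PSI_ALERT_THRESHOLD:
        print(f"ALERT: PSI {value:.3f} exceeds {PSI_ALERT_THRESHOLD}; investigate drift")
    else:
        print(f"OK: PSI {value:.3f}")

# Example: baseline scores centered near 0.3, production scores drifting upward.
check_drift([0.2, 0.3, 0.3, 0.4, 0.35, 0.25], [0.6, 0.7, 0.65, 0.8, 0.75, 0.7])
```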
Risk-tiered governance that applies different oversight levels based on model impact. Low-stakes applications (e.g., content recommendation for internal knowledge bases) receive minimal review; high-stakes applications (e.g., credit decisions, clinical diagnostics) undergo rigorous validation and continuous monitoring. The tiering itself is transparent and algorithmic rather than ad-hoc.
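A transparent, rule-based tiering function might look like the following sketch (the dimensions and cut-offs are illustrative, not a recommended policy):

```python
# Illustrative rule-based risk tiering. The dimensions and cut-offs are
# examples of making the tiering transparent, not a mandated standard.
def risk_tier(decision_impact: str, autonomy: str, data_sensitivity: str) -> str:
    """
    decision_impact:  "informational" | "advisory" | "determinative"
    autonomy:         "human_in_loop" | "human_on_loop" | "fully_automated"
    data_sensitivity: "internal" | "personal" | "special_category"
    """
    if decision_impact == "determinative" or data_sensitivity == "special_category":
        return "high"      # rigorous validation plus continuous monitoring
    if autonomy == "fully_automated" or data_sensitivity == "personal":
        return "medium"    # automated checks plus periodic review
    return "low"           # minimal review, standard monitoring

print(risk_tier("advisory", "human_in_loop", "internal"))        # low
print(risk_tier("advisory", "fully_automated", "personal"))      # medium
print(risk_tier("determinative", "human_in_loop", "personal"))   # high
```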
Implementation building blocks:
Model registries that track all deployed models with metadata on training data, performance metrics, and business impact
Automated fairness testing integrated into deployment pipelines, blocking deployments that exceed bias thresholds
Real-time dashboards showing model performance across demographic groups, alerting when disparities emerge
Escalation protocols that clearly specify when teams must pause deployments and seek additional review
Financial services firm JPMorgan Chase implemented federated AI governance across hundreds of models used in trading, risk management, and customer service. The governance framework defines risk tiers and corresponding review requirements, but embeds most compliance checks in automated tooling that teams use as part of normal development. Model risk management teams focus on monitoring production behavior and investigating anomalies rather than pre-approving every deployment. The bank reports maintaining compliance standards while significantly reducing average approval cycle time for medium-risk models.
Distributed Leadership and Strategic Coherence
As autonomous teams proliferate, organizations face a coherence problem: ensuring that many teams are broadly pulling in compatible strategic directions without micromanaging their specific initiatives.
Traditional annual planning cycles prove too slow for AI's rapid evolution—a strategic priority set in January may be obsolete by June as model capabilities advance. Leading organizations adopt continuous strategy processes that set direction at appropriate cadences:
Strategic themes refreshed quarterly: Broad organizational priorities (e.g., "accelerate time-to-market," "improve customer retention," "reduce operational costs") that guide team roadmaps without prescribing specific projects
Outcome-based objectives rather than output-based: Teams align to desired business outcomes (e.g., "reduce customer churn by 15%") and have freedom to determine which AI applications best drive those outcomes
Distributed sensing and strategic input: Teams contribute upward-flowing intelligence about emerging opportunities and threats they observe in their domains, informing strategic adjustments
The leadership challenge shifts from deciding which AI projects to pursue, to curating the portfolio of autonomous teams—ensuring coverage of high-priority areas, adjusting team composition and scope as needs evolve, and reallocating resources from lower-value to higher-value initiatives.
Effective strategic coherence mechanisms:
Quarterly objective-setting processes where executive leadership defines key organizational outcomes, and teams propose how their AI initiatives contribute
Portfolio reviews examining the full landscape of autonomous teams—identifying gaps in strategic coverage, redundant efforts, and opportunities for combination or spin-off
Team lifecycle management explicitly creating new teams for emerging priorities and sunsetting teams whose initial objectives have been met
Strategic forums bringing team leads together to debate priorities and resource allocation, creating peer accountability for portfolio coherence
Technology company Intuit operates numerous autonomous AI teams across its product portfolio. Executive leadership sets quarterly strategic priorities through concise frameworks identifying key challenges and desired outcomes. Teams self-organize around these priorities, proposing specific AI initiatives. Quarterly portfolio reviews assess whether the collection of team initiatives adequately addresses strategic priorities, leading to adjustments in team charters or creation of new teams to fill gaps. The company maintains strategic coherence while preserving team autonomy by operating at different temporal and abstraction levels—executives focus on quarterly strategic outcomes, teams focus on weekly shipping cadences.
Architecture and Data Stewardship
Left uncoordinated, autonomous teams naturally fragment technical architectures and data management practices. Organizations require mechanisms that encourage convergence on interoperable standards without imposing rigid centralized planning.
The emerging solution: treating architecture and data as products with active stewardship rather than as purely governance functions. Platform teams don't dictate standards through policy documents; they build attractive platforms that teams choose to adopt because they genuinely make development easier.
Data mesh principles prove particularly relevant—rather than centralized data lakes controlled by a single team, organizations treat data as domain-oriented products owned by the teams closest to data generation. Each autonomous AI team that produces data (e.g., a customer service AI team generating conversation analytics) is responsible for publishing that data in usable, well-documented form for other teams to consume. This distributes data stewardship while maintaining discoverability.
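One lightweight way to operationalize this is a published data-product descriptor that other teams can discover through the federated catalog; the fields, registry, and URL below are illustrative assumptions:

```python
# Illustrative data-product descriptor a team might publish to the federated
# catalog. Field names, the registry, and the URL are assumptions for the sketch.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataProduct:
    name: str                      # e.g. "conversation_analytics"
    owner_team: str                # team accountable for quality and documentation
    schema: Dict[str, str]         # column -> type
    refresh_cadence: str           # e.g. "hourly", "daily"
    access_policy: str             # e.g. "internal", "restricted-pii"
    documentation_url: str
    consumers: List[str] = field(default_factory=list)

CATALOG: Dict[str, DataProduct] = {}

def publish(product: DataProduct) -> None:
    """Register a data product so other teams can discover and consume it."""
    CATALOG[product.name] = product

def discover(keyword: str) -> List[DataProduct]:
    """Self-service discovery across everything teams have published."""
    return [p for p in CATALOG.values() if keyword in p.name or keyword in p.owner_team]

publish(DataProduct(
    name="conversation_analytics",
    owner_team="customer-service-ai",
    schema={"conversation_id": "string", "intent": "string", "resolution_time_s": "int"},
    refresh_cadence="hourly",
    access_policy="internal",
    documentation_url="https://example.internal/docs/conversation_analytics",
))
print([p.name for p in discover("conversation")])
```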
Evolutionary architecture approaches allow technical standards to emerge through adoption rather than mandate. Platform teams build reusable components (e.g., standardized model serving infrastructure, common feature stores) and make them attractive through superior developer experience. Teams adopt these components voluntarily, creating de facto standards through network effects—as more teams use shared infrastructure, it becomes increasingly costly for any single team to maintain custom alternatives.
Architectural coherence practices:
Architecture guilds that propose standards through working prototypes and reference implementations rather than abstract documents
Internal open source models where teams contribute improvements to shared platforms, creating collective ownership of core infrastructure
Deprecation pathways for legacy approaches—clearly communicating when old patterns should no longer be used and providing migration support
Fitness functions that automatically assess architectures against desired characteristics (e.g., observability, security) and provide teams feedback without blocking their progress
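A fitness function in this sense can be as simple as an automated check of a service description against desired characteristics, returning feedback rather than a veto; the characteristics and fields below are illustrative assumptions:

```python
# Illustrative architectural fitness function: assess a service description
# against desired characteristics and return feedback without blocking.
# The required characteristics and field names are examples only.
REQUIRED_CHARACTERISTICS = {
    "observability": lambda svc: bool(svc.get("metrics_endpoint")) and bool(svc.get("structured_logging")),
    "security": lambda svc: bool(svc.get("auth_required")) and not svc.get("public_buckets"),
    "model_governance": lambda svc: svc.get("model_card_url") is not None,
}

def fitness_report(service: dict) -> dict:
    """Score each characteristic; teams receive feedback, not a veto."""
    return {name: check(service) for name, check in REQUIRED_CHARACTERISTICS.items()}

service = {
    "name": "churn-predictor",
    "metrics_endpoint": "/metrics",
    "structured_logging": True,
    "auth_required": True,
    "public_buckets": False,
    "model_card_url": None,          # missing: flagged, but deployment is not blocked
}
print(fitness_report(service))
# {'observability': True, 'security': True, 'model_governance': False}
```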
Salesforce manages architectural coherence across dozens of AI-powered features through a combination of platform evolution and guild coordination. A central AI platform team builds and operates Einstein—the shared ML infrastructure—but product teams retain autonomy to adopt it or build custom solutions when justified. The platform team actively solicits feedback from autonomous teams and evolves Einstein based on their needs, creating a virtuous cycle where the platform becomes more attractive over time. Most AI features now use shared infrastructure through adoption rather than mandate.
Conclusion
The autonomous team model represents more than a temporary workaround for AI's novelty—it reflects fundamental truths about how organizations learn in conditions of uncertainty. When the problem space is poorly defined and solution approaches are rapidly evolving, pushing decision authority close to the work consistently outperforms centralized planning. Early evidence suggests that small, empowered, cross-functional teams deliver faster time-to-value and higher production adoption rates for AI initiatives than traditional organizational structures.
However, the coordination challenge is real and urgent. Organizations that succeed in scaling autonomous AI teams beyond initial pilots will be those that build explicit mechanisms for resource allocation, knowledge circulation, governance, and strategic coherence. The dual operating system—preserving team autonomy for execution while building connective tissue for coordination—offers a viable path forward.
Several imperatives emerge for leaders:
Invest in platforms before scaling teams. The shared infrastructure that enables autonomy without chaos—common deployment pipelines, federated data access, automated compliance checking—must precede or at least accompany team proliferation. Organizations that scale teams first and platform second inevitably hit painful coordination bottlenecks.
Make coordination mechanisms opt-in and valuable. Guilds, knowledge sharing forums, and architectural standards succeed when teams participate because they derive value, not because participation is mandated. Design coordination mechanisms that genuinely help teams move faster, and adoption follows.
Evolve decision rights as context changes. The appropriate boundary between autonomous team decisions and escalated decisions shifts as AI capabilities mature, regulatory environments evolve, and organizational risk appetite changes. Review and adjust decision frameworks quarterly rather than treating them as static.
Measure coordination quality, not just team velocity. Track metrics that reveal coordination effectiveness—percentage of teams reusing shared components, time from question-asked to answer-received in guilds, duplication rates across teams. Teams can appear productive in isolation while the organization suboptimizes.
The organizations that master this balance—preserving the speed and creativity of autonomous teams while building institutional capabilities for coordination—will compound early AI wins into sustained competitive advantage. Those that fail to solve the coordination puzzle will either descend into fragmented chaos or revert to centralized control structures that extinguish the agility that made initial successes possible. The window for getting this right is measured in quarters, not years.
References
Ancona, D. G., & Caldwell, D. F. (1992). Bridging the boundary: External activity and performance in organizational teams. Administrative Science Quarterly, 37(4), 634–665.
Kotter, J. P. (2014). Accelerate: Building strategic agility for a faster-moving world. Harvard Business Review Press.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge University Press.
Williamson, O. E. (1979). Transaction-cost economics: The governance of contractual relations. The Journal of Law and Economics, 22(2), 233–261.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). Restructuring for AI: The Power of Small, High-Agency Teams and the Path to Enterprise-Scale Coordination. Human Capital Leadership Review, 28(3). doi.org/10.70175/hclreview.2020.28.3.4