The GenAI Divide: Why 95% of Enterprise AI Investments Fail—and How the 5% Succeed

Abstract: Despite $30–40 billion in enterprise GenAI investment, 95% of organizations achieve zero measurable return, trapped on the wrong side of what we term the "GenAI Divide." This review synthesizes findings from MIT's Project NANDA research examining 300+ AI implementations and interviews with 52 organizations to identify why pilots stall and how exceptional performers succeed. The divide stems not from model quality or regulation, but from a fundamental learning gap: most enterprise AI systems lack memory, contextual adaptation, and continuous improvement capabilities. While consumer tools like ChatGPT achieve 80% exploration rates, custom enterprise solutions suffer 95% pilot-to-production failure rates. Organizations crossing the divide share three patterns: they partner rather than build (achieving 2x higher success rates), empower distributed adoption over centralized control, and demand learning-capable systems that integrate deeply into workflows. Back-office automation delivers superior ROI compared to heavily-funded sales functions, though measurement challenges persist. The emerging agentic web—enabled by protocols supporting persistent memory and autonomous coordination—represents the infrastructure required to bridge this divide at scale.

The enterprise AI landscape reveals a stark paradox. Organizations have committed unprecedented capital to generative AI adoption—between $30 billion and $40 billion by conservative estimates—yet the transformation promised by this technology remains confined to a remarkably small subset of implementers (Challapally et al., 2025). This concentration of success versus failure is so pronounced that it warrants distinct categorization: the GenAI Divide.


On one side of this divide sit the majority: organizations piloting tools, experimenting with use cases, and investing substantial resources while generating minimal measurable business impact. On the other side, a small cohort—approximately 5% of enterprises examined—extract millions in quantifiable value through fundamentally different approaches to procurement, implementation, and organizational design (Challapally et al., 2025). This pattern mirrors findings from earlier technology adoption cycles, where initial enthusiasm and widespread experimentation preceded concentrated value capture among organizations with superior implementation capabilities (Rogers, 2003).


The stakes extend beyond wasted investment. As AI capabilities accelerate and early adopters establish competitive moats through data accumulation and workflow integration, the window for crossing this divide narrows measurably. Procurement leaders interviewed for MIT's Project NANDA research indicate that vendor lock-in timelines span 18 months on average, suggesting organizations have limited runway to establish effective AI strategies before switching costs become prohibitive (Challapally et al., 2025). Network effects and learning curves create increasing returns to scale in AI deployment, amplifying first-mover advantages (Brynjolfsson & McAfee, 2014).


Why now? Three forces converge to make this moment critical. First, the infrastructure enabling truly adaptive AI systems—persistent memory, agent-to-agent protocols, contextual learning—has matured from research concept to deployable reality (Anthropic, 2024). Second, employee expectations have shifted dramatically; shadow AI adoption exceeds 90% in surveyed organizations, creating bottom-up pressure that formal initiatives struggle to match (Challapally et al., 2025). This grassroots technology adoption pattern reflects how transformative tools often enter organizations through individual users before gaining institutional sanction (von Hippel, 2005). Third, competitive dynamics increasingly reward speed: organizations crossing the divide first in their sectors establish data advantages and workflow integration that competitors find difficult to replicate.


This article examines why the GenAI Divide exists, what separates organizations on either side, and how both technology buyers and builders can implement evidence-based strategies to cross it. The analysis draws primarily from Project NANDA's systematic research examining over 300 publicly disclosed AI initiatives, structured interviews with representatives from 52 organizations, and survey responses from 153 senior leaders (Challapally et al., 2025).


The Enterprise GenAI Landscape: High Activity, Low Transformation

Defining the GenAI Divide in Practice


The GenAI Divide manifests as a discontinuity between adoption activity and business transformation. Adoption metrics suggest widespread engagement: 80% of surveyed organizations have explored or piloted generative AI tools, and nearly 40% report formal deployment of consumer-grade large language models (Challapally et al., 2025). These figures align with broader technology adoption patterns, where initial enthusiasm and experimentation precede selective scaling (Moore, 2014).


However, deployment figures for enterprise-grade, task-specific AI systems tell a different story. While 60% of organizations evaluated custom or vendor-provided solutions aligned to specific workflows, only 20% progressed these evaluations to pilot stage, and a mere 5% achieved production deployment with measurable business impact (Challapally et al., 2025). This 95% pilot-to-production failure rate for integrated systems defines the fundamental challenge facing enterprise AI adoption and exceeds failure rates observed in traditional enterprise software implementations, which typically range from 50-70% (Standish Group, 2020).


The divide is not merely about successful versus unsuccessful implementations. It reflects fundamentally different approaches to technology procurement, organizational design, and capability building. Organizations on the "wrong side" treat AI as software to purchase and deploy. Those crossing the divide treat AI as capability to cultivate through partnerships, requiring continuous adaptation and deep workflow integration—an approach more aligned with organizational learning theory than traditional IT implementation (Argyris & Schön, 1978).


Industry-Level Disruption Patterns


To quantify transformation beyond anecdotal evidence, Project NANDA developed a composite AI Market Disruption Index scoring nine major industries across five dimensions: market share volatility among incumbents, revenue growth of AI-native entrants, emergence of new business models, observable changes in customer behavior, and frequency of AI-driven organizational restructuring (Challapally et al., 2025). This multidimensional approach addresses the challenge of measuring technological disruption before its full economic impact becomes visible in traditional productivity statistics (Brynjolfsson et al., 2017).


The results reveal concentration of impact in just two sectors:


  • Technology (score: 2.0/5.0): New code generation tools like Cursor challenge established players such as GitHub Copilot; developer workflows show measurable shifts in time allocation and tool preferences

  • Media & Telecom (score: 1.5/5.0): AI-native content creation platforms gain share; advertising dynamics shift toward automated personalization


Seven remaining sectors—including healthcare, financial services, and energy—score below 1.0, indicating pilot activity without structural market change (Challapally et al., 2025). This pattern contradicts the narrative of broad-based AI transformation and instead suggests disruption follows a power law distribution, concentrating in sectors where AI augments core product delivery (code, content) rather than supporting peripheral functions (Christensen, 2016).


Sensitivity analysis testing alternative weightings for the five disruption indicators confirmed these rankings. Technology and Media consistently occupy top positions regardless of methodology, while Healthcare and Energy remain at the bottom. Professional Services showed the most variability (ranging from 1.2 to 2.1 depending on how efficiency gains versus structural change were weighted), reflecting genuine ambiguity about whether productivity tools constitute true disruption (Challapally et al., 2025).
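

To make this sensitivity analysis concrete, the sketch below recomputes a composite index under alternative weighting schemes and checks whether the ranking holds. The five dimensions follow the report's definition; every score and weight in the code is a hypothetical placeholder rather than Project NANDA data.

```python
# Illustrative sensitivity check for a composite disruption index.
# Dimension order: [market share volatility, AI-native revenue growth,
# new business models, customer behavior change, AI-driven restructuring].
# All scores and weights are hypothetical placeholders.

industry_scores = {
    "Technology":      [2.5, 2.5, 2.0, 1.5, 1.5],
    "Media & Telecom": [1.5, 2.0, 1.5, 1.5, 1.0],
    "Healthcare":      [0.5, 1.0, 0.5, 0.5, 0.5],
    "Energy":          [0.5, 0.5, 0.5, 0.5, 0.5],
}

weightings = {
    "equal":               [0.2, 0.2, 0.2, 0.2, 0.2],
    "structural_emphasis": [0.3, 0.1, 0.3, 0.1, 0.2],
    "efficiency_emphasis": [0.1, 0.3, 0.1, 0.3, 0.2],
}

def composite(scores, weights):
    """Weighted average of dimension scores (0-5 scale)."""
    return sum(s * w for s, w in zip(scores, weights))

for scheme, weights in weightings.items():
    ranking = sorted(industry_scores,
                     key=lambda ind: composite(industry_scores[ind], weights),
                     reverse=True)
    print(f"{scheme:22s} -> {ranking}")

# If the same sectors top every ranking while the same sectors stay at the
# bottom, the index is robust to the choice of weights.
```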


A mid-market manufacturing COO captured the prevailing sentiment among leaders in low-disruption sectors: "The hype on LinkedIn says everything has changed, but in our operations, nothing fundamental has shifted. We're processing some contracts faster, but that's all that has changed" (Challapally et al., 2025, p. 6). This disconnect between public discourse and operational reality echoes historical patterns where transformative technologies require extended periods before productivity gains become measurable (Solow, 1987).


The Pilot-to-Production Chasm


The starkest manifestation of the GenAI Divide appears in deployment trajectories. Generic LLM interfaces (ChatGPT, Claude, similar tools) show relatively smooth adoption curves: 80% investigation, 50% pilot activity, 40% reported implementation (Challapally et al., 2025). However, "implementation" for these tools typically means individual productivity enhancement rather than enterprise-grade workflow integration. This distinction matters because individual-level adoption, while valuable, rarely translates to organizational transformation without structural integration (Davenport & Prusak, 1998).


Custom and task-specific AI solutions face far steeper barriers. The 60% to 20% to 5% progression (evaluation to pilot to production) represents systematic failure at each transition point (Challapally et al., 2025). Post-pilot abandonment stems from consistent issues:


  • Workflow brittleness: Systems designed for general cases break on enterprise-specific edge conditions—a pattern familiar from earlier expert systems research (Buchanan & Shortliffe, 1984)

  • Integration complexity: Tools requiring extensive configuration or manual context input face adoption resistance, reflecting the "last mile" problem in enterprise technology deployment (Pollock & Williams, 2009)

  • Static performance: AI that doesn't learn from usage patterns or improve over time loses user trust, violating user expectations shaped by consumer AI experiences that continuously improve (Norman, 2013)


Enterprise users interviewed expressed consistent frustration with vendor-pitched solutions despite enthusiasm for consumer AI tools. One CIO summarized the disconnect: "We've seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects" (Challapally et al., 2025, p. 7).


This pattern reveals a critical distinction. High adoption of generic tools demonstrates user appetite for AI capabilities. Simultaneous rejection of custom solutions indicates that current enterprise offerings fail to meet expectations shaped by consumer AI experiences—creating the behavioral component of the GenAI Divide. The expectation gap reflects how consumer technology increasingly sets usability standards that enterprise systems must match (Consumerization of IT, 2011).


The Shadow AI Economy as Market Signal


While formal enterprise initiatives struggle, employees have created an alternative AI economy operating outside official channels. Survey data indicates that over 90% of organizations have employees regularly using personal ChatGPT subscriptions, Claude accounts, or similar tools for work-related tasks, despite only 40% of companies purchasing official LLM subscriptions (Challapally et al., 2025). This shadow IT phenomenon has historical precedents—from spreadsheets in the 1980s to cloud storage in the 2000s—where user-driven adoption eventually forces organizational accommodation (Gyory et al., 2012).


This "shadow AI" phenomenon provides valuable market intelligence. Employees voting with personal subscriptions demonstrate several realities:


  1. Demand exists: Workers find sufficient value to pay from personal funds, revealing authentic product-market fit

  2. Official tools underperform: Employees bypass sanctioned systems in favor of consumer alternatives, signaling capability gaps

  3. Friction points are known: Shadow usage patterns reveal which tasks benefit from AI and which organizational barriers prevent adoption


Forward-thinking organizations increasingly study shadow AI usage to inform procurement decisions, effectively treating employee behavior as product-market fit signal (Challapally et al., 2025). This approach recognizes that crossing the GenAI Divide requires building on demonstrated user preferences rather than imposing top-down solutions—a principle consistent with user-centered design methodology (Norman & Draper, 1986).


Organizations that embrace rather than suppress shadow AI usage gain competitive advantages. By observing which tools employees select, which workflows they automate, and which pain points they address, enterprises can make more informed procurement decisions and reduce implementation risk. This "lead user" approach to innovation has demonstrated effectiveness across multiple technology cycles (von Hippel, 1986).


Organizational and Individual Consequences of the GenAI Divide

Enterprise Performance Impacts


The concentration of returns on one side of the GenAI Divide creates measurable performance divergence. Organizations achieving successful AI integration report:


  • Process acceleration: 40% faster lead qualification cycles; 30% reduction in document processing time

  • External cost reduction: $2–10 million annually in eliminated BPO contracts for customer service and administrative functions

  • Revenue protection: 10% improvement in customer retention through AI-powered engagement and follow-up systems (Challapally et al., 2025)


These benefits cluster in specific functional areas. Back-office automation—despite receiving only a fraction of AI investment budgets—delivers clearer ROI than heavily-funded sales and marketing initiatives. One financial services firm documented $1 million in annual savings from automating outsourced risk compliance checks, replacing external consultants with internal AI capabilities (Challapally et al., 2025). This pattern aligns with research showing that process automation generates measurable returns more reliably than customer-facing applications, which face greater variability in user acceptance and behavior change requirements (Lacity & Willcocks, 2016).


Conversely, organizations trapped on the wrong side of the divide experience:


  • Resource misallocation: Budgets concentrated in visible but low-ROI functions (sales, marketing) while high-return opportunities in operations and finance remain underfunded—a manifestation of the "innovation theater" phenomenon where organizations prioritize visible initiatives over effective ones (Govindarajan & Trimble, 2010)

  • Pilot fatigue: Teams cycling through multiple failed implementations develop skepticism toward future AI initiatives, creating organizational antibodies against change (Kotter, 1995)

  • Competitive disadvantage: As successful adopters establish data moats and workflow integration, late followers face escalating switching costs and capability gaps


The performance differential compounds over time. Organizations successfully deploying AI systems with learning capabilities create virtuous cycles: better outputs generate more usage, more usage creates richer training data, richer data improves outputs (Brynjolfsson & McAfee, 2014). Those relying on static tools remain locked in neutral, unable to capture improvement curves. This dynamic mirrors platform economics, where early adopters benefit from network effects and data accumulation advantages (Parker et al., 2016).


Stakeholder and Workforce Impacts


Contrary to widespread predictions of immediate, large-scale displacement, Project NANDA research found limited workforce reduction directly attributable to GenAI adoption (Challapally et al., 2025). However, impacts manifest in more nuanced patterns that align with economic research suggesting technology adoption creates gradual rather than sudden labor market disruptions (Acemoglu & Restrepo, 2019).


Selective displacement in outsourced functions: Reductions concentrate in previously externalized roles—customer support operations, administrative processing, and standardized development tasks. Organizations crossing the GenAI Divide report 5–20% headcount decreases in these categories, primarily affecting contract workers and BPO arrangements rather than full-time employees (Challapally et al., 2025). This pattern reflects how organizations typically automate peripheral functions before core activities, following a "task substitution" model rather than wholesale job elimination (Autor, 2015).


Constrained hiring patterns: In Technology and Media sectors showing measurable GenAI disruption, over 80% of executives anticipate reduced hiring volumes within 24 months. Conversely, in Healthcare, Energy, and Advanced Industries (sectors scoring below 1.0 on the disruption index), most executives report no current or anticipated hiring reductions (Challapally et al., 2025). This sectoral variation suggests that labor market impacts follow industry-specific automation potential rather than uniform displacement (Frey & Osborne, 2017).


Evolving skill requirements: Organizations consistently emphasize AI literacy as a fundamental capability requirement, independent of role. A VP of Operations noted: "Our hiring strategy prioritizes candidates who demonstrate AI tool proficiency. Recent graduates often exceed experienced professionals in this capability" (Challapally et al., 2025, p. 22). This shift toward complementary skills reflects how technological change typically increases demand for workers who can effectively leverage new tools rather than simply replacing workers (Autor et al., 2003).


MIT's Project Iceberg analysis provides broader context, estimating current automation potential at 2.27% of U.S. labor value, with latent exposure reaching $2.3 trillion affecting 39 million positions (Challapally et al., 2025). This latent exposure becomes actionable as AI systems develop persistent memory, continuous learning, and autonomous operation—capabilities that define crossing the GenAI Divide. The economic magnitude suggests substantial long-term labor market impacts, though the transition timeline remains uncertain (Brynjolfsson et al., 2018).


Importantly, displacement patterns correlate with organizational AI maturity rather than industry alone. Advanced adopters in any sector show workforce impacts; laggards show minimal change regardless of theoretical automation potential. This suggests the GenAI Divide determines not whether organizations will experience workforce transformation, but when—consistent with research showing that technology adoption timing creates persistent productivity differentials (Bartelsman et al., 2013).


Evidence-Based Organizational Responses

Understanding the Learning Gap: Core Barrier to Success


Project NANDA research identified "the learning gap" as the primary factor preventing organizations from crossing the GenAI Divide (Challapally et al., 2025). This gap manifests as a systematic mismatch between user expectations—shaped by consumer AI experiences—and enterprise tool capabilities. The learning gap concept extends research on organizational learning and knowledge management by highlighting how AI systems' inability to accumulate contextual knowledge prevents effective organizational integration (Nonaka & Takeuchi, 1995).


Survey respondents rated barriers to AI scaling on a 1–10 frequency scale. Results revealed:


  • Tool adoption resistance (8.5/10): Expected given general change management challenges

  • Model output quality concerns (7.8/10): Counterintuitively high given widespread consumer LLM satisfaction

  • Poor user experience (6.2/10): Significant but secondary to quality concerns

  • Executive sponsorship gaps (4.1/10) and change management difficulties (3.9/10): Lower than conventional wisdom suggests (Challapally et al., 2025)


The prominence of quality concerns despite high consumer AI adoption creates an apparent paradox. Resolution lies in understanding what "quality" means in enterprise contexts. Users don't question whether ChatGPT can generate coherent text; they question whether enterprise AI tools can learn their specific workflows, remember contextual preferences, and improve through usage. This distinction aligns with research on situated cognition, which emphasizes that knowledge and competence are inseparable from context (Lave & Wenger, 1991).


A corporate lawyer exemplified this dynamic. Her firm invested $50,000 in a specialized contract analysis tool, yet she consistently defaulted to ChatGPT for drafting: "Our purchased AI tool provided rigid summaries with limited customization options. With ChatGPT, I can guide the conversation and iterate until I get exactly what I need" (Challapally et al., 2025, p. 12). However, for high-stakes contracts, she trusted neither—preferring human colleagues who could accumulate client knowledge and learn from feedback.


This preference hierarchy reveals the learning gap's true nature. Survey respondents preferred AI over humans for simple tasks (70% for email drafting, 65% for basic analysis) but overwhelmingly favored humans for complex, high-stakes projects requiring judgment, memory, and adaptation (90% human preference) (Challapally et al., 2025). The dividing line isn't intelligence or speed—it's learning capability. This pattern confirms research suggesting that tasks requiring contextual knowledge and adaptive expertise remain difficult to automate despite advances in AI capabilities (Autor, 2015).


The learning gap manifests across three dimensions:


  1. Contextual memory deficits: Current systems treat each interaction independently, requiring users to provide full context repeatedly

  2. Feedback integration failures: Most enterprise AI lacks mechanisms to incorporate corrections, preferences, or outcome data into future performance

  3. Workflow adaptation limits: Systems cannot adjust to organizational changes, evolving procedures, or team-specific conventions without manual reconfiguration


Addressing these gaps requires architectural changes beyond incremental model improvements. Systems must support persistent memory, structured feedback loops, and autonomous adaptation—capabilities that distinguish genuine learning systems from static inference engines (Russell & Norvig, 2020).
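

A minimal sketch of what these requirements imply in code follows, assuming a generic call_model placeholder for whatever backend an organization uses. The class, storage format, and method names are illustrative, not any vendor's API; the point is that memory and feedback live outside the model and shape every subsequent prompt.

```python
# Minimal sketch of a learning-capable wrapper around a static model.
# `call_model` stands in for any LLM backend; the class name and the
# JSON storage format are illustrative, not a specific product.

import json
from pathlib import Path

def call_model(prompt: str) -> str:
    """Stand-in for a model call (e.g., an API request)."""
    return f"[model output for: {prompt[:40]}...]"

class LearningAssistant:
    def __init__(self, memory_path: str = "assistant_memory.json"):
        self.memory_path = Path(memory_path)
        # Persistent memory: preferences and corrections survive sessions.
        if self.memory_path.exists():
            self.memory = json.loads(self.memory_path.read_text())
        else:
            self.memory = {"preferences": [], "corrections": []}

    def respond(self, request: str) -> str:
        # Contextual memory: prior preferences and recent corrections are
        # folded into every prompt instead of being re-entered by the user.
        context = "\n".join(
            self.memory["preferences"] + self.memory["corrections"][-5:]
        )
        return call_model(f"Context:\n{context}\n\nTask:\n{request}")

    def record_feedback(self, original: str, correction: str) -> None:
        # Feedback integration: corrections become part of future context.
        self.memory["corrections"].append(
            f"When asked '{original[:60]}', the preferred answer was: {correction}"
        )
        self.memory_path.write_text(json.dumps(self.memory, indent=2))

assistant = LearningAssistant()
print(assistant.respond("Draft a supplier escalation email."))
assistant.record_feedback("Draft a supplier escalation email.",
                          "Use a firmer tone and cite the SLA clause.")
```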


Transparent Vendor Evaluation and Procurement Strategy


Organizations successfully crossing the GenAI Divide implement structured procurement approaches that emphasize capability demonstration over feature marketing. Interview analysis revealed executives consistently prioritize six factors when selecting AI vendors:


  1. Existing trust relationships (85% mention frequency): Preference for established partners adding AI capabilities versus unknown startups

  2. Deep workflow understanding (78%): Vendor fluency with industry-specific approval chains, data flows, and compliance requirements

  3. Integration burden (72%): Tools that plug into existing systems (Salesforce, internal platforms) versus those requiring wholesale replacement

  4. Data governance clarity (70%): Transparent boundaries preventing client data mixing or unauthorized model training

  5. Continuous improvement capability (66%): Systems that learn from usage rather than remaining static

  6. Operational flexibility (63%): Adaptation to evolving business processes without requiring vendor intervention (Challapally et al., 2025)


This hierarchy inverts typical software procurement priorities. Traditional SaaS evaluation emphasizes feature completeness, pricing models, and implementation timelines (Cusumano, 2010). GenAI procurement prioritizes trust, integration depth, and learning capability—patterns more aligned with professional services engagement than software licensing. This shift reflects how AI systems require ongoing partnership rather than one-time purchase (Zysman & Kenney, 2018).


Procurement Example: Financial Services Firm


A $5 billion financial services company exemplified effective GenAI procurement strategy. Rather than issuing broad RFPs for "AI capabilities," the organization identified five specific workflow pain points: loan application processing, fraud detection alert triage, customer inquiry routing, regulatory report generation, and internal knowledge management.


For each use case, the procurement team established success criteria based on operational outcomes (processing time reduction, false positive decrease) rather than model benchmarks (accuracy scores, latency metrics). They structured pilots as 90-day partnerships with clearly defined data access, performance metrics, and scale-up triggers. This outcome-based approach reflects principles from agile methodology and lean startup thinking, where rapid validation cycles prevent over-investment in unproven solutions (Ries, 2011).


The approach yielded pragmatic results. Of five pilots initiated, two reached production deployment within six months. Both successful vendors demonstrated continuous improvement during pilot phases—systems that performed better in week twelve than week one—while failed pilots showed static performance regardless of feedback volume (Challapally et al., 2025). This empirical test of learning capability provides clearer signal than vendor claims or controlled demonstrations.
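

One way to operationalize that test is to track a pilot's weekly operational metric and check whether it trends upward over the pilot window. The sketch below fits a least-squares slope to hypothetical weekly task-success rates; the figures and the 0.005-per-week threshold are illustrative assumptions.

```python
# Hypothetical check for "learning capability": does a pilot's weekly
# quality metric improve over a 12-week window, or stay flat?

def weekly_trend(scores):
    """Least-squares slope of the metric versus week number."""
    n = len(scores)
    weeks = range(1, n + 1)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    cov = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    var = sum((w - mean_w) ** 2 for w in weeks)
    return cov / var

# Illustrative weekly task-success rates for two pilots.
learning_pilot = [0.62, 0.64, 0.63, 0.67, 0.70, 0.71,
                  0.74, 0.75, 0.78, 0.80, 0.81, 0.83]
static_pilot   = [0.70, 0.69, 0.71, 0.70, 0.70, 0.71,
                  0.69, 0.70, 0.71, 0.70, 0.70, 0.71]

for name, scores in [("learning", learning_pilot), ("static", static_pilot)]:
    slope = weekly_trend(scores)
    verdict = "improving" if slope > 0.005 else "flat"
    print(f"{name} pilot: slope = {slope:.4f} per week -> {verdict}")
```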


Organizations should also consider vendor financial stability and strategic alignment. A Head of Procurement at a major CPG firm noted: "I receive numerous emails daily claiming to offer the best GenAI solution. Some have impressive demos, but establishing trust is the real challenge. With so many options flooding our inbox, we rely heavily on peer recommendations and referrals from our network" (Challapally et al., 2025, p. 10). This reliance on social proof and trusted networks reflects how organizations manage uncertainty in nascent technology markets (Podolny, 2001).


Building Internal AI Literacy and Adoption Capability


Organizations on the right side of the GenAI Divide cultivate distributed AI capability rather than concentrating expertise in centralized functions. This approach recognizes that workflow integration requires domain knowledge that centralized AI teams cannot possess across all business units—a principle consistent with research on knowledge integration and distributed expertise (Grant, 1996).


Effective literacy programs share common elements:


  • Prosumer identification and empowerment: Recognizing employees already using AI tools personally and channeling their enthusiasm into sanctioned use cases

  • Use case sourcing from frontline managers: Allowing budget holders to surface problems rather than relying on central labs to identify opportunities

  • Hands-on experimentation over classroom training: Providing safe environments to test tools on real work rather than hypothetical scenarios—reflecting adult learning principles emphasizing experiential knowledge (Kolb, 1984)

  • Clear governance guardrails: Establishing data handling policies and approval workflows while encouraging exploration within boundaries


Manufacturing Example: Mid-Market Operations


A mid-market manufacturing firm (annual revenue ~$800 million) exemplified distributed adoption strategy. Rather than creating a centralized AI center of excellence, the organization identified "power users" across departments—employees already leveraging ChatGPT or similar tools for productivity gains. This approach builds on lead user theory, which suggests that early adopters with strong needs and technical capability can drive broader organizational innovation (von Hippel, 1986).


The company provided these individuals with budget authority ($5,000–$25,000) to pilot domain-specific AI tools, requiring only structured documentation of results and adherence to data governance policies. Over six months, 23 discrete pilots launched across procurement, quality assurance, logistics, and customer service.


While most pilots yielded modest or no returns, three generated substantial impact: an automated supplier risk monitoring system reducing procurement team workload by 30%, a quality inspection image analysis tool catching defects 40% faster than manual review, and a customer inquiry routing system improving first-contact resolution by 25%.


Critically, successful tools emerged from frontline expertise identifying high-friction workflows, not from central planning. The distributed approach allowed the organization to test diverse hypotheses rapidly while maintaining governance through clear boundaries rather than centralized approval bottlenecks (Challapally et al., 2025). This "disciplined experimentation" model balances innovation with risk management (Thomke, 2003).


Organizations should also invest in developing AI literacy at all levels. While technical expertise remains important, broader understanding of AI capabilities, limitations, and appropriate use cases enables better decision-making throughout the organization. This democratization of AI knowledge supports more effective collaboration between technical and business teams (Davenport & Ronanki, 2018).


Strategic Partnership Over Internal Development


Project NANDA data reveals a striking pattern: strategic partnerships with external vendors achieved deployment success approximately twice as often as internal development efforts (66% versus 33%), despite internal builds being more commonly attempted (Challapally et al., 2025). This differential contradicts conventional wisdom about maintaining competitive advantage through proprietary technology development (Porter, 1985) and instead suggests that specialization advantages favor external partners in AI implementation.


This differential reflects several factors:


Specialization advantages: AI vendors developing category-specific tools accumulate domain expertise and training data across multiple clients, creating capability depth difficult for individual enterprises to replicate. This pattern aligns with research on increasing returns to specialization in knowledge-intensive industries (Teece et al., 1997).


Resource efficiency: Partnerships avoid the overhead of building from scratch—recruiting specialized talent, establishing infrastructure, maintaining systems over time. Given the scarcity and high cost of AI talent, external partnerships often provide more cost-effective access to expertise (Ransbotham et al., 2017).


Time-to-value: External solutions often reach production faster, with top-performing startups achieving deployment within 90 days versus nine-month timelines for enterprise internal builds (Challapally et al., 2025). This speed advantage reflects how vendors can amortize development costs across multiple clients while enterprises must bear full costs for single-purpose solutions.


However, partnership success depends on treating vendors as capability co-developers rather than software suppliers. Organizations achieving high deployment rates structure engagements with:


  • Deep customization expectations: Demanding adaptation to enterprise-specific workflows rather than accepting generic solutions—recognizing that AI value depends on contextual fit (Davenport & Harris, 2017)

  • Outcome-based evaluation: Measuring operational metrics (processing time, cost reduction) versus software benchmarks (model accuracy, feature completeness)

  • Iterative co-development: Treating initial deployment as learning phase, expecting vendor responsiveness to feedback and edge cases


Healthcare Example: Transcription and Documentation


A regional healthcare system (15 hospitals, 200+ clinics) partnered with an AI-powered medical transcription vendor to address physician documentation burden. Rather than accepting the vendor's standard product, the organization structured a six-month co-development engagement.


The healthcare system provided the vendor access to de-identified transcription data, detailed feedback on clinical workflow integration points, and regular sessions with physicians to surface usability issues. In return, the vendor committed to weekly model updates incorporating feedback and customization to the organization's specific electronic health record system. This collaborative approach reflects principles of participatory design, where end users shape system development (Schuler & Namioka, 1993).


Results demonstrated partnership value. Physician documentation time decreased 35% within four months, and system accuracy on organization-specific medical terminology improved from 82% to 94% through continuous learning. Critically, physician adoption reached 78%—far exceeding typical health IT implementation rates of 30-50% (Holden & Karsh, 2010)—because the tool adapted to existing workflows rather than requiring workflow changes (Challapally et al., 2025).


Organizations should approach the build-versus-buy decision strategically, considering:


  • Core competency alignment: Build only when AI capabilities constitute core competitive advantage

  • Total cost of ownership: Account for ongoing maintenance, talent retention, and opportunity costs

  • Speed requirements: Prioritize partnerships when time-to-value matters more than control

  • Data sensitivity: Build internally only when data governance requirements preclude external sharing


Back-Office Automation: The Hidden ROI Opportunity


Despite sales and marketing functions capturing approximately 50% of GenAI investment budgets, Project NANDA research found back-office automation often delivered superior returns with faster payback periods (Challapally et al., 2025). This investment-return mismatch reflects what organizational researchers term the "visibility bias"—the tendency to allocate resources to conspicuous activities rather than those generating highest returns (March & Simon, 1958).


This investment-return mismatch stems from measurement challenges rather than actual value. Sales and marketing impacts—demo volume increases, email response rates—align directly with board-level KPIs and receive executive visibility. Back-office efficiencies—streamlined month-end processes, fewer compliance violations—generate real value but prove harder to surface in strategic conversations. This measurement asymmetry reflects broader challenges in quantifying operational excellence and process improvement (Kaplan & Norton, 1996).


High-ROI back-office use cases identified across interviews include:


Procurement and finance:


  • Contract classification and extraction: Automating data pull from supplier agreements, purchase orders, and service contracts

  • Accounts payable/receivable reconciliation: Matching invoices to purchase orders and flagging discrepancies

  • Regulatory report generation: Compiling compliance documentation from distributed data sources


Legal and compliance:


  • Document review and analysis: Identifying relevant clauses, obligations, and risks in contracts and legal filings

  • Regulatory change monitoring: Tracking policy updates and assessing organizational impact


Operations:


  • Internal workflow orchestration: Routing approvals, requests, and information flows based on content and context

  • Process compliance monitoring: Flagging deviations from standard procedures in real-time


These applications share characteristics making them particularly suitable for AI automation: high volume, rule-based logic, clear success criteria, and minimal customer-facing risk (Lacity & Willcocks, 2016).
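

To illustrate why these tasks automate reliably, the sketch below shows the rule-based core of an invoice-to-purchase-order reconciliation step. The field names and the 2% price tolerance are assumptions; in practice an AI layer would sit in front of this logic to extract fields from unstructured documents and normalize vendor names.

```python
# Rule-based core of an invoice-to-PO reconciliation step.
# Field names and the 2% tolerance are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_number: str
    vendor: str
    amount: float

@dataclass
class Invoice:
    invoice_id: str
    po_number: str
    vendor: str
    amount: float

def reconcile(invoices, purchase_orders, tolerance=0.02):
    pos = {po.po_number: po for po in purchase_orders}
    exceptions = []
    for inv in invoices:
        po = pos.get(inv.po_number)
        if po is None:
            exceptions.append((inv.invoice_id, "no matching PO"))
        elif inv.vendor != po.vendor:
            exceptions.append((inv.invoice_id, "vendor mismatch"))
        elif abs(inv.amount - po.amount) > tolerance * po.amount:
            exceptions.append((inv.invoice_id, "amount outside tolerance"))
    return exceptions  # only these need human review

pos = [PurchaseOrder("PO-1001", "Acme Corp", 12500.00)]
invs = [Invoice("INV-77", "PO-1001", "Acme Corp", 12875.00)]
print(reconcile(invs, pos))  # [('INV-77', 'amount outside tolerance')]
```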


Procurement Transformation


A Fortune 500 CPG company implemented AI-powered spend categorization and supplier risk monitoring across a procurement organization managing $8 billion in annual supplier spending. The system automatically classified transactions, flagged supplier financial health issues, and routed approvals based on risk thresholds.


Within 12 months, the organization eliminated two external consulting contracts previously providing supplier risk analysis (annual value $1.8 million), reduced procurement team time spent on categorization and reporting by 40%, and identified $1.8 million in consolidation opportunities through improved spend visibility.


Critically, these gains came without headcount reduction. Instead, procurement professionals reallocated time from manual data processing to strategic supplier relationship management and category strategy development—activities generating higher organizational value (Challapally et al., 2025). This pattern illustrates how AI augmentation often proves more valuable than pure automation, enabling workers to focus on higher-value activities (Daugherty & Wilson, 2018).


Organizations should systematically evaluate back-office automation opportunities using criteria including:


  • Process volume and frequency: High-volume, repetitive tasks yield clearest returns

  • Rule-based decision logic: Well-defined processes automate more reliably than those requiring extensive judgment

  • Data availability and quality: AI effectiveness depends on accessible, structured training data

  • Integration requirements: Standalone processes automate more easily than those requiring extensive system integration


Establishing Metrics and Accountability Frameworks


Organizations successfully crossing the GenAI Divide implement clear measurement frameworks connecting AI initiatives to business outcomes. This discipline addresses the evaluation ambiguity that allows low-value projects to persist indefinitely—a common pathology in technology adoption where sunk costs and optimism bias prevent objective assessment (Keil et al., 2000).


Effective frameworks share several characteristics:


Operational metrics over model metrics: Measuring business outcomes (cost per transaction, time to resolution, error rates) rather than technical performance (model accuracy, inference latency). This principle reflects how value derives from business impact, not technical sophistication (Kaplan & Norton, 1992).


Baseline establishment: Documenting pre-AI performance levels to enable valid before-after comparison. Without clear baselines, organizations cannot distinguish AI impact from concurrent improvements or external factors (Shadish et al., 2002).


Attribution clarity: Isolating AI impact from concurrent process improvements or external factors through control groups or time-series analysis. Rigorous attribution requires quasi-experimental designs that account for confounding variables (Angrist & Pischke, 2008).


Periodic review cadence: Structured quarterly or semi-annual evaluation against initial hypotheses, with explicit continue/stop/pivot decisions. Regular review prevents zombie projects from consuming resources indefinitely (McGrath & MacMillan, 2009).


Developer Productivity Measurement


A technology services firm (12,000 employees, $4 billion revenue) implemented AI-powered code generation tools across its engineering organization. Rather than relying on developer surveys or anecdotal feedback, the company established quantitative measurement using controlled comparison methodology.


The organization selected 200 developers for initial rollout while maintaining 200 similar developers as a control group. Over six months, they tracked: pull request frequency, code review cycles, bug rates in shipped code, and developer-reported time allocation. This quasi-experimental design enabled causal inference about AI impact while accounting for concurrent organizational changes (Campbell & Stanley, 1963).
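

The core of such a treatment-versus-control comparison can be sketched in a few lines. The per-developer pull-request counts below are invented for illustration; a production analysis would draw on the firm's actual telemetry and a full statistics package rather than this hand-rolled test.

```python
# Hypothetical treatment-vs-control comparison for an AI coding-tool rollout.
# The per-developer pull-request counts are made up for illustration.

from math import sqrt
from statistics import mean, stdev

treatment = [14, 17, 15, 19, 16, 18, 15, 17, 20, 16]  # with AI tool
control   = [13, 14, 12, 15, 13, 14, 12, 15, 14, 13]  # without

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

lift = (mean(treatment) - mean(control)) / mean(control)
print(f"mean PRs: treatment={mean(treatment):.1f}, control={mean(control):.1f}")
print(f"relative lift: {lift:.1%}, Welch t = {welch_t(treatment, control):.2f}")
# A large t-statistic (roughly |t| > 2 for samples this size) suggests the
# difference is unlikely to be noise; the same comparison would be repeated
# for review cycles and bug rates before any scale-up decision.
```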


Results showed statistically significant impacts: 23% increase in pull requests per developer, 15% reduction in review cycles, and no significant difference in bug rates. Developer time allocation shifted 18% from "writing boilerplate code" to "architecture and design." Based on these metrics, the organization expanded deployment to 80% of the engineering organization and calculated annual value at $14 million in equivalent capacity gains (Challapally et al., 2025).


Importantly, the measurement framework enabled nuanced decisions. Certain development contexts (greenfield projects, algorithm development) showed minimal benefit, while others (API integration, data pipeline construction) showed substantial gains. This specificity informed targeted deployment rather than blanket rollout—demonstrating how rigorous measurement supports more effective resource allocation (Brynjolfsson & Hitt, 2000).


Organizations should develop measurement capabilities including:


  • Baseline data collection systems: Infrastructure to capture pre-implementation performance

  • Control group methodologies: Approaches to isolate AI impact from confounding factors

  • Leading indicator tracking: Early signals predicting long-term success or failure

  • Decision triggers: Pre-defined criteria for scaling, pivoting, or terminating initiatives


Building Long-Term Organizational AI Capability

Cultivating Learning-Capable System Architectures


The clearest differentiator between organizations on either side of the GenAI Divide is their approach to AI system design. Those successfully crossing the divide implement architectures supporting continuous learning rather than static deployment—reflecting how adaptive systems generate compounding value over time while static systems deliver diminishing returns (March, 1991).


Learning-capable systems share core design principles:


  • Persistent memory and context retention: Systems that maintain user preferences, organizational conventions, and historical decisions rather than treating each interaction as independent. This capability addresses the contextual knowledge gap identified as the primary barrier to enterprise AI adoption (Challapally et al., 2025).

  • Feedback integration mechanisms: Structured pathways for users to correct outputs, flag errors, and indicate preferences—with these inputs directly improving future performance. Effective feedback loops enable systems to adapt to local contexts and evolving requirements (Argyris, 1977)

  • Contextual adaptation: AI that recognizes when it's operating in familiar versus novel situations and adjusts confidence levels and escalation behavior accordingly. This metacognitive capability prevents overconfidence in edge cases while maintaining efficiency in routine scenarios (Schraw & Dennison, 1994).

  • Autonomous improvement: Systems that identify their own performance gaps through outcome tracking and request additional training data or model updates targeting specific weaknesses. This self-directed learning reflects principles from reinforcement learning and online optimization (Sutton & Barto, 2018).


These capabilities distinguish genuinely adaptive systems from "AI wrappers"—tools that apply static models to enterprise data without learning from deployment. The distinction matters because learning-capable systems create increasing returns: each interaction improves future performance, generating compounding value over time (Arthur, 1996).
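

One of the principles above, contextual adaptation, can be made concrete with a simple familiarity gate: requests that look unlike anything the system has handled before are escalated to a human. The token-overlap similarity and the 0.3 threshold in the sketch below are illustrative assumptions, not a production design.

```python
# Illustrative confidence/escalation gate for contextual adaptation:
# unfamiliar requests go to a human; resolved cases expand the familiar set.
# The word-overlap similarity and 0.3 threshold are assumptions.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between word sets (crude familiarity proxy)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class AdaptiveRouter:
    def __init__(self, threshold: float = 0.3):
        self.handled_cases: list[str] = []
        self.threshold = threshold

    def route(self, request: str) -> str:
        familiarity = max(
            (similarity(request, case) for case in self.handled_cases),
            default=0.0,
        )
        if familiarity < self.threshold:
            return f"ESCALATE to human (familiarity={familiarity:.2f})"
        return f"HANDLE automatically (familiarity={familiarity:.2f})"

    def record(self, request: str) -> None:
        # Each resolved case expands the region the system treats as familiar.
        self.handled_cases.append(request)

router = AdaptiveRouter()
print(router.route("refund request for duplicate invoice"))  # escalates: nothing seen yet
router.record("refund request for duplicate invoice")
print(router.route("refund request for a duplicate invoice payment"))  # now familiar
```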


The infrastructure enabling these architectures is maturing rapidly. Protocols like Anthropic's Model Context Protocol (MCP), the Agent-to-Agent (A2A) framework from Google and the Linux Foundation, and MIT's NANDA (Networked Agents and Decentralized Architecture) provide standardized approaches for agent memory, interoperability, and coordination (Anthropic, 2024; Challapally et al., 2025). These protocols create the foundation for an "agentic web" where autonomous systems can discover, integrate, and coordinate across organizational boundaries.


Knowledge Management Evolution


A global management consulting firm implemented an AI-powered knowledge management system designed for continuous learning. Rather than treating knowledge articles as static documents, the system tracked which materials analysts accessed when solving client problems, which recommendations they followed versus modified, and which outputs clients found valuable. This approach builds on principles from organizational learning theory, where tacit knowledge emerges through practice and reflection (Nonaka & Takeuchi, 1995).


Over 18 months, the system developed increasingly sophisticated understanding of when specific frameworks applied, which industry contexts required adaptation, and which analyst expertise complemented AI suggestions. Measured outcomes included: 45% reduction in time spent searching for relevant prior work, 32% increase in reuse of past deliverables (indicating better matching), and 28% improvement in junior analyst productivity on first client engagements.


Critically, knowledge workers reported the system became more valuable over time—the opposite pattern from typical enterprise software where initial enthusiasm fades (Challapally et al., 2025). This sustained value appreciation reflects genuine learning capability rather than novelty effects, demonstrating how adaptive systems create increasing returns through usage (Shapiro & Varian, 1998).


Developing Adaptive Organizational Structures


Organizations crossing the GenAI Divide implement governance models balancing experimentation with accountability. This requires moving beyond traditional centralized versus decentralized debates to create hybrid structures matching AI's distinctive characteristics—a challenge familiar from earlier research on ambidextrous organizations managing exploration and exploitation simultaneously (O'Reilly & Tushman, 2004).


Effective organizational designs feature:


  • Distributed experimentation authority: Empowering business unit leaders and frontline managers to pilot tools addressing local pain points within defined boundaries. This approach recognizes that innovation often emerges from the periphery rather than the center (Hamel, 2000).

  • Centralized standards and infrastructure: Maintaining enterprise-wide data governance, security protocols, vendor management, and integration architecture. Centralized platforms provide economies of scale while enabling distributed innovation (Gawer & Cusumano, 2014).

  • Cross-functional learning networks: Creating forums for teams to share lessons, successful approaches, and failure modes without formal reporting relationships. These communities of practice facilitate knowledge transfer across organizational boundaries (Wenger, 1998).

  • Executive accountability for outcomes: Holding business unit leaders responsible for measurable AI impact in their domains rather than treating AI as separate initiative. This integration prevents AI from becoming isolated from core business objectives (Weill & Ross, 2004).


This model recognizes that successful AI adoption requires both entrepreneurial exploration (finding high-value use cases) and operational discipline (ensuring security, compliance, and integration). Organizations attempting to centralize exploration stifle innovation; those fully decentralizing governance create security risks and vendor proliferation—reflecting classic tension between autonomy and coordination (Lawrence & Lorsch, 1967).


Distributed AI Center of Excellence


A specialty retailer ($3 billion revenue, 400 locations) created an AI Center of Excellence (CoE) with an explicitly hybrid mandate. The CoE controlled enterprise infrastructure, data policies, and vendor relationships while individual business units (merchandising, supply chain, marketing, stores) held budget authority for AI initiatives. This structure reflects platform-based organizational designs that balance central control with distributed autonomy (Yoo et al., 2010).


The CoE's role combined:


  • Enablement: Providing business units access to approved vendors, integration templates, and measurement frameworks

  • Governance: Reviewing all pilots for data handling, customer privacy, and brand risk before launch

  • Knowledge sharing: Hosting monthly sessions where teams presented results, challenges, and lessons learned


Over two years, this structure generated 47 distinct AI pilots with 12 reaching production deployment—a 26% success rate substantially exceeding industry norms of 5% (Challapally et al., 2025). Successful initiatives included: AI-powered inventory allocation reducing stockouts by 18%, personalized email campaigns improving conversion by 12%, and automated vendor invoice reconciliation eliminating 60% of manual processing time.


The distributed approach enabled rapid hypothesis testing while maintaining enterprise standards. Business units could move quickly on promising opportunities without waiting for central approval, while the CoE prevented duplicative vendor relationships and ensured consistent data governance. This balance between speed and control proved critical to crossing the GenAI Divide (Eisenhardt & Martin, 2000).


Building Ecosystem Partnership Capabilities


Research indicates that vendor relationships represent a critical organizational capability for crossing the GenAI Divide, yet most enterprises lack structured approaches to AI partnership management (Challapally et al., 2025). Effective partnership capabilities require treating vendor relationships as strategic assets requiring deliberate cultivation—reflecting how interfirm relationships increasingly determine competitive advantage in knowledge-intensive industries (Dyer & Singh, 1998).


Effective partnership capabilities include:


  • Vendor discovery and evaluation processes: Moving beyond reactive response to inbound pitches toward proactive identification of capability gaps and systematic vendor assessment. Structured evaluation reduces information asymmetry and selection bias (Eisenhardt, 1989).

  • Co-development engagement models: Structuring pilots as collaborative improvement exercises rather than proof-of-concept demonstrations, with clear expectations for vendor responsiveness and customization. Collaborative models create alignment between vendor incentives and client outcomes (Gulati & Singh, 1998).

  • Relationship tiering: Distinguishing between strategic partners warranting deep integration and relationship investment versus tactical vendors providing point solutions. This segmentation enables appropriate resource allocation across the vendor portfolio (Kraljic, 1983).

  • Performance tracking and accountability: Implementing metrics and review cadences ensuring vendors deliver committed outcomes and continuous improvement. Regular performance review prevents vendor complacency and maintains quality standards (Anderson & Narus, 1990).


Organizations building these capabilities position themselves to leverage external innovation velocity—the reality that specialized vendors often advance AI capabilities faster than internal teams can match. This recognition reflects how organizational boundaries increasingly define where firms compete versus collaborate (Chesbrough, 2003).


Strategic AI Vendor Program


A regional bank ($45 billion assets) formalized its approach to AI vendor partnerships through a structured program. The bank identified six strategic AI domains (fraud detection, credit risk assessment, customer service automation, regulatory compliance, process automation, and personalized marketing) and established vendor panels in each. This portfolio approach reduced concentration risk while maintaining competitive pressure among vendors (Puranam & Vanneste, 2009).

Panel membership required: demonstrated domain expertise, willingness to customize for bank-specific workflows, clear data governance practices, and commitment to quarterly business reviews with outcome tracking.


This structure enabled the bank to run competing pilots within domains, compare approaches across vendors, and build multi-vendor ecosystems rather than betting on single providers. Over three years, the program supported 23 pilots with 8 successful deployments, avoided vendor lock-in by maintaining competitive alternatives, and created negotiating leverage that reduced total AI spending by approximately 30% compared to single-vendor scenarios (Challapally et al., 2025).


The approach also built internal evaluation capability. Bank teams developed fluency in assessing AI vendors, understanding technology trade-offs, and structuring effective partnerships—capabilities that compound over time and create sustainable competitive advantage (Teece, 2007).


Preparing for the Agentic Web Transition


Forward-thinking organizations recognize that current AI adoption represents a transitional phase toward more fundamental change: the emergence of an "agentic web" where autonomous systems discover, integrate, and coordinate across organizational boundaries without human mediation (Challapally et al., 2025). This evolution extends the architectural principles underlying the internet itself—distributed control, open protocols, emergent complexity—to AI systems (Berners-Lee et al., 1994).


This evolution extends beyond individual AI agents performing discrete tasks to interconnected systems that:


  • Autonomously discover and evaluate capabilities: Agents identifying optimal vendors, negotiating terms, and establishing integrations without requiring human research and procurement

  • Establish dynamic integrations: Real-time API connections and data sharing arrangements based on immediate needs rather than pre-built connectors

  • Execute trustless transactions: Blockchain-enabled smart contracts allowing agents to transact across organizational boundaries with automated verification (Tapscott & Tapscott, 2016)

  • Develop emergent workflows: Self-optimizing processes spanning multiple platforms and organizational entities


Early experiments demonstrate this potential. Procurement agents are beginning to identify new suppliers independently, customer service systems coordinate seamlessly across platforms, and content creation workflows span multiple providers with automated quality assurance and payment (Challapally et al., 2025). These examples suggest how agentic systems could fundamentally restructure business processes and inter-organizational relationships (Malone et al., 1987).


Organizations positioning for this transition focus on:


  • Protocol adoption: Implementing standards like MCP, A2A, and NANDA that enable agent interoperability rather than building proprietary integration approaches. Open protocols prevent vendor lock-in while enabling ecosystem participation (Shapiro & Varian, 1998).

  • Policy framework development: Establishing governance for autonomous agent behavior, including spending authorities, acceptable risk parameters, and escalation triggers (see the sketch after this list). Clear policies enable automation while maintaining organizational control (Simons, 1995).

  • Capability mapping: Identifying which organizational functions benefit from agentic automation versus those requiring human judgment and relationship management. Strategic choices about automation scope determine competitive positioning (Porter, 1996).

  • Incremental deployment: Testing agent coordination in low-stakes workflows before expanding to mission-critical processes. Staged rollout enables learning while limiting downside risk (Lynn et al., 1996).
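

A minimal sketch of such a policy framework follows, assuming a procurement agent as the example. Every field name, limit, and trigger is an illustrative assumption rather than part of any published agent protocol; the point is that delegated authority is bounded and exceptions route to a human.

```python
# Illustrative governance policy for an autonomous procurement agent.
# All field names, limits, and triggers are assumptions for the sketch.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    max_spend_per_transaction: float = 5_000.0
    max_spend_per_month: float = 50_000.0
    allowed_risk_levels: tuple = ("low", "medium")
    escalation_triggers: tuple = ("new_vendor", "contract_term_change")

@dataclass
class ProposedAction:
    description: str
    amount: float
    risk_level: str
    flags: list = field(default_factory=list)

def review(action: ProposedAction, policy: AgentPolicy, spent_this_month: float) -> str:
    if action.amount > policy.max_spend_per_transaction:
        return "ESCALATE: exceeds per-transaction authority"
    if spent_this_month + action.amount > policy.max_spend_per_month:
        return "ESCALATE: exceeds monthly authority"
    if action.risk_level not in policy.allowed_risk_levels:
        return "ESCALATE: risk level outside policy"
    if any(flag in policy.escalation_triggers for flag in action.flags):
        return "ESCALATE: trigger condition present"
    return "APPROVE: within delegated authority"

policy = AgentPolicy()
action = ProposedAction("Reorder packaging film", 3_200.0, "low", ["new_vendor"])
print(review(action, policy, spent_this_month=41_000.0))  # ESCALATE: trigger condition present
```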


The agentic web represents not merely incremental improvement in AI capabilities but a fundamental shift in how organizations discover, integrate, and leverage external capabilities—moving from human-mediated business processes to autonomous systems operating across the entire internet ecosystem. This transition parallels earlier shifts from hierarchical to networked organizational forms, where coordination mechanisms evolved from authority to price to protocols (Powell, 1990).


Conclusion

The GenAI Divide—the stark separation between organizations extracting millions in value from AI investments and the 95% achieving zero measurable return—stems from fundamental differences in approach rather than access to technology, capital, or talent. Organizations successfully crossing this divide share three defining characteristics: they partner with external vendors rather than building internally (achieving twice the success rate), they empower distributed adoption through frontline managers rather than centralized control, and they demand learning-capable systems that integrate deeply into workflows and improve over time (Challapally et al., 2025).


The evidence synthesized from Project NANDA's examination of 300+ implementations and interviews with 52 organizations reveals consistent patterns. Back-office automation delivers superior ROI compared to heavily-funded sales and marketing initiatives, though measurement challenges obscure this reality. External partnerships systematically outperform internal builds despite organizational bias toward the latter. Shadow AI adoption—employees using personal tools outside official channels—demonstrates both user appetite and the inadequacy of current enterprise offerings (Challapally et al., 2025).


Most critically, the learning gap defines success versus failure. Users accept AI for simple tasks but demand systems that remember context, learn from feedback, and adapt to evolving workflows for mission-critical applications (Nonaka & Takeuchi, 1995). The 90% preference for humans over AI on complex work reflects not capability limitations but the absence of memory and continuous improvement in current enterprise tools (Challapally et al., 2025). This finding aligns with research on situated cognition and organizational learning, suggesting that knowledge systems must embed themselves in operational contexts to generate value (Lave & Wenger, 1991).


For organizations currently on the wrong side of the divide, the path forward requires three decisive shifts:


  1. Stop investing in static tools that treat each interaction independently; demand systems with persistent memory and learning capability that can accumulate organizational knowledge over time (Argyris & Schön, 1978); see the sketch after this list

  2. Embrace strategic partnerships with specialized vendors over internal development efforts; structure these relationships as co-development engagements with outcome accountability rather than traditional software purchases (Gulati & Singh, 1998)

  3. Prioritize workflow integration over feature completeness; evaluate tools based on operational metrics rather than model benchmarks, recognizing that value derives from business impact (Kaplan & Norton, 1996)
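What "persistent memory and learning capability" means in practice can be illustrated with a deliberately simplified sketch: a system that stores user corrections from past interactions and folds relevant ones into future responses. The class below is a hypothetical illustration under that assumption, with the model call stubbed out; it is not a reference architecture.

```python
# Minimal illustration of a feedback memory loop; names are hypothetical and
# the generation step is stubbed rather than calling any particular model API.
from collections import deque


class LearningAssistant:
    def __init__(self, memory_size: int = 100):
        # Rolling store of (task, answer, correction) records from past sessions.
        self.memory = deque(maxlen=memory_size)

    def _relevant_history(self, task: str):
        """Recall prior corrections whose task wording overlaps the current request."""
        return [m for m in self.memory if set(task.split()) & set(m["task"].split())]

    def answer(self, task: str) -> str:
        history = self._relevant_history(task)
        context = "; ".join(m["correction"] for m in history if m["correction"])
        # In a real system this context would be injected into the model prompt
        # or retrieval layer; here we simply surface it.
        return f"Draft for '{task}'" + (f" (applying prior feedback: {context})" if context else "")

    def record_feedback(self, task: str, answer: str, correction: str):
        """Store the user's correction so future answers adapt to it."""
        self.memory.append({"task": task, "answer": answer, "correction": correction})


# Example: the second answer incorporates the correction given on the first.
assistant = LearningAssistant()
first = assistant.answer("summarize Q3 supplier contract")
assistant.record_feedback("summarize Q3 supplier contract", first,
                          "always flag auto-renewal clauses")
print(assistant.answer("summarize Q4 supplier contract"))
```

The point is architectural rather than algorithmic: value accrues because the second request benefits from feedback given on the first, which is precisely what static, per-interaction tools cannot deliver.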


For vendors and technology builders, crossing the divide means:


  1. Customize aggressively for specific workflows rather than building generic platforms; establish footholds in narrow but high-value use cases where deep domain expertise creates defensible advantages (Porter, 1996)

  2. Build learning into core architecture through feedback integration, contextual adaptation, and autonomous improvement mechanisms that enable systems to accumulate knowledge through deployment (Sutton & Barto, 2018); see the bandit-style sketch after this list

  3. Demonstrate outcome accountability by measuring business impact rather than technical performance; treat deployments as partnerships requiring continuous iteration and co-evolution with client needs (Davenport & Harris, 2017)
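One simple way to operationalize "autonomous improvement mechanisms" is a bandit-style selector that learns, from accumulated user ratings, which of several response strategies to favor, in the spirit of reinforcement learning (Sutton & Barto, 2018). The sketch below is a minimal, hypothetical illustration; the template names and rating scale are assumptions, not a prescribed design.

```python
# Hypothetical sketch of an autonomous improvement mechanism: an epsilon-greedy
# selector that learns which prompt template earns better user ratings.
import random


class TemplateSelector:
    def __init__(self, templates, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.stats = {t: {"n": 0, "mean_rating": 0.0} for t in templates}

    def choose(self) -> str:
        """Explore occasionally; otherwise exploit the best-rated template."""
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda t: self.stats[t]["mean_rating"])

    def update(self, template: str, rating: float):
        """Fold a user rating (e.g. 0-5) into the running average for that template."""
        s = self.stats[template]
        s["n"] += 1
        s["mean_rating"] += (rating - s["mean_rating"]) / s["n"]


# Example: after feedback, the selector tends to favor the higher-rated template.
selector = TemplateSelector(["concise-summary", "detailed-summary"])
selector.update("concise-summary", 4.5)
selector.update("detailed-summary", 3.0)
print(selector.choose())
```

The specific algorithm matters less than the commitment it represents: deployed systems should improve measurably from their own usage data rather than remaining static between releases.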


The window for action is narrowing. Enterprises are beginning to lock in vendor relationships through 2026, and the switching costs created by learning-capable systems compound monthly (Challapally et al., 2025). Organizations and vendors that act decisively on these patterns will establish dominant positions in the post-pilot AI economy, firmly positioned on the right side of the GenAI Divide. Network effects and increasing returns to scale suggest that early movers will establish difficult-to-replicate advantages (Arthur, 1996).


The emerging infrastructure—protocols enabling agent memory, coordination, and autonomous operation—provides the technical foundation for truly transformative AI adoption (Anthropic, 2024). But technology alone cannot bridge the divide. Success requires organizational design that balances experimentation with accountability (O'Reilly & Tushman, 2004), procurement approaches that emphasize capability over features, and partnership models that treat AI deployment as continuous co-evolution rather than one-time implementation (Teece et al., 1997).


The GenAI Divide is not permanent. The organizations and vendors that recognize its true nature—a learning gap rather than a technology gap—and implement evidence-based responses will not only cross the divide but help establish the architectures, practices, and ecosystems that define the next era of enterprise AI adoption. As with previous general-purpose technologies, the transformation will unfold gradually, with success concentrating among organizations that master not just the technology but the organizational capabilities required to leverage it effectively (Brynjolfsson & McAfee, 2014).


References

  1. Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3–30.

  2. Anderson, J. C., & Narus, J. A. (1990). A model of distributor firm and manufacturer firm working partnerships. Journal of Marketing, 54(1), 42–58.

  3. Angrist, J. D., & Pischke, J. S. (2008). Mostly harmless econometrics: An empiricist's companion. Princeton University Press.

  4. Anthropic. (2024). Introducing the Model Context Protocol.

  5. Argyris, C. (1977). Double loop learning in organizations. Harvard Business Review, 55(5), 115–125.

  6. Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Addison-Wesley.

  7. Arthur, W. B. (1996). Increasing returns and the new world of business. Harvard Business Review, 74(4), 100–109.

  8. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.

  9. Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. Quarterly Journal of Economics, 118(4), 1279–1333.

  10. Bartelsman, E., Haltiwanger, J., & Scarpetta, S. (2013). Cross-country differences in productivity: The role of allocation and selection. American Economic Review, 103(1), 305–334.

  11. Berners-Lee, T., Cailliau, R., Luotonen, A., Nielsen, H. F., & Secret, A. (1994). The World-Wide Web. Communications of the ACM, 37(8), 76–82.

  12. Brynjolfsson, E., & Hitt, L. M. (2000). Beyond computation: Information technology, organizational transformation and business performance. Journal of Economic Perspectives, 14(4), 23–48.

  13. Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.

  14. Brynjolfsson, E., Mitchell, T., & Rock, D. (2018). What can machines learn, and what does it mean for occupations and the economy? AEA Papers and Proceedings, 108, 43–47.

  15. Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. NBER Working Paper No. 24001.

  16. Buchanan, B. G., & Shortliffe, E. H. (1984). Rule-based expert systems: The MYCIN experiments of the Stanford Heuristic Programming Project. Addison-Wesley.

  17. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Rand McNally.

  18. Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025). The GenAI divide: State of AI in business 2025 (MIT Project NANDA Research Report). Massachusetts Institute of Technology.

  19. Chesbrough, H. W. (2003). Open innovation: The new imperative for creating and profiting from technology. Harvard Business Press.

  20. Christensen, C. M. (2016). The innovator's dilemma: When new technologies cause great firms to fail. Harvard Business Review Press.

  21. Consumerization of IT. (2011). Gartner IT Glossary. Gartner Research.

  22. Cusumano, M. A. (2010). Cloud computing and SaaS as new computing platforms. Communications of the ACM, 53(4), 27–29.

  23. Daugherty, P. R., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business Review Press.

  24. Davenport, T. H., & Harris, J. G. (2017). Competing on analytics: Updated, with a new introduction: The new science of winning. Harvard Business Review Press.

  25. Davenport, T. H., & Prusak, L. (1998). Working knowledge: How organizations manage what they know. Harvard Business Press.

  26. Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.

  27. Dyer, J. H., & Singh, H. (1998). The relational view: Cooperative strategy and sources of interorganizational competitive advantage. Academy of Management Review, 23(4), 660–679.

  28. Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14(1), 57–74.

  29. Eisenhardt, K. M., & Martin, J. A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21(10‐11), 1105–1121.

  30. Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280.

  31. Gawer, A., & Cusumano, M. A. (2014). Industry platforms and ecosystem innovation. Journal of Product Innovation Management, 31(3), 417–433.

  32. Govindarajan, V., & Trimble, C. (2010). The other side of innovation: Solving the execution challenge. Harvard Business Review Press.

  33. Grant, R. M. (1996). Toward a knowledge‐based theory of the firm. Strategic Management Journal, 17(S2), 109–122.

  34. Gulati, R., & Singh, H. (1998). The architecture of cooperation: Managing coordination costs and appropriation concerns in strategic alliances. Administrative Science Quarterly, 43(4), 781–814.

  35. Gyory, A. A., Cleven, A., Uebernickel, F., & Brenner, W. (2012). Exploring the shadows: IT governance approaches to user-driven innovation. European Conference on Information Systems (ECIS) 2012 Proceedings, Paper 222.

  36. Hamel, G. (2000). Leading the revolution. Harvard Business School Press.

  37. Holden, R. J., & Karsh, B. T. (2010). The technology acceptance model: Its past and its future in health care. Journal of Biomedical Informatics, 43(1), 159–172.

  38. Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard: Measures that drive performance. Harvard Business Review, 70(1), 71–79.

  39. Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action. Harvard Business Press.

  40. Keil, M., Tan, B. C., Wei, K. K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2), 299–325.

  41. Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice Hall.

  42. Kotter, J. P. (1995). Leading change: Why transformation efforts fail. Harvard Business Review, 73(2), 59–67.

  43. Kraljic, P. (1983). Purchasing must become supply management. Harvard Business Review, 61(5), 109–117.

  44. Lacity, M. C., & Willcocks, L. P. (2016). A new approach to automating services. MIT Sloan Management Review, 58(1), 41–49.

  45. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.

  46. Lawrence, P. R., & Lorsch, J. W. (1967). Organization and environment: Managing differentiation and integration. Harvard Business School Press.

  47. Lynn, G. S., Morone, J. G., & Paulson, A. S. (1996). Marketing and discontinuous innovation: The probe and learn process. California Management Review, 38(3), 8–37.

  48. Malone, T. W., Yates, J., & Benjamin, R. I. (1987). Electronic markets and electronic hierarchies. Communications of the ACM, 30(6), 484–497.

  49. March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.

  50. March, J. G., & Simon, H. A. (1958). Organizations. Wiley.

  51. McGrath, R. G., & MacMillan, I. C. (2009). Discovery-driven growth: A breakthrough process to reduce risk and seize opportunity. Harvard Business Press.

  52. Moore, G. A. (2014). Crossing the chasm: Marketing and selling disruptive products to mainstream customers (3rd ed.). HarperBusiness.

  53. Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford University Press.

  54. Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic Books.

  55. Norman, D. A., & Draper, S. W. (1986). User centered system design: New perspectives on human-computer interaction. Lawrence Erlbaum Associates.

  56. O'Reilly, C. A., & Tushman, M. L. (2004). The ambidextrous organization. Harvard Business Review, 82(4), 74–81.

  57. Parker, G. G., Van Alstyne, M. W., & Choudary, S. P. (2016). Platform revolution: How networked markets are transforming the economy and how to make them work for you. W. W. Norton & Company.

  58. Podolny, J. M. (2001). Networks as the pipes and prisms of the market. American Journal of Sociology, 107(1), 33–60.

  59. Pollock, N., & Williams, R. (2009). Software and organisations: The biography of the enterprise-wide system or how SAP conquered the world. Routledge.

  60. Porter, M. E. (1985). Competitive advantage: Creating and sustaining superior performance. Free Press.

  61. Porter, M. E. (1996). What is strategy? Harvard Business Review, 74(6), 61–78.

  62. Powell, W. W. (1990). Neither market nor hierarchy: Network forms of organization. Research in Organizational Behavior, 12, 295–336.

  63. Puranam, P., & Vanneste, B. (2009). Trust and governance: Untangling a tangled web. Academy of Management Review, 34(1), 11–31.

  64. Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review, 59(1), 1–17.

  65. Ries, E. (2011). The lean startup: How today's entrepreneurs use continuous innovation to create radically successful businesses. Crown Business.

  66. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

  67. Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

  68. Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460–475.

  69. Schuler, D., & Namioka, A. (1993). Participatory design: Principles and practices. CRC Press.

  70. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.

  71. Shapiro, C., & Varian, H. R. (1998). Information rules: A strategic guide to the network economy. Harvard Business Press.

  72. Simons, R. (1995). Levers of control: How managers use innovative control systems to drive strategic renewal. Harvard Business Press.

  73. Solow, R. M. (1987). We'd better watch out. New York Times Book Review, July 12, 36.

  74. Standish Group. (2020). CHAOS Report 2020: Beyond Infinity. The Standish Group International.

  75. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press.

  76. Tapscott, D., & Tapscott, A. (2016). Blockchain revolution: How the technology behind bitcoin is changing money, business, and the world. Portfolio.

  77. Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350.

  78. Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533.

  79. Thomke, S. H. (2003). Experimentation matters: Unlocking the potential of new technologies for innovation. Harvard Business Press.

  80. von Hippel, E. (1986). Lead users: A source of novel product concepts. Management Science, 32(7), 791–805.

  81. von Hippel, E. (2005). Democratizing innovation. MIT Press.

  82. Weill, P., & Ross, J. W. (2004). IT governance: How top performers manage IT decision rights for superior results. Harvard Business Press.

  83. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge University Press.

  84. Yoo, Y., Henfridsson, O., & Lyytinen, K. (2010). Research commentary: The new organizing logic of digital innovation: An agenda for information systems research. Information Systems Research, 21(4), 724–735.

  85. Zysman, J., & Kenney, M. (2018). The next phase in the digital revolution: Intelligent tools, platforms, growth, employment. Communications of the ACM, 61(2), 54–63.


Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.

Suggested Citation: Westover, J. H. (2025). The GenAI Divide: Why 95% of Enterprise AI Investments Fail—and How the 5% Succeed. Human Capital Leadership Review, 28(3). doi.org/10.70175/hclreview.2020.28.3.4

Human Capital Leadership Review

eISSN 2693-9452 (online)
