Mastering the AI Capability Gap: Why Domain Experts Must Lead AI Integration Before the Window Closes
- Jonathan H. Westover, PhD
Abstract: Artificial intelligence presents organizations with an unprecedented paradox: the engineers building AI systems possess limited insight into optimal applications within specific professional domains, while domain experts often lack the technical fluency to unlock AI's potential in their fields. This capability gap creates a strategic window for practitioners who bridge both worlds—combining deep domain knowledge with AI literacy—to establish competitive advantages before commoditization occurs. This article examines the structural reasons behind this expertise divergence, quantifies the organizational stakes of the capability race, and provides evidence-based frameworks for domain experts to systematically discover, validate, and institutionalize high-value AI applications. Drawing on innovation diffusion research, organizational learning theory, and documented cases across healthcare, legal services, and financial analysis, we demonstrate that first-mover advantages in AI application development yield compounding returns through proprietary workflow optimization, talent retention, and market repositioning. The analysis concludes with actionable strategies for building durable AI capabilities that transcend tool adoption to fundamentally reshape competitive dynamics within professional fields.
The contemporary AI landscape presents a curious inversion of traditional technology adoption patterns. Typically, inventors understand their creations' applications: steam engine designers knew factories would benefit, and telephone engineers anticipated the transformation of communication. Yet with large language models and generative AI, we face what computer scientist Rodney Brooks calls "the deployment knowledge gap"—a fundamental disconnect between system builders and optimal use-case discoverers (Brooks, 2023).
This gap isn't a bug; it's a feature of general-purpose technologies. OpenAI's engineers built GPT-4 as a broad reasoning engine, not a specialized legal brief analyzer or clinical documentation assistant. They optimized for breadth, leaving depth to emerge through real-world application. As OpenAI CEO Sam Altman acknowledged in a 2023 interview, "We're genuinely surprised every week by what people figure out to do with these systems that we never imagined" (Altman, 2023).
This creates an urgent strategic imperative: the professionals who first master AI application within their domains will capture disproportionate value before these advantages commoditize. The window is narrow. Once "AI for radiology" or "AI for contract review" becomes standardized, early movers' learning advantages calcify into market position, talent pipelines, and operational muscle memory that followers struggle to replicate.
The stakes extend beyond individual career trajectories. Organizations that systematically empower domain experts to experiment with AI applications are building what organizational theorists call "dynamic capabilities"—the institutional capacity to sense opportunities, seize them through coordinated action, and reconfigure resources as conditions evolve (Teece, 2007). In contrast, firms waiting for vendors to deliver turnkey AI solutions are outsourcing their strategic learning curve.
The AI Capability Landscape
Defining the Builder-User Expertise Divergence
The expertise gap between AI builders and AI users stems from fundamentally different knowledge architectures. Machine learning engineers possess what we might call technical generative knowledge—deep understanding of neural network architectures, training methodologies, and system optimization. They know how to build models that process language, recognize patterns, and generate outputs.
Domain experts hold contextual application knowledge—the nuanced understanding of professional workflows, quality criteria, edge cases, and stakeholder needs that determine whether an AI application creates genuine value or merely automates mediocrity. A radiologist understands that not all lung nodules merit equal attention, that patient history contextualizes image findings, and that false negatives carry different consequences than false positives. This tacit knowledge rarely appears in training data labels.
Research on knowledge transfer across professional boundaries helps explain why these forms of expertise don't naturally combine. Carlile (2004) identified three knowledge boundaries in organizations: syntactic (different terminology), semantic (different interpretations), and pragmatic (conflicting interests). The AI builder-user gap spans all three. Engineers and doctors use different vocabularies, interpret "accuracy" differently (statistical performance versus clinical utility), and face misaligned incentives (publication metrics versus patient outcomes).
This divergence manifests in what we observe across industries: AI capabilities that are technically impressive yet practically limited because they weren't designed with sufficient contextual understanding. Conversely, domain experts often underutilize AI tools because they lack mental models of what's computationally feasible versus what requires human judgment.
State of Practice: The Emerging Application Discovery Race
Multiple signals suggest we're in the early innings of domain-specific AI application discovery. A 2024 survey of 3,000 knowledge workers by Boston Consulting Group found that only 14% had developed what researchers termed "AI fluency"—the ability to independently identify high-value use cases, prompt effectively, and integrate outputs into professional workflows (Fayard et al., 2024). The remaining 86% clustered into three groups: experimenters (33%), who dabbled without systematic practice; skeptics (31%), who avoided AI tools; and unaware users (22%), who didn't recognize AI features they encountered.
The distribution matters because early evidence suggests non-linear returns to fluency. BCG's research found that fluent users reported productivity gains averaging 43% on complex knowledge tasks, compared to 12% for experimenters and -3% for skeptics (whose misapplication created additional work). More tellingly, fluent users identified an average of 7.2 novel applications within their roles over six months, versus 1.8 for experimenters—suggesting that capability begets discovery in a compounding cycle.
Industry-specific data reveals similar patterns. In legal services, a 2024 Thomson Reuters study found that while 73% of law firms had adopted generative AI tools, only 19% reported workflow transformation versus incremental efficiency gains (Thomson Reuters, 2024). The differentiator wasn't technology choice—most used similar platforms—but rather the presence of "AI champions": domain experts who systematically tested applications, documented effective approaches, and trained colleagues.
Healthcare shows comparable dynamics. Cleveland Clinic's documentation of its ambient clinical intelligence deployment revealed that physician adoption split into distinct clusters: power users (18%) who integrated AI across multiple workflow components, mainstream users (54%) who applied AI to narrow tasks, and minimal users (28%) who avoided the technology (Cheng et al., 2024). Critically, power users didn't receive more training hours; they engaged in more self-directed experimentation and peer knowledge sharing.
Organizational and Individual Consequences of the Capability Gap
Organizational Performance Impacts
The performance implications of AI capability development—or its absence—are beginning to be quantified. Research provides several measurement approaches:
Productivity differential: Studies using randomized controlled trials show significant variance in AI-enabled productivity based on user capability. Brynjolfsson et al. (2023) studied customer service representatives and found that AI tools raised average productivity by 14%, but effects ranged from 35% gains for previously low-performing workers to negligible impact for top performers—suggesting that domain expertise determined how effectively workers leveraged AI assistance.
Time-to-capability: Organizations that systematically develop internal AI expertise reach operational milestones faster than those relying on vendor solutions. Accenture's 2024 analysis of enterprise AI deployments found that companies with dedicated "AI translator" roles—professionals combining domain and technical knowledge—moved from pilot to scaled deployment 2.3 times faster than firms using traditional IT-led implementation (Accenture, 2024).
Innovation velocity: Perhaps most consequentially, organizations with high domain-expert AI fluency generate more valuable applications. A study of financial services firms found that institutions with formal AI experimentation programs produced an average of 4.7 proprietary AI applications annually (defined as novel use cases providing competitive advantage), compared to 0.8 for peers without such programs (Deloitte, 2024). The gap persists even when controlling for technology spending, suggesting that discovery capability matters more than budget.
Talent attraction and retention: Emerging evidence suggests AI capability affects workforce dynamics. LinkedIn's 2024 workforce report found that professionals in "AI-fluent" organizations (measured by employee skills data) showed 23% lower turnover than peers in comparable firms, with exit interviews citing "learning opportunities" as a retention factor (LinkedIn, 2024).
Individual Professional and Market Impacts
At the individual level, the capability gap creates stark divergence in professional trajectories. Early research suggests three distinct patterns:
Augmentation versus displacement: Professionals who develop AI fluency tend to experience augmentation—AI handles routine elements while they focus on higher-value judgment. Those who resist skill development face potential displacement as AI-fluent peers absorb their work volume. Harvard Business School's study of management consultants found that analysts who integrated AI tools handled 41% more projects simultaneously while maintaining quality scores, effectively expanding their capacity (Dell'Acqua et al., 2023). Conversely, analysts who avoided AI tools saw no capacity expansion and reported increased stress as client expectations shifted toward AI-augmented deliverable standards.
Market positioning: In fields where AI creates efficiency gains, professionals who master AI application can fundamentally reposition their value proposition. Tax accountants using AI for routine compliance can shift toward strategic tax planning; radiologists using AI for standard reads can allocate more time to complex cases. This repositioning often unlocks new revenue models—shifting from time-based to value-based pricing as AI compresses service delivery time.
Knowledge obsolescence risk: Professionals in the capability gap face accelerating knowledge depreciation. As AI-fluent practitioners discover superior workflows and disseminate them through AI-enabled tools and templates, traditional approaches become competitively obsolete. Legal writing provides an illustration: attorneys who mastered AI-assisted research and drafting established new quality and speed benchmarks; colleagues using traditional methods now appear comparatively slow and expensive, regardless of their substantive legal expertise.
Evidence-Based Organizational Responses
Structured Experimentation Programs
The most effective organizational response to the capability gap is creating formal structures for domain experts to systematically explore AI applications. Research on innovation management suggests that discovery requires both permission (psychological safety to experiment) and process (methods to capture and scale learnings) (Edmondson, 2018).
Leading organizations implement several common elements:
Protected experimentation time: Allocating 10-20% of domain experts' time for AI exploration without immediate ROI requirements
Hypothesis-driven testing: Encouraging practitioners to articulate specific workflow pain points, hypothesize AI solutions, and measure outcomes
Rapid iteration cycles: Establishing weekly or biweekly review sessions where experimenters share findings and refine approaches
Failure normalization: Explicitly rewarding documentation of unsuccessful experiments to build institutional knowledge about AI limitations
Mayo Clinic implemented an "AI Innovation Fellowship" where practicing physicians receive protected time and technical support to develop AI applications for clinical workflows. The program structure includes mandatory hypothesis documentation, weekly cohort sharing sessions, and formal presentations of findings—successful or not. Over 18 months, 34 participating physicians tested 127 AI applications; 19 entered formal development, and 7 reached clinical deployment. Critically, the 108 unsuccessful experiments generated a documented knowledge base of "known ineffective approaches" that prevented duplicate effort across Mayo's system (Mayo Clinic, 2023).
This structured approach contrasts sharply with ad-hoc experimentation. A comparative analysis found that organizations with formal AI exploration programs generated 3.4 times more scaled applications per participant than those encouraging informal experimentation, while consuming only 1.6 times the resources—suggesting structured discovery is both more effective and more efficient (McKinsey, 2024).
AI Fluency Development Programs
Beyond experimentation infrastructure, leading organizations invest in capability development that combines technical AI literacy with domain application skill. Effective programs share several design principles:
Just-in-time learning: Training closely coupled to immediate application rather than abstract AI concepts
Domain-specific examples: Using realistic scenarios from practitioners' actual workflows rather than generic cases
Peer-led instruction: Having domain experts who've achieved AI fluency teach colleagues, preserving contextual relevance
Iterative skill building: Progressing from basic prompting to advanced techniques like few-shot learning and chain-of-thought reasoning (a brief sketch of these prompting patterns follows this list)
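To make the last principle concrete, the sketch below shows what a few-shot, chain-of-thought prompt might look like for a newsroom task: triaging claims in a draft that need source verification. It is a minimal illustration, not drawn from any program described in this article; the model name, example claims, and helper function are assumptions, and any capable chat model could be substituted.

```python
# Minimal sketch of few-shot prompting with chain-of-thought reasoning.
# Illustrative only: the model name, examples, and task framing are assumptions,
# not the curriculum of any organization discussed in this article.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Two worked examples ("shots") show the model the reasoning style and output format.
FEW_SHOT_EXAMPLES = """\
Claim: "Unemployment fell to 3.9% last quarter."
Reasoning: This is a specific statistic; it must be traced to an official
statistical release before publication.
Verdict: NEEDS SOURCE

Claim: "The mayor spoke at Tuesday's council meeting."
Reasoning: This is directly confirmed by the meeting transcript already cited
in the draft.
Verdict: SUPPORTED
"""

def triage_claim(claim: str) -> str:
    """Ask the model to reason step by step before labeling a claim."""
    prompt = (
        "You help a reporter check a draft. Work through your reasoning first, "
        "then give a verdict, following the format of the examples.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f'Claim: "{claim}"\nReasoning:'
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic output for repeatable triage
    )
    return response.choices[0].message.content

print(triage_claim("The new policy will save the city $4 million a year."))
```

The few-shot examples supply the domain's quality criteria, while the "Reasoning:" scaffold elicits chain-of-thought before the verdict; this is the kind of progression the curriculum tiers are meant to teach.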
Bloomberg developed a tiered AI fluency curriculum for its journalism staff, designed by reporters who'd successfully integrated AI into their workflows. The program begins with a 4-hour "AI Fundamentals for Journalists" session covering basic prompting and fact-checking AI outputs. Participants then complete domain-specific modules—investigative reporting, data journalism, or market analysis—taught by peers who demonstrate actual AI-enabled workflows they've developed. Advanced tiers cover custom GPT development and API integration for automated data gathering. After 12 months, reporters who completed all tiers showed 34% faster story completion times and 28% higher story output volume while maintaining accuracy standards (Bloomberg, 2024).
The peer-led design proved critical. Earlier attempts using generic AI training from technology staff generated minimal behavior change; reporters couldn't translate abstract capabilities to journalistic applications. Peer instructors spoke the domain language and addressed journalist-specific concerns—source verification, bias detection, narrative coherence—that technical trainers overlooked.
Cross-Functional "AI Translator" Roles
Several organizations create dedicated positions bridging technical AI expertise and domain knowledge. These "AI translator" roles serve multiple functions: identifying high-value use cases, facilitating communication between domain experts and technical teams, and building organizational AI literacy.
Effective translator roles typically require:
Dual fluency: Genuine competence in both domain practice and AI capabilities (not merely familiarity)
Organizational credibility: Respect within the professional community being served, enabling candid workflow discussions
Technical partnership: Close collaboration with ML engineers and data scientists for rapid prototyping
Knowledge synthesis: Ability to abstract learnings from specific applications into generalizable patterns
JPMorgan Chase established an "AI Applications Team" of approximately 50 professionals who previously worked as traders, analysts, or risk managers before developing technical AI expertise. These translators embed with business units for 3-6 month rotations, observing workflows, identifying automation opportunities, and rapidly prototyping solutions using the bank's AI platform. In the investment banking division, an embedded translator with prior M&A experience identified that analysts spent roughly 40% of time formatting data from target company financials into standardized comparison templates. Working with ML engineers, she developed an AI system that extracts, normalizes, and populates this data with 94% accuracy, reducing the task from several hours to minutes. Crucially, her domain expertise informed the quality checks and exception handling that made the system trustworthy to skeptical analysts (JPMorgan Chase, 2024).
The translator model addresses a key failure mode: purely technical teams building AI solutions that miss critical domain nuances. JPMorgan found that projects led by translators with domain backgrounds achieved production deployment at 2.1 times the rate of projects led by technical teams alone, primarily because translator-led projects better addressed unstated user needs and edge cases that only domain experience reveals.
Communities of Practice for Application Sharing
As organizations accumulate AI application discoveries, they need mechanisms to disseminate effective approaches and prevent redundant experimentation. Communities of practice—groups of practitioners sharing knowledge about a domain—provide proven structures (Wenger, 1998).
Effective AI application communities implement:
Structured knowledge repositories: Databases of tested applications including prompts, workflows, and effectiveness metrics (a minimal entry schema is sketched after this list)
Regular sharing forums: Scheduled sessions where practitioners demonstrate discoveries and solicit feedback
Application templates: Standardized formats for documenting AI workflows that others can adapt
Failure documentation: Explicit capture of what doesn't work to prevent duplicate unsuccessful experiments
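As a concrete illustration of what a repository entry might capture, the sketch below defines a minimal schema in Python. The field names follow the documentation elements listed above and the template described in the Deloitte example that follows; they are illustrative assumptions, not the actual schema of any platform named in this article.

```python
# Minimal sketch of a structured repository entry for tested AI applications.
# Field names are illustrative assumptions based on the elements listed above.
from dataclasses import dataclass, field, asdict
from typing import Optional
import json

@dataclass
class AIApplicationEntry:
    title: str
    business_problem: str                     # workflow pain point the application targets
    ai_approach: str                          # technique used (prompt pattern, fine-tune, etc.)
    prompts: list[str]                        # prompts or prompt templates that were tested
    implementation_steps: list[str]
    effectiveness_metrics: dict[str, float]   # e.g., minutes saved, error rate
    known_limitations: list[str]
    recommended: bool                         # False entries capture "known ineffective approaches"
    reviewer: Optional[str] = None            # peer reviewer who validated the entry
    tags: list[str] = field(default_factory=list)

entry = AIApplicationEntry(
    title="Interview summary drafts",
    business_problem="Consultants spend hours condensing client interview notes.",
    ai_approach="Single-pass summarization with a structured output template.",
    prompts=["Summarize the interview below into findings, quotes, and open questions: ..."],
    implementation_steps=["Paste anonymized notes", "Review and correct the draft"],
    effectiveness_metrics={"avg_minutes_saved_per_interview": 35.0},
    known_limitations=["Misses tone and hesitation cues; unsuitable for sensitive topics"],
    recommended=True,
    reviewer="practice-lead",
    tags=["summarization", "consulting"],
)

print(json.dumps(asdict(entry), indent=2))  # serializable for a shared, searchable repository
```

Keeping entries in a serializable form like this makes them easy to peer-review, search, and tag, including the "known ineffective" entries that prevent duplicate failed experiments.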
Deloitte Consulting created an internal platform called "AI Playbook" where consultants across practice areas document AI applications they've developed. Each playbook entry follows a template: business problem, AI approach, implementation steps, results achieved, and known limitations. Entries undergo peer review before publication. The platform includes over 1,400 documented applications spanning strategy, operations, technology, and human capital consulting. Consultants report using existing playbook entries as starting points for 67% of new client AI implementations, substantially reducing time to value. Importantly, the platform includes 340 documented "ineffective applications"—approaches tested but not recommended—preventing waste from repeated failed experiments (Deloitte, 2024).
The platform's value compounds as entries accumulate. Initial applications tend to be narrow (using AI to summarize client interviews); later entries build on earlier discoveries, combining techniques into sophisticated workflows (using AI to summarize interviews, identify patterns across interviews, generate hypotheses, and draft survey instruments to test hypotheses). This progressive complexity reflects organizational learning that would be difficult to achieve without systematic knowledge capture.
Executive Sponsorship and Resource Allocation
Capability development requires executive commitment beyond rhetoric. Research on organizational change consistently finds that transformation initiatives fail without sustained senior leadership attention and adequate resource allocation (Kotter, 1996).
Effective executive sponsorship for AI capability building includes:
Explicit strategic prioritization: Naming AI fluency development as a top-three organizational priority with board visibility
Protected budget allocation: Ring-fencing experimentation funds that don't compete with immediate revenue-generating activities
Leadership modeling: Executives personally developing and demonstrating AI fluency in their workflows
Incentive alignment: Incorporating AI capability development into performance evaluation and promotion criteria
Bain & Company made AI fluency a firm-wide strategic priority in early 2023, with managing partner Christophe De Vusser personally leading the effort. Bain allocated approximately $50 million to AI capability development, established "AI fluency" as a required competency for partner promotion, and created a global "AI Council" of senior partners overseeing implementation. De Vusser and other senior partners completed the same AI training as junior consultants and regularly shared examples of AI-enhanced work in firm communications. Within 18 months, internal surveys showed AI tool usage rose from 12% to 78% of consultants, and clients reported 23% faster project delivery timelines while maintaining quality scores. Critically, Bain's client satisfaction scores increased during this period, suggesting AI augmentation enhanced rather than degraded service quality (Bain & Company, 2024).
The executive modeling proved particularly important. When senior partners demonstrated AI-enhanced deliverables in client presentations, it signaled that AI fluency represented professional advancement rather than threat. This top-down endorsement overcame initial skepticism among mid-career consultants who feared AI might commoditize their expertise.
Building Long-Term AI Discovery Capabilities
Institutionalizing Continuous Learning Systems
The most sophisticated organizations recognize that AI capability development isn't a one-time initiative but an ongoing discipline. As AI systems improve and new models emerge, the frontier of possible applications continuously shifts. Sustained competitive advantage requires building what organizational theorists call "absorptive capacity"—the ability to recognize, assimilate, and apply new external knowledge (Cohen & Levinthal, 1990).
Organizations with high AI absorptive capacity exhibit several characteristics:
Dedicated roles monitoring AI developments: Professionals tasked with tracking model improvements, new research findings, and emerging applications in adjacent industries, then rapidly testing relevance to the organization's context.
Rapid retraining infrastructure: Systems allowing quick dissemination of new techniques organization-wide when valuable applications emerge. This might include digital learning platforms, weekly "AI update" sessions, or peer-teaching programs.
External network cultivation: Partnerships with AI research institutions, vendor relationships providing early access to new capabilities, and participation in industry consortia sharing application discoveries.
Metrics tracking capability evolution: Key performance indicators measuring not just AI deployment but learning velocity—how quickly the organization identifies, tests, and scales new applications as AI capabilities expand.
Organizations treating AI capability as static fall behind as models improve. Those institutionalizing continuous learning maintain advantages even as underlying technology commoditizes, because they've built superior discovery processes.
Developing Proprietary Application Architectures
As organizations accumulate AI application experience, many discover patterns—recurring combinations of capabilities that address their specific value chain. These proprietary architectures can become durable competitive advantages even when built on widely available models.
Consider the concept through analogy: Amazon's recommendation engine uses machine learning algorithms available to competitors, but the specific architecture—which signals to weight, how to blend collaborative filtering with content-based approaches, when to introduce randomness—reflects years of experimentation specific to Amazon's customer base and business model. Similarly, organizations can develop proprietary AI application architectures.
This might manifest as:
Domain-specific agent frameworks: Structured sequences of AI calls that accomplish complex professional tasks through defined workflows. A law firm might develop a proprietary "contract review architecture" that sequentially: extracts key terms, compares against client standards, identifies unusual provisions, flags potential risks, and generates summary reports—each step using AI but following logic that reflects the firm's specific client needs and risk tolerance (a minimal sketch of such a sequence follows this list).
Custom fine-tuned models: Organizations with sufficient data can fine-tune foundation models on their proprietary information, creating models that "understand" their specific context, terminology, and quality standards better than generic alternatives.
Integrated human-AI workflows: Carefully designed processes determining which tasks AI handles independently, which require human review, and how human feedback improves AI performance over time. These workflows encode institutional knowledge about where AI proves reliable versus where human judgment remains essential.
Feedback loop architectures: Systems that systematically capture user corrections to AI outputs and use them to improve future performance, creating models that continuously adapt to the organization's specific needs.
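The sketch below illustrates the first pattern: a fixed sequence of model calls for the hypothetical contract review architecture described above. The call_model helper, stage prompts, and model name are assumptions; a production pipeline would add validation, human review gates, and exception handling at each step.

```python
# Minimal sketch of a domain-specific "agent framework": a fixed sequence of
# model calls for contract review. The stage prompts, helper function, and
# model name are illustrative assumptions, not any firm's actual architecture.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def call_model(instruction: str, text: str) -> str:
    """One model call per pipeline stage; swap in any provider or internal gateway."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
        temperature=0,
    )
    return response.choices[0].message.content

def review_contract(contract_text: str, client_standards: str) -> dict[str, str]:
    """Run the review stages in order, feeding each output into the next step."""
    terms = call_model(
        "Extract the key commercial terms from this contract.", contract_text)
    deviations = call_model(
        "Compare these extracted terms against the client standards below and "
        f"list any deviations.\n\nClient standards:\n{client_standards}", terms)
    unusual = call_model(
        "Identify provisions that are unusual for this type of agreement.",
        contract_text)
    risks = call_model(
        "Flag potential risks as high, medium, or low based on these findings.",
        deviations + "\n\n" + unusual)
    summary = call_model(
        "Draft a one-page summary report for the reviewing attorney.", risks)
    return {"terms": terms, "deviations": deviations, "unusual": unusual,
            "risks": risks, "summary": summary}
```

The competitive value lies less in any individual call than in the ordering, the client-standards comparison, and the risk logic, which encode the firm's accumulated judgment about what matters and in what sequence.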
Building these architectures requires substantial investment but generates compounding returns. Each refinement improves performance; accumulated refinements create capabilities competitors cannot easily replicate even with access to the same underlying AI models.
Creating AI-Native Operating Cultures
The deepest organizational response involves fundamentally reconceiving professional work around AI augmentation rather than retrofitting AI into existing workflows. This represents a shift from "how can AI help us do what we do?" to "what should we do given AI capabilities?"
Research on technology-driven organizational transformation suggests that sustained competitive advantage comes from reconceptualizing work processes rather than automating existing ones (Davenport, 1993). Organizations achieving this develop what we might call "AI-native" operating cultures.
Characteristics of AI-native cultures include:
Workflow design starting with AI capabilities: When designing new processes, teams begin by mapping which elements AI can handle versus require human judgment, then optimize around this division rather than assuming traditional workflows.
Performance standards reflecting AI augmentation: Quality and productivity expectations that assume AI assistance, raising standards beyond pre-AI baselines. This might mean expectations that consultants analyze twice as many companies as previously feasible, because AI handles initial screening.
Role evolution focused on uniquely human contributions: Professional development explicitly aimed at capabilities AI cannot replicate—stakeholder relationship management, creative problem formulation, ethical judgment in ambiguous situations—while systematically delegating routine analytical work to AI.
Transparent AI usage norms: Clear organizational culture around when AI use is appropriate, how to validate AI outputs, and how to communicate AI's role to clients or stakeholders. This prevents both under-utilization (failing to leverage AI when appropriate) and over-reliance (trusting AI in situations requiring human judgment).
Organizations developing AI-native cultures often find their value propositions shift. A consulting firm might move from selling analyst hours to selling insights, with AI handling the analytical grunt work that previously consumed billable time. An accounting practice might shift from compliance work (increasingly automated) to strategic tax planning requiring human judgment. These shifts require fundamental business model reconsideration, not merely productivity improvement.
Conclusion
The AI capability gap—the divergence between those who build AI systems and those who apply them to specific domains—creates a strategic window for professionals willing to bridge both worlds. Unlike previous technology transitions where vendors delivered turnkey solutions, general-purpose AI requires domain experts to discover valuable applications through systematic experimentation. This discovery process cannot be outsourced; it demands deep contextual knowledge that only practitioners possess.
The evidence suggests that organizations and individuals who invest in this capability development will capture disproportionate value. First-movers establish workflow standards that become industry benchmarks. They attract talent seeking learning opportunities. They develop proprietary application architectures that competitors cannot easily replicate. Most fundamentally, they build dynamic capabilities—the institutional capacity to continuously identify and capture value from technological advancement.
For domain experts, the imperative is clear: develop AI fluency now while the application frontier remains open. This doesn't require becoming a machine learning engineer; it requires systematic experimentation with AI tools in your specific context, documentation of what works and doesn't, and sharing discoveries with peers. Organizations should create structures supporting this discovery—protected experimentation time, formal fluency development programs, knowledge-sharing communities, and executive sponsorship.
The window will close as AI applications mature. "AI for radiology" or "AI for legal research" will eventually become standardized vendor offerings, much as enterprise resource planning systems standardized business processes. But the organizations and professionals who lead the discovery process will have established advantages that persist: superior workflows, trained talent, market position, and most importantly, the organizational muscle memory of how to extract value from technological change. In the current moment, that capability matters more than the technology itself.
References
Accenture. (2024). The AI translator advantage: Bridging domain expertise and technology. Accenture Research.
Altman, S. (2023). Interview with Lex Fridman. Lex Fridman Podcast, Episode 367.
Bain & Company. (2024). Building firm-wide AI fluency: An 18-month transformation journey. Internal case study.
Bloomberg. (2024). Developing AI fluency in journalism: A peer-led approach. Bloomberg Media white paper.
Brooks, R. (2023). The deployment knowledge gap in artificial intelligence. Communications of the ACM, 66(3), 26-28.
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work. National Bureau of Economic Research Working Paper Series, No. 31161.
Carlile, P. R. (2004). Transferring, translating, and transforming: An integrative framework for managing knowledge across boundaries. Organization Science, 15(5), 555-568.
Cheng, L. J., Wong, S. L., & Kho, J. (2024). Physician adoption patterns of ambient clinical intelligence: A cluster analysis. Journal of the American Medical Informatics Association, 31(4), 892-901.
Cohen, W. M., & Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35(1), 128-152.
Davenport, T. H. (1993). Process innovation: Reengineering work through information technology. Harvard Business School Press.
Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Working Paper Series, No. 24-013.
Deloitte. (2024). AI application velocity in financial services: The innovation program advantage. Deloitte Center for Financial Services.
Edmondson, A. C. (2018). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley.
Fayard, A.-L., Grevet, C., Malone, T. W., & Pentland, A. (2024). The AI fluency gap: Measuring knowledge worker capability with generative AI. MIT Sloan Management Review, 65(2), 34-42.
JPMorgan Chase. (2024). AI translator roles in financial services: Embedding domain expertise in technology development. JPMorgan Chase Institute.
Kotter, J. P. (1996). Leading change. Harvard Business School Press.
LinkedIn. (2024). Workplace learning report: The AI skills advantage. LinkedIn Learning.
Mayo Clinic. (2023). AI Innovation Fellowship: Structure, outcomes, and lessons learned. Mayo Clinic Proceedings supplement.
McKinsey. (2024). Structured vs. ad-hoc AI experimentation: Comparative effectiveness study. McKinsey Digital.
Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319-1350.
Thomson Reuters. (2024). State of AI in legal services: Adoption patterns and transformation outcomes. Thomson Reuters Institute.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge University Press.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). Mastering the AI Capability Gap: Why Domain Experts Must Lead AI Integration Before the Window Closes. Human Capital Leadership Review, 29(1). doi.org/10.70175/hclreview.2020.29.1.4