To Unlock the Full Value of AI, Invest in Your People: Building Capability Systems That Translate Adoption into Business Impact
- Jonathan H. Westover, PhD
Abstract: Organizations face a persistent value gap in artificial intelligence adoption: while most have deployed AI tools, fewer than 5% generate value at scale. This article examines why traditional training approaches fail to bridge the adoption-to-impact divide and proposes an integrated capability-building framework grounded in organizational behavior research and practitioner evidence. Drawing on recent consulting experience and academic literature on technology adoption, learning transfer, and behavior change, we outline a three-stage progression—foundational knowledge, applied practice, and embedded habits—that moves beyond conventional training programs. The article presents role-specific capability development strategies, leadership modeling imperatives, trust-building mechanisms, and measurement approaches that connect learning interventions to tangible business outcomes. Case evidence from financial services, consumer goods, biopharmaceuticals, and technology sectors illustrates how targeted capability investment in high-value workflows unlocks AI's transformative potential when supported by redesigned work systems, visible executive commitment, and metrics focused on business impact rather than mere adoption rates.
Artificial intelligence has moved from experimental curiosity to strategic imperative across industries. Yet a stark reality tempers the enthusiasm: most organizations struggle to translate AI adoption into meaningful business value. Recent evidence reveals that while AI tools proliferate across enterprises, only approximately 5% of organizations generate value at scale, with nearly 60% reporting little to no measurable impact from their AI investments (Bedard et al., 2025). This adoption-impact gap represents one of the most pressing challenges facing business leaders today.
The instinctive response—deploying more AI tools, faster—misses the fundamental issue. Technology adoption without corresponding capability development leaves value unrealized. The critical constraint isn't technological; it's human. Organizations need people who can think differently, work differently, and lead differently in AI-augmented environments. This requires moving beyond traditional training paradigms toward integrated capability-building systems that reshape behaviors, mindsets, and organizational practices.
The stakes are considerable. AI's transformative potential extends across core business functions—from customer service and marketing to supply chain optimization and product development. Organizations that build robust AI capabilities positioned around high-value workflows can achieve productivity gains exceeding 50%, reduce processing times by 70%, and accelerate decision cycles from days to minutes (Bedard et al., 2025). Those that fail to invest systematically in human capabilities risk falling permanently behind as competitors unlock these value pools.
This article examines what distinguishes effective AI capability-building from conventional training approaches. It synthesizes emerging practitioner experience with established research on technology adoption, organizational learning, and behavior change to offer evidence-based guidance for leaders navigating this transformation.
The AI Capability Landscape
Defining AI Capabilities in Organizational Context
AI capability encompasses more than technical skill with specific tools. It represents an integrated mix of technical fluency, adaptive human skills, and supportive organizational systems that enable people to work effectively with AI technologies. This definition aligns with dynamic capability theory, which emphasizes organizations' ability to reconfigure resources and competencies in response to changing technological environments (Teece et al., 1997).
Technical fluency includes understanding AI fundamentals, recognizing appropriate use cases, formulating effective prompts, and evaluating AI-generated outputs critically. Human skills encompass creative problem-solving, ethical judgment, change leadership, and the ability to collaborate effectively in hybrid human-AI workflows. Organizational systems include redesigned workflows, clear governance structures, supportive incentive mechanisms, and cultures that reward experimentation and learning.
Research on technology acceptance demonstrates that successful adoption depends not merely on perceived usefulness but on actual ability to integrate new tools into existing work patterns (Venkatesh & Davis, 2000). In AI contexts, this integration challenge intensifies because AI tools often fundamentally reshape job tasks rather than simply automating existing processes. Workers must develop new mental models for when and how to leverage AI assistance, requiring deeper capability development than traditional software training provides.
State of Practice: The Adoption-Impact Gap
Current organizational practice reveals significant variability in AI capability maturity. Most organizations occupy what might be termed the "experimentation stage"—characterized by pilot projects, scattered tool adoption, and fragmented learning initiatives. A smaller cohort has progressed to "integration," where AI tools connect to specific workflows with measurable efficiency gains. The rarest category, "transformation," involves comprehensive workflow redesign with AI embedded throughout value chains, supported by systematic capability development (Bedard et al., 2025).
This maturity distribution reflects broader patterns documented in digital transformation research. Organizations frequently overestimate technology's autonomous impact while underestimating the organizational capability required to realize value (Kane et al., 2019). The AI context amplifies this tendency because generative AI's apparent ease of use—anyone can type a prompt—creates an illusion of capability that masks deeper skill requirements for effective application.
Research on training transfer provides insight into why conventional approaches fail. Studies consistently show that fewer than 15% of learning interventions produce sustained behavior change in workplace settings (Grossman & Salas, 2011). The transfer problem intensifies with AI because effective use requires not just knowledge retention but the development of new judgment capabilities, comfort with ambiguity, and willingness to experiment—outcomes difficult to achieve through traditional training methods.
Organizational and Individual Consequences of AI Capability Gaps
Organizational Performance Impacts
The business costs of inadequate AI capability development manifest across multiple dimensions. Organizations face direct financial losses from failed AI initiatives, which research suggests occur in 70-80% of cases, often due to insufficient attention to human and organizational factors rather than technical limitations (Ransbotham et al., 2020). Beyond failed initiatives, capability gaps create opportunity costs as competitors who build robust AI capabilities capture value pools and establish competitive advantages.
Productivity impacts are measurable. Organizations that successfully build AI capabilities and redesign workflows around them achieve productivity improvements of 40-60% in targeted functions, while those with weak capabilities see gains under 10% despite deploying similar technologies (Bedard et al., 2025). In customer-facing operations, capability gaps translate to inconsistent service quality as frontline workers struggle to effectively leverage AI tools, potentially damaging customer relationships and brand reputation.
Strategic agility suffers when organizations lack AI capabilities. Rapid technological evolution requires continuous learning and adaptation. Organizations without systematic capability-building systems find themselves perpetually behind, unable to capitalize on emerging AI applications because their workforce lacks the foundational fluency to absorb and implement new approaches. This creates a capability debt that compounds over time, widening performance gaps with more capable competitors.
Individual Impacts: Employee Experience and Wellbeing
For individual workers, inadequate AI capability support produces distinct consequences. Research on technology-induced stress documents that workers experience anxiety, frustration, and reduced job satisfaction when required to use technologies they don't understand or trust (Tarafdar et al., 2019). AI amplifies these dynamics because of uncertainties around job security, concerns about AI quality and bias, and confusion about appropriate AI use boundaries.
Job redesign without capability support can diminish work quality and employee engagement. When AI automates routine tasks without corresponding skill development for higher-value work, employees may experience deskilling and reduced autonomy rather than the promised elevation to more strategic contributions (Autor, 2015). This mismatch between technological change and capability development erodes both productivity and worker wellbeing.
Conversely, effective capability development creates positive individual outcomes. Workers who receive robust AI training and support report higher confidence, increased ability to focus on meaningful work, and stronger motivation (Bedard et al., 2025). They experience AI as empowering rather than threatening, viewing new tools as expanding their capabilities rather than replacing their contributions. This psychological shift from threat to opportunity represents a critical capability-building outcome that conventional training rarely achieves.
Evidence-Based Organizational Responses
Ground AI Enablement in Clear Business Context
Effective AI capability-building begins with strategic clarity about where AI can deliver the highest value and what workflows must change to realize that value. Organizations should identify specific value pools—areas where AI can generate the greatest and fastest returns—and concentrate capability-building resources there rather than pursuing generic, enterprise-wide training.
Research on organizational learning emphasizes that transfer improves dramatically when training connects directly to real work contexts and immediate application opportunities (Baldwin & Ford, 1988). In AI contexts, this means identifying high-potential workflows, redesigning them around AI capabilities, and then building the specific capabilities required to execute the redesigned processes.
Value pool identification and workflow redesign
Leading organizations conduct systematic assessments to map AI opportunity landscapes, evaluating potential impact across business functions, feasibility given current capabilities and data availability, and strategic alignment with organizational priorities. This analysis reveals where concentrated capability investment will generate returns that justify the resource commitment.
A European retail bank exemplifies this approach. Rather than deploying generic AI training, leaders identified lending operations as a high-value target where AI could simultaneously reduce costs and improve customer experience. They redesigned core lending workflows to incorporate an "Ops AI Agent" that automated document validation, plausibility checks, and data transfers. Only after redesigning workflows did they develop capabilities to support the new processes, focusing training on the specific skills needed to work effectively in the redesigned system—validating AI outputs, handling exceptions, and managing AI-escalated cases (Bedard et al., 2025).
This targeted approach delivered results conventional training could not match: productivity gains exceeding 50%, manual processing time reductions of 70%, and approval cycles compressed from days to under 30 minutes. These outcomes reflect not just AI tool deployment but successful capability development that enabled people to work effectively in fundamentally redesigned processes.
Capability assessment and maturity mapping
Organizations should assess current AI capability levels to establish realistic starting points for capability building. This assessment should examine technical fluency across different roles, existing comfort levels with AI tools, current adoption patterns and pain points, organizational culture around experimentation and learning, and infrastructure supporting AI use.
Many organizations discover significant capability variance across units and levels. Executives may possess strategic vision but lack technical fluency, while frontline workers have experimentation experience but insufficient context for strategic application. Middle managers often fall between—expected to lead AI adoption without clear models or capabilities to do so. Effective programs acknowledge this variance and tailor interventions accordingly.
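The assessment dimensions described above can be rolled up into a simple maturity score per role group and mapped onto the experimentation/integration/transformation stages noted earlier. The sketch below is illustrative only: the dimension names, weights, and stage cutoffs are assumptions for demonstration, not part of any published maturity model.

```python
# Illustrative AI capability maturity scoring.
# Dimension names, weights, and stage cutoffs are hypothetical assumptions.

DIMENSIONS = {
    "technical_fluency": 0.30,
    "tool_comfort": 0.20,
    "adoption_depth": 0.20,
    "experimentation_culture": 0.15,
    "supporting_infrastructure": 0.15,
}

def maturity_score(ratings: dict) -> float:
    """Weighted average of 1-5 self-assessment ratings across dimensions."""
    return sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS)

def maturity_stage(score: float) -> str:
    """Map a score onto the three stages described in the article."""
    if score < 2.5:
        return "experimentation"
    if score < 4.0:
        return "integration"
    return "transformation"

# Example: a frontline group with low fluency but some tool comfort.
frontline = {"technical_fluency": 2, "tool_comfort": 3, "adoption_depth": 2,
             "experimentation_culture": 3, "supporting_infrastructure": 2}
print(maturity_stage(maturity_score(frontline)))  # prints "experimentation"
```

Scoring by role group rather than organization-wide surfaces exactly the variance the paragraph above describes, so interventions can be tailored per archetype.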
Develop Role-Specific Capability Pathways
Generic AI training fails because different organizational roles require fundamentally different capabilities to work effectively with AI. Research on job crafting demonstrates that individuals' ability to adapt to technological change depends heavily on role-specific demands and autonomy levels (Wrzesniewski & Dutton, 2001). AI capability building should reflect this by designing distinct learning pathways for different role archetypes.
Table 1: AI Capability Archetypes and Development Pathways
| Role Archetype | Core Capability Needs | Key Learning Objectives | Targeted Learning Interventions | Success Metrics |
|---|---|---|---|---|
| Executives (Shapers) | Strategic understanding; AI fluency to evaluate opportunities and risks; ability to connect AI to business strategy; change leadership. | Set compelling AI visions; make informed investment decisions; model desired behaviors; create enabling organizational conditions; build shared narrative. | Immersive labs engaging with real business challenges; strategic persona journeys; live case study sessions; hands-on practice with AI tools. | Executive team coherence; visible leadership modeling of AI behaviors; quality of strategic AI investment decisions. |
| Managers (Enablers) | Technical fluency to guide teams; change management skills; coaching capabilities; process redesign skills. | Translate vision into operational reality; coach teams through adoption; address resistance; build trust; model responsible usage. | Upskilling in AI fluency and change management; manager-led adoption sprints; peer reviews; small-group labs. | Team adoption rates; employee confidence and motivation; successful transition to AI-augmented workflows. |
| Process Owners (Transformers) | Deep process knowledge; AI capability understanding; creative problem-solving; facilitation and project management. | Fundamentally rewire workflows; identify AI application opportunities; prototype and implement redesigned processes. | Hands-on redesign practice with real domain processes; structured workflow analysis methodologies; coaching on stakeholder engagement; peer learning networks. | Productivity gains (e.g., exceeding 50%); reduction in manual processing time (e.g., 70%); compression of approval/decision cycles (e.g., from days to under 30 minutes). |
| Frontline Contributors | Practical application skills; prompt engineering; critical evaluation of AI outputs; experimentation comfort. | Use AI tools effectively in daily tasks; understand appropriate vs. inappropriate AI use cases; handle exceptions and AI-escalated cases. | Hands-on simulations; applied use case labs tied directly to daily work; experiential practice with immediate application. | Tool adoption rates (e.g., increasing from approximately 20% to nearly 90%); depth and variety of AI use; quality of AI outputs; time savings; error reduction. |
Executive and Senior Leader Capabilities: Shapers
Executives and senior leaders need capabilities to set compelling AI visions, make informed investment decisions, model desired behaviors visibly, and create organizational conditions that enable AI scaling. Their capability needs center on strategic understanding rather than technical depth—sufficient AI fluency to evaluate opportunities and risks, connect AI initiatives to business strategy, and lead organizational change.
A global fast-moving consumer goods company designed a strategic persona journey for senior leaders focused on building shared narrative around AI transformation, practicing change leadership behaviors specific to AI adoption, and engaging with real business challenges through immersive labs. Leaders participated in live sessions examining case studies, facilitated discussions on AI's strategic implications, and hands-on exercises applying AI tools to actual business problems (Bedard et al., 2025).
This approach differs markedly from conventional executive briefings. Rather than passive information consumption, leaders actively practice the behaviors expected throughout the organization—experimenting with AI tools, sharing learning from failures, and visibly wrestling with application challenges. This experiential approach builds not just knowledge but comfort and confidence to lead authentically.
Manager Capabilities: Leaders Who Create Enabling Conditions
Middle managers play a crucial bridging role in AI transformation, translating strategic vision into operational reality while coaching teams through adoption challenges. Their capability needs include technical fluency sufficient to guide team members, change management skills to address resistance and build trust, coaching capabilities to support skill development, and ability to redesign team processes around AI tools.
Research on managerial influence in technology adoption shows that supervisor support represents one of the strongest predictors of successful implementation (Venkatesh et al., 2003). In AI contexts, this support requires managers who can demonstrate tool usage, address concerns empathetically while reinforcing business rationale, provide constructive feedback on AI application, and recognize and reward effective experimentation.
A global technology company exemplifies effective manager enablement. Facing low GenAI adoption among software engineering teams, leaders first upskilled managers in both GenAI fluency and change management, equipping them to set expectations, model responsible AI usage, and address team concerns with empathy. Managers then guided teams through adoption sprints combining hands-on experimentation with peer reviews and small-group labs. This manager-led approach, building on research showing the importance of proximate support in skill development (Allen & Eby, 2003), created safe environments for experimentation and normalized new workflows. Participating teams showed not only higher adoption rates but stronger confidence and motivation (Bedard et al., 2025).
Workflow Redesign Capabilities: Transformers
Certain roles—team leads, process owners, power users—need capabilities to fundamentally rewire workflows around AI possibilities. These "transformers" require deep understanding of both current processes and AI capabilities, creative problem-solving skills to envision new workflows, facilitation abilities to engage stakeholders in redesign, and project management competence to implement changes.
These individuals serve as critical change agents, translating between technical possibilities and operational realities. Their capability development should emphasize hands-on redesign practice, often working with real processes in their own domains. Theoretical knowledge proves insufficient; transformers need supported experience analyzing existing workflows, identifying AI application opportunities, prototyping redesigned processes, and implementing changes with stakeholder engagement.
Organizations often identify transformers through early AI experimentation, selecting individuals who demonstrate both technical aptitude and process thinking. Formal capability development then accelerates their impact through structured methodologies for workflow analysis and redesign, coaching on change management and stakeholder engagement, peer learning networks to share approaches and lessons, and recognition systems that reinforce transformer contributions.
Frontline Contributor Capabilities
Frontline employees who use AI tools daily need capabilities focused on practical application: prompt engineering skills for their specific tasks, critical evaluation of AI outputs, understanding of when AI assistance is and isn't appropriate, and comfort experimenting with new approaches. Their learning needs emphasize hands-on practice with immediate application rather than conceptual foundations.
One biopharmaceuticals company segmented over 100,000 employees by role archetype and designed tailored journeys for each. Frontline contributors engaged in hands-on simulations and applied use case labs tied directly to their work. This targeted approach increased AI tool adoption from approximately 20% to nearly 90%, demonstrating how role-appropriate capability building drives both adoption and effective use (Bedard et al., 2025).
Build Executive Alignment and Visible Leadership Commitment
AI transformation requires unified leadership commitment that extends beyond endorsement to active modeling of desired behaviors. Research on transformational leadership emphasizes that leaders influence organizational change through visible behavior demonstration, not just articulated vision (Bass & Riggio, 2006). In AI contexts, this means executives must develop their own AI fluency and visibly practice new behaviors.
Creating shared language and strategic coherence
Leadership teams often struggle with AI transformation because different functions speak different languages. Technology leaders focus on technical possibilities; business leaders emphasize strategic outcomes. Without shared vocabulary and mental models, discussions fragment into debates over point solutions rather than strategic dialogue about fundamental transformation.
Organizations address this through immersive leadership alignment experiences that build common AI fluency across the executive team. One client brought executives together for a CEO-championed summit blending AI technology education with business strategy discussions. Leaders worked through actual use cases, confronted transformation realities, and engaged in cross-functional dialogue using shared frameworks. This created immediate impact: both business and technology leaders gained confidence and common language to guide decisions. Fragmented conversations evolved into strategic, cross-functional problem-solving focused on scaling value (Bedard et al., 2025).
This alignment work proves essential because, as organizational change research demonstrates, executive team coherence strongly predicts transformation success (Edmondson, 2012). When leaders operate from shared understanding, their collective influence compounds. When they send mixed signals, confusion and resistance spread.
Modeling as capability-building imperative
Knowing what behaviors to model proves insufficient; leaders must practice those behaviors visibly and consistently. Research shows that leaders often revert to reinforced behaviors rather than newly learned ones unless explicit accountability structures support change (Kegan & Lahey, 2009). Effective AI capability building for executives should make desired behaviors explicit, create practice opportunities, establish accountability mechanisms, and recognize consistent modeling.
Organizations increasingly incorporate behavior change science into executive AI development. Rather than one-time briefings, they design ongoing support structures: regular executive labs where leaders practice using AI tools collaboratively, peer coaching arrangements for mutual accountability, visible commitments to specific AI behaviors with team follow-through, and recognition in leadership forums for modeling excellence.
This approach reflects research showing that leadership development requires experiential learning and ongoing support rather than isolated training events (Day et al., 2014). For AI transformation, this means executives engage in continuous learning journeys paralleling those of their organizations, demonstrating that AI capability development never reaches completion but requires sustained commitment.
Invest in Trust-Building and Emotional Engagement
AI adoption triggers emotional responses—skepticism about capabilities, anxiety about job security, concerns about bias and quality, and uncertainty about appropriate use boundaries. Organizations that ignore these emotions see resistance and low engagement regardless of training quality. Research on resistance to change emphasizes that emotional responses often prove more influential than rational assessments (Piderit, 2000).
Acknowledging and addressing skepticism
Effective programs create space for concerns and engage them directly rather than dismissing skepticism as irrational resistance. This means anticipating common concerns through pre-assessment, incorporating concern discussion into capability-building design, equipping managers to address emotional responses empathetically, and demonstrating organizational commitment to responsible AI use.
The technology company example illustrates this approach. Software developers initially approached GenAI with skepticism manifested as low engagement. Rather than pushing harder on adoption, leaders first built manager capabilities in empathetic change leadership. Managers then guided teams through adoption sprints that combined experimentation with structured spaces for sharing concerns, successes, and setbacks. This created psychological safety for trying new workflows while acknowledging very real questions about AI's role (Bedard et al., 2025).
Research on psychological safety demonstrates that people take interpersonal risks—like experimenting with unfamiliar technologies—only when they trust they won't be penalized for mistakes (Edmondson, 1999). Creating this safety requires explicit permission for experimentation, visible leadership vulnerability in acknowledging their own AI learning journey, celebration of thoughtful failures alongside successes, and transparent discussion of AI limitations and appropriate use boundaries.
Building trust through transparency and involvement
Trust develops when people understand AI capabilities and limitations, see how decisions about AI use get made, participate in shaping AI application in their work, and receive consistent support during adoption. Organizations should design capability-building programs that incorporate these trust-building elements rather than treating trust as something established separately from learning.
This might include co-creation workshops where employees help identify AI use cases, transparent discussion of how AI decisions align with values and ethics, regular forums for sharing both AI successes and failures, and clear governance structures with employee voice.
Research on procedural justice shows that people's acceptance of change depends heavily on whether they perceive decision processes as fair and inclusive (Colquitt et al., 2001). In AI contexts, this means involving affected employees in determining how AI gets used, communicating clearly about AI's role and limitations, and maintaining consistent standards for AI quality and accountability.
Implement Integrated Measurement Systems Focused on Business Value
Organizations frequently measure AI capability-building through conventional training metrics—completion rates, satisfaction scores, knowledge tests—that fail to capture actual value creation. Research on training evaluation has long emphasized the need to assess learning transfer and business impact, not just participant reactions (Kirkpatrick & Kirkpatrick, 2006). AI capability measurement requires extending this principle to track behavior change and business outcomes systematically.
Defining outcome-focused metrics
Effective measurement systems work backward from desired business outcomes to identify behaviors that drive those outcomes, capabilities that enable those behaviors, and learning experiences that build those capabilities. This creates a clear line of sight from learning interventions to business value.
For example, rather than measuring only tool adoption rates, organizations should track depth and variety of AI use across different work tasks, quality of AI outputs and prompts as assessed by expert review, experimentation rates and willingness to try new approaches, time saved or efficiency gains in specific workflows, decision quality improvements or error reduction rates, and employee confidence and satisfaction with AI-augmented work.
A financial services organization implementing GenAI upskilling measured not just adoption but new use case generation, experimentation rates, and tangible business value from time savings and error reduction. Results showed 98% of learners generated new use case ideas, 80% applied learnings directly to managing projects, and 85% reported more frequent AI use at work. These metrics demonstrated capability shifts and business impact that adoption rates alone would have missed (Bedard et al., 2025).
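As a worked example, the outcome-focused rollup described above can be computed from simple learner records. The record schema and field names below are hypothetical, not drawn from any specific organization's system; the point is that the report surfaces behavior-change and business-value indicators alongside the conventional completion metric.

```python
# Hypothetical learner records; the fields are illustrative, not a real schema.
learners = [
    {"completed": True, "new_use_cases": 3, "applied_to_work": True, "hours_saved_week": 2.5},
    {"completed": True, "new_use_cases": 1, "applied_to_work": True, "hours_saved_week": 1.0},
    {"completed": True, "new_use_cases": 0, "applied_to_work": False, "hours_saved_week": 0.0},
]

def pct(flags) -> float:
    """Percentage of records for which a boolean condition holds."""
    flags = list(flags)
    return 100 * sum(flags) / len(flags)

report = {
    # Conventional metric: says little about value creation.
    "completion_rate": pct(l["completed"] for l in learners),
    # Behavior-change metrics: the ones the article argues for.
    "generated_use_cases": pct(l["new_use_cases"] > 0 for l in learners),
    "applied_to_work": pct(l["applied_to_work"] for l in learners),
    # Business-value proxy: estimated hours saved per learner per week.
    "avg_hours_saved_week": sum(l["hours_saved_week"] for l in learners) / len(learners),
}
print(report)
```

In this toy data the completion rate is 100% while only two-thirds of learners generated use cases or applied the training, which is exactly the gap that completion-only measurement would hide.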
Real-time learning analytics and course correction
Leading organizations implement technology-enabled measurement systems that provide real-time insight into capability development progress. Learning analytics platforms track engagement patterns, skill development trajectories, and application to work contexts. When combined with operational data on AI tool usage and business outcomes, these systems enable rapid course correction.
For example, analytics might reveal that certain roles struggle with specific capability elements, prompting targeted interventions. Usage data might show high adoption but low-value application, suggesting the need for more advanced training on strategic use cases. Combining capability and outcome data creates feedback loops that continuously improve program effectiveness.
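One way to sketch that feedback loop: join per-role adoption data with a value-density indicator and flag roles needing different interventions. The metric names, thresholds, and recommended actions below are assumptions for illustration, not a prescribed analytics design.

```python
# Hypothetical per-role analytics; field names and thresholds are assumptions.
role_metrics = {
    "frontline": {"adoption_rate": 0.85, "high_value_use_rate": 0.20},
    "managers": {"adoption_rate": 0.60, "high_value_use_rate": 0.45},
    "process_owners": {"adoption_rate": 0.35, "high_value_use_rate": 0.30},
}

def course_corrections(metrics: dict, adoption_floor: float = 0.5,
                       value_floor: float = 0.3) -> dict:
    """Suggest a targeted intervention per role based on its adoption/value pattern."""
    actions = {}
    for role, m in metrics.items():
        if m["adoption_rate"] < adoption_floor:
            # Low adoption: revisit enablement and proximate manager support.
            actions[role] = "boost adoption: manager-led sprints, hands-on labs"
        elif m["high_value_use_rate"] < value_floor:
            # High adoption but shallow use: advance training toward strategic use cases.
            actions[role] = "deepen use: advanced strategic use-case training"
    return actions

print(course_corrections(role_metrics))
```

Here the frontline group would be flagged for deeper training despite its high adoption rate, while process owners would be flagged for basic enablement, illustrating how combined capability and outcome data points to different corrections for different roles.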
This approach reflects research on continuous improvement and organizational learning, which emphasizes rapid experimentation and data-driven adaptation (Argote & Miron-Spektor, 2011). Rather than treating capability-building as a fixed program, leading organizations view it as an evolving system that learns and improves based on evidence.
Building Long-Term AI Capability and Organizational Resilience
Embedding AI Capabilities in Organizational Systems
Sustainable AI capability requires moving beyond individual skill development to embed new practices in organizational systems. Research on organizational routines demonstrates that behavior change becomes self-sustaining only when integrated into formal structures and informal norms (Feldman & Pentland, 2003). For AI capabilities, this means redesigning performance management systems to recognize AI fluency, incorporating AI expectations into role definitions and hiring criteria, establishing communities of practice for ongoing learning, and creating governance structures that support responsible scaling.
Organizations should codify AI best practices in standard operating procedures, create role models and success stories that reinforce desired behaviors, design career pathways that reward AI capability development, and align incentives to encourage experimentation and value creation. This systematic embedding ensures that capability development becomes self-reinforcing rather than requiring constant leadership attention.
The European retail bank exemplifies this embedding process. Beyond initial capability building, they redesigned roles to incorporate AI responsibilities explicitly, established communities of practice where lending specialists shared AI application insights, created recognition programs celebrating innovative AI uses, and integrated AI fluency into performance reviews and promotion criteria. These structural changes sustained behavioral shifts beyond the initial transformation push (Bedard et al., 2025).
Cultivating Continuous Learning Cultures
AI technology evolves rapidly, rendering specific skills obsolete even as it creates demand for new capabilities. Organizations require cultures and systems that support continuous learning rather than one-time training interventions. Research on learning organizations emphasizes that sustained competitive advantage comes from building learning capacity, not just current knowledge (Garvin et al., 2008).
This means establishing regular rhythms for AI capability refreshment, creating easy access to learning resources and experimentation tools, celebrating learning failures as valuable data, and embedding time for learning in work schedules. Organizations should move from event-based training to continuous learning embedded in the flow of work—just-in-time resources when facing new AI applications, communities where practitioners share emerging practices, and regular showcases of AI innovation from across the organization.
Leading organizations establish AI centers of excellence or capability hubs that serve as ongoing learning resources, curate and update training content as AI evolves, provide coaching and troubleshooting support, and track emerging AI trends to guide capability development. This creates organizational infrastructure for sustained capability building beyond initial transformation phases.
Developing Distributed Leadership for AI
Long-term AI capability requires distributed leadership where AI fluency and change leadership exist throughout the organization, not concentrated in specific roles. Research on distributed leadership demonstrates that organizational adaptation improves when leadership capacity spreads across levels and functions (Gronn, 2002).
Organizations should identify and develop AI champions at multiple levels, create networks connecting champions across units, provide champions with tools and support to influence peers, and recognize championing as a valued leadership contribution. This creates resilience—AI capability development continues even through leadership turnover—and accelerates learning diffusion through peer networks.
The fast-moving consumer goods company illustrates distributed leadership development. Beyond training senior executives and frontline employees, they identified mid-level champions who combined AI enthusiasm with credibility in their functions. These champions received enhanced training, coaching on peer influence, and regular forums to share approaches and troubleshoot challenges. They became nodes in learning networks that accelerated capability development beyond formal programs (Bedard et al., 2025).
Conclusion
The AI adoption-impact gap reflects a fundamental capability challenge, not a technology deficit. Organizations that invest systematically in human capability development—grounded in clear business context, tailored to role-specific needs, led by aligned executives, built on trust, and measured through business outcomes—unlock AI's transformative value. Those that treat capability as an afterthought to technology deployment see continued disappointment despite mounting AI investments.
The evidence examined here demonstrates several actionable imperatives for leaders:
Start with value pools, not tools. Identify where AI can deliver the greatest business impact, redesign workflows to capture that value, and concentrate capability building there.
Design role-specific learning journeys. Executives, managers, workflow transformers, and frontline contributors need different capabilities; tailor development accordingly.
Lead from the front. Executive teams must develop shared AI fluency and visibly model desired behaviors; transformation stalls without unified, active leadership.
Build trust proactively. Acknowledge emotional responses to AI, create psychological safety for experimentation, involve employees in shaping AI use, and demonstrate commitment to responsible practices.
Measure what matters. Track behavior change and business outcomes, not just training completion; use data to continuously improve capability-building approaches.
Embed capabilities systematically. Integrate AI expectations into organizational systems—roles, performance management, incentives, governance—to sustain behavior change.
The AI moment demands more than technology deployment. It requires building integrated capability systems that reshape how people think, work, and lead. Organizations that make this investment position themselves to capture AI's full value potential. Those that don't will find themselves perpetually experimenting with limited impact, watching as more capable competitors pull ahead.
The opportunity remains open, but the window narrows as AI-enabled ways of working become competitive table stakes. The question facing leaders is not whether to invest in AI capabilities, but how quickly and systematically they can build the human foundation that transforms AI adoption into lasting business advantage.
References
Allen, T. D., & Eby, L. T. (2003). Relationship effectiveness for mentors: Factors associated with learning and quality. Journal of Management, 29(4), 469-486.
Argote, L., & Miron-Spektor, E. (2011). Organizational learning: From experience to knowledge. Organization Science, 22(5), 1123-1137.
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.
Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41(1), 63-105.
Bass, B. M., & Riggio, R. E. (2006). Transformational leadership (2nd ed.). Lawrence Erlbaum Associates.
Bedard, J., Beidelman, J., Lyle, E., & Messenböck, R. (2025, November 10). To unlock the full value of AI, invest in your people. Boston Consulting Group.
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86(3), 425-445.
Day, D. V., Fleenor, J. W., Atwater, L. E., Sturm, R. E., & McKee, R. A. (2014). Advances in leader and leadership development: A review of 25 years of research and theory. The Leadership Quarterly, 25(1), 63-82.
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.
Edmondson, A. C. (2012). Teaming: How organizations learn, innovate, and compete in the knowledge economy. Jossey-Bass.
Feldman, M. S., & Pentland, B. T. (2003). Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly, 48(1), 94-118.
Garvin, D. A., Edmondson, A. C., & Gino, F. (2008). Is yours a learning organization? Harvard Business Review, 86(3), 109-116.
Gronn, P. (2002). Distributed leadership as a unit of analysis. The Leadership Quarterly, 13(4), 423-451.
Grossman, R., & Salas, E. (2011). The transfer of training: What really matters. International Journal of Training and Development, 15(2), 103-120.
Kane, G. C., Palmer, D., Phillips, A. N., Kiron, D., & Buckley, N. (2019). Achieving digital maturity. MIT Sloan Management Review and Deloitte, Research Report.
Kegan, R., & Lahey, L. L. (2009). Immunity to change: How to overcome it and unlock the potential in yourself and your organization. Harvard Business Press.
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). Berrett-Koehler Publishers.
Piderit, S. K. (2000). Rethinking resistance and recognizing ambivalence: A multidimensional view of attitudes toward an organizational change. Academy of Management Review, 25(4), 783-794.
Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2020). Expanding AI's impact with organizational learning. MIT Sloan Management Review and Boston Consulting Group, Research Report.
Tarafdar, M., Cooper, C. L., & Stich, J. F. (2019). The technostress trifecta – techno eustress, techno distress and design: Theoretical directions and an agenda for research. Information Systems Journal, 29(1), 6-42.
Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509-533.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186-204.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Wrzesniewski, A., & Dutton, J. E. (2001). Crafting a job: Revisioning employees as active crafters of their work. Academy of Management Review, 26(2), 179-201.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). To Unlock the Full Value of AI, Invest in Your People: Building Capability Systems That Translate Adoption into Business Impact. Human Capital Leadership Review, 29(4). doi.org/10.70175/hclreview.2020.29.4.4