Enterprise AI Upskilling at Scale: Strategic Workforce Transformation in the Age of Generative AI
- Jonathan H. Westover, PhD
Abstract: Large-scale AI upskilling initiatives represent a critical organizational response to generative AI adoption across knowledge-intensive sectors. This article examines enterprise strategies for workforce AI capability development, analyzing Citigroup's 175,000-employee prompt engineering training program alongside parallel initiatives at JPMorgan, Bank of America, and Wells Fargo. Drawing on evidence from organizational learning, change management, and human-capital development research, the analysis identifies key success factors including adaptive learning design, continuous upskilling architectures, psychological safety cultivation, and integration with broader digital transformation efforts. The article argues that sustainable competitive advantage from AI derives not from technology deployment alone but from systematic human capability building that positions AI as augmentation rather than replacement. Organizational responses span mandatory foundational training, role-specific advanced modules, leadership development, and cultural interventions addressing workforce concerns about technological displacement. The findings suggest that effective AI workforce transformation requires coordinated attention to skills development, organizational culture, change communication, and long-term learning infrastructure.
In early 2025, Citigroup launched mandatory AI prompt engineering training for 175,000 employees across 80 global locations—one of the largest corporate AI upskilling initiatives documented to date. The program, announced in an internal memo from Chief Technology Officer Tim Ryan and Chief Operating Officer Anand Selva, exemplifies a broader strategic shift across knowledge-intensive industries: recognizing that generative AI adoption fundamentally depends on systematic workforce capability development rather than technology implementation alone.
The stakes are considerable. Citi employees had already entered 6.5 million prompts into the bank's AI tools by early 2025, yet leadership recognized a critical gap between basic usage and the "great prompting versus basic prompting" distinction that drives "impactful results" (as cited in Paoli, 2025). This distinction—between surface-level technology access and deep capability to extract value—mirrors challenges facing organizations across sectors as generative AI tools proliferate faster than workforce readiness.
The why-now urgency reflects converging pressures. First, competitive dynamics: Citi's financial services peers including JPMorgan, Bank of America, and Wells Fargo had already implemented mandatory AI training, creating potential capability gaps. Second, productivity imperatives: work "that once took hours" now completes "in minutes" when properly leveraged, according to the internal memo (Paoli, 2025). Third, workforce anxiety: absent structured capability building, AI adoption risks triggering concerns about job security and technological displacement that undermine engagement and performance.
This article examines organizational strategies for large-scale AI workforce transformation, analyzing evidence-based approaches to upskilling, cultural integration, and long-term capability building. The analysis draws on documented initiatives across financial services while synthesizing insights applicable to knowledge-intensive sectors more broadly.
The Enterprise AI Upskilling Landscape
Defining AI Literacy and Prompt Engineering Capability
AI literacy in organizational contexts encompasses multiple capability layers beyond basic technology familiarity. Research on technology acceptance and organizational learning suggests that effective technology adoption requires not just access but genuine competency in leveraging tools to achieve work outcomes (Venkatesh & Davis, 2000). Prompt engineering—the specific focus of Citi's initiative—represents a specialized literacy domain: the systematic crafting of natural language inputs to generative AI systems to elicit accurate, relevant, and actionable outputs.
Effective prompt engineering involves understanding model behavior patterns, structuring queries with appropriate context and constraints, iterating based on output quality, and recognizing when human judgment should override AI suggestions. As Peter Fox, Citi's head of learning, noted, the training distinguishes expertise levels: experienced users complete modules in under 10 minutes while beginners require approximately 30 minutes, suggesting the capability exists on a continuum rather than as binary proficiency (Paoli, 2025).
Beyond technical skills, AI literacy encompasses ethical dimensions—recognizing bias risks, maintaining data privacy, ensuring appropriate human oversight—and strategic awareness: knowing when AI augmentation creates value versus when alternative approaches prove more effective. This multidimensional understanding aligns with broader frameworks for digital competency in organizational settings (van Laar et al., 2017).
Prevalence, Drivers, and Distribution of Enterprise AI Training
Large-scale AI upskilling has accelerated dramatically since generative AI's mainstream emergence in late 2022. The financial services sector demonstrates particularly aggressive adoption. JPMorgan integrated mandatory AI training into employee onboarding beginning in 2024, ensuring all new hires develop foundational prompt engineering and AI tool literacy before assuming role-specific responsibilities (Paoli, 2025). Bank of America reported that over 90% of its 213,000-person global workforce was actively using AI tools in daily work by April 2025, supported by both voluntary and required training modules across departments (Paoli, 2025). Wells Fargo invested in specialized education, training 4,000 employees through Stanford's Human-Centered AI program to develop advanced capabilities (Paoli, 2025).
Several drivers explain this rapid proliferation. Productivity gains create immediate return-on-investment justification: when properly leveraged, generative AI demonstrably reduces task completion time for information synthesis, content generation, analysis, and routine communication. Competitive pressures intensify as early adopters realize productivity advantages, forcing peers to respond or accept capability gaps. Regulatory and risk management considerations also motivate structured training: in highly regulated industries like financial services, ad hoc AI adoption without systematic capability building and governance creates compliance exposure.
The distribution of training initiatives varies by organizational maturity and strategic positioning. Leading institutions implement mandatory, role-differentiated programs with continuous upskilling architectures. Mid-tier adopters typically offer voluntary foundational training with incentivized participation. Lagging organizations may provide access to AI tools without systematic capability development—an approach multiple experts characterize as insufficient for meaningful transformation.
Organizational and Individual Consequences of AI Capability Gaps
Organizational Performance Impacts
The performance differential between organizations with systematic AI capability building versus those with ad hoc adoption manifests across multiple dimensions. Productivity represents the most direct impact: Accenture CEO Julie Sweet's 2025 statement that the firm would exit employees unable to reskill with AI technology signals the competitive intensity around capability development (Paoli, 2025). When workforce capability lags technology deployment, organizations experience suboptimal tool utilization—employees either avoid AI systems entirely or use them ineffectively, failing to realize promised productivity gains.
Innovation capacity also suffers from capability gaps. Research on organizational innovation demonstrates that technology alone does not drive innovation outcomes; rather, employee capability to creatively apply tools determines whether organizations realize transformative benefits versus incremental improvements (Damanpour & Schneider, 2006). Employees lacking sophisticated prompt engineering skills cannot leverage generative AI for creative problem-solving, scenario exploration, or novel solution generation—use cases that extend beyond efficiency gains to support strategic differentiation.
Wells Fargo's virtual assistant Fargo handled 20 million interactions within its first year, with projections to reach 100 million annually as capabilities develop (Paoli, 2025)—illustrating how systematic capability building enables organizations to scale AI applications from pilot to enterprise impact.
Risk exposure increases when deployment outpaces capability development. Employees using AI tools without understanding limitations, bias patterns, or appropriate oversight introduce quality control failures, compliance violations, and reputational risks. Gary Lamach from ELB Learning characterizes superficial approaches as treating "AI as a box to check, launching a tool or rolling out a one-time training, and calling it transformation"—an implementation pattern he identifies with failure (Paoli, 2025).
Talent retention and recruitment suffer when organizations fall behind in capability building. As AI literacy becomes baseline expectation for knowledge workers, particularly among early-career professionals, firms unable to demonstrate systematic upskilling opportunities face disadvantages in talent markets.
Individual Wellbeing and Workforce Impacts
For individual employees, inadequate AI capability support creates psychological strain and performance anxiety. Christina Muller, a workplace mental health expert at R3 Continuum, observes that absent structured training, "people may wonder if they will become expendable," viewing AI adoption as threatening rather than enabling (Paoli, 2025). This anxiety manifests in resistance behaviors—avoiding new tools, maintaining inefficient legacy workflows, or experiencing heightened stress from perceived skill obsolescence.
Research on technology-induced job insecurity demonstrates that organizational responses to automation significantly influence employee psychological outcomes (Brougham & Haar, 2018). When organizations fail to provide clear communication about technological change and concrete capability development support, employees experience elevated stress, reduced job satisfaction, and increased turnover intentions. Conversely, transparent communication paired with genuine upskilling opportunities can mitigate negative psychological impacts.
The capability gap also creates internal stratification: employees with self-directed learning capacity or prior technical exposure develop AI proficiency independently, while those lacking resources or confidence fall behind, widening productivity and advancement disparities within workforces. When organizations fail to democratize AI capability building, they risk entrenching inequality rather than creating broad-based productivity gains.
Job satisfaction deteriorates when employees recognize that colleagues or competitors leverage AI effectively while they struggle. Walmart CEO Doug McMillon's statement that AI would affect "virtually every job" at the retailer (Paoli, 2025) captures the pervasiveness of transformation—making capability development not peripheral but central to continued professional viability.
Conversely, effective training interventions yield positive wellbeing impacts. When organizations position AI as "co-pilot—not a replacement—on an already flying plane," as Muller recommends (Paoli, 2025), and provide genuine capability building rather than superficial exposure, employees experience mastery, increased autonomy in leveraging new tools, and reduced anxiety about technological change.
Evidence-Based Organizational Responses
Mandatory Foundational Training with Adaptive Design
Systematic AI capability building begins with universal foundational training ensuring all employees develop baseline literacy regardless of role or technical background. Citi's approach exemplifies evidence-based design: the training module uses adaptive learning platforms that tailor content and pacing based on individual knowledge levels, enabling experts to complete training efficiently while providing extended support for beginners (Paoli, 2025). This differentiation addresses a common training failure mode—one-size-fits-all programs that bore experienced users while overwhelming novices.
Research on training effectiveness demonstrates that adaptive, personalized learning approaches yield superior outcomes compared to standardized curricula, particularly in heterogeneous learner populations (Truong, 2016). By assessing prior knowledge and adjusting content complexity accordingly, adaptive systems optimize engagement and learning efficiency across skill levels.
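To make the adaptive-design idea concrete, the routing logic such a platform might use can be sketched in a few lines. This is a toy illustration under assumed thresholds, not a description of Citi's actual system: a short placement assessment assigns a module sequence, so experienced users get the abbreviated (~10-minute) path while beginners receive the full (~30-minute) path. The module names and score cutoffs are invented for illustration.

```python
# Hypothetical adaptive-routing sketch: a pre-assessment score (0.0-1.0)
# determines which training modules a learner sees. Module names and
# thresholds are invented, not taken from any vendor's platform.

FULL_PATH = ["what_ai_can_do", "prompt_basics", "context_and_constraints",
             "evaluating_outputs", "ethics_and_privacy"]

def assign_modules(pre_assessment_score: float) -> list[str]:
    """Return the module sequence for a learner based on placement score."""
    if pre_assessment_score >= 0.8:   # experienced user: short path
        return ["evaluating_outputs", "ethics_and_privacy"]
    if pre_assessment_score >= 0.5:   # intermediate: skip the intro only
        return FULL_PATH[1:]
    return FULL_PATH                  # beginner: full foundational path

if __name__ == "__main__":
    print(assign_modules(0.9))
    print(assign_modules(0.3))
```

The design choice worth noting is that every path ends with the same evaluation and ethics modules: differentiation applies to foundational content, while the judgment-critical material remains universal.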
Effective foundational training addresses multiple knowledge domains:
Core curriculum components:
AI system capabilities and limitations (avoiding both over-reliance and under-utilization)
Prompt engineering fundamentals: structuring queries, providing context, iterating based on outputs
Output evaluation: recognizing hallucinations, assessing accuracy, applying critical judgment
Appropriate use cases and escalation criteria (when to use AI versus alternative approaches)
Data privacy, security, and ethical considerations
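The "great prompting versus basic prompting" distinction the curriculum targets can be illustrated with a minimal sketch. This is a hypothetical example, not Citi's training material: a bare request versus a structured one that adds role framing, context, explicit constraints, and an output format, which are the fundamentals listed above.

```python
# Illustrative contrast between untrained and trained prompting.
# The template structure and example fields are invented for illustration.

def basic_prompt(task: str) -> str:
    """A bare request, typical of untrained usage."""
    return task

def structured_prompt(task: str, role: str, context: str,
                      constraints: list[str], output_format: str) -> str:
    """A structured request: role framing, context, explicit constraints,
    and a required output format."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Respond in this format: {output_format}"
    )

if __name__ == "__main__":
    print(basic_prompt("Summarize this earnings report."))
    print(structured_prompt(
        task="Summarize this earnings report.",
        role="a credit analyst at a global bank",
        context="Q3 results for a mid-cap industrial borrower.",
        constraints=["Cite figures only from the provided text",
                     "Flag any data point you are unsure about"],
        output_format="three bullets plus a one-line risk note",
    ))
```

The point of the contrast mirrors the memo's client-pitch analogy: the structured version does for the model what a well-framed question does for a client conversation.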
JPMorgan's integration of mandatory AI training into new employee onboarding ensures capability building precedes tool access, establishing competency before employees encounter deployment risk. The onboarding approach embeds AI literacy as a foundational expectation rather than an optional enhancement, signaling organizational commitment and normalizing capability development across the workforce.
Bank of America achieved 90% workforce adoption of AI tools through combined mandatory and voluntary training, suggesting that universal baseline requirements can coexist with additional elective modules for deeper specialization. The bank's departmental deployment indicates role-specific customization beyond foundational training—recognizing that AI applications in risk management differ substantively from customer service or investment analysis contexts (Paoli, 2025).
Continuous Upskilling Architectures Beyond One-Time Training
Multiple experts emphasize that single training events prove insufficient for meaningful capability building in rapidly evolving AI landscapes. Lamach explicitly critiques "one-time training" approaches as inadequate transformation strategies (Paoli, 2025). Sustainable capability development requires continuous learning architectures that support ongoing skill refinement as tools evolve and organizational use cases mature.
Research on professional development demonstrates that continuous, practice-integrated learning yields superior long-term capability retention and application compared to episodic training interventions (Grossman & Salas, 2011). For complex, evolving competencies like AI literacy, ongoing learning opportunities prove essential for maintaining proficiency as technologies and organizational contexts change.
Citi explicitly positions prompt engineering training as part of a "continuous upskilling process we're offering all employees," with additional modules available for employees seeking deeper AI knowledge beyond mandatory baseline training (Paoli, 2025). This layered architecture enables progressive capability development aligned with individual roles and interests.
Continuous learning infrastructure elements:
Regular refresher modules addressing tool updates and emerging capabilities
Advanced specialty tracks (domain-specific AI applications, advanced prompt engineering, AI-assisted analytics)
Peer learning communities enabling knowledge sharing about effective practices
Performance support resources (prompt libraries, use case repositories, troubleshooting guides)
Feedback mechanisms capturing employee experiences to inform training evolution
Wells Fargo's investment in Stanford's Human-Centered AI program for 4,000 employees represents a substantial continuous development commitment, providing university-level education to develop sophisticated capabilities beyond basic tool literacy (Paoli, 2025). While resource-intensive, such partnerships enable organizations to access cutting-edge knowledge and position selected employees as internal AI champions who can mentor colleagues and drive organizational capability building.
The financial services sector's rapid iteration suggests effective continuous learning adapts to competitive dynamics. As JPMorgan, Bank of America, and Wells Fargo implement successive training generations, each firm learns from peer experiences while tailoring approaches to organizational culture and strategic priorities.
Psychological Safety and Change Communication Strategies
Effective AI capability building requires parallel attention to the psychological dimensions of technological change. Muller's observation that employees "may wonder if they will become expendable" absent clear communication positions anxiety management as a core implementation requirement, not a peripheral consideration (Paoli, 2025). Organizations that fail to address displacement concerns risk resistance, disengagement, and talent attrition that undermine technology investments.
Research on organizational change demonstrates that transparent communication about transformation rationale, concrete support for affected employees, and participative implementation processes significantly improve change outcomes and reduce resistance (Oreg et al., 2011). Evidence-based change communication combines transparency about transformation implications with concrete demonstrations of organizational investment in people.
The "co-pilot" framing Muller recommends—positioning AI as augmentation enhancing human capability rather than replacement—requires consistent reinforcement through leadership messaging, training design, and actual deployment patterns (Paoli, 2025). This framing aligns with research on technology framing effects, which demonstrates that how organizations communicate about automation influences employee attitudes and adoption behaviors (Brougham & Haar, 2018).
Psychological safety cultivation approaches:
Transparent communication about AI deployment plans and workforce implications
Concrete upskilling commitments demonstrating organizational investment in employee capability
Career pathway articulation showing how AI proficiency creates advancement opportunities
Safe experimentation environments where employees can learn without performance consequences
Recognition and celebration of successful AI-augmented outcomes to normalize adoption
The Citi internal memo's framing illustrates effective communication design: comparing effective prompting to "the right question in a client pitch" connects new capability to familiar professional skills, positioning AI literacy as a natural evolution rather than a radical departure (Paoli, 2025). By emphasizing that training enables employees to "accelerate your work, surface insights and amplify your impact," leadership communicates empowerment rather than displacement.
Accenture's approach—publicly stating the firm would exit employees unable to reskill—represents a contrasting communication strategy that may motivate capability development through urgency but risks heightening anxiety and reducing psychological safety (Paoli, 2025). The sustainability of such approaches likely depends on labor market conditions and organizational culture, with potential for counterproductive effects in contexts where talent retention proves critical.
Role-Specific Application Development and Use Case Libraries
Foundational AI literacy becomes organizationally valuable only when translated into role-specific applications that drive actual work outcomes. Generic prompt engineering capability must connect to concrete use cases reflecting employees' daily responsibilities, challenges, and performance metrics. Banks' departmental deployment strategies recognize this principle: risk analysts, relationship managers, and operations staff require different AI applications despite sharing foundational literacy.
Effective role-specific development identifies high-value use cases where AI augmentation creates measurable impact, then provides targeted training, templates, and support enabling employees to implement those applications. This approach moves beyond abstract capability to practical value realization.
Role-specific capability building elements:
Use case identification workshops engaging employees to surface relevant AI applications
Role-customized training modules demonstrating domain-specific prompting techniques
Template and prompt libraries codifying effective approaches for common tasks
Metrics frameworks measuring AI-augmented performance against baseline workflows
Community of practice forums enabling role-based knowledge sharing
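The "template and prompt libraries" element above lends itself to a small sketch. This is a hypothetical illustration, not any bank's system: vetted, role-keyed templates codify effective approaches so employees fill in fields rather than craft prompts from scratch. The use-case names and fields are invented.

```python
# Hypothetical prompt library: reusable, role-specific templates keyed by
# use case. Names and fields are invented for illustration only.

from string import Template

PROMPT_LIBRARY = {
    "risk_analysis.counterparty_summary": Template(
        "Summarize counterparty exposure for $counterparty using only the "
        "attached data. List the top $n risk factors and note any gaps."
    ),
    "client_service.meeting_prep": Template(
        "Draft a one-page briefing for a meeting with $client covering "
        "recent account activity and $topic. Keep it under 300 words."
    ),
}

def render_prompt(use_case: str, **fields: str) -> str:
    """Look up a vetted template by use case and fill in its fields."""
    return PROMPT_LIBRARY[use_case].substitute(**fields)

if __name__ == "__main__":
    print(render_prompt("risk_analysis.counterparty_summary",
                        counterparty="Acme Industrial", n="5"))
```

Codifying prompts this way also supports the governance themes discussed later: a central library is reviewable and auditable in a way that ad hoc individual prompting is not.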
Wells Fargo's Fargo virtual assistant demonstrates role-specific application at scale: the tool handled specific customer service interactions rather than providing generic AI access, with interaction volumes reaching 20 million in the first year (Paoli, 2025). The application's design for particular use cases—customer inquiries, account information, transaction support—illustrates how focused deployment creates measurable value versus diffuse technology availability.
Citi's internal data showing 6.5 million prompts entered by early 2025 suggests widespread engagement, but the quality and business impact of those prompts likely varies substantially (Paoli, 2025). Role-specific development helps ensure that engagement translates into productivity gains, quality improvements, or innovation outcomes rather than exploratory usage without performance impact.
Leadership Development and Executive Capability Building
AI transformation requires leadership capability development alongside workforce upskilling. Executives and managers unable to leverage AI effectively cannot provide credible direction, evaluate team outputs, or make informed decisions about technology investments and deployment priorities. Leadership capability gaps create strategic risks that compound operational challenges from workforce capability deficits.
JPMorgan's inclusion of AI training in new employee onboarding, regardless of seniority, signals expectation that leadership develops capability alongside individual contributors (Paoli, 2025). This approach prevents the emergence of a capability divide where senior decision-makers lack hands-on understanding of tools they oversee.
Leadership-specific capability priorities:
Strategic AI application identification within business units
AI-augmented output evaluation and quality assessment
Team capability development and performance management in AI-enabled environments
Risk oversight and governance for AI deployment
Resource allocation decisions balancing AI investment with human capital development
Lamach's assertion that "the future of finance will not be defined by the tools themselves, but by the leaders who empower their people to use them with purpose" positions leadership capability as the determinative factor separating successful transformation from superficial adoption (Paoli, 2025). Leaders require not just technical literacy but also change management capability, communication skill, and strategic vision to guide workforce transitions.
The financial services sector's C-suite involvement in AI training announcements—with statements from Citi's COO and CTO, Accenture's CEO, and Walmart's CEO—demonstrates executive engagement levels necessary for credible transformation (Paoli, 2025). When leadership visibly commits to capability building and articulates strategic rationale, organizational adoption accelerates relative to initiatives championed only at middle management levels.
Building Long-Term AI-Enabled Organizational Capability
Learning Culture and Continuous Capability Development Systems
Sustainable competitive advantage from AI requires embedding continuous learning as an organizational norm rather than treating capability building as a series of discrete training events. Organizations with strong learning cultures—where experimentation, knowledge sharing, and skill development pervade daily work—adapt more effectively to technological change than institutions where learning remains episodic or compliance-driven (Schein, 2010).
Citi's positioning of prompt training within a "continuous upskilling process" reflects this principle, signaling ongoing capability development rather than finite skill acquisition (Paoli, 2025). The challenge lies in operationalizing continuous learning through infrastructure, incentives, and cultural norms that sustain capability building beyond initial training mandates.
Learning culture enablers:
Time allocation legitimizing learning during work hours rather than relegating development to discretionary time
Knowledge-sharing platforms and communities of practice facilitating peer learning
Performance management systems rewarding capability development and knowledge contribution
Leadership modeling continuous learning through visible skill-building engagement
Psychological safety supporting experimentation and constructive failure in capability development
Organizations transitioning from stable technological environments to rapid AI evolution must often transform learning cultures developed around periodic training interventions. This cultural shift proves as challenging as technical upskilling, requiring sustained leadership attention and structural reinforcement. Research on organizational culture change demonstrates that successful transformation requires alignment of formal systems (policies, metrics, rewards) with informal norms and leadership behaviors (Schein, 2010).
AI Governance, Ethics, and Responsible Use Frameworks
As AI capabilities proliferate across workforces, governance frameworks ensuring responsible, ethical, and compliant use become critical infrastructure. Training alone cannot guarantee appropriate AI application; organizations require clear policies, oversight mechanisms, and accountability structures guiding employee judgment when deploying AI tools.
Financial services institutions face particularly stringent regulatory requirements around bias, transparency, data privacy, and consumer protection that necessitate robust AI governance. Bank of America's 90% workforce AI adoption magnifies governance complexity: ensuring 213,000 employees use AI consistently with regulatory expectations and ethical standards requires systematic controls beyond individual training (Paoli, 2025).
Research on AI ethics implementation emphasizes the necessity of translating abstract principles into concrete organizational processes, decision frameworks, and accountability mechanisms (Jobin et al., 2019). Without operationalized governance, even well-intentioned ethical commitments fail to influence day-to-day employee decisions about AI deployment.
Governance framework components:
Clear use case approval processes distinguishing permitted from prohibited AI applications
Bias detection and mitigation protocols for AI-assisted decisions affecting stakeholders
Data governance ensuring AI tools access only appropriate information
Human oversight requirements for consequential AI-generated outputs
Audit trails and accountability mechanisms enabling post-deployment review
Escalation pathways for ethics concerns or edge cases
Effective governance balances control with enablement: overly restrictive frameworks limit productivity gains and frustrate employees, while insufficient oversight creates compliance and reputational risk. Organizations must iteratively calibrate governance as AI capabilities evolve and workforce proficiency develops.
Integration with Broader Digital Transformation and Operating Model Evolution
AI capability building achieves maximum impact when integrated with broader digital transformation efforts rather than implemented as an isolated initiative. Organizations simultaneously evolving cloud infrastructure, data platforms, collaboration tools, and process automation create synergies where AI upskilling amplifies other technology investments.
Citi's internal memo positions AI adoption as "the beginning of a new way of working," suggesting transformation extends beyond discrete productivity gains to fundamental operating model evolution (Paoli, 2025). When process redesign, organizational restructuring, and capability building proceed in coordination, organizations realize systemic benefits exceeding individual intervention impacts.
Digital transformation integration areas:
Process reengineering optimizing workflows around AI-augmented capabilities
Data infrastructure investments ensuring AI tools access high-quality, well-governed information
Collaboration platform evolution supporting AI-enabled teamwork and knowledge sharing
Performance measurement system updates capturing AI-augmented productivity and quality
Talent strategy alignment positioning AI capability in hiring, development, and retention
Walmart CEO McMillon's statement that AI would affect "virtually every job" reflects an enterprise-wide transformation scope requiring coordination across technology, process, people, and strategy dimensions (Paoli, 2025). Organizations treating AI as an IT initiative rather than a business transformation typically achieve limited value realization despite technology deployment. Research on digital transformation success factors consistently identifies integration across organizational domains as the critical differentiator between successful and failed initiatives (Vial, 2019).
Conclusion
Citigroup's 175,000-employee AI upskilling initiative exemplifies a fundamental shift in organizational AI strategy: recognizing that competitive advantage derives not from technology acquisition but from systematic human capability development. The evidence from financial services institutions—spanning JPMorgan's onboarding integration, Bank of America's 90% workforce adoption, and Wells Fargo's Stanford partnership—demonstrates that leading organizations invest substantially in continuous, role-specific, psychologically informed capability building rather than treating training as a compliance exercise.
Several actionable insights emerge for organizations pursuing AI workforce transformation:
First, foundational literacy must be universal and adaptive, accommodating varying experience levels while ensuring all employees develop baseline proficiency. Mandatory training signals organizational commitment while adaptive design respects individual capability differences.
Second, continuous learning architectures prove essential in rapidly evolving AI landscapes. One-time training events quickly become obsolete; sustainable capability building requires ongoing development infrastructure, communities of practice, and progressive specialization pathways.
Third, psychological safety and change communication critically determine adoption success. Positioning AI as augmentation rather than replacement, demonstrating concrete organizational investment in people, and addressing displacement anxiety transparently enable employees to engage with capability building constructively rather than resist it.
Fourth, role-specific application development translates generic literacy into business value. Use case identification, template development, and domain-customized training connect AI capability to actual work outcomes and performance metrics.
Fifth, leadership capability building proves as important as workforce upskilling. Executives and managers require hands-on proficiency to provide credible direction, evaluate outputs, and make informed strategic decisions about AI deployment.
Looking forward, the organizations that thrive in AI-enabled environments will be those that position capability building as continuous strategic priority rather than discrete implementation phase. As Lamach observes, "the future of finance will not be defined by the tools themselves, but by the leaders who empower their people to use them with purpose" (Paoli, 2025). That empowerment requires sustained investment in learning infrastructure, cultural transformation, governance frameworks, and integration with broader operating model evolution.
The scale and speed of AI capability building in financial services—from zero to near-universal adoption within 18 months across major institutions—demonstrates both the urgency of transformation and the feasibility of large-scale upskilling when organizations commit resources and leadership attention. The question facing knowledge-intensive industries is not whether to pursue systematic AI capability development but how rapidly and effectively to implement approaches positioned for long-term competitive advantage.
References
Brougham, D., & Haar, J. (2018). Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees' perceptions of our future workplace. Journal of Management & Organization, 24(2), 239–257.
Damanpour, F., & Schneider, M. (2006). Phases of the adoption of innovation in organizations: Effects of environment, organization and top managers. British Journal of Management, 17(3), 215–236.
Grossman, R., & Salas, E. (2011). The transfer of training: What really matters. International Journal of Training and Development, 15(2), 103–120.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Oreg, S., Vakola, M., & Armenakis, A. (2011). Change recipients' reactions to organizational change: A 60-year review of quantitative studies. The Journal of Applied Behavioral Science, 47(4), 461–524.
Paoli, N. (2025, January 1). Citi begins retraining 175,000 employees in working with AI: 'great prompting versus basic prompting to generate impactful results.' Fortune.
Schein, E. H. (2010). Organizational culture and leadership (4th ed.). Jossey-Bass.
Truong, H. M. (2016). Integrating learning styles and adaptive e-learning system: Current developments, problems and opportunities. Computers in Human Behavior, 55, 1185–1193.
van Laar, E., van Deursen, A. J., van Dijk, J. A., & de Haan, J. (2017). The relation between 21st-century skills and digital skills: A systematic literature review. Computers in Human Behavior, 72, 577–588.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
Vial, G. (2019). Understanding digital transformation: A review and a research agenda. The Journal of Strategic Information Systems, 28(2), 118–144.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). Enterprise AI Upskilling at Scale: Strategic Workforce Transformation in the Age of Generative AI. Human Capital Leadership Review, 26(4). doi.org/10.70175/hclreview.2020.26.4.4