AI-Driven Workforce Planning: Predictive Models for Future Talent Needs
- Jonathan H. Westover, PhD
- Oct 23
Abstract: Organizations increasingly deploy artificial intelligence to anticipate workforce requirements, moving beyond reactive headcount management toward predictive talent architecture. This article examines how AI-driven workforce planning systems combine machine learning, organizational data, and external labor market signals to forecast skill gaps, succession risks, and capacity constraints. Drawing on recent empirical studies and practitioner cases across technology, healthcare, and manufacturing sectors, the analysis identifies evidence-based implementation strategies including data infrastructure development, algorithm transparency protocols, and human-centered design principles. The article synthesizes organizational performance outcomes—ranging from reduced time-to-hire to improved diversity metrics—alongside emerging governance challenges surrounding algorithmic bias and employee privacy. Forward-looking recommendations emphasize the integration of predictive workforce analytics within broader talent ecosystems, the cultivation of internal analytics capability, and the establishment of ethical guardrails that balance optimization with human dignity.
The convergence of demographic shifts, accelerating skill obsolescence, and distributed work models has fundamentally altered how organizations think about workforce supply and demand. Traditional workforce planning—often reliant on spreadsheet projections and manager intuition—struggles to keep pace with environments where technical skills now carry an estimated half-life of 2.5 to 5 years and competitive talent markets shift within quarters rather than planning cycles (World Economic Forum, 2020). Meanwhile, advances in machine learning, natural language processing, and graph analytics have matured sufficiently to parse millions of data points across internal HR systems, labor market platforms, and economic indicators, surfacing patterns invisible to manual analysis.
The practical stakes are considerable. Organizations that accurately forecast talent needs can reduce costly emergency hiring, minimize productivity losses from unfilled critical roles, and make strategic investments in reskilling before capability gaps materialize. Conversely, those relying on lagging indicators often find themselves competing for scarce talent in overheated markets or carrying excess capacity during downturns. Early adopters of AI-driven workforce planning report measurable improvements: shorter recruitment cycles, higher quality-of-hire scores, and better alignment between workforce composition and strategic priorities (Bersin et al., 2019).
Yet predictive workforce models also introduce governance complexities. Algorithms trained on historical data may perpetuate past biases in hiring and promotion; opacity in model logic can undermine employee trust; and the quantification of human potential raises ethical questions about autonomy and dignity. This article explores both the organizational value and implementation challenges of AI-driven workforce planning, offering evidence-based guidance for HR leaders, analytics professionals, and executives navigating this evolving capability.
The Workforce Planning Landscape
Defining AI-Driven Workforce Planning in Contemporary Practice
AI-driven workforce planning refers to the application of machine learning algorithms, predictive analytics, and data integration platforms to forecast future talent requirements, identify skill gaps, model workforce scenarios, and optimize talent allocation decisions. Unlike traditional workforce planning—which typically projects headcount linearly from business plans—AI approaches incorporate diverse data streams: employee demographics and tenure patterns, performance ratings and promotion velocity, external labor market dynamics, technology adoption roadmaps, and even macroeconomic indicators (Huselid & Becker, 2011).
Contemporary systems often combine several analytical techniques. Supervised learning models predict attrition risk by identifying patterns among employees who previously left. Natural language processing analyzes job descriptions, resumes, and skills taxonomies to map capability supply against strategic demand. Graph analytics reveal informal influence networks and succession vulnerabilities. Simulation engines test "what-if" scenarios—how would a market expansion, technology platform migration, or acquisition affect talent needs across functions and geographies?
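To make the supervised-learning piece concrete, the sketch below trains a deliberately tiny attrition model in plain Python. The features, weights, and toy history are hypothetical; a production system would use a mature ML library and far richer data, but the mechanics are the same: learn weights from employees who previously left, then score current employees.

```python
import math

def train_logistic(rows, labels, epochs=2000, lr=0.1):
    """Fit a tiny logistic-regression attrition model by gradient descent.

    rows   -- feature vectors, e.g. [tenure_years, pay_vs_market, recent_promotion]
    labels -- 1 if the employee left, 0 if they stayed
    """
    n = len(rows[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y                               # prediction error drives the update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def attrition_risk(w, b, x):
    """Predicted probability that an employee with features x will leave."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy history: [tenure_years, pay_vs_market (1.0 = at market), recent_promotion]
history = [
    [1.0, 0.7, 0], [2.0, 0.8, 0], [0.5, 0.6, 0],   # left
    [5.0, 1.0, 1], [7.0, 1.1, 1], [4.0, 0.9, 1],   # stayed
]
left = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(history, left)
# Short tenure, below-market pay, no recent promotion should score as higher risk
print(attrition_risk(w, b, [1.0, 0.7, 0]) > attrition_risk(w, b, [6.0, 1.0, 1]))
```

The same scoring pattern generalizes to any tabular attrition model: the feature names change, but the output is always a calibrated probability that feeds retention-intervention decisions.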
The distinction between descriptive workforce analytics and predictive planning is meaningful. Descriptive analytics answer "what happened" questions: turnover rates by department, demographic composition, recruitment funnel conversion. Predictive models address "what will happen" and "what should we do" questions: which high-performers are flight risks, where will critical skill shortages emerge, how should we sequence hiring and development investments (Davenport et al., 2010)?
Prevalence, Drivers, and Adoption Patterns
Adoption of AI-driven workforce planning remains concentrated among large enterprises and digitally mature sectors, though diffusion is accelerating. A 2022 survey of global HR executives found that 37% had deployed predictive workforce analytics in at least one business unit, up from 14% in 2018 (Deloitte, 2022). Technology, financial services, and healthcare organizations lead adoption, driven by acute talent competition, regulatory reporting requirements, and established data infrastructures.
Several convergent forces drive organizational interest. First, chronic skill mismatches create economic pressure—the talent shortage cost organizations an estimated $8.5 trillion globally in unrealized revenue during 2022 alone (Korn Ferry, 2022). Second, pandemic-era disruptions revealed the fragility of static workforce plans; organizations that could rapidly model remote-work scenarios and reallocate talent across functions demonstrated greater resilience. Third, the consumerization of AI has lowered technical barriers—cloud-based platforms now embed machine learning capabilities that previously required specialized data science teams.
However, significant implementation barriers persist. Many organizations lack integrated HR data systems, with employee information fragmented across payroll, applicant tracking, learning management, and performance platforms. Data quality issues—incomplete records, inconsistent taxonomies, missing skills inventories—undermine model accuracy. Cultural resistance emerges when workforce planning shifts from HR generalists to analytics specialists, and when algorithmic recommendations challenge manager prerogatives. Privacy regulations such as GDPR impose constraints on employee data processing and algorithmic decision-making transparency (European Commission, 2018).
Organizational and Individual Consequences of AI-Driven Workforce Planning
Organizational Performance Impacts
Empirical evidence on organizational outcomes is still emerging but increasingly robust. IBM reported that its AI-powered workforce planning system reduced time-to-fill critical roles by 30% and improved quality-of-hire metrics by identifying non-traditional candidate pools that traditional recruiters overlooked (Marr, 2020). Unilever's implementation of predictive attrition modeling enabled proactive retention interventions for high-risk technical talent, reducing regrettable turnover in engineering roles by 25% over 18 months (CIPD, 2021).
Financial impacts extend beyond recruitment efficiency. Organizations using predictive workforce planning report 15-20% reductions in contingent labor spending by optimizing workforce mix and improving capacity utilization forecasts (Boudreau & Cascio, 2017). More sophisticated adopters model the ROI of build-versus-buy talent decisions—comparing the total cost and time-to-productivity of external hiring against internal development programs, adjusted for attrition probabilities and skill transferability.
Strategic benefits appear in organizational agility and foresight. When Siemens modeled the workforce implications of their digital transformation roadmap, predictive analytics revealed that 40% of their industrial engineering workforce would require substantial reskilling within three years—a finding that accelerated investments in learning platforms and talent marketplace systems (Bersin, 2019). The ability to surface such insights 12-24 months before capability gaps materialize provides meaningful strategic lead time.
However, performance outcomes vary considerably with implementation quality. Organizations that treat AI-driven workforce planning as primarily a technology deployment rather than a sociotechnical system often see disappointing results. Models require continuous calibration as business strategies shift and labor markets evolve. Without change management and stakeholder engagement, analytically optimal recommendations face managerial resistance. The performance dividend emerges from the combination of predictive capability and organizational readiness to act on insights.
Stakeholder and Employee Impacts
For employees, the consequences of algorithmic workforce planning are double-edged. On the positive side, predictive models can identify development opportunities and career paths that individuals might not have envisioned. Machine learning systems analyzing skills adjacencies have surfaced non-obvious internal mobility opportunities, helping employees navigate lateral moves that position them for future growth (Cappelli, 2020). Some organizations use predictive analytics to personalize learning recommendations, directing employees toward capabilities their career trajectory and market trends suggest will be valuable.
Yet algorithmic opacity and perceived surveillance generate legitimate concerns. When employees learn that AI systems calculate their flight risk or promotion probability, yet are given no insight into the underlying logic, trust and psychological safety may erode. Research on algorithmic management indicates that employees experience greater stress and lower organizational commitment when they perceive evaluation systems as opaque or unfair, even when the systems are statistically accurate (Lee, 2018).
Privacy implications are particularly acute. Workforce planning models often incorporate sensitive data—health claims patterns, family status, compensation history, informal network connections—raising questions about consent and appropriate use. The European Union's General Data Protection Regulation establishes rights to explanation and human review of automated decisions affecting individuals, requirements that many workforce planning systems struggle to satisfy (European Commission, 2018).
Bias risks represent another critical concern. Because machine learning models learn patterns from historical data, they can perpetuate past discrimination unless explicitly designed to counteract it. A widely cited example involved Amazon's experimental recruiting algorithm, which penalized resumes containing the word "women's" because historical hiring patterns in technical roles skewed male—the company ultimately abandoned the system (Dastin, 2018). In workforce planning, similar dynamics can emerge in promotion predictions, succession planning, and development opportunity allocation if historical inequities shape training data.
Evidence-Based Organizational Responses
Integrated Data Infrastructure and Governance
Effective AI-driven workforce planning begins not with algorithms but with foundational data architecture. Organizations achieving meaningful results typically invest 12-18 months building integrated HR data platforms before deploying sophisticated predictive models. This infrastructure consolidates employee records, skills inventories, performance data, compensation information, and learning histories into unified systems with consistent taxonomies and data quality standards.
Schneider Electric's approach illustrates this foundation-first strategy. The company spent two years harmonizing HR data across 100+ countries and multiple legacy systems, establishing common job architectures and skills frameworks, before implementing predictive workforce models (Boudreau & Cascio, 2017). This groundwork enabled accurate modeling of global talent pools and skills transferability across regions—capabilities impossible with fragmented data.
Key infrastructure components include:
Unified data models that connect employee records, organizational structures, job taxonomies, and skills ontologies
Master data management processes ensuring consistent definitions, regular data quality audits, and clear ownership
Skills taxonomies that map both current employee capabilities and future strategic requirements using standardized frameworks
External data integration incorporating labor market analytics, competitive intelligence, and economic indicators
Privacy-preserving architectures implementing appropriate access controls, anonymization for aggregate analytics, and consent management
API ecosystems enabling data flow between HR systems, business planning platforms, and analytics environments
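A minimal sketch of the unified data model idea, in Python: the entities and field names (Skill, Role, EmployeeRecord, taxonomy_id, job_code) are illustrative assumptions, not any vendor's schema. The point is that once employee skills and role requirements share one taxonomy, skill-gap analysis reduces to a set difference.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    taxonomy_id: str          # key into a standardized skills framework
    name: str
    proficiency: int          # e.g. 1-5 assessed level

@dataclass
class Role:
    job_code: str             # key into the common job architecture
    title: str
    required_skills: list     # taxonomy_ids the role demands

@dataclass
class EmployeeRecord:
    employee_id: str
    role: Role
    skills: list = field(default_factory=list)

    def skill_gaps(self):
        """Taxonomy IDs the current role requires but the employee lacks."""
        have = {s.taxonomy_id for s in self.skills}
        return [sid for sid in self.role.required_skills if sid not in have]

role = Role("ENG-2", "Data Engineer", ["SQL", "PYTHON", "SPARK"])
emp = EmployeeRecord("E1001", role, [Skill("SQL", "SQL", 4), Skill("PYTHON", "Python", 3)])
print(emp.skill_gaps())  # → ['SPARK']
```

Without shared taxonomy keys across systems, this comparison is impossible, which is why the harmonization work described above precedes any modeling.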
Governance structures matter as much as technical infrastructure. Leading organizations establish cross-functional workforce planning councils that include HR, finance, business unit leaders, and data governance representatives. These bodies define acceptable use policies, approve model deployment, review algorithmic outputs for bias, and adjudicate conflicts between analytical recommendations and manager judgment (Huselid & Becker, 2011).
Algorithmic Transparency and Interpretability Protocols
As predictive workforce models influence consequential decisions—hiring priorities, development investments, restructuring plans—the ability to explain model logic becomes both an ethical imperative and a practical necessity for user adoption. Organizations successfully deploying AI-driven workforce planning implement transparency protocols that balance model sophistication with interpretability.
Microsoft's approach to their talent analytics platform emphasizes "glass box" models over "black box" algorithms where possible. For attrition prediction, they prioritize ensemble methods and regularized regression that surface feature importance—showing that factors like manager quality, promotion velocity, and compensation percentile drive predictions—rather than more accurate but opaque deep learning approaches (Marr, 2020). When complex models are necessary, they complement predictions with natural language explanations: "This employee's flight risk increased due to below-market compensation relative to peers, limited recent skill development, and a manager with high team turnover."
Effective transparency practices include:
Model documentation specifying training data sources, features, algorithms, validation methods, known limitations, and refresh frequency
Feature importance disclosure identifying which variables most influence predictions, enabling managers to understand and potentially address root causes
Confidence intervals communicating prediction uncertainty rather than false precision—"70-85% probability of achieving hiring target" versus "78.3% probability"
Counterfactual explanations showing what would need to change for different outcomes—"if this business unit improved manager quality scores, predicted attrition would decrease by X%"
Stakeholder testing with diverse employee groups to assess whether explanations are meaningful and culturally appropriate
Human review requirements for high-stakes decisions, ensuring algorithms inform rather than determine outcomes
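One way to implement feature-importance disclosure for a linear risk model is to report each feature's contribution relative to a typical employee, then sort by magnitude so managers see the dominant drivers first. The coefficients, baseline values, and feature names below are invented for illustration.

```python
def explain(weights, baseline, features, names):
    """Rank each feature's contribution to a linear risk score.

    Contribution = weight * (value - baseline), so each line reads as
    'this factor pushed risk up or down relative to a typical employee'.
    """
    contribs = {
        name: w * (x - b)
        for name, w, x, b in zip(names, weights, features, baseline)
    }
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

names = ["pay_vs_market", "months_since_promotion", "manager_team_turnover"]
weights = [-2.0, 0.05, 1.5]        # hypothetical model coefficients
baseline = [1.0, 12, 0.10]         # typical-employee reference values
employee = [0.85, 30, 0.40]        # below-market pay, stalled promotion, churning team

for name, c in explain(weights, baseline, employee, names):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name} {direction} predicted flight risk by {abs(c):.2f}")
```

The same contribution arithmetic also yields counterfactuals: setting one feature back to its baseline and re-summing shows how much the prediction would change, which is exactly the "what would need to change" framing described above.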
Unilever's implementation of AI-driven campus recruiting illustrates transparency in practice. Their system uses gamified assessments and video interviews analyzed by machine learning to screen candidates. However, the company provides candidates with feedback on assessment dimensions and ensures human recruiters make final decisions, with the algorithm serving as decision support rather than replacement (CIPD, 2021).
Bias Detection and Algorithmic Fairness Interventions
Addressing algorithmic bias requires proactive design choices, ongoing monitoring, and correction mechanisms throughout the model lifecycle. Organizations at the frontier of responsible AI-driven workforce planning implement multi-layered fairness interventions, recognizing that technical solutions alone cannot guarantee equitable outcomes.
Pre-processing interventions address bias in training data. When Hilton examined historical promotion patterns to build succession planning models, they discovered that women and employees of color were systematically underrepresented in certain leadership pipelines—not due to performance differences, but due to historical access barriers to developmental assignments. Rather than training models on biased historical patterns, they reweighted training data and supplemented it with external benchmarks for balanced representation (Bersin, 2019).
Comprehensive bias mitigation strategies include:
Diverse data collection ensuring training datasets represent the full spectrum of employee demographics, backgrounds, and career paths
Sensitive attribute analysis testing model predictions across protected groups to identify disparate impact before deployment
Fairness constraints encoding equity requirements directly into optimization algorithms—for example, requiring that promotion predictions achieve demographic parity across gender and ethnicity
Adversarial debiasing using machine learning techniques that penalize models for making predictions correlated with protected attributes
Regular bias audits conducted by independent teams examining model outputs, interviewing affected employees, and testing for unintended consequences
Stakeholder input from diversity councils, employee resource groups, and ethics committees during model design and refinement
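The sensitive-attribute analysis step can be automated with a simple disparate-impact check. The sketch below applies the EEOC four-fifths rule to a model's predicted promotion decisions; the group labels and decision lists are synthetic, and a real audit would test every protected attribute with properly sized samples.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = favorable)."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag adverse impact when a group's favorable rate falls below
    `threshold` times the highest group's rate (the four-fifths rule).
    Returns True for groups that pass, False for groups that fail."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top) >= threshold for g, r in rates.items()}

predicted_promotions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],   # 80% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],   # 40% favorable
}
print(four_fifths_check(predicted_promotions))
# group_b fails: 0.40 / 0.80 = 0.5, below the 0.8 threshold
```

Running a check like this on every candidate model before deployment, and again on live outputs at regular intervals, is the mechanical core of the bias audits described above.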
IBM's implementation of AI-driven career pathing demonstrates ongoing monitoring. The company established a Talent Analytics Review Board that quarterly examines whether development opportunity recommendations show demographic skew, whether employees from different backgrounds receive comparable coaching, and whether the algorithm surfaces diverse candidate slates for leadership roles. When disparities emerge, the team investigates root causes—sometimes in the algorithm, sometimes in underlying organizational practices the algorithm accurately reflects—and implements corrections (Marr, 2020).
Change Management and Human-Centered Implementation
Technical sophistication in predictive models means little if organizational stakeholders do not understand, trust, or act on insights. Organizations achieving value from AI-driven workforce planning invest as much in change management and user experience as in algorithms, recognizing that adoption depends on manager capability and employee acceptance.
When Cisco deployed predictive workforce planning across global operations, they designed a phased rollout focused on building internal capability. Rather than immediately deploying sophisticated models enterprise-wide, they started with a single business unit, embedded analytics specialists within HR business partner teams, and conducted extensive training on interpreting model outputs. Managers learned not just what the predictions indicated, but how to have conversations with employees about development needs the models surfaced (Boudreau & Cascio, 2017).
Human-centered implementation approaches include:
Co-design processes involving managers, employees, and HR partners in defining use cases, prioritizing features, and testing prototypes
Capability building training HR professionals and business leaders in data literacy, model interpretation, and evidence-based decision-making
Gradual automation starting with decision support and augmentation before moving to automated recommendations for high-stakes choices
Feedback mechanisms enabling users to challenge predictions, report errors, and suggest improvements—creating learning loops that improve both models and organizational processes
Use case prioritization beginning with low-stakes, high-frequency decisions where errors have limited consequences, building credibility before addressing promotion, compensation, or restructuring
Communication strategies explaining the purpose, benefits, and limitations of workforce planning systems to employees, emphasizing human oversight and agency
Procter & Gamble's rollout of predictive talent analytics emphasized manager empowerment rather than automation. The system provides hiring managers with candidate quality predictions and diversity metrics, but requires managers to document their reasoning when deviating from recommendations. This "human-in-the-loop" approach preserved managerial autonomy while creating accountability and generating data that improved subsequent model iterations (CIPD, 2021).
Continuous Learning and Model Operations
AI-driven workforce planning systems require ongoing maintenance and evolution as business strategies shift, labor markets change, and organizational contexts evolve. Leading organizations establish model operations (MLOps) capabilities that treat predictive workforce analytics as living systems requiring continuous improvement rather than one-time deployments.
Attrition prediction models exemplify this need. A model trained on pre-pandemic employee behavior performed poorly during 2020-2021 as remote work, family responsibilities, and market volatility altered turnover patterns. Organizations with robust model operations detected this performance degradation through monitoring dashboards, retrained models on recent data incorporating new features (remote work status, caregiving responsibilities), and validated updated predictions before redeployment (Davenport et al., 2010).
Model operations practices include:
Performance monitoring tracking prediction accuracy, calibration, and fairness metrics on an ongoing basis with automated alerts for degradation
Concept drift detection identifying when the relationships between features and outcomes shift, requiring model retraining
Regular retraining schedules updating models with recent data at defined intervals (monthly, quarterly) to maintain relevance
A/B testing comparing new model versions against current production systems before full deployment
Feedback integration incorporating manager input, employee outcomes, and business results to improve feature engineering and model design
Documentation and versioning maintaining clear records of model changes, performance over time, and lessons learned
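Concept drift detection is often operationalized with the Population Stability Index (PSI), which compares a feature's distribution at training time against the current population. The tenure data below is invented, echoing the pre-pandemic example above; a common rule of thumb treats PSI above 0.25 as drift large enough to warrant retraining.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a training-time feature
    distribution (`expected`) and the current one (`actual`)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")              # catch values above the training range

    def frac(data):
        counts = [0] * bins
        for x in data:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1            # values below the training range
        return [max(c / len(data), 1e-4) for c in counts]  # floor avoids log(0)

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

pre_pandemic_tenure = [1, 2, 2, 3, 3, 3, 4, 4, 5, 6]
current_tenure = [1, 1, 1, 1, 2, 2, 2, 3, 3, 8]   # workforce skews junior
drift = psi(pre_pandemic_tenure, current_tenure)
print(f"PSI = {drift:.3f}, retrain = {drift > 0.25}")
```

A monitoring dashboard would compute PSI per feature on a schedule and raise the automated alerts described above whenever any feature crosses the threshold.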
General Electric's approach to workforce planning analytics includes dedicated MLOps teams that manage model lifecycle. They maintain parallel model versions, continuously test experimental features, and conduct quarterly business reviews assessing whether predictions aligned with actual hiring needs, attrition patterns, and skills gaps. This discipline enabled rapid adaptation when the company's strategic pivot toward digital industrial solutions required wholesale recalibration of skills forecasts (Marr, 2020).
Building Long-Term Workforce Intelligence Capability
Analytics Capability and Talent Development
Sustainable value from AI-driven workforce planning requires building internal capability rather than relying solely on external vendors or centralized data science teams. Organizations developing this competency invest in upskilling HR professionals, business leaders, and employees in data literacy, analytical thinking, and evidence-based decision-making.
Progressive organizations are creating hybrid roles that combine HR expertise with analytical skills. Talent analytics business partners sit within business units, translating strategic questions into analytical inquiries, interpreting model outputs in organizational context, and helping leaders act on insights. These roles require both technical competence—understanding model assumptions, recognizing bias, querying data—and deep business acumen about talent dynamics in specific functions and markets (Huselid & Becker, 2011).
Development pathways for workforce analytics capability typically include formal training in statistics, data visualization, and machine learning fundamentals; exposure to analytical tools and platforms; and apprenticeship models where less experienced practitioners work alongside senior analysts on real business problems. Some organizations establish internal certification programs, communities of practice, and knowledge-sharing platforms that democratize analytical capability across HR and business functions.
Johnson & Johnson's Workforce Analytics Academy illustrates this capability-building approach. The program offers modular curricula for different roles—foundational data literacy for all HR professionals, intermediate predictive analytics for talent acquisition and planning leaders, and advanced machine learning for specialized analysts. Participants complete project-based learning using actual company data, with successful projects piloted in their business units (Bersin, 2019).
Investment in tools and platforms represents another dimension of capability building. User-friendly workforce planning platforms with intuitive interfaces, guided analytics workflows, and built-in visualization lower barriers to adoption. Organizations increasingly favor low-code/no-code analytics environments that enable HR professionals to build dashboards, run standard reports, and test simple scenarios without requiring programming skills, while preserving the ability for specialists to develop sophisticated custom models.
Ecosystem Integration and Strategic Alignment
Workforce planning achieves maximum value when integrated within broader talent ecosystems and aligned with business strategy, rather than operating as an isolated analytics function. Leading organizations connect predictive workforce models to talent acquisition systems, learning platforms, performance management, succession planning, and strategic workforce design—creating closed-loop systems where insights drive action and outcomes inform future predictions.
Strategic alignment requires ongoing dialogue between workforce planning teams and business leadership. At quarterly business reviews, rather than presenting historical HR metrics, progressive organizations model workforce implications of strategic scenarios: market expansion plans, merger integration, technology platform migrations, new product launches. These forward-looking analyses surface talent constraints, investment tradeoffs, and capability-building timelines that inform strategic choices.
Integration touchpoints include:
Strategic planning cycles incorporating workforce scenarios into annual planning, stress-testing talent availability against growth assumptions
Talent marketplace platforms using predictive analytics to match employees with internal opportunities, projects, and development experiences aligned with forecasted capability needs
Learning systems personalizing development recommendations based on individual career trajectories, skills adjacencies, and predicted future demand
Succession planning identifying bench strength gaps, development needs for high-potential talent, and vulnerable roles years before transitions
Workforce design initiatives modeling optimal organizational structures, span of control, role definitions, and geographic distribution using simulation and optimization
M&A integration assessing cultural fit, identifying retention risks, and planning capability integration using predictive models of talent movement
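The scenario stress-testing described above can start very simply: project headcount forward under assumed attrition and hiring rates, then compare the path against the strategic target. All numbers below are hypothetical.

```python
def project_headcount(start, quarterly_attrition, hires_per_quarter, quarters):
    """Project headcount forward: each quarter loses a fraction to
    attrition, then adds planned hires. Returns the headcount path."""
    path, hc = [start], float(start)
    for q in range(quarters):
        hc = hc * (1 - quarterly_attrition) + hires_per_quarter[q]
        path.append(round(hc))
    return path

# Scenario: a 200-person engineering org with 5% quarterly attrition,
# where the growth plan calls for 260 engineers within four quarters.
baseline = project_headcount(200, 0.05, [10, 10, 10, 10], 4)
aggressive = project_headcount(200, 0.05, [25, 25, 25, 25], 4)
print(baseline)    # hiring 10/quarter only holds headcount flat
print(aggressive)  # even 25/quarter falls short of the 260 target
```

Even this toy model surfaces the kind of constraint the text describes: attrition consumes most of the hiring budget, so the 260-person target forces either faster hiring, lower attrition, or a longer timeline.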
Mastercard's integration of workforce planning with business strategy illustrates this ecosystem approach. Their talent analytics team participates in strategic planning sessions, modeling the workforce implications of geographic expansion, digital product development, and partnership strategies. When analysis revealed insufficient data science capability for planned initiatives, the company launched a multi-year build-and-buy strategy combining aggressive external hiring, university partnerships, and internal reskilling programs calibrated to forecasted demand curves (Boudreau & Cascio, 2017).
Ethical Frameworks and Governance Evolution
As AI-driven workforce planning systems grow more sophisticated and influential, organizations require robust ethical frameworks and governance structures that evolve alongside technological capability. Forward-thinking organizations establish principles-based approaches that balance optimization with human dignity, efficiency with fairness, and prediction with employee agency.
Ethical frameworks for workforce analytics typically address several dimensions: consent and privacy (what data can be collected and how it can be used), transparency (what employees know about systems affecting them), fairness (how to ensure equitable treatment), and human oversight (what decisions require human judgment). These principles get operationalized through governance processes, technical controls, and accountability mechanisms.
Emerging governance practices include:
Ethics review boards with diverse membership—employees, HR, legal, data governance, external experts—that review high-impact workforce planning use cases
Employee data rights enabling individuals to understand what data is collected, how it influences decisions affecting them, and mechanisms to challenge or correct information
Purpose limitation restricting use of workforce data to defined, legitimate purposes with prohibitions on mission creep
Automated decision logging creating audit trails when algorithmic recommendations influence consequential outcomes (hiring, promotion, restructuring)
Regular ethical audits examining workforce planning systems for unintended harms, disparate impact, and alignment with organizational values
Sunset provisions requiring periodic reauthorization of workforce planning systems, forcing deliberate decisions to continue rather than perpetuate by default
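Automated decision logging can be sketched as an append-only, hash-chained audit trail: chaining each entry to the previous one makes after-the-fact tampering detectable during an audit, and flagging overrides creates the accountability data described above. Field names and the model version string are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only audit trail for algorithm-assisted talent decisions."""

    def __init__(self):
        self.entries = []

    def record(self, decision_type, model_version, recommendation,
               human_decision, reviewer):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "type": decision_type,
            "model": model_version,
            "recommended": recommendation,
            "decided": human_decision,
            "reviewer": reviewer,                         # accountable human
            "overridden": recommendation != human_decision,
            "prev": prev,                                 # hash chain link
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

log = DecisionLog()
log.record("promotion_shortlist", "attrition-v3.2", "include", "include", "j.smith")
e = log.record("promotion_shortlist", "attrition-v3.2", "exclude", "include", "j.smith")
print(e["overridden"], len(log.entries))  # overrides are flagged for review
```

Verifying the chain during an audit means recomputing each entry's hash and checking it against the next entry's `prev` field; any edited entry breaks the chain from that point forward.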
Microsoft's approach to responsible AI in workforce systems emphasizes accountability through their AI, Ethics, and Effects in Engineering and Research (Aether) Committee, which reviews workforce planning use cases alongside other AI applications. The company published internal guidelines specifying that algorithmic workforce decisions require human review, that employees have rights to understand factors influencing their career opportunities, and that systems must be regularly audited for bias (Marr, 2020).
Some organizations are exploring participatory governance models where employee representatives have formal input into workforce planning system design and deployment. This approach recognizes that those most affected by algorithmic decisions should have voice in shaping systems, improving both ethical outcomes and organizational legitimacy.
Conclusion
AI-driven workforce planning represents a fundamental evolution in how organizations anticipate and respond to talent needs, moving from reactive headcount management toward predictive capability architecture. The convergence of machine learning maturity, data availability, and acute talent market pressures has created both opportunity and imperative for this transformation. Organizations successfully implementing these capabilities demonstrate measurable improvements in recruitment efficiency, retention of critical talent, strategic foresight, and workforce agility.
However, realizing this value requires more than deploying sophisticated algorithms. The evidence across sectors points to common success factors: foundational investment in integrated data infrastructure; algorithmic transparency and interpretability that builds user trust; proactive bias detection and fairness interventions; human-centered implementation that prioritizes adoption and capability building; continuous model operations that adapt to changing contexts; ecosystem integration that connects workforce planning to broader talent systems; and ethical governance frameworks that balance optimization with human dignity.
The organizations profiled here—from technology firms to manufacturers to healthcare systems—share an approach that treats AI-driven workforce planning as a sociotechnical system requiring attention to technology, process, people, and values. They recognize that predictive models inform rather than determine talent decisions, that historical data can perpetuate past inequities unless deliberately addressed, and that employee trust depends on transparency, fairness, and meaningful human oversight.
Looking forward, workforce planning systems will likely grow more sophisticated, incorporating richer data sources (skills inferred from work products, network analytics, external labor market signals), more advanced techniques (reinforcement learning for optimal talent allocation, causal inference for intervention design), and broader scope (ecosystem talent beyond employees, continuous workforce redesign). The ethical and governance challenges will evolve in parallel, requiring ongoing attention to algorithmic accountability, employee rights, and equitable outcomes.
For HR leaders and executives, the imperative is clear: develop these capabilities intentionally, implement them responsibly, and integrate them strategically. The competitive advantage increasingly belongs not to organizations with the most advanced algorithms, but to those that combine predictive intelligence with organizational wisdom, leveraging AI to enhance rather than replace human judgment in stewarding their most critical resource—their people.
References
Bersin, J. (2019). HR technology 2020: The 5 trends that matter. Josh Bersin Academy.
Bersin, J., Chamorro-Premuzic, T., Stempel, J., & Macoukji, N. (2019). The case for AI in HR. MIT Sloan Management Review, 60(4), 1-6.
Boudreau, J. W., & Cascio, W. F. (2017). Human capital analytics: Why are we not there? Journal of Organizational Effectiveness: People and Performance, 4(2), 119-126.
Cappelli, P. (2020). The future of the office: Work from home, remote work, and the hard choices we all face. Wharton School Press.
CIPD. (2021). People analytics: Driving business performance with people data. Chartered Institute of Personnel and Development.
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
Davenport, T. H., Harris, J., & Shapiro, J. (2010). Competing on talent analytics. Harvard Business Review, 88(10), 52-58.
Deloitte. (2022). 2022 Global Human Capital Trends: The social enterprise in a world disrupted. Deloitte Insights.
European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L 119.
Huselid, M. A., & Becker, B. E. (2011). Bridging micro and macro domains: Workforce differentiation and strategic human resource management. Journal of Management, 37(2), 421-428.
Korn Ferry. (2022). The talent shift: How the war for talent is changing the global economy. Korn Ferry Institute.
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1-16.
Marr, B. (2020). Artificial intelligence in practice: How 50 successful companies used AI and machine learning to solve problems. Wiley.
World Economic Forum. (2020). The Future of Jobs Report 2020. World Economic Forum.

Jonathan H. Westover, PhD, is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); and OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). AI-Driven Workforce Planning: Predictive Models for Future Talent Needs. Human Capital Leadership Review, 26(4). doi.org/10.70175/hclreview.2020.26.4.6.1