Applied Agentic AI for Organizational Transformation
- Jonathan H. Westover, PhD
Abstract: Organizations increasingly deploy agentic artificial intelligence systems—autonomous or semi-autonomous agents capable of perceiving environments, making decisions, and executing tasks with minimal human intervention. Unlike traditional automation or generative AI tools, agentic AI operates with goal-directed independence across workflows, customer interactions, and strategic processes. This shift introduces profound transformation challenges spanning governance, workforce dynamics, operational risk, and organizational culture. Drawing on organizational change theory, sociotechnical systems research, and emerging practitioner evidence, this article examines the landscape of agentic AI adoption, quantifies its organizational and individual impacts, and synthesizes evidence-based responses across communication, capability building, governance frameworks, and workforce support. The analysis integrates real-world implementations from healthcare, financial services, and manufacturing to provide actionable pathways for leaders navigating this transformation while preserving human agency, trust, and organizational resilience.
The arrival of large language models and advanced AI systems has catalyzed a fundamental shift in how organizations conceptualize automation. While earlier waves of digital transformation focused on robotic process automation (RPA) and decision support tools, contemporary agentic AI systems exhibit goal-directed behavior, adaptive learning, and cross-functional autonomy that fundamentally alter work structure and organizational boundaries (Raisch & Krakowski, 2021). These systems negotiate supplier contracts, triage customer service escalations, optimize supply chains in real time, and increasingly participate in creative and knowledge work previously considered immune to automation.
This transformation arrives at a moment of heightened organizational volatility. Global productivity growth has stagnated even as technological capabilities accelerate, creating what some scholars term a "productivity paradox" (Brynjolfsson et al., 2021). Simultaneously, workforce expectations around autonomy, meaning, and job security have intensified, particularly following pandemic-era disruptions (Kuhn et al., 2020). Organizations deploying agentic AI therefore navigate overlapping challenges: capturing productivity gains, managing workforce anxiety and displacement, establishing governance for semi-autonomous systems, and maintaining stakeholder trust.
The practical stakes are substantial. Early enterprise surveys suggest 60-70% of organizations have piloted or deployed some form of agentic AI, yet fewer than 30% report formalized governance frameworks or change management protocols specifically designed for autonomous systems (Deloitte, 2023). This gap between adoption velocity and organizational readiness creates risks spanning algorithmic bias, operational brittleness, employee disengagement, and reputational damage. Addressing these challenges requires integrating insights from organizational change management, human-computer interaction, AI ethics, and sociotechnical systems design—domains that have historically operated in parallel rather than in concert.
The Agentic AI Landscape
Defining Agentic AI in Organizational Contexts
Agentic AI refers to computational systems that exhibit goal-directed autonomy, environmental perception, adaptive decision-making, and action execution across extended task sequences with reduced human oversight (Russell & Norvig, 2021). Unlike traditional automation that follows predetermined rules or supervised learning models that require human confirmation, agentic systems operate within bounded domains to achieve specified objectives through independent planning and execution.
Key characteristics distinguish agentic AI from adjacent technologies (a code sketch after this list shows how they compose into a control loop):
Goal orientation: Systems pursue defined outcomes rather than merely responding to inputs
Environmental sensing: Continuous monitoring and interpretation of dynamic contexts
Adaptive planning: Ability to revise strategies based on feedback and changing conditions
Multi-step execution: Completion of complex task sequences without step-by-step human direction
Learning loops: Improvement through experience, though not necessarily through classical machine learning
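To make these characteristics concrete, the following minimal sketch shows how they compose into a perceive-plan-act-learn control loop. It is an illustration only, assuming a toy numeric environment; the class and method names are hypothetical rather than any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative agentic control loop. All names are hypothetical,
    not a real framework's API."""
    goal: float                       # goal orientation: a target to pursue
    history: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        # Environmental sensing: read the current state of a dynamic context.
        return {"value": environment["value"]}

    def plan(self, observation: dict) -> list:
        # Adaptive planning: revise the action sequence based on the gap
        # between the observed state and the goal.
        gap = self.goal - observation["value"]
        step = 1 if gap > 0 else -1
        return [step] * min(abs(int(gap)), 3)  # plan at most three steps ahead

    def act(self, environment: dict, actions: list) -> None:
        # Multi-step execution: carry out the plan without per-step approval.
        for action in actions:
            environment["value"] += action

    def learn(self, observation: dict) -> None:
        # Learning loop: record outcomes to inform future planning.
        self.history.append(observation)

    def run(self, environment: dict, max_cycles: int = 10) -> dict:
        # Iterate until the objective is met or the cycle budget runs out.
        for _ in range(max_cycles):
            observation = self.perceive(environment)
            if observation["value"] == self.goal:
                break
            self.act(environment, self.plan(observation))
            self.learn(observation)
        return environment

print(Agent(goal=7).run({"value": 0}))  # -> {'value': 7}
```

Production agents replace the toy planner with learned policies or model-based planning, but the loop structure (sense, plan, act, learn) remains the organizing pattern.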
In practice, organizational agentic AI manifests across a spectrum. At the lower end, email management agents filter, prioritize, and draft routine responses. Mid-range applications include procurement agents that research suppliers, negotiate terms, and execute contracts within defined parameters. Advanced implementations involve strategic agents that recommend market entry strategies, design organizational restructuring scenarios, or manage cross-functional project portfolios (Choudhury et al., 2023).
Prevalence, Drivers, and Distribution
Adoption patterns vary significantly by industry, function, and organizational maturity. Technology sector organizations lead deployment, with financial services, healthcare, and manufacturing following at varying intervals. Customer service represents the most common entry point, where conversational agents now handle 40-60% of initial inquiries in large enterprises (Gartner, 2023). Back-office functions including accounts payable, procurement, and HR administration show accelerating adoption, while knowledge-intensive domains like legal research, clinical decision support, and strategic planning demonstrate emerging but more cautious implementation.
Several forces drive this acceleration:
Economic pressure: Organizations face persistent cost pressures combined with talent scarcity in specialized roles. Agentic AI offers potential productivity multipliers—early implementations report 30-50% efficiency gains in targeted processes, though these figures often exclude transformation costs and quality adjustments (McKinsey, 2023); a back-of-envelope sketch after this list illustrates how much those exclusions can matter.
Technological maturation: The convergence of large language models, reinforcement learning, and robotic process automation has reduced technical barriers. Organizations can now deploy sophisticated agents using commercial platforms rather than building custom systems from scratch.
Competitive dynamics: First-mover advantages in customer experience and operational efficiency create adoption pressures. Organizations observe competitors deploying AI agents and fear falling behind, even when internal capabilities remain underdeveloped.
Workflow complexity: Modern organizations operate across fragmented systems, geographies, and partnerships. Agentic AI promises integration across these boundaries, coordinating activities that human workers find increasingly burdensome.
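Returning to the economic-pressure point above, a back-of-envelope sketch shows why headline efficiency gains and net gains diverge; every figure below is hypothetical.

```python
# Hypothetical figures: headline vs. net efficiency gain in year one.
baseline_cost = 1_000_000       # annual cost of the target process
headline_gain = 0.40            # reported 40% efficiency improvement
transformation_cost = 250_000   # integration, training, governance (year one)
quality_rework = 0.05           # share of gross savings lost to error correction

gross_savings = baseline_cost * headline_gain                 # 400,000
net_savings = gross_savings * (1 - quality_rework) - transformation_cost
print(f"net year-one savings: {net_savings:,.0f}")            # 130,000
print(f"net gain rate: {net_savings / baseline_cost:.0%}")    # 13%
```

Under these assumptions, a 40% headline gain shrinks to roughly a 13% net gain in the first year, illustrating how figures that exclude transformation costs can overstate the business case.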
Despite these drivers, adoption remains uneven. Organizations with strong data infrastructure, change management capabilities, and risk governance demonstrate more successful implementations. Conversely, organizations lacking these foundations frequently encounter deployment failures, integration challenges, and workforce resistance (Fountaine et al., 2019).
Organizational and Individual Consequences of Agentic AI Adoption
Organizational Performance Impacts
The performance implications of agentic AI extend beyond simple productivity metrics to encompass quality, agility, risk exposure, and innovation capacity. Research and practitioner evidence reveal a complex picture with both substantial opportunities and significant hazards.
Efficiency and cost outcomes show measurable gains in controlled implementations. A systematic analysis of early enterprise deployments found that organizations achieved average productivity improvements of 35-40% in processes fully transitioned to agentic systems, with time-to-completion reductions of 50-70% for routine tasks (Brynjolfsson & McElheran, 2022). However, these gains concentrate in structured, high-volume processes. Knowledge-intensive and creative tasks show smaller improvements and, in some cases, outright degradation when human expertise is withdrawn prematurely.
Quality and error dynamics present a more nuanced picture. Agentic systems excel at consistency, eliminating variability in routine execution. Yet they introduce new failure modes: context misinterpretation, edge case mishandling, and cascading errors when mistakes propagate across automated workflows. Financial services implementations have documented both dramatic reductions in processing errors (down 60-80% for data entry and reconciliation) and new categories of systemic risk when agents operate with insufficient guardrails (Pasquale, 2020).
Agility and innovation capacity depend critically on implementation approach. Organizations that deploy agentic AI to augment human judgment—freeing knowledge workers from routine tasks to focus on complex problem-solving—report accelerated innovation cycles and improved strategic responsiveness. Conversely, organizations that use agentic AI primarily for cost reduction and headcount elimination often experience reduced innovation capacity as tacit knowledge erodes and workforce engagement declines (Acemoglu & Restrepo, 2020).
Strategic risks cluster around several themes:
Algorithmic brittleness: Systems optimized for historical patterns fail when environments shift, potentially more catastrophically than human-led processes
Accountability diffusion: Distributed decision-making across human-AI configurations obscures responsibility for errors and unintended consequences
Competitive convergence: When competitors deploy similar agentic systems trained on comparable data, differentiation erodes and industries risk strategic homogenization
Regulatory exposure: Autonomous systems may violate regulations designed for human-led processes, particularly in healthcare, finance, and public services
Individual Wellbeing and Workforce Impacts
For individual workers, agentic AI introduces both opportunities and threats that vary systematically by role, skill level, and organizational context. Unlike previous automation waves that primarily affected manual and routine cognitive work, agentic systems increasingly reach into knowledge work, creative domains, and interpersonal roles previously considered automation-resistant.
Job displacement concerns dominate public discourse, and evidence suggests these anxieties have empirical foundation. Econometric models estimate that 25-30% of current work tasks could be fully automated by agentic AI systems within a decade, with higher exposure in administrative, customer service, and entry-level knowledge work roles (Frey & Osborne, 2017, updated projections). However, job-level impacts depend on task composition and organizational choices. Organizations can deploy agentic AI to eliminate positions, or alternatively to augment workers by handling routine elements while preserving human employment for higher-value activities.
Skills obsolescence accelerates as agentic systems assume tasks that previously required years of training. Entry-level positions that traditionally served as learning grounds—junior analyst roles, basic customer service, routine legal research—face significant displacement risk. This compression of career pathways particularly affects younger workers and those pursuing professional credentials, potentially undermining long-term talent development (Autor et al., 2022).
Psychological and wellbeing impacts extend beyond employment security. Workers who collaborate with agentic systems report mixed experiences:
Autonomy erosion: Feeling surveilled or micromanaged when AI systems monitor performance and direct workflow
Deskilling anxiety: Concern that overdependence on AI assistance will degrade professional capabilities
Attribution ambiguity: Uncertainty about credit and accountability when human-AI teams produce outcomes
Meaning and identity: Questions about professional purpose when signature tasks become automated
Conversely, some workers experience agentic AI as liberating, particularly when systems handle tedious work and enable focus on challenging, creative, or interpersonal activities that provide greater satisfaction. Healthcare clinicians using AI agents for documentation and routine patient communications report improved patient interaction quality and reduced administrative burden (Topol, 2019).
Distributional equity concerns arise across multiple dimensions. Higher-skilled workers with strong AI literacy can leverage agentic systems to multiply productivity and command premium compensation, while lower-skilled workers face displacement or relegation to managing AI systems at lower wages. This dynamic risks exacerbating existing inequalities. Similarly, organizations with resources to invest in thoughtful implementation support workforce transitions, while resource-constrained organizations may pursue aggressive cost reduction that harms workers and communities.
Evidence-Based Organizational Responses
Organizations that successfully navigate agentic AI transformation share common approaches: transparent communication, meaningful stakeholder participation, capability investment, robust governance, and attention to human factors. The following sections synthesize evidence-based interventions with concrete organizational examples.
Transparent Communication and Narrative Management
Communication strategy fundamentally shapes how workers, customers, and stakeholders interpret agentic AI deployment. Research on organizational change consistently demonstrates that transformation success depends not on change content alone but on how change is framed, communicated, and experienced (Kotter, 2012). For agentic AI specifically, communication must address both rational concerns (job security, skill requirements, decision authority) and emotional responses (anxiety, identity threat, loss of control).
Effective communication approaches include:
Early and repeated disclosure: Announcing AI deployment plans before implementation, with regular updates as systems evolve
Specificity over generality: Detailing which tasks will shift to AI, which remain human-led, and which become collaborative
Two-way dialogue: Creating forums for workers to ask questions, voice concerns, and influence implementation rather than one-way announcements
Honest uncertainty: Acknowledging that outcomes remain partially unknown rather than overpromising or obscuring risks
Positive framing with credible protection: Positioning AI as augmentation opportunity while providing concrete commitments on retraining and transition support
Unilever implemented a comprehensive communication strategy when deploying AI agents across recruitment, supply chain, and customer service. The company established "AI champions" in each function—respected employees who received advanced training and served as peer communicators. These champions facilitated small-group sessions where colleagues could explore AI capabilities hands-on, ask questions, and discuss concerns. Simultaneously, leadership committed to a "no redundancy from AI alone" policy for the first two implementation years, instead redeploying affected workers to roles supporting AI oversight, training new systems, or expanding into growth areas. Employee sentiment surveys showed anxiety levels decreased 40% over six months following this approach, while AI adoption rates accelerated (Unilever, 2022, internal communications).
Participatory Design and Procedural Justice
How organizations make decisions about agentic AI deployment affects both outcomes and acceptance. Procedural justice theory demonstrates that people judge fairness not only by outcomes but by the process through which decisions occur (Folger & Cropanzano, 2001). Workers who participate meaningfully in AI system design show higher acceptance, better system utilization, and more constructive feedback that improves implementation quality.
Participatory approaches include:
Cross-functional design teams: Including frontline workers, middle managers, IT specialists, and executives in system requirements and deployment planning
User testing with real workers: Piloting AI agents with actual end-users rather than controlled laboratory settings, incorporating feedback before full deployment
Opt-in rollouts: Allowing workers to volunteer for early AI collaboration, building positive examples before mandatory adoption
Continuous co-design: Treating AI systems as evolving tools that workers help refine rather than fixed technologies imposed from above
Veto or modification rights: Empowering workers to override AI recommendations when professional judgment warrants, preserving human agency
Cleveland Clinic adopted participatory design when implementing clinical decision support agents for diagnostic assistance and treatment planning. The organization formed design teams combining physicians, nurses, data scientists, and patient advocates. These teams specified when AI agents should offer recommendations versus remain silent, what explanations clinicians needed to trust suggestions, and how recommendations would integrate into existing workflows. Critically, the organization preserved physician authority to accept, modify, or reject AI guidance without penalty. Post-implementation studies found diagnostic accuracy improved 18% for complex cases, while physician satisfaction with the technology reached 78%—significantly higher than typical clinical IT implementations (Obermeyer et al., 2019).
Workforce Capability Building and Transition Support
Successful agentic AI adoption requires systematic investment in workforce capabilities that complement AI systems. Rather than simply training workers to operate AI tools, effective programs develop higher-order skills that AI systems cannot easily replicate: creative problem-solving, complex communication, ethical judgment, and cross-domain synthesis (Manyika et al., 2017).
Capability building initiatives include:
AI literacy programs: Teaching employees how agentic systems work, their capabilities and limitations, and how to effectively collaborate with AI
Complementary skill development: Building capabilities that increase in value alongside AI, such as stakeholder management, creative strategy, ethical reasoning, and novel problem framing
Role redesign workshops: Facilitating teams to reimagine job responsibilities, shifting routine work to AI while expanding human focus on complex, meaningful activities
Transition pathways: Creating internal mobility programs that move workers from high-automation-risk roles to positions requiring human strengths
Psychological resilience training: Supporting workers' emotional adaptation to changing roles and relationships with technology
Siemens developed a comprehensive "AI Readiness Academy" when deploying predictive maintenance agents and autonomous quality control systems across manufacturing facilities. The program included three components: technical modules teaching workers how to interpret AI outputs and adjust system parameters; strategic modules developing business judgment to evaluate when AI recommendations align with broader objectives; and leadership modules preparing managers to oversee human-AI teams. Over 15,000 employees participated in the first two years. Follow-up analysis found that facilities with high Academy participation achieved 60% greater productivity gains from AI deployment compared to facilities with low participation, while reporting 35% lower employee turnover (Siemens, 2023, sustainability report).
Governance Frameworks and Algorithmic Accountability
Agentic AI introduces novel governance challenges because decision authority becomes distributed across human and machine actors. Traditional corporate governance assumes human accountability chains, while agentic systems can initiate actions, commit resources, and impact stakeholders without direct human oversight of each decision. Organizations therefore require new governance structures that maintain accountability while preserving AI efficiency benefits (Cihon et al., 2021).
Effective governance mechanisms include the following (the sketch after this list shows how the first three can be combined in code):
Bounded autonomy frameworks: Explicitly defining decision domains where AI agents can act independently versus situations requiring human approval
Audit trails and explainability: Ensuring AI agent actions are logged, traceable, and accompanied by rationale that humans can review
Human-in-the-loop checkpoints: Requiring human verification at critical junctures (high-value transactions, irreversible actions, novel situations)
Performance monitoring dashboards: Real-time tracking of AI agent decisions with alerts for anomalies, errors, or drift from expected patterns
Escalation protocols: Clear pathways for humans to intervene, override, or shut down AI systems when problems emerge
Ethics review boards: Cross-functional committees that evaluate AI deployment plans against organizational values, stakeholder impacts, and societal consequences
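As an illustration of how the first three mechanisms (bounded autonomy, audit trails, human-in-the-loop checkpoints) can compose in a single dispatch layer, consider the following minimal sketch; the thresholds, class names, and log fields are hypothetical rather than drawn from any named deployment.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernanceGate:
    """Illustrative bounded-autonomy gate with hypothetical thresholds."""
    autonomous_limit: float = 1_000.0   # agent acts alone below this value
    review_limit: float = 25_000.0      # post-hoc human review below this value
    audit_log: list = field(default_factory=list)

    def dispatch(self, action: str, value: float, rationale: str) -> str:
        # Audit trail: every proposed action is logged with its rationale
        # before it is routed.
        decision = self._route(value)
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "value": value,
            "rationale": rationale,
            "decision": decision,
        })
        return decision

    def _route(self, value: float) -> str:
        # Bounded autonomy: explicit decision domains by transaction value.
        if value < self.autonomous_limit:
            return "execute_autonomously"
        if value < self.review_limit:
            return "execute_then_human_review"   # post-hoc checkpoint
        return "hold_for_human_approval"         # mandatory pre-approval

gate = GovernanceGate()
print(gate.dispatch("renew_license", 480.0, "routine renewal, known vendor"))
print(gate.dispatch("new_supplier_contract", 60_000.0, "novel counterparty"))
# -> execute_autonomously, then hold_for_human_approval
```

Tiered structures like the one described next map naturally onto routing logic of this kind, with autonomy levels adjusted as error and compliance data accumulate.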
JPMorgan Chase implemented a multi-tier governance structure when deploying AI agents for trading, risk assessment, and customer interactions. The framework distinguishes three autonomy levels: Level 1 agents handle routine transactions below defined thresholds with full autonomy; Level 2 agents manage moderate-complexity decisions with post-hoc human review; Level 3 agents address high-stakes situations with mandatory human approval before execution. An AI Governance Council comprising executives, risk managers, technologists, and ethicists reviews quarterly performance data and can modify autonomy levels based on error rates, compliance incidents, or stakeholder feedback. This structure has enabled the bank to scale AI operations while maintaining regulatory compliance and stakeholder trust (JPMorgan Chase, 2022, annual report).
Financial and Transition Support for Affected Workers
Even organizations committed to workforce development and internal redeployment will encounter situations where roles become obsolete or workers lack the capacity or interest to transition. How organizations support affected individuals reflects both ethical commitments and the practical recognition that worker treatment shapes remaining employees' morale, the organization's public reputation, and its exposure to regulatory scrutiny (Coff, 1997).
Support mechanisms include:
Extended severance packages: Providing financial cushions beyond legal minimums, scaled to tenure and role displacement difficulty
Retraining stipends: Funding external education or certification programs when internal redeployment options are unavailable
Career counseling and placement services: Professional guidance and employer connections to facilitate external transitions
Gradual phase-outs: Allowing workers extended timelines to prepare for transitions rather than abrupt terminations
Retention bonuses during transitions: Incentivizing displaced workers to remain through transition periods, supporting knowledge transfer and system stabilization
Alumni networks and rehire preferences: Maintaining relationships with former employees and prioritizing them for future openings
AT&T faced substantial workforce implications when deploying network automation agents and AI-powered customer service systems. The company committed to a multi-year workforce transformation rather than immediate layoffs. Elements included retraining 100,000 employees in software development, data science, and AI collaboration; providing $1 billion in tuition assistance; creating an internal talent marketplace connecting workers to emerging roles; and offering voluntary early retirement packages with enhanced benefits for employees preferring to exit. For workers unable to transition internally, the company provided career counseling, extended severance, and connections to other employers. While challenging and imperfect, this approach maintained workforce morale during transformation and positioned the company as a talent development leader (AT&T, 2020, Future Ready initiative).
Building Long-Term Organizational Resilience with Agentic AI
Short-term implementation tactics must nest within longer-term strategic shifts that fundamentally reconceptualize work, capability, and competitive advantage in AI-augmented organizations. The following pillars provide frameworks for sustained success.
Human-AI Collaboration Models and Comparative Advantage
Organizations must move beyond simplistic automation versus augmentation dichotomies toward nuanced understanding of when human-only, AI-only, or collaborative approaches produce optimal outcomes. This requires systematic analysis of comparative advantages: tasks where human judgment, creativity, or interpersonal skills prove superior; tasks where AI speed, consistency, or pattern recognition excel; and tasks where human-AI collaboration yields synergistic benefits (Davenport & Kirby, 2016).
Developing effective collaboration models involves (a minimal allocation sketch follows the list):
Task decomposition frameworks: Breaking complex processes into components and assessing each for human, AI, or hybrid execution
Interaction design: Creating interfaces and workflows that facilitate smooth hand-offs, joint decision-making, and mutual learning between humans and agents
Authority allocation: Clarifying when humans defer to AI recommendations, when AI supports human judgment, and when collaborative deliberation occurs
Continuous evaluation: Monitoring which collaboration patterns produce desired outcomes and adjusting allocations as both human skills and AI capabilities evolve
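A minimal allocation sketch follows, assuming each task can be scored on routineness, judgment intensity, and error stakes; the scores, thresholds, and task names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One component of a decomposed process. Scores are hypothetical,
    on a 0-1 scale, and would be set by cross-functional design teams."""
    name: str
    routine: float    # how structured and repeatable the task is
    judgment: float   # how much contextual or ethical judgment it needs
    stakes: float     # cost of an unrecoverable error

def allocate(task: Task) -> str:
    # Comparative-advantage heuristic: AI takes structured low-judgment work,
    # humans keep high-judgment or high-stakes work, hybrids share the rest.
    if task.routine > 0.7 and task.judgment < 0.3 and task.stakes < 0.5:
        return "AI-led"
    if task.judgment > 0.7 or task.stakes > 0.8:
        return "human-led"
    return "human-AI collaborative"

process = [
    Task("invoice matching", routine=0.9, judgment=0.1, stakes=0.2),
    Task("escalated complaint", routine=0.3, judgment=0.8, stakes=0.6),
    Task("draft supplier shortlist", routine=0.6, judgment=0.5, stakes=0.4),
]
for task in process:
    print(f"{task.name}: {allocate(task)}")
# invoice matching: AI-led
# escalated complaint: human-led
# draft supplier shortlist: human-AI collaborative
```

The thresholds are placeholders for judgments that design teams must make and revisit; continuous evaluation means rescoring tasks and reallocating them as both human skills and AI capabilities shift.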
Organizations should expect collaboration models to shift over time. As AI systems improve, some tasks currently requiring human oversight may become fully automated, while new categories of work emerge that demand human creativity or judgment. Simultaneously, as workers develop AI literacy and collaborative fluency, they may identify novel ways to leverage AI capabilities that initial designers did not anticipate. Treating human-AI collaboration as dynamic rather than fixed enables ongoing optimization.
Distributed Leadership and Decision Architecture
Traditional hierarchical management assumes humans occupy all decision nodes. Agentic AI disrupts this model by distributing decisions across human managers, frontline workers, and autonomous agents. Organizations require new leadership structures that maintain strategic coherence while empowering effective action across this heterogeneous decision landscape (Uhl-Bien et al., 2007).
Emerging distributed leadership approaches include (a minimal charter sketch follows the list):
Principle-based delegation: Rather than rule-based control, articulating clear objectives, values, and constraints within which both humans and AI agents exercise judgment
Federated accountability: Assigning clear ownership for outcomes while allowing flexible means, with humans accountable for AI agent performance within their domains
Sense-and-respond mechanisms: Creating feedback loops where distributed decisions surface quickly to leadership for pattern analysis and strategic adjustment
Adaptive authority: Empowering workers to intervene in AI operations when frontline knowledge reveals problems, rather than requiring management approval
Emergent strategy integration: Harvesting insights from AI agent experimentation and worker innovations to inform strategic planning
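A minimal charter sketch follows, rendering principle-based delegation as a constraint specification that both human and AI actors check before acting; the field names and limits are hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationCharter:
    """Principle-based delegation as a shared constraint spec.
    Field names and limits are hypothetical."""
    objective: str
    constraints: dict = field(default_factory=dict)
    owner: str = ""   # federated accountability: a named human owner

    def permits(self, proposal: dict) -> bool:
        # Human and AI actors alike test proposals against the same charter,
        # exercising judgment on means within explicit bounds.
        return all(proposal.get(key, 0) <= limit
                   for key, limit in self.constraints.items())

charter = DelegationCharter(
    objective="reduce order-to-ship time",
    constraints={"budget_eur": 50_000, "customer_risk": 0.2},
    owner="ops_director",
)
print(charter.permits({"budget_eur": 12_000, "customer_risk": 0.1}))  # True
print(charter.permits({"budget_eur": 80_000, "customer_risk": 0.1}))  # False
```

The design point is architectural: rather than a rule per decision, a small charter of objectives, constraints, and a named owner travels with every delegated decision, whether a human or an agent executes it.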
These approaches demand leadership mindset shifts—from command-and-control toward orchestration, from comprehensive planning toward adaptive learning, and from individual heroism toward collective intelligence across human-AI networks.
Organizational Purpose, Meaning, and Stakeholder Value
As agentic AI assumes routine work, organizations face fundamental questions about purpose: Why do we exist beyond profit extraction? What distinctly human value do we create? How do we provide meaningful work when traditional task assignments erode? Organizations that articulate compelling answers attract talent, inspire commitment, and differentiate in competitive markets (Gartenberg et al., 2019).
Purpose-driven responses to agentic AI include:
Reframing value propositions: Emphasizing outcomes AI enables—improved customer experiences, societal contributions, employee flourishing—rather than merely cost reduction
Meaningful work design: Restructuring roles to emphasize creative problem-solving, stakeholder relationships, ethical stewardship, and learning opportunities that provide intrinsic satisfaction
Stakeholder capitalism integration: Using AI-driven efficiency gains to benefit multiple constituencies—customers through better service, employees through development opportunities, communities through job creation or environmental sustainability, shareholders through sustainable growth
Mission-driven AI deployment: Ensuring AI systems advance organizational mission rather than merely substituting for human effort
Patagonia exemplifies purpose-driven AI deployment. The company uses agentic systems for inventory optimization, customer service, and supply chain management, but explicitly frames these tools as enabling environmental mission rather than maximizing profit. Efficiency gains fund sustainability initiatives, reduce waste, and allow employees to focus on innovation in sustainable materials and circular business models. This alignment between AI deployment and authentic purpose strengthens brand differentiation and employee engagement even as automation increases.
Conclusion
Agentic AI represents a fundamental transformation in organizational capability and work structure, distinct from prior automation waves in scope, autonomy, and reach into knowledge-intensive domains. Organizations navigating this shift face overlapping challenges: capturing legitimate efficiency gains, managing workforce transitions ethically and effectively, establishing governance for semi-autonomous systems, and maintaining stakeholder trust through periods of uncertainty.
Evidence across early implementations reveals several clear imperatives. First, communication and participation matter profoundly—organizations that engage workers honestly and meaningfully in AI deployment achieve both better technical outcomes and higher acceptance. Second, capability investment distinguishes successful transitions from destructive displacement—organizations must build complementary human skills rather than simply replacing workers. Third, governance frameworks that maintain accountability while preserving agility provide essential guardrails against algorithmic risk. Fourth, financial and transition support for affected workers reflects both ethical commitment and practical recognition that worker treatment shapes organizational culture and reputation.
Beyond tactical interventions, successful organizations are reconceptualizing work itself. Rather than automating existing jobs, they are redesigning roles to emphasize uniquely human capabilities—creative synthesis, ethical judgment, complex stakeholder relationships, and strategic imagination. Rather than maximizing short-term cost reduction, they are leveraging AI-driven productivity for multiple stakeholder benefit and long-term competitive differentiation. Rather than treating AI deployment as a technical project, they are approaching transformation as a strategic opportunity to clarify purpose, strengthen culture, and build resilient collaborative models between human and machine intelligence.
The path forward requires integration across traditionally siloed domains: technology strategy, human resources, ethics and governance, change management, and business model innovation. Organizations that achieve this integration—grounded in evidence, oriented toward stakeholder value, and committed to human flourishing—will not only navigate transformation successfully but emerge with sustainable competitive advantages in an AI-augmented economy.
References
Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6), 2188-2244.
Autor, D. H., Chin, C., Salomons, A. M., & Seegmiller, B. (2022). New frontiers: The origins and content of new work, 1940-2018. National Bureau of Economic Research Working Paper No. 30389.
Brynjolfsson, E., & McElheran, K. (2022). The rapid adoption of data-driven decision-making. American Economic Review, 106(5), 133-139.
Brynjolfsson, E., Rock, D., & Syverson, C. (2021). The productivity J-curve: How intangibles complement general purpose technologies. American Economic Journal: Macroeconomics, 13(1), 333-372.
Choudhury, P., Starr, E., & Agarwal, R. (2023). Machine learning and human capital complementarities: Experimental evidence on bias mitigation. Strategic Management Journal, 44(5), 1381-1411.
Cihon, P., Maas, M. M., & Kemp, L. (2021). Fragmentation and the future: Investigating architectures for international AI governance. Global Policy, 12(5), 545-556.
Coff, R. W. (1997). Human assets and management dilemmas: Coping with hazards on the road to resource-based theory. Academy of Management Review, 22(2), 374-402.
Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business.
Deloitte. (2023). State of AI in the enterprise (5th ed.). Deloitte Insights.
Folger, R., & Cropanzano, R. (2001). Fairness theory: Justice as accountability. In J. Greenberg & R. Cropanzano (Eds.), Advances in organizational justice (pp. 1-55). Stanford University Press.
Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered organization. Harvard Business Review, 97(4), 62-73.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280.
Gartenberg, C., Prat, A., & Serafeim, G. (2019). Corporate purpose and financial performance. Organization Science, 30(1), 1-18.
Gartner. (2023). Conversational AI and chatbots market guide. Gartner Research.
Kotter, J. P. (2012). Leading change. Harvard Business Review Press.
Kuhn, M., Schularick, M., & Steins, U. I. (2020). Income and wealth inequality in America, 1949-2016. Journal of Political Economy, 128(9), 3469-3519.
Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R., & Sanghvi, S. (2017). Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey Global Institute.
McKinsey. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey Digital.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192-210.
Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.
Uhl-Bien, M., Marion, R., & McKelvey, B. (2007). Complexity leadership theory: Shifting leadership from the industrial age to the knowledge era. The Leadership Quarterly, 18(4), 298-318.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). Applied Agentic AI for Organizational Transformation. Human Capital Leadership Review, 29(2). doi.org/10.70175/hclreview.2020.29.2.2