HCL Review

When AI Investments Fail: Why Work Redesign, Not Technology Deployment, Unlocks ROI


Abstract: Despite widespread adoption of artificial intelligence tools at the individual level, organizational returns remain disappointing. Recent industry research indicates that only a small fraction of companies achieve significant value from AI investments, with satisfaction rates similarly low. This gap between individual experimentation and enterprise-scale value realization stems not from technological limitations but from a fundamental mismatch: organizations layer AI onto legacy processes rather than redesigning work systems to exploit AI's capabilities. This article synthesizes evidence from management consulting, organizational design, and human-computer interaction research to demonstrate that sustainable AI value requires systematic work redesign. Organizations must analyze and reconstruct roles, cultivate hybrid digital-domain expertise, and realign skill requirements to match augmented workflows. Without intentional redesign of work architectures, AI initiatives remain trapped in pilot purgatory, generating demonstrations rather than transformative business outcomes. Evidence-based interventions spanning process deconstruction, capability development, governance structures, and change management offer pathways from tactical adoption to strategic value creation.

The enthusiasm surrounding artificial intelligence has generated an unprecedented wave of corporate investment. Yet beneath the surface of pilot programs and proof-of-concept demonstrations lies a troubling reality: most organizations are not realizing meaningful returns. Recent research from leading consulting firms reveals that fewer than 10% of companies report significant ROI from their AI investments, while satisfaction with AI adoption outcomes remains similarly low (Fountaine et al., 2019). These figures represent not merely an implementation challenge but a fundamental strategic misalignment.


The paradox becomes sharper when we observe individual-level adoption. Knowledge workers increasingly incorporate generative AI into daily tasks—drafting communications, analyzing data, generating code, summarizing documents. This grassroots experimentation demonstrates both capability and demand. Yet this individual productivity rarely scales to departmental effectiveness, much less enterprise transformation.


The disconnect reveals a critical insight: technology adoption and work redesign are fundamentally different organizational capabilities. Most enterprises approach AI as a tool to accelerate existing processes, asking "How can AI make this faster?" rather than "How should we reconstruct this work now that AI capabilities exist?" This distinction matters enormously. Layering intelligent systems onto workflows designed for purely human execution creates friction, redundancy, and missed opportunities. It generates what organizational scholars call "paving the cow path"—automating inefficiency rather than reimagining possibility (Davenport, 2018).


Work design—the systematic analysis, deconstruction, and reconstruction of roles, tasks, and workflows—offers the missing foundation. When organizations redesign work around human-AI collaboration rather than human-AI substitution, when they cultivate combined digital and domain expertise rather than siloed specialization, and when they realign capability requirements to match augmented workflows, AI investments shift from cost centers to value drivers. This article examines why work redesign matters, what organizational and individual consequences flow from its absence, and which evidence-based interventions enable sustainable transformation.


The Work Design Imperative in the AI Era

Defining Work Design in Human-AI Systems


Work design encompasses the deliberate organization of job tasks, responsibilities, roles, and relationships to achieve organizational objectives while supporting employee wellbeing (Parker, 2014). In traditional contexts, work design addresses questions of task variety, autonomy, skill utilization, and social connection. The introduction of AI capabilities fundamentally expands this scope.


Human-AI work design specifically addresses how intelligent systems and human workers distribute cognitive labor, make decisions, maintain accountability, and create value together (Jarrahi et al., 2023). This goes beyond automation's familiar substitution logic—replacing human tasks with machine execution. Instead, it encompasses complementarity (humans and AI perform different aspects of integrated work), augmentation (AI enhances human capabilities), and emergence (new work forms become possible that neither humans nor AI could perform alone).


Effective AI-era work design requires reconceptualizing three fundamental elements:


  • Task architecture: Which activities require human judgment, creativity, or contextual understanding versus pattern recognition, data processing, or optimization that AI handles effectively?

  • Role boundaries: How do job descriptions, performance expectations, and career pathways shift when routine cognitive work migrates to algorithms?

  • Interaction models: What interfaces, feedback mechanisms, and governance structures enable productive human-AI collaboration rather than parallel execution?


Prevalence, Drivers, and Current State of Practice


The deployment-without-redesign pattern dominates current organizational practice. Surveys of AI adoption reveal that most implementations focus on narrow use cases grafted onto existing workflows: customer service chatbots that supplement human agents using identical scripts, predictive maintenance systems that generate alerts for unchanged inspection processes, or document summarization tools that feed into unmodified review procedures (Ransbotham et al., 2020).


Several forces drive this conservative approach. Path dependency creates powerful inertia—existing processes embed years of refinement, compliance requirements, and cultural expectations that resist wholesale reconfiguration. Capability gaps limit options—many organizations lack the process analysis, change management, and sociotechnical design expertise required for fundamental redesign. Risk aversion favors incremental change—pilots and supplements minimize disruption and preserve rollback options if initiatives fail.


Perhaps most critically, misaligned incentives perpetuate the status quo. Technology teams optimize for successful deployment and user adoption, not business outcome transformation. Business units pursue quarterly targets using familiar methods rather than investing in uncertain redesign with delayed payoffs. HR systems reward individual productivity gains but provide no mechanisms to capture and redistribute capacity freed by AI augmentation.


The result is what analysts characterize as "islands of automation" within seas of manual work—isolated efficiency improvements that fail to trigger system-level optimization (Davenport & Ronanki, 2018). Customer service agents using AI-generated response suggestions still follow rigid escalation hierarchies designed for fully manual operations. Financial analysts employing algorithmic anomaly detection still prepare standardized reports for human review processes unchanged since the spreadsheet era. Software developers utilizing AI code completion still navigate approval workflows built around the assumption of fully human-written code.


This fragmented approach explains the ROI disappointment. Individual productivity gains exist but remain trapped within unchanged organizational structures, unable to compound into competitive advantage or margin expansion.


Organizational and Individual Consequences of Design-Free AI Adoption

Organizational Performance Impacts


When AI capabilities layer onto mismatched work structures, several predictable pathologies emerge. Productivity paradoxes materialize as improved individual efficiency fails to translate to organizational throughput. Research on technology adoption has repeatedly demonstrated this pattern—workers complete tasks faster but organizational capacity constraints elsewhere in the workflow absorb the time savings as slack (Brynjolfsson, 1993).


Consider the quantified impacts. Organizations implementing AI without corresponding process redesign typically capture only a fraction of theoretical productivity gains, with the remainder dissipated through coordination overhead, quality verification burdens, and workflow bottlenecks (Fountaine et al., 2019). A financial services firm might deploy AI-powered document review that reduces analyst review time by 50%, but if downstream approval processes, client communication protocols, and reporting requirements remain unchanged, the overall deal cycle may improve by only 10-15%.
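The dilution arithmetic can be made concrete with a toy bottleneck model of the deal cycle. The stage durations below are illustrative assumptions, not figures from the cited research:

```python
# Illustrative deal-cycle model: AI halves only the document-review stage,
# while downstream stages are unchanged. Durations (in days) are hypothetical.
stages = {
    "document_review": 12.0,
    "downstream_approval": 24.0,
    "client_communication": 14.0,
    "reporting": 10.0,
}

baseline = sum(stages.values())  # 60 days end to end

# Redesign-free deployment: a 50% gain confined to one stage.
redesigned = dict(stages, document_review=stages["document_review"] * 0.5)
accelerated = sum(redesigned.values())  # 54 days

improvement = 1 - accelerated / baseline
print(f"Stage-level gain: 50%, cycle-level gain: {improvement:.0%}")  # 10%
```

The stage-level gain only compounds into cycle-level gain to the extent that downstream approval, communication, and reporting stages are redesigned as well.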


Change fatigue compounds as repeated initiatives deliver underwhelming results. When employees experience multiple waves of AI tool adoption—each promising transformation but delivering only modest improvements—organizational receptivity to future change degrades. Research on change management demonstrates that failed or underwhelming initiatives create cynicism that undermines subsequent efforts, even well-designed ones (Kotter, 2008).


Competitive vulnerability emerges more subtly. While incumbents achieve marginal improvements, competitors or entrants who redesign work around AI capabilities from inception can achieve order-of-magnitude advantages. This mirrors historical technology transitions—early spreadsheet adopters who simply computerized manual calculations gained little advantage over competitors who reconceived financial analysis entirely.


Talent misallocation creates hidden costs. When role definitions remain static despite AI augmentation, organizations continue hiring for capability profiles that no longer match work requirements. Companies recruit financial analysts primarily for computational skills that algorithms now handle rather than for strategic interpretation and relationship management that humans uniquely provide. This mismatch both raises compensation costs and diminishes employee engagement as workers perceive their skills underutilized.


Individual Wellbeing and Stakeholder Impacts


For employees, poorly designed AI introduction creates profound ambiguity and stress. Role confusion emerges when technologies alter task content without corresponding updates to job descriptions, performance metrics, or career pathways (Lebovitz et al., 2021). Workers find themselves performing different activities than their titles suggest, evaluated on outdated criteria, and uncertain about skill development priorities.


Deskilling anxiety intensifies as AI systems absorb tasks that previously required expertise to master. When algorithms handle work that once distinguished skilled practitioners from novices—preliminary legal research, diagnostic hypothesis generation, or code architecture—experienced workers question their value proposition. This concern extends beyond replacement fear to professional identity and purpose (Autor, 2015).


Accountability ambiguity surfaces when human-AI systems make consequential decisions without clear frameworks for responsibility attribution. If an AI-assisted credit decision proves discriminatory, an AI-recommended medical treatment causes harm, or an AI-optimized supply chain fails during disruption, who bears responsibility? Absent clear work design specifying human judgment points and AI authority boundaries, frontline workers experience ethical burdens without supportive structures.


Research on algorithmic management reveals these stresses acutely. Delivery drivers, warehouse workers, and content moderators working under AI-directed task assignment and performance monitoring report reduced autonomy, increased monitoring stress, and perceived unfairness—particularly when system logic remains opaque and appeals processes are nonexistent (Möhlmann & Zalmanson, 2017).


Customer and stakeholder experiences mirror these tensions. Service interactions blending chatbots and human agents often create frustrating handoffs and repeated explanations. Healthcare patients receiving AI-influenced diagnoses express confusion about physician versus algorithm contributions to recommendations. Citizens interacting with AI-mediated government services encounter rigidity that neither fully automated nor fully human systems would impose.

Evidence-Based Organizational Responses


Organizations that achieve meaningful AI value share a common approach: they treat work redesign as a prerequisite rather than an afterthought. Evidence across industries identifies several intervention categories that enable this transition.


Process Deconstruction and Task Reallocation


Effective redesign begins with granular analysis of work components. Rather than accepting job roles as indivisible units, leading organizations decompose work into discrete tasks, analyze each task's characteristics, and make explicit allocation decisions about human versus AI execution, collaborative approaches, and quality assurance mechanisms.


The methodology draws from industrial engineering traditions but requires augmentation for knowledge work. Tasks are assessed not merely for automatability but for dimensions including:


  • Cognitive requirements: Pattern recognition, causal reasoning, contextual judgment, creative synthesis

  • Interdependence: Standalone versus requiring coordination with upstream/downstream activities

  • Variability: Routine and standardized versus situationally unique

  • Stakeholder interaction: Transactional data processing versus relationship-intensive communication

  • Risk profile: Consequences of errors and need for explainability


Research on task-based approaches to automation demonstrates that jobs rarely automate completely; instead, specific task components within jobs become candidates for AI execution while other components remain human-centric (Autor, 2015). Effective organizations make these allocations explicit rather than allowing ad-hoc evolution.


Unilever systematically redesigned recruiting by deconstructing hiring into distinct stages: candidate sourcing, initial screening, skills assessment, cultural fit evaluation, and final selection. Rather than applying AI uniformly across all stages, they allocated pattern-recognition-intensive screening to algorithms while preserving human judgment for contextual evaluation and relationship development. The company deployed game-based assessments and AI-analyzed video interviews for initial assessment, then shifted to human-intensive interaction for qualified candidates. This reallocation reduced time-to-hire substantially while improving diversity outcomes, results that would have been impossible through simple automation of traditional interview processes (Chamorro-Premuzic et al., 2017).


Healthcare organizations have applied similar principles to diagnostic workflows. Instead of deploying AI diagnostic systems to replicate physician workflows, leading medical centers decompose the diagnostic process into components: pattern identification in imaging or test results, differential diagnosis generation, treatment recommendation synthesis, and patient communication. AI image analysis assumes pattern identification for routine cases, while physicians concentrate on complex differentials, integration with patient context and preferences, and care coordination. This division of cognitive labor enables both efficiency gains and quality improvements by allocating tasks to optimal human or AI capabilities (Topol, 2019).


Key enabling practices include:


  • Work shadowing and task logging: Systematic observation and worker self-reporting to surface tacit activities often missing from formal process documentation

  • Value stream mapping: Tracing work products through organizational handoffs to identify delays, redundancies, and quality checks

  • Capability-task matrices: Explicit documentation of which capabilities (human creativity, algorithmic pattern recognition, etc.) each task requires

  • Prototyping sprints: Rapid experimentation with alternative task allocations before full-scale implementation

  • Performance baseline establishment: Clear metrics on current state quality, cycle time, and costs to enable redesign evaluation
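One way to make such allocations explicit is to encode the assessment dimensions above in a simple capability-task matrix. The sketch below is a hypothetical Python encoding; the 0-5 scoring scale, the threshold values, and the task scores are assumptions for illustration, not an established rubric:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    # 0-5 scores per assessment dimension; all values here are illustrative.
    cognitive_judgment: int       # contextual judgment / creative synthesis
    interdependence: int          # coordination with upstream/downstream work
    variability: int              # situationally unique vs. routine
    stakeholder_interaction: int  # relationship-intensive vs. transactional
    risk: int                     # error consequences and explainability need

def allocate(task: Task) -> str:
    """Suggest an allocation; thresholds are hypothetical, not a standard."""
    # Judgment, relationships, and risk are the strongest human-centric signals.
    human_signal = max(task.cognitive_judgment,
                       task.stakeholder_interaction,
                       task.risk)
    if human_signal >= 4:
        return "human-led, AI-assisted"
    if human_signal <= 2 and task.variability <= 2:
        return "AI execution with sampled human QA"
    return "hybrid: AI draft, human review"

screening = Task("initial_resume_screen", 1, 2, 1, 0, 2)
final_interview = Task("final_selection_interview", 5, 3, 4, 5, 4)
print(allocate(screening))        # AI execution with sampled human QA
print(allocate(final_interview))  # human-led, AI-assisted
```

Even a crude matrix like this forces the allocation conversation to happen explicitly, task by task, rather than letting allocations evolve ad hoc.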


Hybrid Capability Development


Successful AI integration requires workforce capabilities that blend domain expertise with digital fluency—what researchers term "fusion skills" or "hybrid expertise" (Davenport & Kirby, 2016). This goes beyond training workers to use specific AI tools; it requires cultivating judgment about when to rely on algorithmic outputs, how to interpret AI reasoning, and where human intervention adds value.


Manufacturing organizations implementing predictive maintenance systems discovered that optimal results required workers who combined equipment expertise with sufficient understanding of machine learning to evaluate model confidence, recognize patterns indicating sensor drift versus actual equipment degradation, and refine feature selection based on production context. Pure data scientists lacked domain knowledge to interpret results meaningfully; pure domain experts lacked technical understanding to assess model reliability. The intersection proved essential (Brynjolfsson & Mitchell, 2017).


Financial services firms similarly found that effective AI deployment required "translator" roles bridging technology and business teams. These professionals combine sufficient technical understanding to evaluate AI capabilities and limitations with deep domain knowledge to identify high-value application opportunities and design appropriate human-AI workflows. Organizations investing in developing these hybrid capabilities accelerated deployment timelines and improved solution relevance compared to those maintaining strict separation between technical and business functions (Ransbotham et al., 2020).


Effective capability development approaches:


  • Embedded learning: Training integrated with real work problems rather than abstract technical instruction

  • Cohort-based skill building: Cross-functional teams learning together to build shared vocabulary and mental models

  • Rotation programs: Temporary assignments enabling technologists to develop domain context and business leaders to build technical fluency

  • Apprenticeship models: Pairing domain experts with data scientists on actual projects to enable mutual knowledge transfer

  • Continuous learning resources: On-demand explanations of system logic, decision rationales, and confidence levels embedded in work tools


This capability focus addresses a common failure mode: deploying sophisticated AI systems to workers who lack frameworks for productive engagement. A physician presented with an algorithmic treatment recommendation needs more than UI training—they require understanding of what evidence the model weights heavily, which patient characteristics it handles well versus poorly, and how to synthesize its output with clinical judgment and patient preferences.


Governance Structures and Accountability Frameworks


Clear work design specifies not only what humans and AI do but who holds responsibility for outcomes. Effective organizations establish explicit governance defining decision authorities, review mechanisms, and accountability when human-AI systems err.


Research on human-AI decision making demonstrates that ambiguous accountability creates both operational and ethical problems (Lebovitz et al., 2021). When algorithms make recommendations but humans retain formal authority, workers may defer excessively to system outputs even when contextual factors suggest override. Conversely, when systems execute decisions autonomously but without clear boundaries, inappropriate automation can occur in situations requiring human judgment.


Leading organizations address this through structured governance frameworks. Insurance companies processing health claims, for instance, have implemented tiered authority structures: low-complexity routine claims receive fully automated processing with statistical quality assurance, medium-complexity claims receive AI recommendations requiring human review before execution, and high-complexity or high-stakes claims receive AI decision support with human adjudicators retaining final authority. Critically, these frameworks specify not just authority levels but review criteria, sampling rates, override documentation requirements, and escalation paths when patterns suggest system drift or inappropriate human override rates.
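A tiered structure like this can be written down as an explicit decision authority matrix. The sketch below is a hypothetical Python rendering of such a matrix for claims routing; the complexity and amount thresholds are invented for illustration:

```python
from enum import Enum

class Authority(Enum):
    AUTO_EXECUTE = "fully automated, statistical QA sampling"
    HUMAN_REVIEW = "AI recommendation, human approval before execution"
    ADVISORY     = "AI decision support only, human adjudicator decides"

def route_claim(complexity: int, amount: float) -> Authority:
    """Map a claim to an authority tier.

    Tier boundaries are illustrative; real thresholds would come from the
    organization's risk appetite and regulatory requirements.
    """
    if complexity <= 2 and amount < 1_000:
        return Authority.AUTO_EXECUTE
    if complexity <= 4 and amount < 25_000:
        return Authority.HUMAN_REVIEW
    return Authority.ADVISORY

print(route_claim(1, 250).value)     # fully automated, statistical QA sampling
print(route_claim(3, 5_000).value)   # AI recommendation, human approval ...
print(route_claim(5, 80_000).value)  # AI decision support only ...
```

The value of codifying the matrix is less the routing logic itself than the fact that authority boundaries become reviewable artifacts rather than tacit habits.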


Governance mechanisms that enable effective accountability include:


  • Decision authority matrices: Explicit specification of which decision types AI can execute autonomously, which require human approval, and which use AI as advisory input only

  • Explanation requirements: Standards for what algorithmic reasoning must be documented for different decision types

  • Override protocols: Clear processes for human workers to challenge or modify AI recommendations, including documentation requirements and pattern analysis

  • Performance monitoring: Systematic tracking of human-AI system outcomes disaggregated to enable identification of algorithm drift, human override patterns, and emerging edge cases

  • Ethics review processes: Structured evaluation of AI applications for fairness, privacy, and alignment with organizational values before deployment


Technology companies with mature AI practices exemplify structured governance. Product teams must complete reviews addressing AI system purpose, potential harms, fairness implications, and user agency before deployment. This process, embedded within work design rather than treated as compliance overhead, surfaces issues early and ensures appropriate human oversight mechanisms (Holstein et al., 2019).


Organizational Structure and Operating Model Evolution


When work redesign fundamentally shifts task allocation, organizational structures often require corresponding evolution. Traditional functional silos—separating technology deployment, business operations, and workforce planning—create misalignment when effective AI integration demands coordination across all three.


Research on digital transformation demonstrates that companies achieving superior results often restructure around value streams or customer journeys rather than maintaining purely functional organizations (Westerman et al., 2014). This structural shift enables cross-functional teams with joint responsibility for end-to-end processes to iterate rapidly on work design, adjusting AI allocation, human roles, and supporting tools without cross-departmental negotiations that slow traditional structures.


DBS Bank in Singapore exemplifies this approach, restructuring into customer-journey-aligned squads combining technologists, domain experts, designers, and data specialists. Teams work from shared objectives and possess authority to modify processes, technology, and staffing within their scope. This operating model enabled continuous work optimization cycles as AI capabilities evolved, contributing to the bank's recognition for digital transformation leadership (Sia et al., 2016).


Healthcare delivery organizations have similarly reorganized clinical operations into integrated practice units combining physicians, nurses, data analysts, and care coordinators around specific medical conditions rather than traditional departmental structures. This organization facilitates continuous care pathway optimization including AI integration, as teams can redesign clinical workflows and technology capabilities together rather than navigating handoffs between separate groups (Porter & Lee, 2013).


Structural enablers include:


  • Product operating models: Organizing around value streams or customer journeys rather than functions

  • Empowered teams: Delegating authority for work design to teams closest to the work rather than centralizing in separate process engineering groups

  • Shared metrics: Measuring team performance on outcomes rather than activity volumes to incentivize optimization across human and AI contributions

  • Flexible resource allocation: Enabling teams to adjust human capacity as AI capabilities absorb or generate work rather than fixed headcount allocations

  • Cross-functional career paths: Creating advancement opportunities that reward breadth across domain and technical expertise


Change Management and Psychological Safety


Technical work redesign fails without corresponding attention to human dimensions. Employees experiencing role changes, capability requirement shifts, and workflow disruptions require transparency, agency, and support to engage productively rather than resist defensively.


Research on organizational change demonstrates that participation, transparency, and skill development support predict adoption success more strongly than technical system quality (Kotter, 2008). Workers who understand redesign rationales, participate in process analysis and solution development, and receive support for capability transitions engage constructively. Those who experience change as imposed, opaque, or threatening resist both actively and passively.


Microsoft's AI transformation emphasized employee involvement. Rather than implementing top-down redesigns, teams engaged workers in mapping their own activities, identifying AI augmentation opportunities, and piloting workflow changes. This participatory approach surfaced implementation challenges leadership would have missed, built workforce buy-in, and enabled rapid iteration based on practitioner feedback (Wilson & Daugherty, 2018).


Leading organizations pair AI deployment with explicit career pathway evolution. As AI absorbs routine analysis tasks, they redefine roles to emphasize strategic interpretation, cross-functional collaboration, and continuous learning. Simultaneously, they offer reskilling support and create new specialist roles in areas like AI system optimization and human-AI interaction design. By demonstrating continued career opportunity despite task automation, organizations maintain engagement and reduce unwanted attrition (Kolbjørnsrud et al., 2016).


Change practices supporting successful redesign:


  • Transparent communication: Clear explanation of redesign rationales, expected impacts, and timeline before implementation

  • Worker participation: Involving frontline employees in process analysis and redesign rather than treating them as passive recipients

  • Pilot feedback loops: Structured mechanisms for workers to report problems and suggest modifications during rollout

  • Skill transition support: Accessible training, coaching, and temporary performance accommodation as workers adapt to modified roles

  • Employment security commitments: Clear policies regarding redeployment versus displacement when AI absorbs work

  • Celebration of early wins: Visible recognition of teams and individuals successfully navigating transitions


The psychological contract between organizations and workers fundamentally shifts when AI augments work. Historical expectations—perform assigned tasks competently and receive employment security—give way to expectations of continuous learning, flexibility, and proactive adaptation. Organizations that acknowledge this shift explicitly and provide corresponding support achieve dramatically higher adoption success than those that deploy technology while maintaining traditional employment relationships (Rousseau, 1995).


Building Long-Term Adaptive Capacity

Beyond immediate redesign initiatives, leading organizations build structural capabilities for continuous work evolution as AI capabilities advance. Three pillars support this sustained adaptability.


Continuous Work Sensing and Redesign Mechanisms


Rather than treating work design as periodic projects, mature organizations embed ongoing analysis and optimization into operating rhythms. This requires moving beyond episodic process improvement initiatives to continuous monitoring, experimentation, and evolution (Westerman et al., 2014).

Leading technology companies exemplify this approach through systematic workflow monitoring and rapid experimentation. Teams continuously track metrics spanning productivity, quality, worker sentiment, and customer experience. When deviations emerge—positive or negative—teams investigate root causes and test modifications including AI capability adjustments, task reallocation, or interface changes. This creates organizational learning loops that compound over time.


This requires:


  • Real-time performance dashboards: Accessible metrics on workflow efficiency, quality, and worker experience disaggregated to enable root cause analysis

  • Scheduled retrospectives: Regular team reflection on what is and is not working well in current work design

  • Experimentation infrastructure: Capability to test workflow variations and measure impact systematically

  • Institutional memory systems: Documentation of what has been tried, what worked, and lessons learned to prevent repeated failures

  • External scanning: Monitoring adjacent industries and research for emerging AI capabilities that might enable new work designs
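As a minimal illustration of such monitoring, a team might track human override rates against an expected band and treat excursions in either direction as triggers for a retrospective. The band limits below are assumed values, not industry standards:

```python
# Flag workflows whose human override rate drifts outside an expected band.
# Too few overrides can signal over-reliance on AI output; too many can
# signal model drift or eroding trust. Band limits here are illustrative.
def override_alert(overrides: int, decisions: int,
                   low: float = 0.02, high: float = 0.20) -> str:
    rate = overrides / decisions
    if rate < low:
        return f"rate {rate:.1%}: possible over-reliance on AI output"
    if rate > high:
        return f"rate {rate:.1%}: possible model drift or mistrust"
    return f"rate {rate:.1%}: within expected band"

print(override_alert(3, 400))   # below band: over-reliance signal
print(override_alert(90, 400))  # above band: drift/mistrust signal
print(override_alert(30, 400))  # within band
```

A signal like this does not diagnose the root cause; it tells the team which workflows merit the investigation and experimentation the practices above describe.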


Distributed Design Authority and Technical Fluency


Centralized process engineering groups cannot keep pace with AI evolution across diverse organizational contexts. Effective organizations build work design capability throughout the organization, empowering frontline teams and middle management to analyze and optimize work continuously.


Research on organizational agility demonstrates that distributed decision-making authority, when combined with adequate capability development and light-touch governance, enables faster adaptation than centralized control (Worley et al., 2014). Teams with local context can identify optimization opportunities and implement changes more rapidly than distant specialists, provided they possess necessary design skills.


Spotify's well-documented operating model illustrates this principle. Engineering squads possess significant autonomy to design their own workflows, select tools, and evolve practices based on local context. Rather than standardizing work design centrally, the company invests in building design literacy across teams and facilitates knowledge sharing about effective practices. This distributed approach enables rapid local adaptation while avoiding one-size-fits-all solutions that fit few contexts well (Kniberg & Ivarsson, 2012).


Building distributed capability requires:


  • Design thinking training: Equipping managers and workers with frameworks for analyzing work, identifying improvement opportunities, and prototyping solutions

  • Access to design support: Available expertise in process analysis, change management, and human-AI interaction to assist teams lacking specialized skills

  • Knowledge sharing platforms: Communities of practice and repositories where teams document and share effective work designs

  • Design review processes: Light-touch governance ensuring distributed designs meet minimum quality and risk standards without stifling autonomy

  • Resource flexibility: Authority for teams to adjust staffing, tools, and workflows based on redesign insights rather than requiring lengthy approval processes


Purpose, Meaning, and Human Contribution Clarity


As AI capabilities expand, fundamental questions about distinctive human contribution and work meaning intensify. Organizations that articulate clear answers—what humans uniquely provide that AI cannot, why that matters, and how roles connect to meaningful outcomes—sustain engagement through technological change better than those that treat workers as interchangeable with algorithms wherever sufficient machine capability exists.


Research on work meaning demonstrates that purpose, social contribution, and growth opportunity drive engagement more powerfully than compensation alone (Pratt & Ashforth, 2003). When AI absorbs tasks that previously provided mastery satisfaction or professional identity, organizations must actively cultivate alternative sources of meaning or risk disengagement.


Healthcare organizations exemplify effective approaches by explicitly positioning clinicians as integrators synthesizing AI-generated insights, patient preferences, evidence-based guidelines, and contextual judgment to develop personalized care plans. This framing emphasizes irreducible human contributions—empathy, ethical reasoning, creative problem-solving for complex cases—rather than tasks AI might eventually automate. Leading institutions invest in developing these capabilities and measuring outcomes that reflect this integration quality (Topol, 2019).


Customer-focused organizations differentiate through relationship depth that transcends transactional service. While AI handles routine inquiries, human service representatives are positioned as advisors who understand customer context and build lasting relationships. This clarity about distinctive human contribution shapes hiring, development, and work design toward capabilities AI cannot replicate (Huang & Rust, 2018).


Sustaining meaningful work requires:


  • Explicit value propositions: Clear articulation of what makes human contribution valuable as AI capabilities expand

  • Capability investment: Development resources focused on distinctively human skills like creativity, ethical reasoning, relationship building, and contextual judgment

  • Outcome focus: Performance systems measuring impact on customers, communities, or mission rather than task completion efficiency alone

  • Autonomy preservation: Work design that maintains meaningful human agency even as AI provides recommendations

  • Stakeholder connection: Enabling workers to see and understand how their contributions benefit specific people or purposes rather than abstract metrics


Conclusion

The disappointing returns from AI investments stem not from technological immaturity but from organizational design inertia. When companies layer intelligent capabilities onto work structures conceived for purely human execution, they create friction that dissipates potential value. The small minority who achieve significant ROI share a common characteristic: they treat AI adoption as work redesign rather than tool deployment.


This redesign imperative spans multiple dimensions. It requires deconstructing work into tasks and making explicit allocation decisions between human and AI execution based on capability fit rather than legacy assumptions. It demands cultivating hybrid expertise that blends domain knowledge with digital fluency, enabling workers to collaborate productively with algorithmic colleagues. It necessitates governance frameworks clarifying accountability when human-AI systems make consequential decisions. It often requires organizational structure evolution to enable the coordination that effective redesign demands. And it depends on change practices that build psychological safety and worker agency rather than imposing top-down transformation.


Beyond individual initiatives, sustained value requires building institutional capabilities for continuous sensing and optimization as AI capabilities advance. Organizations must distribute design authority, develop widespread technical fluency, and maintain clear purpose regarding irreducible human contribution. This prevents the brittleness that emerges when static work designs inevitably become misaligned with evolving technological capabilities.


The stakes extend beyond ROI percentages. Work design choices profoundly shape employee wellbeing, stakeholder experiences, and competitive positioning. Poor design creates role confusion, deskilling anxiety, and accountability ambiguity for workers while generating frustrating service interactions for customers. Thoughtful design enhances work meaning, develops capabilities, and creates stakeholder value difficult for competitors to replicate.


For practitioners, the imperative is clear: stop asking "What can AI do for this process?" and start asking "How should we reconstruct this work now that AI capabilities exist?" That shift from augmentation to redesign, from pilots to transformation, separates organizations achieving returns from those generating PowerPoint presentations. The technology exists. The methodologies exist. What remains is organizational commitment to doing the hard work of work redesign.


References

  1. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.

  2. Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77.

  3. Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.

  4. Chamorro-Premuzic, T., Akhtar, R., Winsborough, D., & Sherman, R. A. (2017). The datafication of talent: How technology is advancing the science of human potential at work. Current Opinion in Behavioral Sciences, 18, 13-16.

  5. Davenport, T. H. (2018). The AI advantage: How to put the artificial intelligence revolution to work. MIT Press.

  6. Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business.

  7. Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108-116.

  8. Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered organization. Harvard Business Review, 97(4), 62-73.

  9. Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-16.

  10. Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155-172.

  11. Jarrahi, M. H., Kenyon, S., Brown, A., & Doerfler, C. (2023). Artificial intelligence and knowledge management: A partnership between human and AI. Business Horizons, 66(1), 87-99.

  12. Kniberg, H., & Ivarsson, A. (2012). Scaling agile @ Spotify with tribes, squads, chapters & guilds. Spotify.

  13. Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2016). How artificial intelligence will redefine management. Harvard Business Review Digital Articles, 2-6.

  14. Kotter, J. P. (2008). A sense of urgency. Harvard Business Press.

  15. Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI ground truth really true? The dangers of training and evaluating AI tools based on experts' know-what. MIS Quarterly, 45(3), 1501-1526.

  16. Möhlmann, M., & Zalmanson, L. (2017). Hands on the wheel: Navigating algorithmic management and Uber drivers' autonomy. Proceedings of the International Conference on Information Systems.

  17. Parker, S. K. (2014). Beyond motivation: Job and work design for development, health, ambidexterity, and more. Annual Review of Psychology, 65, 661-691.

  18. Porter, M. E., & Lee, T. H. (2013). The strategy that will fix health care. Harvard Business Review, 91(10), 50-70.

  19. Pratt, M. G., & Ashforth, B. E. (2003). Fostering meaningfulness in working and at work. In K. S. Cameron, J. E. Dutton, & R. E. Quinn (Eds.), Positive organizational scholarship: Foundations of a new discipline (pp. 309-327). Berrett-Koehler.

  20. Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence. MIT Sloan Management Review, 59(1), 1-17.

  21. Ransbotham, S., Khodabandeh, S., Fehling, R., LaFountain, B., & Kiron, D. (2020). Winning with AI. MIT Sloan Management Review and Boston Consulting Group.

  22. Rousseau, D. M. (1995). Psychological contracts in organizations: Understanding written and unwritten agreements. Sage Publications.

  23. Sia, S. K., Soh, C., & Weill, P. (2016). How DBS Bank pursued a digital business strategy. MIS Quarterly Executive, 15(2), 105-121.

  24. Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.

  25. Westerman, G., Bonnet, D., & McAfee, A. (2014). Leading digital: Turning technology into business transformation. Harvard Business Press.

  26. Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114-123.

  27. Worley, C. G., Williams, T., & Lawler III, E. E. (2014). The agility factor: Building adaptable organizations for superior performance. Strategy & Leadership, 42(2), 44-47.


Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).

Suggested Citation: Westover, J. H. (2025). When AI Investments Fail: Why Work Redesign, Not Technology Deployment, Unlocks ROI. Human Capital Leadership Review, 27(1). doi.org/10.70175/hclreview.2020.27.1.3

Human Capital Leadership Review

eISSN 2693-9452 (online)

