HCL Review

Human Agency in AI-Augmented Work: Building Meaningful Control in the Age of Intelligent Systems


Abstract: As artificial intelligence systems become deeply embedded in organizational workflows, questions of human agency—the capacity to act with intention, autonomy, and meaningful control—have moved from philosophical inquiry to operational imperative. This article examines how AI augmentation reshapes individual and organizational agency across knowledge work contexts. Drawing on autonomy theory, human-AI interaction research, and emerging regulatory frameworks, we analyze the mechanisms through which intelligent systems can either enhance or diminish workers' sense of control, competence, and authorship. We identify evidence-based organizational responses including transparency architectures, capability development programs, and governance structures that preserve human judgment in AI-mediated decision environments. The article concludes with forward-looking recommendations for building agency-preserving systems that align algorithmic assistance with human flourishing and organizational accountability.

The integration of artificial intelligence into professional work has accelerated dramatically in recent years, with generative models, predictive analytics, and autonomous systems now mediating tasks from clinical diagnosis to content creation, from financial underwriting to strategic planning. This transformation raises fundamental questions about the nature of human agency in AI-augmented environments: When algorithms shape the information workers see, suggest the decisions they make, and automate the tasks they once performed, what remains of individual autonomy, intentionality, and meaningful control?


These concerns have practical urgency. Organizations simultaneously report higher productivity from AI adoption and increased worker anxiety about deskilling, algorithmic opacity, and diminished professional identity. Regulatory bodies worldwide are responding with frameworks that emphasize human oversight and meaningful control—concepts that remain underspecified in operational terms. The European Union's AI Act mandates "human oversight" for high-risk systems but provides limited guidance on what distinguishes genuine from performative control. A similar tension appears in ISO/IEC 42001:2023, the standard for AI management systems, which requires organizations to ensure human agency while offering few concrete mechanisms for doing so.


The stakes extend beyond individual wellbeing to organizational performance and societal trust. Research on automation complacency demonstrates that poorly designed human-AI collaboration can lead to degraded decision quality despite sophisticated algorithmic support. When humans cannot understand, challenge, or override AI recommendations effectively, systems become brittle—vulnerable to algorithmic errors, edge cases, and shifting contexts that demand human judgment. The organizational challenge is not simply to deploy AI systems, but to design work arrangements that preserve and enhance human agency while capturing the benefits of algorithmic assistance.


This article examines human agency in AI-augmented work through an evidence-based lens. We define agency in operational terms, analyze how AI systems reshape the conditions for autonomous action, review organizational interventions that preserve meaningful control, and propose forward-looking capabilities for agency-sustaining work design. Our analysis draws on autonomy theory, human-AI interaction research, and organizational practice to provide actionable guidance for leaders navigating this transformation.


The Human Agency Landscape in AI-Augmented Work

Defining Agency in AI-Mediated Contexts


Human agency refers to the capacity to act with intention, exercise meaningful control over one's actions, and author one's professional identity and outcomes. In organizational contexts, agency encompasses three interrelated dimensions:


  • Operational autonomy: The ability to exercise judgment, make independent decisions, and control one's work processes within role boundaries

  • Epistemic agency: The capacity to understand the basis of recommendations, challenge assumptions, and form independent assessments of situations

  • Developmental agency: The opportunity to build competence, expand capabilities, and shape one's professional trajectory through meaningful engagement with challenging work


AI systems can affect each dimension. A clinical decision support system that provides opaque recommendations may preserve operational autonomy (the physician can ignore the advice) while undermining epistemic agency (the physician cannot assess the recommendation's validity). A content generation tool may enhance productivity while limiting developmental agency by reducing opportunities to practice foundational skills.


The concept of "meaningful human control" has emerged in AI ethics and policy discourse as a normative standard, but operationalizing this concept requires attention to specific mechanisms. Research on human-AI interaction distinguishes several levels of control:


  • Decisional control: Authority over final outcomes and the ability to override algorithmic recommendations

  • Procedural control: Influence over how algorithms process information and generate outputs

  • Informational control: Access to explanations that enable informed judgment about algorithmic suggestions

  • Temporal control: Ability to pause, question, and deliberate rather than responding to algorithmic prompts reflexively


Effective agency in AI-augmented work requires attention to all four dimensions. Simply preserving decisional control—the right to make final choices—proves insufficient when workers lack the information, time, or capability to exercise that control meaningfully.


Prevalence and Patterns of AI Augmentation


AI systems now mediate work across virtually all knowledge-intensive sectors. The scope of augmentation varies considerably:


Diagnostic and analytical support systems assist professionals in pattern recognition and risk assessment. Medical imaging algorithms flag potential pathologies for radiologist review. Credit scoring models assess loan applications. Recruitment platforms screen candidate profiles. These systems typically present recommendations while preserving formal decisional authority with human professionals.


Generative assistance tools produce content, code, analysis, and creative outputs based on user prompts or contextual signals. Large language models draft communications, synthesize research, and generate strategic recommendations. Code completion systems suggest entire functions or modules. These tools reshape the creative process itself, shifting human work from initial production to evaluation, refinement, and integration of algorithmically generated content.


Workflow automation and orchestration platforms embed AI throughout process chains, making decisions about task routing, priority, and resource allocation. Customer service systems triage inquiries and suggest responses. Supply chain platforms optimize inventory and logistics. Project management tools allocate work based on predicted capacity and skill matches. In these contexts, AI shapes not just individual decisions but entire work systems.


The distribution of these technologies follows clear patterns. Organizations with high volumes of routine cognitive work and substantial data infrastructure have adopted AI most extensively. Financial services, technology, healthcare, and professional services lead in deployment rates. However, adoption is spreading rapidly to education, government, creative industries, and other sectors previously characterized by high autonomy and tacit expertise.


Research indicates that AI augmentation correlates with certain organizational characteristics. Larger organizations with established data governance capabilities deploy AI more extensively. Organizations facing competitive pressure or regulatory requirements for consistency adopt decision-support systems at higher rates. Critically, deployment patterns often reflect existing power dynamics—AI systems tend to be imposed on frontline workers while augmenting decision-makers with sophisticated analytical tools.


Organizational and Individual Consequences of AI Augmentation

Organizational Performance Impacts


The performance effects of AI augmentation depend heavily on implementation approach and the preservation of human agency in system design. Research on human-AI collaboration reveals several consistent patterns:


Productivity gains from AI augmentation vary by task structure and worker experience. Studies of professional work suggest productivity improvements of roughly 20% to 40% for well-structured tasks where AI provides relevant pattern recognition or content generation. However, these gains often emerge unevenly across worker populations and task types. AI assistance tends to show larger productivity effects for less experienced workers on routine tasks, while showing smaller or even negative effects on complex, non-routine work requiring contextual judgment.


Quality outcomes present a more complex picture. AI systems can improve consistency and reduce certain types of errors, particularly those involving fatigue, attentional lapses, or failure to consider base rates. However, research on automation complacency demonstrates that human-AI teams can perform worse than either humans or AI alone when collaboration is poorly designed. Parasuraman and Manzey (2010) identified the automation bias phenomenon: decision-makers over-rely on automated recommendations, failing to detect algorithmic errors that humans working independently would catch. This effect intensifies when systems provide high accuracy most of the time but fail catastrophically on edge cases.


Innovation and adaptation capabilities may be compromised by AI augmentation that reduces human engagement with foundational tasks. Organizations report concerns about skill degradation among workers who rely heavily on AI assistance for routine tasks. When junior professionals outsource basic analysis to algorithms, they may fail to develop the pattern recognition capabilities needed for expert judgment. This creates organizational brittleness—dependence on systems that work well in stable contexts but struggle with novel situations requiring human creativity and adaptation.

Risk and accountability challenges emerge from the distribution of agency between humans and AI systems. When decisions result from complex interactions between algorithmic recommendations, human judgments, and organizational constraints, accountability becomes fragmented. Regulatory frameworks increasingly require organizations to demonstrate meaningful human oversight, but operationalizing this requirement proves difficult when humans lack the information or capability to second-guess sophisticated AI systems effectively.


Individual Wellbeing and Professional Identity Impacts


The effects of AI augmentation on worker wellbeing and professional identity vary substantially based on how systems are designed and implemented:


Autonomy and control perceptions decline when workers experience AI systems as surveillance, constraint, or usurpation of professional judgment. Research on algorithmic management shows that opaque systems imposing rigid controls on work processes generate resistance, stress, and reduced job satisfaction. Conversely, AI tools that workers perceive as genuinely assistive—amplifying their capabilities while preserving control—can enhance autonomy perceptions and professional efficacy.


Skill development and mastery concerns arise particularly among early-career professionals whose learning opportunities become mediated by AI systems. When junior workers rely on generative AI for tasks that previously built foundational capabilities, they may struggle to develop the tacit expertise needed for professional advancement. Organizations report tension between short-term productivity gains from AI augmentation and long-term capability development needs.


Professional identity and meaning can be threatened when AI systems perform tasks previously central to professional self-concept. Physicians describe ambivalence about diagnostic algorithms that might perform pattern recognition more accurately than human clinicians. Writers express concern about language models that generate compelling prose. These tensions are not simply about employment displacement but about the meaning and value of human expertise itself.


Stress and cognitive load effects depend on implementation approach. Well-designed AI assistance can reduce cognitive burden by handling routine processing and allowing professionals to focus on complex judgment. However, poorly designed systems can increase stress by requiring constant vigilance to catch algorithmic errors, imposing rigid constraints on work processes, or creating uncertainty about performance expectations and evaluation criteria.


The literature suggests that individual impacts depend substantially on workers' sense of agency in relation to AI systems. When workers understand how systems function, can challenge recommendations effectively, and perceive their expertise as valued and utilized, AI augmentation can enhance professional efficacy. When systems are opaque, override human judgment without clear rationale, or reduce work to algorithmic compliance, wellbeing declines even when productivity metrics improve.


Evidence-Based Organizational Responses

Table 1: Strategies for Preserving Human Agency in AI Environments

| Intervention Category | Specific Strategy | Description | Operational Mechanisms | Intended Impact on Agency | Practical Examples |
| --- | --- | --- | --- | --- | --- |
| Transparency and Explainability Architectures | Calibrated Trust through Confidence Communication | Systems provide not just recommendations but also uncertainty estimates and confidence intervals to help users gauge when to rely on AI. | Confidence scoring with recommendations; displaying algorithmic confidence levels to identify cases for scrutiny. | Epistemic agency | Displaying a percentage confidence level alongside a medical diagnosis or credit risk prediction. |
| Transparency and Explainability Architectures | Feature Contribution Displays | Clarifying which input factors most significantly influenced a specific algorithmic output to help users interpret results. | Showing input feature weights or data points in language appropriate to user expertise. | Epistemic agency | A credit model highlighting that 'payment history' was the primary reason for a loan denial. |
| Human-Centered Workflow Design | Cognitive Forcing Functions | Introducing deliberate friction or 'pauses' to prevent users from reflexively accepting AI outputs and to counteract automation bias. | Active confirmation requirements; mandatory review periods; staged decision processes. | Operational autonomy | A system requiring a user to manually select specific findings from an AI-generated summary before proceeding. |
| Procedural Justice and Participation | Override and Escalation Pathways | Ensuring workers have the formal right and technical ability to deviate from AI recommendations based on professional judgment. | Decision rights matrices; documentation requirements for overrides rather than punitive consequences. | Operational autonomy | A 'manual override' button in a logistics platform that allows a manager to reroute shipments against AI suggestions. |
| Procedural Justice and Participation | Co-design Processes | Engaging end users in defining requirements and shaping how human-AI collaboration is structured within the workflow. | Cross-functional design teams; pilot programs with voluntary participation. | Operational autonomy | Including frontline medical staff in the design phase of a new clinical decision support tool. |
| Capability Development | Foundational Skill Preservation | Ensuring workers maintain core expertise so they can function independently of AI and detect algorithmic errors. | Regular 'manual mode' exercises; mentoring programs between experienced and junior workers. | Developmental agency | Periodically requiring accountants to perform audits without the use of automated analytical tools. |
| Governance and Accountability | Collective Oversight and Worker Representation | Creating organizational bodies that allow workers to influence broad AI policies and deployment strategies. | AI ethics committees with worker representation; worker councils for AI augmentation. | Developmental agency | An ethics committee that includes a union representative to review the impact of new surveillance algorithms. |
| Governance and Accountability | Algorithmic Impact Assessments | Regularly evaluating how deployed AI systems affect human factors like autonomy, professional judgment, and skill development. | Multi-dimensional impact assessments; regular system audits with user input. | Developmental agency | Conducting a semi-annual review of whether a generative AI tool has led to deskilling in the junior writing pool. |

Organizations facing the agency challenges of AI augmentation can draw on emerging evidence about effective interventions. The following approaches show promise for preserving meaningful human control while capturing AI's potential benefits:


Transparency and Explainability Architectures


Enabling workers to exercise informed judgment about AI recommendations requires systems that provide comprehensible explanations of algorithmic reasoning. Research on explainable AI distinguishes several forms of transparency:


Input transparency clarifies what information systems use to generate recommendations. This includes both the general categories of data considered (e.g., "this credit model uses payment history, debt-to-income ratio, and employment stability") and, where feasible, the specific data points influencing individual recommendations.


Process transparency describes how systems transform inputs into outputs. For simpler models, this might involve showing decision rules or feature weights. For complex deep learning systems, techniques like attention visualization or counterfactual explanations can provide insight into reasoning patterns without requiring full algorithmic transparency.


Confidence and uncertainty communication helps users calibrate their reliance on AI recommendations. Systems that communicate not just predictions but confidence intervals and uncertainty estimates enable more sophisticated human-AI collaboration. Research on calibrated trust suggests that exposing algorithmic uncertainty actually improves combined human-AI performance relative to overconfident systems.


Contestability mechanisms allow users to challenge recommendations, request additional explanation, or flag cases where algorithmic outputs seem inappropriate. The process of contestation itself reinforces agency even when the underlying algorithmic recommendation doesn't change.


Organizations implementing transparency architectures should recognize that explanation is not one-size-fits-all. Effective transparency design requires understanding users' expertise, decision contexts, and information needs. Highly technical explanations may empower data scientists while overwhelming frontline workers. The goal is not maximum transparency but appropriately calibrated transparency that enables the intended user to exercise informed judgment.


Practical approaches to transparency architecture include:


  • Confidence scoring with recommendations: Displaying not just predicted outcomes but algorithmic confidence levels, allowing users to identify cases warranting additional scrutiny

  • Feature contribution displays: Showing which input factors most influenced specific recommendations in language appropriate to user expertise

  • Comparative case examples: Presenting similar past cases and their outcomes to provide context for current recommendations

  • Explanation granularity controls: Allowing users to request different levels of detail based on their needs and expertise

  • Audit trails and decision logs: Recording which recommendations were accepted, modified, or rejected and the rationale for human decisions
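To make the first two approaches concrete, here is a minimal sketch of how a recommendation could carry its own confidence score and top feature contributions, flagging low-confidence cases for additional human scrutiny. All names (`Recommendation`, `REVIEW_THRESHOLD`, the example features) are hypothetical illustrations, not any particular vendor's API:

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.80  # hypothetical cutoff below which cases are flagged


@dataclass
class Recommendation:
    prediction: str       # the model's suggested outcome
    confidence: float     # calibrated probability in [0, 1]
    top_features: dict    # feature -> contribution, for user-facing display
    needs_review: bool = field(init=False)

    def __post_init__(self):
        # Low-confidence cases are surfaced for extra human scrutiny
        self.needs_review = self.confidence < REVIEW_THRESHOLD


rec = Recommendation(
    prediction="deny",
    confidence=0.62,
    top_features={"payment_history": 0.41, "debt_to_income": 0.27},
)
print(rec.needs_review)  # True: confidence is below the review threshold
```

The key design choice is that the confidence score and feature contributions travel with the recommendation itself, so any interface rendering the prediction can also render the basis for questioning it.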


Procedural Justice and Participation in AI Deployment


How organizations design and implement AI systems affects worker agency as much as the technical capabilities of the systems themselves. Research on procedural justice suggests that involving affected workers in AI deployment decisions enhances both acceptance and effective use:


Co-design processes engage end users in defining requirements, evaluating systems, and shaping implementation approaches. When workers contribute to decisions about what tasks should be augmented, what information systems should provide, and how human-AI collaboration should be structured, they develop greater understanding of system capabilities and limitations while preserving a sense of control over their work environment.


Transparency about deployment rationale helps workers understand why organizations are introducing AI systems and what goals implementations are intended to serve. When workers perceive AI as imposed without explanation or justification, resistance increases and effective collaboration suffers. Clear communication about organizational objectives, expected benefits, and acknowledged risks builds trust and facilitates adaptation.


Worker voice in ongoing refinement ensures that AI systems evolve based on user experience rather than solely on technical metrics. Organizations that create channels for workers to provide feedback, report issues, and suggest improvements demonstrate respect for worker expertise while improving system quality.


Opt-out and override mechanisms preserve worker agency by ensuring that AI assistance remains genuinely assistive rather than coercive. While complete opt-out may not be feasible in all contexts, providing workers with meaningful ability to challenge, modify, or work around algorithmic recommendations reinforces their sense of control and professional authority.


Effective approaches to procedural justice in AI deployment include:


  • Cross-functional design teams: Including frontline workers alongside data scientists and product managers in system design decisions

  • Pilot programs with voluntary participation: Testing systems with workers who choose to participate before broader deployment

  • Regular feedback sessions: Creating structured opportunities for users to share experiences and suggest improvements

  • Transparent performance metrics: Sharing both algorithmic performance data and user satisfaction metrics, demonstrating that both technical effectiveness and worker experience matter

  • Clear escalation pathways: Establishing processes for users to override algorithmic recommendations when professional judgment warrants, with documentation of rationale rather than punishment for deviation


Capability Development and Hybrid Expertise Building


Preserving agency in AI-augmented work requires developing new capabilities that enable effective human-AI collaboration. Organizations committed to meaningful human control invest in building what might be called "hybrid expertise"—the ability to combine algorithmic assistance with human judgment effectively:


AI literacy training helps workers understand how different types of AI systems function, what their capabilities and limitations are, and how to interpret their outputs appropriately. This doesn't require workers to become data scientists, but rather to develop informed mental models of algorithmic reasoning that support calibrated trust and effective collaboration.


Critical evaluation skills enable workers to assess AI recommendations rather than accepting them reflexively. This includes understanding when to seek additional information, how to identify potential algorithmic biases or errors, and how to integrate algorithmic insights with contextual knowledge and professional judgment.


Augmentation strategy development treats effective use of AI tools as a learnable skill. Just as professionals learn to use specialized equipment or software, they can develop strategies for leveraging AI assistance while maintaining quality and professional standards. This might include techniques for prompt engineering with generative models, approaches for validating algorithmic recommendations, or methods for integrating AI-generated content into professional workflows.


Foundational skill preservation ensures that workers maintain core capabilities even as routine tasks become augmented or automated. Organizations concerned about long-term capability development create deliberate practice opportunities that prevent skill degradation while still capturing productivity benefits from AI assistance.


Practical capability development approaches include:


  • Onboarding programs for AI tools: Structured training that goes beyond basic functionality to build understanding of system capabilities, limitations, and appropriate use cases

  • Case-based learning from algorithmic failures: Using examples where AI systems made errors to develop pattern recognition for when human oversight is critical

  • Mentoring programs pairing experienced and junior workers: Ensuring that tacit expertise transfer continues even as routine tasks become mediated by AI

  • Regular "manual mode" exercises: Periodically requiring workers to perform tasks without AI assistance to maintain foundational skills

  • Communities of practice for AI-augmented work: Creating spaces for workers to share strategies, challenges, and innovations in human-AI collaboration


Governance Structures and Accountability Mechanisms


Meaningful human control requires not just individual capability but organizational structures that clearly assign responsibility and enable effective oversight. Research on AI governance suggests several critical elements:


Clear role definition specifies who has authority and responsibility for different types of decisions in AI-augmented work. This includes clarifying when humans must review algorithmic recommendations, who can override or modify automated decisions, and who is accountable for outcomes when decisions result from human-AI collaboration.


Performance monitoring tracks both algorithmic outputs and human decision quality in AI-augmented contexts. Effective monitoring recognizes that algorithm performance metrics alone provide insufficient insight—organizations need visibility into how humans are using AI tools, where human-AI collaboration is working well, and where breakdowns occur.


Regular auditing and review ensures that AI systems continue to align with organizational values and regulatory requirements as contexts evolve. This includes technical audits of algorithmic performance, process audits of how systems are being used in practice, and impact assessments examining effects on workers and other stakeholders.


Escalation and override procedures create clear pathways for workers to deviate from algorithmic recommendations when professional judgment warrants. These procedures should encourage rather than penalize appropriate human intervention while creating accountability for override decisions.


Governance approaches that preserve agency include:


  • AI ethics committees with worker representation: Oversight bodies that include end users alongside technical experts and business leaders in decisions about AI deployment and refinement

  • Decision rights matrices for hybrid processes: Clear documentation of who has authority for different types of decisions in AI-augmented workflows

  • Regular system audits with user input: Technical performance reviews supplemented by structured feedback from workers about system usability, appropriateness, and impact on work quality

  • Documentation requirements for overrides: Processes that require workers to record rationale when deviating from algorithmic recommendations, creating learning opportunities without punitive consequences

  • Algorithmic impact assessments: Regular evaluations of how AI systems affect worker autonomy, capability development, and professional identity
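A decision rights matrix for a hybrid process can be as lightweight as a lookup table mapping each decision type to the required oversight level, with a safe default for anything unlisted. The decision types and level names below are hypothetical, sketched for a lending workflow:

```python
# Hypothetical decision-rights matrix for a hybrid loan workflow:
# each decision type maps to who holds final authority.
DECISION_RIGHTS = {
    "small_loan_renewal": "ai_auto",       # AI may execute; humans audit samples
    "new_credit_line":    "human_review",  # AI recommends; a human must confirm
    "hardship_exception": "human_only",    # no algorithmic recommendation shown
}


def required_oversight(decision_type: str) -> str:
    """Look up the oversight level; unknown decision types default to human_only."""
    return DECISION_RIGHTS.get(decision_type, "human_only")


print(required_oversight("new_credit_line"))  # human_review
print(required_oversight("novel_case"))       # human_only (safe default)
```

Defaulting unknown cases to full human authority encodes the principle from the governance discussion above: novel situations are precisely where algorithmic delegation is least warranted.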


Human-Centered Workflow Design


Beyond specific interventions, organizations can design AI-augmented work processes that preserve and enhance human agency by attending to the structure of human-AI collaboration:


Complementary task allocation assigns tasks to humans and AI based on comparative advantage rather than simply automating what is technically feasible. This approach recognizes that humans excel at contextual judgment, ethical reasoning, creative synthesis, and adaptation to novel situations, while AI systems excel at pattern recognition at scale, rapid processing of quantitative information, and consistent application of defined rules.


Sequential rather than substitutive design structures workflows so that AI and humans contribute at different stages rather than competing for the same tasks. For example, an AI system might perform initial screening or information gathering, with humans focusing on interpretation, integration, and judgment about complex cases.


Deliberate friction and cognitive engagement introduces appropriate challenges that maintain skill development and prevent automation complacency. This might include requiring workers to review and validate AI outputs actively rather than passively accepting recommendations, or ensuring that routine cases remain mixed with complex ones that demand full human engagement.


Temporal and attentional design considers how AI systems affect cognitive load and decision quality. Systems that interrupt workflows constantly or demand split attention between multiple interfaces may reduce effective agency even if they preserve formal decision authority. Effective design provides algorithmic assistance at points where it enhances rather than disrupts human reasoning.


Workflow design approaches include:


  • Staged decision processes: Structuring AI-augmented decisions so that algorithmic recommendations arrive after humans have formed initial independent assessments, reducing anchoring effects

  • Active confirmation requirements: Requiring users to engage meaningfully with AI recommendations rather than accepting defaults, through mechanisms like mandatory review periods or structured consideration of alternative options

  • Complexity-based routing: Using AI to identify cases that require minimal versus extensive human involvement, allowing professionals to focus attention where judgment adds most value

  • Collaborative refinement interfaces: Designing tools where humans and AI iteratively improve outputs through back-and-forth interaction rather than one-way automation

  • Cognitive forcing functions: Building in deliberate pauses or review requirements for consequential decisions, preventing reflexive acceptance of algorithmic recommendations
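Staged decision processes and cognitive forcing functions can both be enforced in the interaction layer. The sketch below, with hypothetical class and method names, simply refuses to reveal the AI recommendation until the human has recorded an independent initial assessment, reducing anchoring:

```python
class StagedDecision:
    """Reveal the AI recommendation only after the human has recorded
    an independent assessment -- a simple cognitive forcing function
    against anchoring on the algorithm's output."""

    def __init__(self, ai_recommendation):
        self._ai_recommendation = ai_recommendation
        self.human_initial = None

    def record_initial_assessment(self, assessment):
        self.human_initial = assessment

    def reveal_recommendation(self):
        if self.human_initial is None:
            raise RuntimeError("record an independent assessment first")
        return self._ai_recommendation


d = StagedDecision(ai_recommendation="flag for audit")
try:
    d.reveal_recommendation()
except RuntimeError:
    print("blocked until human assessment is recorded")

d.record_initial_assessment("no action")
print(d.reveal_recommendation())  # flag for audit
```

Because both the independent assessment and the eventual recommendation are retained, such a gate also yields data on how often the algorithm changes human minds, which feeds the monitoring and audit practices discussed earlier.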


Building Long-Term Agency-Sustaining Capabilities

Preserving human agency in AI-augmented work requires not just immediate interventions but sustained organizational capabilities that evolve with technology. Three capabilities appear particularly critical for the long term:


Dynamic Capability for Human-AI Reconfiguration


Organizations need capacity to continuously reassess and reconfigure human-AI task allocation as both technologies and work contexts evolve. What represents appropriate division of labor today may become inappropriate as AI capabilities advance, workforce skills develop, or organizational priorities shift.


This dynamic capability involves several elements. Organizations need sensing mechanisms that detect when current human-AI configurations are degrading worker agency or effectiveness. This might include regular capability assessments, monitoring of worker engagement and satisfaction, and systematic collection of feedback about AI tool usability and impact.


Reconfiguration processes enable organizations to adjust task allocation, interaction design, and governance structures based on evolving evidence. Rather than treating initial AI deployment decisions as fixed, organizations with this capability maintain flexibility to modify systems, reallocate tasks, or even withdraw technologies that prove harmful to worker agency or organizational effectiveness.


Learning systems capture and disseminate insights about effective human-AI collaboration across the organization. This includes documenting what works well, what challenges emerge, and what strategies workers develop for effective augmentation. Organizations that treat AI augmentation as an ongoing learning process rather than a one-time implementation develop superior capability to preserve agency while adapting to technological change.


Practical approaches include:


  • Regular "job crafting" reviews: Periodic sessions where workers and managers collaboratively assess and adjust how AI tools are integrated into work processes

  • Experimentation frameworks: Structures that enable controlled testing of different human-AI collaboration approaches with rigorous evaluation of both productivity and agency impacts

  • Cross-team learning networks: Communities that share experiences and innovations in AI-augmented work across organizational boundaries

  • Capability assessment protocols: Regular evaluation of both technological capabilities and human skill development to inform task allocation decisions

  • Sunset and modification procedures: Clear processes for modifying or discontinuing AI systems that prove detrimental to worker agency or organizational effectiveness
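One way to operationalize the sensing mechanisms described above is a simple drift check on worker-reported agency scores. The function below is an illustrative sketch, assuming a recurring survey item scored on a numeric scale (for example, 1 to 5): it flags a configuration for review when the recent rolling mean drops materially below the earlier baseline. The function name and thresholds are assumptions for illustration, not an established protocol.

```python
def flag_agency_degradation(scores, window=3, threshold=0.5):
    """Flag when the mean of the most recent `window` survey scores
    falls more than `threshold` below the baseline mean of all
    earlier periods. Returns False when history is too short."""
    if len(scores) < 2 * window:
        return False  # not enough history for a meaningful comparison
    baseline = sum(scores[:-window]) / len(scores[:-window])
    recent = sum(scores[-window:]) / window
    return (baseline - recent) > threshold
```

A flag raised here would not itself prove harm; it would trigger the reconfiguration and review processes described above, keeping interpretation in human hands.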


Distributed Agency and Collective Oversight


Individual agency in AI-augmented work depends partly on collective structures that enable workers to shape the broader systems within which they operate. Organizations can build capacity for what might be called "distributed agency"—workers' ability to influence not just their individual use of AI tools but the organizational systems and policies governing AI augmentation.


This involves creating meaningful channels for worker input into AI governance decisions, not as token consultation but as genuine collaboration in shaping how technologies are deployed and refined. Workers who use AI systems daily develop expertise about their capabilities, limitations, and impacts that complements technical specialists' knowledge. Organizations that create structures to capture this distributed expertise make better decisions about AI deployment while enhancing workers' sense of control over their work environment.


Worker councils or committees focused specifically on AI augmentation provide structured venues for collective voice. These bodies can review proposed deployments, recommend modifications to existing systems, and escalate concerns about tools that undermine worker agency or capability development.


Transparent AI roadmaps that workers can understand and influence ensure that technological change doesn't feel imposed without warning or rationale. When workers have visibility into planned AI deployments and a genuine opportunity to shape implementation approaches, they experience a greater sense of control even amid significant technological change.


Collective skill development recognizes that building capability for effective AI augmentation requires organizational investment, not just individual effort. Organizations committed to distributed agency create learning opportunities, provide time for skill development, and treat AI literacy as a collective organizational capability rather than an individual responsibility.


Approaches to distributed agency include:


  • Worker representation in AI strategy development: Including frontline workers in discussions about what capabilities to develop and how to deploy them

  • Collaborative system evaluation: Engaging workers in assessing new AI tools before broad deployment, with genuine authority to recommend modifications or rejection

  • Shared governance of algorithmic performance metrics: Involving workers in defining what constitutes "good" performance for AI systems beyond narrow technical metrics

  • Collective bargaining or agreements about AI deployment: Creating formal structures, where appropriate, that give workers negotiating power over how AI technologies are introduced and managed

  • Public documentation of AI governance decisions: Making the rationale for AI deployment decisions visible to affected workers, creating accountability for those decisions


Purpose-Aligned and Values-Centered AI Integration


The most sustainable approach to preserving agency may involve ensuring that AI augmentation aligns with clear organizational purpose and explicit values rather than being driven primarily by technical capabilities or competitive pressure.


Organizations that articulate what they're trying to accomplish with AI augmentation—what human capabilities they want to enhance, what work they want to make more meaningful, what outcomes they're trying to improve—create clearer frameworks for evaluating specific deployments. This purpose-driven approach helps distinguish between AI applications that genuinely serve organizational mission and human flourishing versus those that simply automate what's technically feasible.


Values-explicit design requires organizations to articulate what matters beyond efficiency and productivity. If autonomy, mastery, purpose, and human development are genuine organizational values, then AI systems should be evaluated not just on productivity metrics but on how they affect these dimensions of work. Making values explicit in system requirements and evaluation criteria ensures they influence implementation rather than being displaced by easily quantifiable metrics.


Human flourishing as a design criterion reframes the question from "what can we automate?" to "how can we use AI to make work more meaningful, capability-building, and human?" This perspective might lead to different deployment choices—for example, using AI to handle genuinely tedious tasks while preserving or expanding opportunities for complex judgment, creative synthesis, and skill development.


Long-term capability stewardship recognizes organizational responsibility for maintaining human expertise even as AI systems become more capable. This means making deliberate choices about what capabilities to preserve through continued human engagement versus what can be safely delegated to automated systems.


Approaches to purpose-aligned integration include:


  • Mission-driven AI strategy: Explicitly connecting AI deployment decisions to organizational purpose and values rather than treating technology as an end in itself

  • Multi-dimensional impact assessment: Evaluating proposed AI systems on dimensions including worker agency, capability development, and professional meaning alongside productivity and efficiency

  • Stakeholder impact modeling: Systematically considering how AI deployments affect different stakeholder groups, including workers at various experience levels, rather than optimizing for aggregate metrics

  • Capability preservation requirements: Building into AI deployment decisions explicit consideration of what human capabilities need to be maintained even when automation is technically feasible

  • Regular purpose alignment reviews: Periodically reassessing whether deployed AI systems continue to serve stated organizational values and mission or have drifted toward narrow optimization
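The multi-dimensional impact assessment described above can be sketched as a gating rule rather than a weighted average, so that high productivity scores cannot offset a below-floor agency score. This is an illustrative sketch under assumed dimension names and floor values; the function and rubric are hypothetical, not a standard instrument.

```python
def assess_deployment(scores, floors):
    """Approve a proposed AI deployment only if every dimension
    (e.g. productivity, worker agency, capability development)
    meets its minimum floor. Missing dimensions count as failing."""
    failing = [dim for dim, floor in floors.items()
               if scores.get(dim, 0) < floor]
    return {"approved": not failing, "failing_dimensions": failing}
```

The design choice here mirrors the values-explicit argument in the text: by evaluating each dimension against its own floor, worker agency and capability development become hard constraints rather than terms that efficiency gains can quietly outweigh.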


Conclusion

The integration of AI into organizational work presents fundamental questions about human agency that require thoughtful, evidence-based responses. The evidence reviewed in this article suggests that AI augmentation can either enhance or diminish worker agency depending on how systems are designed, deployed, and governed.


Organizations committed to preserving meaningful human control while capturing AI's potential benefits should focus on several priorities:


First, transparency and explainability must be designed for users' actual needs and contexts, not as technical compliance exercises. Workers need access to information that enables informed judgment about when to trust, question, or override algorithmic recommendations.


Second, procedural justice in AI deployment matters as much as technical sophistication. Involving workers in design decisions, providing clear rationale for implementations, and creating genuine voice in ongoing refinement builds both system quality and worker acceptance.


Third, capability development must be intentional and ongoing. Effective AI augmentation requires new skills in critical evaluation, hybrid expertise, and human-AI collaboration that don't develop automatically through exposure to systems.


Fourth, governance structures need to evolve to assign clear accountability and enable effective oversight in contexts where decisions emerge from complex human-AI interaction rather than purely human or purely algorithmic processes.


Finally, workflow design should structure human-AI collaboration to maintain cognitive engagement, preserve foundational skills, and support rather than undermine professional identity and meaning.


Looking forward, organizations will need dynamic capabilities to continuously reconfigure human-AI task allocation as technologies and contexts evolve. This requires treating AI augmentation not as a one-time implementation but as an ongoing organizational learning process. It requires distributed agency structures that give workers collective voice in shaping the systems that mediate their work. And it requires explicit commitment to values and purpose that extend beyond narrow productivity optimization to encompass human flourishing and long-term capability development.


The technical capabilities of AI systems will continue to advance rapidly. The critical question is not what can be automated but what should be automated, and how augmentation can be structured to enhance rather than diminish the human capacities for judgment, creativity, adaptation, and meaningful work that remain essential to organizational effectiveness and individual wellbeing. Organizations that address these questions thoughtfully will be better positioned not just to deploy AI successfully but to create work environments where both humans and intelligent systems contribute to their full potential.



Jonathan H. Westover, PhD is Chief Research Officer (Nexus Institute for Work and AI); Associate Dean and Director of HR Academic Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).

Suggested Citation: Westover, J. H. (2026). Human Agency in AI-Augmented Work: Building Meaningful Control in the Age of Intelligent Systems. Human Capital Leadership Review, 32(4). doi.org/10.70175/hclreview.2020.32.4.1

Human Capital Leadership Review

eISSN 2693-9452 (online)
