Beyond Hiring Metrics: Developing a Holistic Approach to Measure Quality of Hire
CATALYST CENTER FOR WORK INNOVATION
8 hours ago
7 min read
Collaborating with People You Don't Like
RESEARCH BRIEFS
1 day ago
7 min read
HR's Vital Role in Advocating for and Protecting Employees in an Unhealthy Workplace
ADAPTIVE ORGANIZATION LAB
2 days ago
6 min read
Reaching Untapped Talent: Strategies for Identifying and Developing High Potential Employees
CATALYST CENTER FOR WORK INNOVATION
3 days ago
7 min read
Unlocking Human Potential: A Practitioner's Guide to Motivation Theory in Organizational Settings
RESEARCH BRIEFS
4 days ago
8 min read
Advancing Data Literacy for Better Problem-Solving
CATALYST CENTER FOR WORK INNOVATION
5 days ago
5 min read
Work-Related Factors and Cognitive Health: Evidence-Based Insights for Organizational Practice
CATALYST CENTER FOR WORK INNOVATION
6 days ago
31 min read
A Shorter Workweek as a Policy Response to AI-Driven Labor Displacement: Economic Stabilization in the Age of Automation
NEXUS INSTITUTE FOR WORK AND AI
Feb 9
26 min read
Design Thinking: An Essential Framework for Innovating in Uncertain Times
CATALYST CENTER FOR WORK INNOVATION
Feb 8
8 min read
Leaders Who Don't Listen: An Ongoing Organizational Struggle
CATALYST CENTER FOR WORK INNOVATION
Feb 7
7 min read
Human Capital Leadership Review
Building Olympic-Caliber Teams in the Age of AI
4 days ago
4 min read
How to Design a Benefits Package That Actually Attracts Gen Z Talent
5 days ago
4 min read
The Top U.S. States Where Work Stress Is Driving Early Aging, According to New Study
5 days ago
6 min read
Nearly 1 in 10 U.S. Workers Admit to a Workplace Affair in the Past Year, New Data Reveals
5 days ago
4 min read
HCL Review Research Videos
Human Capital Innovations
05:35
AI Makes School Easier—But At What Cost?
Explore Dr. Jonathan H. Westover’s “Beyond Learning Outcomes: The Hidden Costs of AI in Education” in this concise breakdown. We unpack how AI boosts productivity but can cause cognitive offloading, skill atrophy, equity gaps, academic integrity issues, and reduced learner agency. Learn evidence-based interventions—assessment redesign, AI literacy, transparent policies, and adaptive governance—that preserve human learning while harnessing AI’s benefits. Perfect for educators, administrators, instructional designers, and students who want practical strategies to balance innovation with long-term capability development. If this video helps you, please like and share to spread the conversation.

#AIinEducation #EdTech #AcademicIntegrity #AILiteracy #DrJonathanHWestover

OUTLINE:
00:00:00 - AI's Rapid Rise
00:01:15 - Cognitive Offloading and Skill Atrophy
00:02:25 - Equity and Access in the AI Era
00:03:25 - Rethinking Assessment in a World of AI
00:04:23 - Practical Steps for Educational Leaders
03:10
Capability Over Performance
This research examines the paradoxical relationship between artificial intelligence and modern education, noting that immediate efficiency gains often mask long-term cognitive skill atrophy. While AI tools offer personalized learning and reduced administrative burdens, they also threaten academic integrity and the development of critical thinking through excessive cognitive offloading. The research highlights a growing equity gap, where students with existing advantages use technology to accelerate while others become dependent on it as a substitute for learning. To mitigate these risks, the research advocates for assessment redesigns that prioritize the learning process and institutional frameworks that build resilience through transparent communication. Ultimately, the research argues for a balanced pedagogical approach that leverages technological power without sacrificing the essential human struggle required for true intellectual mastery.
03:31
The AI Washing Scam: Why Jobs Vanish Before AI Works
This video explores the widespread narrative of an impending AI-driven job apocalypse and contrasts it with actual data and expert analysis. Although headlines claim mass layoffs due to AI, the numbers reveal a more nuanced reality. In 2025, US companies announced approximately 1.2 million job cuts, but only about 4.5% of these (around 55,000) were directly attributed to AI, a modest increase from 2023 but still a small fraction overall. Experts term this phenomenon “AI washing,” where companies use AI as a buzzword to justify layoffs before AI technologies are fully capable of replacing human labor. Many firms lack mature AI systems able to deliver promised efficiencies, with over half reporting minimal benefits from AI integration.

Highlights
🤖 AI layoffs in 2025 accounted for only about 4.5% of total job cuts, much lower than popular fears suggest.
📉 “AI washing” is a trend where companies prematurely link layoffs to AI without mature technology to back it up.
😰 Worker anxiety about AI has surged from 28% to 40% between 2024 and 2026 despite minimal job mix changes.
⏳ The productivity paradox means AI often causes initial slowdowns before improvements are seen.
🏢 Case studies show layoffs are often due to broader factors like culture and pandemic effects, not just AI.
🔑 Best practices for AI integration include transparency, fairness, retraining, and involving employees in redesigning work.
🌟 Companies that pair AI with human skill development achieve better products, customer retention, and resilience.

Key Insights
🤔 The Discrepancy Between Public Perception and Data: Despite widespread media coverage suggesting an AI-driven job apocalypse, empirical data reveals that AI-related layoffs remain a small fraction of total cuts. This highlights the need for critical evaluation of headlines and encourages decision-makers to rely on robust metrics rather than fear-based narratives. The public’s anxiety, while understandable, is not fully supported by current labor market trends, indicating a lag between perception and reality.
🧩 “AI Washing” Undermines Trust and Effectiveness: The practice of attributing layoffs to AI before the technology is mature enough to replace jobs creates a credibility gap. This “AI washing” damages institutional trust, discourages genuine AI adoption, and risks losing valuable institutional knowledge prematurely. Organizations must resist short-term PR gains in favor of long-term strategic integrity to ensure sustainable transformation.
⏳ The Productivity Paradox Requires Patience and Adaptation: AI’s impact on productivity is complex. While managers may observe significant efficiency gains, frontline workers often do not experience immediate benefits and may even face slowdowns. This J-curve effect—initial productivity dips followed by gains—reflects the need for organizational learning, workflow redesign, and cultural adjustments. Companies must be patient and invest in change management rather than expecting instant returns.
🤝 Human-Centered AI Integration is Crucial: The most successful AI implementations focus on complementing human strengths such as judgment, creativity, and relationships rather than attempting outright replacement. This requires transparent communication about AI’s current capabilities and limitations, collaborative work redesign involving employees, and clear criteria for what tasks get automated. Such inclusivity fosters trust and smoother transitions.
🛠️ Skill Building and Procedural Justice Drive Sustainable Outcomes: Offering meaningful retraining, redeployment options, and fair severance packages is essential. Training employees to supervise and interpret AI outputs ensures quality control and helps build internal capabilities. Procedural justice not only supports ethical workforce management but also improves organizational resilience and product quality as AI tools mature.
🏛️ Government Role in AI Oversight: Regulatory bodies should establish performance gates for AI systems before allowing human job displacement. Continuous monitoring and readiness to roll back AI deployments if quality or safety declines are vital. This oversight protects workers and consumers while encouraging responsible AI development.
🌐 Strategic Focus on Metrics Over Narratives: Organizations that prioritize measurable outcomes over hype—tracking real productivity improvements, customer satisfaction, and employee engagement—are better positioned to capitalize on AI’s potential. Chasing narratives without substance risks wasting resources and alienating key stakeholders.

The video’s call to action invites real-world input, emphasizing the value of data-driven decision-making and shared experiences in navigating AI’s evolving landscape. Like and share to spread the conversation.

#AIWashing #Layoffs #ProductivityParadox #Automation #Reskilling #ResponsibleAI
04:45
AI Washing and Phantom Productivity
This research explores the rising trend of AI-washing, a practice where executives falsely attribute workforce reductions to artificial intelligence to mask traditional cost-cutting motives. Research indicates a significant misalignment between the massive surge in AI-related layoffs and the actual, limited deployment of functional automation technology. These premature staff cuts often lead to institutional knowledge loss, diminished employee trust, and a long-term decline in innovation capacity. The research argues that organizations should instead view technology as a complement to human expertise through transparent communication and robust upskilling initiatives. Ultimately, sustainable success depends on evidence-based integration rather than using speculative automation as a convenient rhetorical shield for restructuring.
16:17
A Conversation about AI-Washing and the Phantom Productivity Paradox
This conversation explores the rising trend of AI-washing, a practice where executives falsely attribute workforce reductions to artificial intelligence to mask traditional cost-cutting motives. Research indicates a significant misalignment between the massive surge in AI-related layoffs and the actual, limited deployment of functional automation technology. These premature staff cuts often lead to institutional knowledge loss, diminished employee trust, and a long-term decline in innovation capacity. The conversation argues that organizations should instead view technology as a complement to human expertise through transparent communication and robust upskilling initiatives. Ultimately, sustainable success depends on evidence-based integration rather than using speculative automation as a convenient rhetorical shield for restructuring.
17:10
A Conversation about Going Beyond Learning Outcomes and Navigating the Hidden Costs of Educationa...
This conversation explores the paradoxical relationship between artificial intelligence and modern education, noting that immediate efficiency gains often mask long-term cognitive skill atrophy. While AI tools offer personalized learning and reduced administrative burdens, they also threaten academic integrity and the development of critical thinking through excessive cognitive offloading. They highlight a growing equity gap, where students with existing advantages use technology to accelerate while others become dependent on it as a substitute for learning. To mitigate these risks, they advocate for assessment redesigns that prioritize the learning process and institutional frameworks that build resilience through transparent communication. Ultimately, they argue for a balanced pedagogical approach that leverages technological power without sacrificing the essential human struggle required for true intellectual mastery.
13:57
The Missing Piece: Org Design for Scalable Agentic AI
This video explores the innovative approach of agentic AI, where multiple specialized AI agents collaborate as a team to accomplish complex tasks more efficiently than a single large AI system. This method draws inspiration from human organizational structures, emphasizing the importance of coordination and communication among agents. However, as the number of agents increases, the system faces challenges such as communication breakdowns, duplicated efforts, confusion, and rising costs. These issues mirror long-standing problems in human organizations as they scale. The key to overcoming these problems lies not in making individual AI agents smarter but in designing better organizational structures for their interactions.

Highlights
🤖 Agentic AI uses many specialized AI agents working together instead of one large AI.
🧩 Coordination breaks down as agent numbers grow, causing inefficiency and errors.
🏢 Organizational design, not just smarter agents, is key to scaling AI systems.
📊 Span of control principle limits how many agents a single manager can effectively oversee.
📄 Boundary objects serve as structured, shared tools for clear communication between agents.
🔗 Managing task coupling balances communication needs and autonomy for efficiency.
🚀 Hierarchical AI teams with middle managers improve scalability, reduce costs, and boost reliability.

Key Insights
🤝 Agentic AI draws directly from human organizational principles: The challenges faced by growing teams of AI agents mirror those long known in human organizations, such as communication overhead, coordination problems, and managerial bottlenecks. This parallel suggests that effective AI system design should leverage established management theories rather than solely focusing on technological advancement. Understanding this connection provides a roadmap for building scalable AI teams by adapting proven organizational structures.
🧠 Intelligence of individual agents is insufficient without smart team design: Simply improving the capabilities of each AI agent does not solve systemic issues arising from poor interaction design. The intelligence of the collective depends heavily on how agents communicate, delegate, and coordinate their work. This shifts the focus from isolated AI research to interdisciplinary approaches involving organizational behavior, systems engineering, and human factors.
📏 Span of control limits are critical for AI team performance: Research in human organizations shows that managers can effectively oversee only about five to seven direct reports. Applying this to AI means no single orchestrator should manage dozens or hundreds of agents directly. Ignoring this leads to overwhelmed orchestrators, excessive communication costs, and fragile systems prone to failure. Designing hierarchical layers with middle managers empowers better focus, delegation, and error containment.
🗂 Boundary objects reduce ambiguity and improve communication efficiency: By replacing unstructured chat with structured, shared artifacts, AI agents gain a common reference point that streamlines information exchange. This approach makes handoffs explicit, reduces misunderstandings, and creates an auditable workflow. It also respects AI context window limitations by distilling only relevant data, mitigating hallucinations and costly errors.
🔄 Calibrating coupling between tasks optimizes collaboration: Recognizing when tasks require tight integration versus independent parallel work prevents communication bottlenecks and integration failures. Highly coupled tasks demand frequent, synchronous interactions, while loosely coupled tasks benefit from autonomy and asynchronous checkpoints. This nuanced management of dependencies enhances system responsiveness and throughput.
🧩 Hierarchical team structures enhance robustness and scalability: Introducing middle manager agents creates manageable clusters of worker agents, simplifying communication pathways and enabling the top-level orchestrator to focus on strategic goals. This structure distributes coordination duties and localizes failures, making the system more resilient and cost-effective.

If you found this useful, please like and share to spread the ideas.

#AgenticAI #OrganizationalTheory #MultiAgentSystems #AIorgDesign #SpanOfControl

OUTLINE:
00:00:00 - Why Large AI Teams Fail
00:01:20 - Lessons From Organizations
00:02:42 - Managing Agent Span of Control
00:04:02 - Build Hierarchies That Scale
00:05:06 - The Power of Boundary Objects
00:06:40 - Boundary Objects in Action
00:07:56 - Calibrating Agent Interdependence
00:09:29 - Designing Checkpoints and Memory
00:11:01 - A Practical Guide for Leaders
00:13:03 - Review, Results, and Leader Playbook
03:49
Organizational Design for Agentic AI
This research explores how organizational theory can solve the coordination failures currently hindering multi-agent AI systems. While modern AI agents are technically advanced, they often struggle with information degradation and excessive overhead when working in large groups. The research argues that developers should apply human management principles, such as span of control and structured communication protocols, to design more reliable hierarchies. By using boundary objects and calibrated coupling mechanisms, organizations can prevent the "telephone game" effect and improve token efficiency. Ultimately, the research suggests that the future of scalable AI depends on viewing these systems through the lens of organizational design rather than just technical capability. This shift in perspective aims to move agentic workflows from unreliable prototypes to stable, economically viable enterprise solutions.
Blog: HCI Blog
Human Capital Leadership Review
Featuring scholarly and practitioner insights from HR and people leaders, industry experts, and researchers.
HCL Review Research Infographics
HCL Review Articles
All Articles
Nexus Institute for Work and AI
Catalyst Center for Work Innovation
Adaptive Organization Lab
Work Renaissance Project
Research Briefs
Research Insights
Webinar Recaps
Book Reviews
Transformative Social Impact
Mar 4, 2025
4 min read
CATALYST CENTER FOR WORK INNOVATION
Building a Culture Where All Can Thrive