Quantifying and Optimizing Human-AI Synergy: Evidence-Based Strategies for Adaptive Collaboration
RESEARCH BRIEFS
11 hours ago
28 min read
Neuroscience Hacks to Enhance Learning Agility in Leaders: A Practitioner's Guide to Brain-Based Development
CATALYST CENTER FOR WORK INNOVATION
1 day ago
37 min read
Polymathic Leadership in Industry 5.0: Bridging Human Ingenuity and Technological Transformation
NEXUS INSTITUTE FOR WORK AND AI
2 days ago
20 min read
Cognitive Frameworks for Organizational Performance and Innovation
RESEARCH BRIEFS
3 days ago
24 min read
To Unlock the Full Value of AI, Invest in Your People: Building Capability Systems That Translate Adoption into Business Impact
RESEARCH BRIEFS
4 days ago
20 min read
The Leadership Aspiration Crisis: Why High-Performers Are Declining Advancement and What Organizations Must Do
RESEARCH BRIEFS
5 days ago
24 min read
The Hidden Costs of Open-Plan Offices: What Research Reveals About Employee Well-Being and Performance
RESEARCH BRIEFS
6 days ago
22 min read
Embracing Lifelong Learning: How Organizations Can Foster a Culture of Continuous Development While Achieving Goals
CATALYST CENTER FOR WORK INNOVATION
Jan 10
6 min read
When Innovation Feels Like Betrayal: Why Trust, Not Technology, Determines AI Adoption
NEXUS INSTITUTE FOR WORK AND AI
Jan 9
14 min read
AI-Enabled People Analytics and the Emerging Crisis of Managerial Accountability
NEXUS INSTITUTE FOR WORK AND AI
Jan 8
20 min read
Human Capital Leadership Review
Survey: CEOs Start 2026 on Edge, Citing Uncertainty as Top Threat
2 days ago
4 min read
Nothing to See Here: Nearly Half of Employees Hide Their AI Use at Work
2 days ago
4 min read
Why Rural Resilience Needs Ethics, Not Just Technology
4 days ago
4 min read
How Better Organizational Navigation Reduces Workplace Friction
4 days ago
5 min read
HCL Review Research Videos
Featuring scholarly and practitioner insights from HR and people leaders, industry experts, and researchers.
36:34
AI Adoption as Screening Design: When Candidate Choice Becomes Signal, by Jonathan H. Westover PhD
15:47
Leading With Hope When Hope Feels Lost: An Evidence-Based Framework for Resilient Leadership, by ...
Abstract: Leaders across sectors increasingly report difficulty sustaining hope amid accelerating crises, information overload, and fractured social trust. This article synthesizes psychological research on hope theory with organizational scholarship on sensemaking and leadership to offer evidence-based strategies for cultivating and communicating hope during prolonged uncertainty. Drawing on Snyder's hope theory, recent multidimensional models of hope, and research on adaptive leadership, we examine why hope feels uniquely challenging in contemporary organizational contexts and outline six practical domains—cognitive, affective, behavioral, social, spiritual/existential, and developmental—through which leaders can strengthen their own hope and foster collective resilience. Case examples from healthcare, technology, education, and manufacturing illustrate how organizations sustain hope through transparent communication, distributed sensemaking, and deliberately designed moments of collective efficacy. The article concludes that hope is not merely an emotional state to be recovered but a dynamic, relational capacity that leaders can intentionally practice and amplify, even—and especially—when it feels most elusive.
04:01
The Trust Paradox: Bridging the Cognitive and Emotional AI Gap
This presentation explores the adoption paradox where high organizational investment in artificial intelligence often results in failure due to misaligned human trust. It distinguishes between cognitive trust, based on rational competence, and emotional trust, rooted in affective safety, identifying four distinct psychological configurations that dictate how employees interact with AI. When these trust dimensions are lacking, workers may manipulate or withdraw their digital data, creating a negative feedback loop that degrades the system’s accuracy and utility. To resolve this, the author advocates for holistic strategies that include transparent communication, ethical governance, and psychological support rather than relying on technical improvements alone. Ultimately, the source argues that successful implementation depends on managing the human-AI relationship through a culture of procedural justice and individual empowerment.
10:16
AI Adoption Fails 80% of the Time—Here’s the Trust Playbook
The video discusses the widespread excitement and challenges surrounding artificial intelligence (AI) adoption in the workplace. Despite massive investments and promises that AI will revolutionize productivity, up to 80% of AI projects fail due to mistrust and poor implementation. AI functions primarily through automation or augmentation, but its success hinges on building two types of trust among users: cognitive trust (the rational belief that AI is reliable) and emotional trust (the feeling of safety and fairness when using AI). These trust dynamics create four user groups—full trust, full distrust, uncomfortable trust, and blind trust—each influencing how people interact with AI. Poor trust leads to negative behaviors such as withdrawing or manipulating data, which in turn deteriorate AI performance, causing a vicious cycle of failure and workplace conflict.
Highlights:
🤖 AI projects often fail despite high investment, with up to 80% not meeting expectations.
🧠 Cognitive trust (rational belief in AI’s competence) and ❤️ emotional trust (feeling safe and respected) are both essential for AI adoption.
🔄 Low trust leads to behaviors like withdrawing and manipulating data, causing AI to perform poorly and perpetuate failure.
🚦 Four user trust profiles exist: full trust, full distrust, uncomfortable trust, and blind trust, each affecting AI use differently.
🔍 Transparency, training, and staged rollouts build cognitive trust by making AI understandable and reliable.
🤝 Emotional trust stems from fairness, employee involvement, psychological safety, and clear ethical guidelines.
🔑 Leaders must diagnose trust levels and apply tailored strategies to foster a healthy AI-human partnership.
Key Insights:
🤖 AI as a Tool, Not Magic: AI is fundamentally about pattern recognition and prediction based on data, functioning either by automating tasks or augmenting human decision-making. Recognizing AI as a mathematical tool rather than a magical solution tempers unrealistic expectations and focuses efforts on practical integration. This grounded view helps organizations approach AI deployment with a clear understanding of its capabilities and limitations.
🧠❤️ Dual Nature of Trust is Critical: Successful AI adoption depends equally on cognitive trust—users’ rational confidence in AI’s accuracy and reliability—and emotional trust—their sense of safety and fairness in using AI. Ignoring either dimension leads to rejection or misuse of AI. For instance, even a perfectly performing AI will be rejected if users feel it threatens their job security or invades their privacy. This dual trust model highlights the complexity of human-AI interaction beyond technical performance.
🔄 Vicious Cycle of Distrust and Poor Data Quality: Low trust causes users to withdraw, restrict, or manipulate AI usage, which starves or poisons the AI’s learning process. AI systems depend on good data to improve; when fed poor or biased data, their performance degrades, further eroding trust. This negative feedback loop explains why many AI initiatives fail and emphasizes that technology alone cannot solve trust issues—it requires social and organizational interventions.
🚦 User Trust Profiles Shape AI Adoption: People’s trust in AI falls into four quadrants: full trust (enthusiasts who engage deeply), full distrust (skeptics who avoid AI), uncomfortable trust (users who rely on AI but feel uneasy), and blind trust (users who accept AI without question). Understanding these profiles helps leaders tailor communication and training strategies to different groups, improving overall adoption rates.
🔍 Transparency and Gradual Rollout as Foundations for Cognitive Trust: Clear explanations of AI functions and limitations, combined with incremental deployment, demystify AI and set realistic expectations. Piloting AI with small groups allows iterative improvement based on user feedback, reducing fear and resistance. Companies like Salesforce and Microsoft exemplify this approach, showing that transparency and patience build confidence.
🤝 Fairness, Inclusion, and Psychological Safety Build Emotional Trust: Emotional trust emerges when employees feel involved in AI development and deployment, believe processes are fair, and experience a safe environment to express concerns. Co-creation of AI tools, as practiced by Unilever’s HR team, fosters ownership and acceptance. Leaders must cultivate empathy and maintain ethical AI governance to prevent fears about surveillance or job displacement, which undermine trust.
If this helped, please like and share the video. #AI #AIAdoption #TrustInAI #EthicalAI #OrganizationalChange
OUTLINE:
00:00:00 - Why Are We So Bad at This?
00:01:17 - Are You a Believer, a Skeptic, or Just Confused?
00:03:46 - How Bad Vibes Create Bad AI
00:06:10 - It’s Not Rocket Science, It’s People Science
00:08:12 - Fixing AI Is About Fixing How We Work Together
16:32
A Conversation about Leading With Hope When Hope Feels Lost
This conversation explores hope as a learnable, multidimensional skill that is essential for effective leadership during times of chronic crisis and uncertainty. It moves beyond viewing hope as a mere feeling, instead defining it as a dynamic capacity involving goal-setting, agency, and social trust. Leaders are encouraged to utilize evidence-based strategies such as transparent communication, distributed decision-making, and the celebration of small wins to foster organizational resilience. The speakers argue that material supports and purpose-driven cultures are necessary foundations for psychological well-being and sustained performance. By treating hope as a developmental practice rather than an innate trait, organizations can improve engagement and navigate complex environments more effectively. Ultimately, the conversation demonstrates that intentional leadership practices can restore collective confidence even when traditional sources of stability have eroded.
25:06
The Leadership Rhythm That Shapes Tomorrow, with Jonathan Escobar Marin
In this HCI Webinar, I talk with Jonathan Escobar Marin about his recent book, "LEAD TO BEAT: The Leadership Rhythm That Shapes Tomorrow." Jonathan Escobar Marin has been bridging the nuanced gap between strategy and execution – and between executives and their associates – for the world's leading corporations for over two decades. His distinguished efforts have resulted in an exceptional track record, marked by over 320 successful transformations across more than 32 countries. After leaving school at sixteen, he later transformed adversity into advantage, ultimately guiding global boards and C-suites across FMCG, retail, pharmaceutical, and tech sectors, where he has consistently empowered firms to outperform markets and dismantle organizational complacency.
41:06
The Hidden Cost of Trust Misalignment: How Emotional and Cognitive Dissonance Undermines AI Adoption
Abstract: Artificial intelligence adoption in organizations fails at rates approaching 80%, despite substantial investment and strategic priority. This article synthesizes findings from a real-world qualitative study tracking AI implementation in a software development firm to reveal how organizational members develop four distinct trust configurations—full trust, full distrust, uncomfortable trust, and blind trust—each triggering different behavioral responses that fundamentally shape AI performance and adoption outcomes. Unlike previous research assuming use/non-use as the primary behavioral outcome, this analysis demonstrates that organizational members actively detail, confine, withdraw, or manipulate their digital footprints based on trust configurations, creating a vicious cycle where biased or asymmetric data degrades AI performance, further eroding trust and stalling adoption. The article offers evidence-based interventions addressing both cognitive trust (through transparency, training, and realistic expectation-setting) and emotional trust (through psychological safety, ethical governance, and leadership emotional contagion), while highlighting the critical insight that organizational culture alone cannot guarantee AI adoption success. Organizations must develop personalized, trust-configuration-specific strategies that recognize the intricate interplay between rational evaluation and emotional response in technology adoption.
17:33
Neuroscience Hacks to Enhance Learning Agility in Leaders: A Practitioner's Guide to Brain-Based Development
Abstract: Learning agility—the capacity to rapidly learn from experience and apply that learning to novel, complex challenges—has emerged as a critical predictor of leadership potential and performance. This article synthesizes current neuroscience research with the five widely studied dimensions of learning agility: mental agility, people agility, change agility, results agility, and self-awareness. Drawing on Williams and Nowack's (2022) neuroscience framework and broader empirical evidence, we examine how specific brain structures and neural pathways underpin each dimension and translate these insights into evidence-based organizational interventions. Organizations face mounting pressure to identify and develop adaptive leaders capable of navigating volatility, uncertainty, complexity, and ambiguity. Understanding the neurobiological foundations of learning agility enables practitioners to design more effective development programs that leverage brain plasticity, optimize cognitive and emotional regulation, and accelerate behavioral change. We present concrete, research-validated strategies spanning cognitive reappraisal techniques, sleep optimization protocols, mental rehearsal practices, and feedback design principles that consulting psychologists, executive coaches, and talent development professionals can implement immediately. The integration of neuroscience with learning agility research offers a promising pathway to enhance leadership effectiveness while advancing our theoretical understanding of adult development and organizational learning.
HCL Review Research Infographics
HCL Review Articles
All Articles
Nexus Institute for Work and AI
Catalyst Center for Work Innovation
Adaptive Organization Lab
Work Renaissance Project
Research Briefs
Research Insights
Webinar Recaps
Book Reviews
Transformative Social Impact
Jun 19, 2025
6 min read
RESEARCH INSIGHTS
The Habits of High-Trust Teams and High-Trust Organizations