The AI Skills Paradox: Why Meta-Competencies Trump Technical Know-How in the Age of Intelligent Automation
RESEARCH BRIEFS
4 days ago
20 min read
Quiet Cracking: The Silent Erosion of Employee Engagement and the Strategic Imperative of Purpose-Driven Leadership
RESEARCH BRIEFS
5 days ago
20 min read
AI Shaming in Organizations: When Technology Adoption Threatens Professional Identity
RESEARCH BRIEFS
6 days ago
27 min read
The Hidden Cost of Being "Good": Rethinking Academic Excellence and Early Career Researcher Wellbeing
Dec 3
17 min read
Restructuring for AI: The Power of Small, High-Agency Teams and the Path to Enterprise-Scale Coordination
RESEARCH BRIEFS
Dec 2
17 min read
Beyond Credentials: How Skills-Based Hiring Drives Organizational Performance and Social Equity
RESEARCH BRIEFS
Dec 1
19 min read
Human Capital Leadership Review
Mastering the AI Capability Gap: Why Domain Experts Must Lead AI Integration Before the Window Closes
RESEARCH BRIEFS
2 hours ago
16 min read
Leading Through Change by Strengthening Human Capacity
23 hours ago
6 min read
How quickly Fortune 500 CEOs earn your annual salary
1 day ago
3 min read
The GenAI Divide: Why 95% of Enterprise AI Investments Fail—and How the 5% Succeed
RESEARCH BRIEFS
1 day ago
34 min read
Gen X Underestimated Retirement. Now, They’re Not Sure They Can Catch Up
2 days ago
6 min read
Survey Reveals 70% of Workers Believe Nepotism is Alive and Well in U.S. Workplaces
2 days ago
3 min read
When the Going Gets Tough: Identifying and Overcoming Burnout as a Sign it May be Time for a New Job Opportunity
LEADERSHIP IN PRACTICE
2 days ago
7 min read
Gen Z Is Sending a Warning About Post-Layoff Culture and Leaders Should Pay Attention
3 days ago
4 min read
From Silence to Stewardship: Business Faculty Responses to Administrative Incompetence
RESEARCH BRIEFS
3 days ago
24 min read
HCL Review Videos
24:35
Performance Management, Workplace Dynamics, and Employer Liabilities, with Mark F. Kluger
In this podcast episode, Dr. Jonathan H. Westover talks with Mark F. Kluger about performance management, workplace dynamics, and employer liabilities. Mark F. Kluger practices exclusively in the area of labor and employment law on behalf of employers. For ten years before founding Kluger Healey, LLC, he was Chairman of the Labor and Employment Department of one of New Jersey's oldest law firms. Mark is a frequent speaker and writer on sexual harassment and discrimination avoidance, workplace diversity, performance management, union avoidance, and a range of other employment-related subjects, and he regularly conducts training sessions for employers on these critical topics. In addition, Mark has extensive experience counseling employers on issues involving discipline and discharge, reductions in force, mergers and acquisitions, and compliance with wage and hour, disability, COBRA, and family and medical leave laws. He regularly drafts all forms of employment policies and handbooks, severance agreements, employment contracts, non-competition and confidentiality agreements, and affirmative action plans. Mark also represents employers in collective bargaining, grievance arbitration, NLRB proceedings, and picket line issues. Mark graduated from Vassar College in 1984 and Cornell University Law School in 1987. He was an Adjunct Professor at Seton Hall Law School from 1991 to 1996 and served as a member and President of the Board of Education in North Caldwell, New Jersey from 2002 to 2008. Check out all of the podcasts in the HCI Podcast Network (https://www.podbean.com/podcast-network/HCI)!
06:11
I Used AI to Teach Culture—Here’s What Actually Works
This video explores the pervasive use of AI tools by students, revealing that 89% of students rely on AI for schoolwork, yet most lack the skills to judge whether AI's output is accurate. This dilemma cannot be resolved by banning AI, as these tools have become integral to education. Instead, educators must teach students how to use AI responsibly and critically, much like teaching someone to drive a car safely. The presenter introduces Geert Hofstede's Six Dimensions of Culture as a practical framework to help students understand cultural differences in a structured way. The educational approach has students use AI to compare two countries on one cultural dimension, then refine their AI queries through prompt engineering to improve the quality of the responses (a minimal sketch of that refinement step follows this summary).
Highlights
🤖 89% of students use AI tools for schoolwork but often can't verify the accuracy of AI's answers.
🚫 Banning AI tools in education is futile; instead, responsible use and critical thinking must be taught.
📚 Hofstede's Six Dimensions of Culture provide a structured lens for understanding cultural differences.
💡 Students learn prompt engineering to improve AI query quality and results.
🔍 Verification is key: students compare AI outputs with authoritative sources to detect errors.
🤝 Group discussions encourage collaborative critique and exploration of AI's biases and errors.
⚖️ Ethical implications of blind AI trust are discussed, highlighting real-world risks and responsibilities.
Key Insights
🤖 AI as a Double-Edged Sword in Education: The widespread use of AI tools by students offers unprecedented access to information and assistance but also introduces risks of misinformation and overreliance. Teaching students to critically evaluate AI responses transforms AI from a potential liability into an educational asset, fostering digital literacy and critical thinking skills essential for the modern age.
🛠️ Frameworks Anchor Abstract Concepts: Culture is an inherently complex and abstract concept. Using Hofstede's Six Dimensions gives students concrete criteria to analyze and compare cultures systematically. This structured approach prevents superficial judgments and promotes nuanced understanding, showing how theoretical frameworks can enhance critical engagement with AI-generated content.
🎯 Prompt Engineering Enhances AI Utility: The process of refining AI prompts to elicit more precise and relevant answers demonstrates the importance of question design. Students learn that vague queries produce generic, often unsatisfactory results, whereas specific, well-crafted questions lead to richer, more accurate AI responses. This skill is transferable beyond education, applicable to all AI interactions.
🔎 Verification Builds Critical Thinking: By systematically comparing AI outputs with trusted sources, students actively engage in fact-checking, recognizing AI's limitations and errors. This hands-on verification cultivates skepticism and analytical skills, essential in a digital landscape where misinformation spreads easily and AI outputs are not inherently reliable.
🤝 Collaborative Learning Enhances Understanding: Small group discussions create a supportive environment for students to share doubts and discoveries, normalizing AI errors and reducing stigma around being "wrong." This peer interaction reinforces learning, encourages diverse perspectives on AI's strengths and weaknesses, and promotes collective problem-solving.
⚙️ Understanding AI's Mechanisms Demystifies Errors: Encouraging students to consider why AI makes specific mistakes, such as bias from limited training data or the probabilistic nature of language models, deepens their comprehension of AI technology. This meta-cognitive approach helps students see AI not as an infallible oracle but as a tool shaped by human input and limitations.
⚖️ Ethical Awareness is Crucial for Responsible AI Use: Discussing the consequences of blindly trusting AI, particularly in high-stakes situations like international negotiations, raises students' awareness of the ethical dimensions of AI reliance. It underscores the importance of responsible use, accountability, and the potential real-world impact of misinformation, preparing students to be conscientious users and creators in an AI-driven world.
Like and share if this helps your course planning; comments welcome.
#AIinEducation #CrossCulturalManagement #Hofstede #CriticalAILiteracy #ExperientialLearning
OUTLINE:
00:00:00 - The AI Paradox in the Classroom
00:01:09 - Using Hofstede's Dimensions
00:02:29 - Prompting and Verifying
00:03:44 - Making Sense of It All
00:04:39 - Building Smarter Students, Not Just Smarter Tools
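To make the prompt-refinement step concrete, here is a minimal sketch of the kind of vague-versus-refined comparison the exercise describes. It is an illustration only, not code from the video: the query_model helper is a hypothetical placeholder for whatever AI assistant a class already uses, and the example prompts are invented to show the pattern of naming the framework, restricting to a single dimension, and asking for checkable evidence.

```python
# Hypothetical sketch of the prompt-refinement exercise described above.
# `query_model` is a placeholder, not a real API from the video or any
# specific vendor; wire it to whatever AI assistant your class uses.

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to an AI assistant and return its reply."""
    raise NotImplementedError("Connect this to the chat tool your students use.")

# A vague prompt: tends to produce generic output that is hard to verify.
vague_prompt = "Compare Japan and Brazil."

# A refined prompt: names the framework, restricts the comparison to one
# dimension, and asks for evidence, so students can check the answer
# against Hofstede's published country scores.
refined_prompt = (
    "Using Hofstede's cultural dimensions, compare Japan and Brazil on "
    "Uncertainty Avoidance only. State each country's published index score, "
    "give one workplace example per country, and name the source you used."
)

if __name__ == "__main__":
    for label, prompt in [("Vague", vague_prompt), ("Refined", refined_prompt)]:
        print(f"--- {label} prompt ---")
        print(prompt)
        # In class, students would submit each prompt and compare the replies:
        # print(query_model(prompt))
```

Students then verify the refined answer against authoritative sources, which is exactly the fact-checking step the exercise is designed to teach.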
37:32
Leveraging AI to Teach Cross-Cultural Management: An Evidence-Based Pedagogical Approach, by Jonathan H. Westover
As artificial intelligence tools become ubiquitous in higher education, management educators face the challenge of integrating these technologies while maintaining pedagogical rigor and teaching critical evaluation skills. This article examines an experiential exercise that uses AI as both a learning tool and object of study in teaching cross-cultural management, specifically Hofstede's Cultural Dimensions framework. Drawing on experiential learning theory, constructivist pedagogy, and emerging research on AI literacy in business education, we analyze how structured AI interactions can simultaneously develop cultural competence and critical AI literacy. The article presents evidence-based design principles, documented implementation experiences from business schools, and forward-looking recommendations for educators seeking to balance technological innovation with foundational learning objectives. This pedagogical approach addresses the dual imperative of preparing students for AI-augmented workplaces while cultivating the analytical skepticism necessary to evaluate AI-generated information.
G-P's 2025 World at Work Report and the Outlook for 2026, with Laura Maffucci
In this HCI Webinar, Dr. Jonathan H. Westover talks with Laura Maffucci about G-P's 2025 World at Work Report and the outlook for 2026. Laura Maffucci is G-P's Head of HR, overseeing the global workforce, talent, and employee experience with a people-first mindset. She values diversity of thought as essential for a healthy workplace. In her 20+ year career in HR, Maffucci has spoken on global and national platforms about compensation, employee well-being, and mental health. She's a staunch advocate for the employee experience and creating a culture of inclusivity. Maffucci is passionate about the future of work, normalizing the value of work everywhere, and enabling employees globally to be their best selves and add value wherever they go and whatever they do.
38:37
Closing the Digital Skills Gap: Building Organizational Capability for the AI Era, by Jonathan H. Westover
Abstract: Organizations face mounting pressure to develop digital fluency across their entire workforce, not merely within technical departments. Research indicates companies with advanced digital and AI capabilities outperform competitors by two to six times in total shareholder returns, yet only 28 percent plan significant upskilling investments despite 80 percent acknowledging it as the most effective gap-closing strategy. This analysis examines the strategic imperative for comprehensive digital skill development, exploring organizational performance impacts, individual wellbeing consequences, and evidence-based interventions. Drawing on recent practitioner insights and academic research, the article synthesizes effective approaches including targeted skill-building programs, learner-centered design, technology-embedded learning, and manager-as-teacher models. Case examples from consumer goods, professional services, and retail sectors illustrate successful implementation strategies. The article concludes by proposing forward-looking capabilities in learning integration, AI-powered instruction, and knowledge democratization to build sustainable competitive advantage in an accelerating technological landscape.
06:32
Beat the AI Curve - Upskill or Fall Behind
This video explores the critical and urgent need for organizations to embrace digital transformation and artificial intelligence (AI) literacy across all levels of the workforce. It highlights that digital fluency is no longer confined to specialized IT or data science roles but is essential for every employee, from factory workers to executives. This widespread digital competency is vital not only for gaining competitive advantages but also for survival in an increasingly automated and technology-driven business landscape. The video stresses the risks organizations face if they fail to bridge the digital skills gap, including declining employee morale, reduced engagement, and stifled innovation.
Highlights
🚀 Digital transformation and AI literacy are essential across all organizational roles, not just IT specialists.
📊 Digital fluency means effectively using digital tools relevant to one's role, not necessarily coding skills.
⚠️ Failure to upskill leads to skill obsolescence, job anxiety, reduced morale, and decreased innovation.
🎯 Focused, strategic upskilling on critical skills drives the biggest business impact.
🤝 Leadership must model learning behaviors and foster a culture of psychological safety and experimentation.
📚 Learning ecosystems combining self-paced, live, and peer coaching make training more effective.
🧑‍💻 AI-powered adaptive learning and immersive VR/AR tools personalize and enhance skill development.
Key Insights
🌐 Digital fluency as a survival imperative: The video underscores that digital literacy is not optional but a survival mechanism for modern businesses. Organizations that fail to equip employees with digital skills risk losing competitive ground and operational relevance. This insight stresses that digital transformation must be holistic, encompassing every function, to maintain agility and market position.
🛠️ Pragmatic definition of digital fluency: The clarification that digital fluency is about comfort and critical use of digital tools tailored to one's role is crucial. This demystifies the concept, making it accessible and actionable for various job functions, from data analytics to customer service, enabling broader organizational adoption without overwhelming employees.
🚫 Barriers to upskilling are systemic and multifaceted: The video identifies four key barriers (outdated training, education-business misalignment, lack of learning time, and absence of skill gap assessments) that collectively hinder progress. This comprehensive view highlights that solving the skills gap requires systemic changes, not just isolated training interventions.
😟 Psychological impact on employees: The analysis of how skill obsolescence and AI fears create job anxiety and erode morale reveals the human dimension of digital transformation. This insight calls attention to the necessity of addressing emotional and cultural factors alongside technical training to sustain engagement and foster innovation.
🎯 Strategic, focused upskilling over broad programs: The recommendation to resist "teach everyone everything" and instead identify critical skills tied to strategic priorities offers a practical roadmap. This focused approach enables faster, more impactful learning outcomes and better resource allocation, ensuring that upskilling efforts align directly with business goals.
👥 Leadership's role in cultural transformation: The emphasis on leaders modeling learning behavior, admitting knowledge gaps, and fostering psychological safety spotlights the critical role of culture in enabling continuous learning. Leaders who actively protect learning time and encourage experimentation create an environment where digital skills can flourish.
🤖 Leveraging technology for personalized learning: The video insightfully explores the use of AI-driven adaptive learning platforms and immersive VR/AR experiences to tailor training to individual needs and simulate real-world practice safely. This technological integration makes learning more engaging, effective, and scalable, crucial for addressing diverse workforce requirements.
If this helped, please like and share to spread these actionable insights.
#DigitalSkills #Upskilling #AI #DigitalTransformation #LearningInTheFlowOfWork
OUTLINE:
00:00:00 - Why Upskilling Is No Longer Optional
00:01:18 - How Skill Gaps Erode Value
00:02:33 - The Human Impact of the Digital Skills Gap
00:03:32 - A Practical Blueprint for Digital Fluency
00:04:27 - The Long Game
13:36
How CLIO Sees Everything—Without Seeing You
This video discusses the critical challenge of understanding how millions of people use AI assistants daily while simultaneously protecting their privacy. Although companies collect vast amounts of data from these interactions, public access to this information remains extremely limited, creating a "data gap" that hinders research and safe AI development. To address this, a system called Clio was developed: a privacy-preserving analytical pipeline designed to extract meaningful usage patterns from millions of AI conversations without exposing any individual's private information (a toy sketch of the aggregation idea at the heart of this approach follows this summary).
Highlights
🔍 Clio enables large-scale analysis of AI assistant usage without compromising privacy.
🛡️ Privacy is protected through multi-layered automated defenses and aggregation techniques.
👩‍💻 Dominant AI use cases include coding, writing, research, and learning.
🌐 AI usage patterns differ across cultures and languages, reflecting diverse needs.
🚨 Clio detects misuse and abuse patterns early, supporting proactive safety measures.
🤖 Automation minimizes human exposure to sensitive conversations.
🔄 Continuous auditing and monitoring keep privacy protections robust over time.
Key Insights
🔐 Privacy by Design is Essential: Clio exemplifies the necessity of integrating privacy into AI system design from day one. By focusing on data minimization, automation, and layered privacy protections, it avoids the pitfalls of ad hoc, reactive privacy measures. This approach is critical for maintaining user trust and meeting ethical and legal obligations. Without such design principles, AI developers risk exposing sensitive user data or failing to learn effectively from real-world usage.
🧩 Aggregation Prevents Re-Identification: Clio's requirement that clusters represent large groups of conversations ensures individual users cannot be singled out. This statistical barrier is a powerful privacy tool, as it makes linking any data point back to a particular person practically impossible. It balances the need for granular insights with the imperative to protect personal identities, a challenge often underestimated in AI analytics.
🤖 Automation Reduces Human Risk and Cost: By automating fact extraction, clustering, summarization, and privacy auditing, Clio avoids exposing human reviewers to sensitive content. This not only protects reviewer mental well-being but also improves scalability and speed, enabling real-time analysis of millions of conversations. Automation is a key enabler for ethically and efficiently managing large-scale AI data.
🌍 Cultural and Linguistic Variations Matter: The system's ability to identify different usage patterns across languages and cultures highlights AI's adaptability and diverse applications. For example, Japanese users discuss elder care more frequently, while Spanish users focus more on finance. Understanding these nuances helps tailor AI development to meet specific community needs and supports more inclusive, globally relevant AI tools.
🚨 Early Detection of Misuse Enables Proactive Safety: Clio's clustering approach allows safety teams to observe widespread patterns of abuse or harmful behavior without reading individual chats. This macro-level visibility empowers teams to swiftly update safety protocols and filters before issues escalate, shifting safety work from reactive incident response to proactive risk management.
📊 High-Level Insights Replace Guesswork: The ability to generate synthetic summaries that accurately reflect user behavior without revealing personal data transforms how AI developers understand user needs and challenges. This evidence-based approach fosters better product decisions, more effective improvements, and safer AI deployments, moving beyond assumptions to grounded knowledge.
🔄 Ongoing Auditing Maintains Privacy Integrity: Clio's continuous testing, including red team attacks and privacy audits, ensures that its layered defenses remain effective as AI and user behaviors evolve. This commitment to vigilance is vital, as static privacy solutions can degrade over time. Persistent oversight strengthens the system's resilience, safeguarding privacy in an ever-changing technological landscape.
If you found this useful, please like and share!
#AI #Privacy #AISafety #Clio #Anthropic #Claude #Governance
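The aggregation threshold described above is the easiest part of this pipeline to picture in code. Below is a toy, self-contained sketch of that single idea, assuming a keyword lookup as a stand-in for the model-driven facet extraction and clustering the video describes; the topic labels, keywords, and threshold value are invented for illustration and this is not Clio's actual implementation.

```python
# Toy illustration of the aggregation idea described above: individual
# conversations are reduced to coarse topic labels, and only topics shared
# by at least MIN_CLUSTER_SIZE conversations are ever reported.
# NOT Clio's real pipeline; labels, keywords, and threshold are invented.
from collections import defaultdict

MIN_CLUSTER_SIZE = 3  # real systems use much larger thresholds

# Hypothetical keyword -> topic mapping standing in for model-based extraction.
TOPIC_KEYWORDS = {
    "python": "coding help",
    "bug": "coding help",
    "essay": "writing assistance",
    "resume": "writing assistance",
    "retirement": "personal finance",
}

def extract_topic(conversation: str) -> str:
    """Reduce a conversation to a coarse, non-identifying topic label."""
    text = conversation.lower()
    for keyword, topic in TOPIC_KEYWORDS.items():
        if keyword in text:
            return topic
    return "other"

def aggregate(conversations: list[str]) -> dict[str, int]:
    """Group conversations by topic and report only sufficiently large groups."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for conv in conversations:
        clusters[extract_topic(conv)].append(conv)
    # Small clusters are suppressed so no report can point back to one user.
    return {
        topic: len(members)
        for topic, members in clusters.items()
        if len(members) >= MIN_CLUSTER_SIZE
    }

if __name__ == "__main__":
    sample = [
        "Can you fix this Python bug?",
        "Why does my python loop never end?",
        "Help me debug this bug in my script",
        "Please edit my essay introduction",      # cluster too small -> dropped
        "I am planning my retirement savings",    # cluster too small -> dropped
    ]
    print(aggregate(sample))  # {'coding help': 3}
```

The point of the sketch is the final filter: any topic that fewer than MIN_CLUSTER_SIZE users asked about never appears in the output, which is what makes the reported summaries hard to trace back to an individual.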
38:43
Clio: Privacy-Preserving Insights into Real-World AI Use, by Jonathan H. Westover PhD
Abstract: This paper presents Clio (Claude insights and observations), a privacy-preserving platform that uses AI assistants to analyze and surface aggregated usage patterns across millions of conversations without requiring human reviewers to read raw user data. The system addresses a critical gap in understanding how AI assistants are used in practice while maintaining robust privacy protections through multiple layers of safeguards. We validate Clio's accuracy through extensive evaluations, demonstrating 94% accuracy in reconstructing ground-truth topic distributions and achieving undetectable levels of private information in final outputs through empirical privacy auditing. Applied to one million Claude.ai conversations, Clio reveals that coding, writing, and research tasks dominate usage, with significant cross-language variations—for example, Japanese conversations discuss elder care at higher rates than other languages. We demonstrate Clio's utility for safety purposes by identifying coordinated abuse attempts, monitoring for unknown risks during high-stakes periods like capability launches and elections, and improving existing safety classifiers. By enabling scalable analysis of real-world AI usage while preserving privacy, Clio provides an empirical foundation for AI safety and governance.
Human Capital Leadership Review
Featuring scholarly and practitioner insights from HR and people leaders, industry experts, and researchers.
All Articles
Research Briefs
Research Insights
Looking Ahead
Leadership in Practice
Leadership Insights
Leadership for Change
Webinar Recaps
Book Reviews
Transformative Social Impact
In 2025, Let's Wise Up about Flexibility in the Workplace
LOOKING AHEAD
Nov 4, 2024
4 min read