Introducing Anthropic Interviewer: What 1,250 Professionals Tell Us About Working with AI
- Jonathan H. Westover, PhD
- Dec 30, 2025
- 15 min read
Abstract: This research introduces Anthropic Interviewer, an AI-powered tool designed to conduct qualitative interviews at unprecedented scale while maintaining conversational depth. To validate this methodology, we deployed the system to interview 1,250 professionals—comprising 1,000 general workforce participants, 125 scientists, and 125 creative professionals—about their experiences integrating AI into their work. Results indicate predominantly positive sentiment regarding AI's productivity impact, with 86% of general workforce participants reporting time savings and 97% of creatives noting efficiency gains. However, significant concerns emerged around social stigma (69% of general workforce), professional displacement (55% expressing anxiety), and verification reliability (particularly among scientists). Thematic analysis revealed divergent adoption patterns: general workforce professionals envision AI-augmented supervisory roles; creatives navigate productivity gains against peer judgment and identity concerns; scientists desire AI partnership but withhold trust for core research tasks. This study both demonstrates the viability of AI-mediated qualitative research at scale and provides empirical insight into how professionals across diverse domains are experiencing AI's integration into knowledge work.
The integration of artificial intelligence into professional practice represents one of the defining organizational and sociological transformations of our era. Millions of professionals now interact with AI systems daily, yet systematic understanding of these interactions—their patterns, impacts, emotional dimensions, and evolutionary trajectories—remains fragmented. Organizations developing AI technologies require this understanding not merely for product optimization but to navigate the fundamental questions of how these systems reshape human work, identity, and capability.
Traditional qualitative research methods, while rich in depth, face scalability constraints that limit their applicability to phenomena affecting millions. Conversely, behavioral analytics—such as usage pattern analysis—reveal what people do with AI but not why they do it, how they feel about it, or what futures they envision. This gap between behavioral trace data and experiential understanding creates blind spots precisely where insight matters most: at the intersection of technology adoption, professional identity, and organizational change.
Anthropic's recent work has examined AI usage patterns across economic sectors, categorizing interactions as either augmentation (collaborative task performance) or automation (direct task completion). While informative, this approach captures only what transpires within AI conversations, leaving unexplored how outputs are subsequently applied, how adoption decisions are made, how organizational and social contexts shape usage, and how professionals emotionally and cognitively process these technological shifts.
This study introduces Anthropic Interviewer, an AI-powered system that conducts structured yet adaptive qualitative interviews at scale, combining conversational depth with quantitative breadth. To validate this methodology, we deployed it across 1,250 professionals spanning general workforce occupations, creative disciplines, and scientific research. The findings illuminate not only AI's current role in knowledge work but also the psychological and social dynamics mediating its adoption—insights critical for organizations navigating implementation, policymakers considering regulatory frameworks, and researchers studying technology's societal impacts.
The AI Integration Landscape in Professional Work
Defining AI-Mediated Work Across Professional Contexts
AI-mediated work encompasses a spectrum of human-machine collaboration arrangements wherein artificial intelligence systems augment, automate, or otherwise transform professional task completion. This integration manifests differently across occupational domains, reflecting variations in task characteristics, professional norms, regulatory constraints, and the nature of valued outputs (Raisch & Krakowski, 2021).
In augmentative arrangements, AI functions as a collaborative partner, enhancing human capabilities while preserving human agency and decision authority. A data analyst using AI to explore statistical patterns or a writer employing AI for stylistic suggestions exemplifies this mode. Automation arrangements, by contrast, involve AI directly performing tasks with minimal human intervention—for instance, AI-generated code implementations or automated content creation (Coombs et al., 2020).
Beyond this functional taxonomy lies a deeper dimension of professional integration: how AI reshapes occupational identity, workflow structure, and the cognitive architecture of professional practice. For educators, AI might transform pedagogical design; for healthcare professionals, it may alter diagnostic reasoning patterns. Understanding AI's impact therefore requires examining not merely task delegation but the evolution of professional roles, expertise boundaries, and the psychological contracts between workers and their occupations (Eloundou et al., 2023).
State of Practice: Adoption Patterns and Emotional Terrain
Recent organizational studies suggest AI adoption follows predictable patterns mediated by task characteristics, perceived control, and social legitimacy (Lebovitz et al., 2022). However, aggregate adoption statistics obscure the emotional complexity and contextual variation characterizing individual experiences.
Self-reported time savings represent one quantifiable dimension: 86% of our general workforce sample reported saving time, while 97% of creative professionals noted efficiency improvements. Yet these metrics coexist with substantial anxiety—55% of general workforce participants expressed concern about AI's implications for their professional futures. This paradox of simultaneous appreciation and apprehension characterizes contemporary AI integration across knowledge work domains.
Social dynamics powerfully shape adoption trajectories. Organizational culture around AI legitimacy influences whether professionals openly acknowledge AI use or conceal it from colleagues—a pattern particularly pronounced among creative professionals, where 70% reported managing peer judgment concerns. One fact-checker's observation captures this dynamic: "A colleague recently said they hate AI and I just said nothing. I don't tell anyone my process because I know how a lot of people feel about AI."
Professional identity also mediates integration patterns. Workers gravitate toward tasks that define their occupational self-concept while delegating peripheral activities, reflecting what organizational scholars term "identity-consistent automation"—preserving core professional meaning while accepting efficiency in supporting tasks (Raisch & Krakowski, 2021). A pastor's reflection illustrates this principle: "if I use AI and up my skills with it, it can save me so much time on the admin side which will free me up to be with the people."
Organizational and Individual Consequences of AI Integration
Productivity and Performance Impacts
Quantifiable performance improvements associated with AI integration manifest across multiple dimensions: task completion speed, output volume, work quality, and cognitive capacity reallocation. Among creative professionals, productivity gains proved particularly dramatic—one web content writer reported daily output increasing from 2,000 to over 5,000 words, while a photographer reduced project turnaround from twelve weeks to three.
These efficiency gains enable resource reallocation toward higher-value activities. The pastor's reduced administrative load enabling greater congregational engagement, or the photographer's accelerated editing permitting more intentional creative refinement, exemplify this value migration. Organizations capturing this reallocation potential realize compounding benefits: not merely faster task completion but enhanced strategic focus, relationship development, or creative innovation (Brynjolfsson et al., 2023).
However, productivity metrics inadequately capture implementation challenges reflected in participants' frustration levels. While satisfaction remained high across occupational categories, professionals simultaneously reported substantial friction—technical limitations, workflow integration challenges, output verification requirements, and organizational policy constraints. This satisfaction-frustration coexistence suggests AI's productivity promise remains partially unrealized, constrained by implementation barriers requiring organizational intervention.
Professional Identity and Wellbeing Impacts
AI's impact on professional wellbeing proves multidimensional and sometimes contradictory. Reduced time pressure and administrative burden correlate with stress reduction and increased work satisfaction. A social media manager's reflection—"I'm less stressed, honestly. It has created a ton of efficiency for me so I can focus on my favorite aspects of the job"—illustrates this positive dimension.
Conversely, AI integration threatens professional identity foundations for workers whose self-concept centers on tasks now automatable. Creative fiction writers expressing concern that "a novel written by AI might have a great plot and be technically brilliant, but it won't have the deeper nuances that only a human can weave throughout the story" articulate anxieties extending beyond job security to existential questions of human distinctiveness and creative authenticity (Huang & Rust, 2018).
Displacement anxiety manifests across occupational categories but varies in intensity and response. Among general workforce participants expressing concern (55%), adaptation strategies diverged: 25% established protective boundaries limiting AI to peripheral tasks; 25% proactively expanded their roles toward AI oversight and specialized expertise; 8% expressed anxiety without clear remediation plans. This variation in adaptive capacity suggests differential resilience rooted in factors including occupational mobility, skill transferability, and organizational support for role evolution.
The experience of creatives proves particularly instructive regarding identity impacts. Despite reporting productivity benefits, creative professionals navigate tensions between efficiency gains and professional authenticity concerns. This reflects what scholars term "moral displacement anxiety"—concern not primarily about job loss but about the meaning and value of creative work when automated alternatives exist (Glaveanu & Kaufman, 2020).
Evidence-Based Organizational Responses
Establishing Transparent Communication and Social Legitimacy
Organizations successfully integrating AI create cultural environments where technology use represents legitimate practice rather than shameful dependency. This requires deliberate communication strategies addressing both practical implementation and symbolic meaning.
Evidence: Research on technology acceptance demonstrates that perceived legitimacy—the belief that tool use aligns with professional norms and organizational values—significantly predicts adoption and effective utilization (Davis, 1989; Venkatesh et al., 2003). When professionals conceal AI use due to peer judgment, organizations forfeit opportunities for knowledge sharing, collaborative learning, and normative evolution toward effective practices.
Effective approaches to building legitimacy include:
Leadership modeling where executives and respected professionals openly discuss their AI integration practices, normalizing adoption while acknowledging limitations
Community learning forums creating spaces for practitioners to share implementation strategies, troubleshoot challenges, and collectively develop norms around appropriate use
Explicit policy frameworks that articulate where AI integration is encouraged, permitted with safeguards, or restricted—reducing ambiguity that drives concealment
Celebration of hybrid achievements recognizing outputs that effectively combine human and AI contributions rather than privileging "unassisted" work
At Microsoft, the AI transformation team established cross-functional communities of practice where employees share AI integration experiences across domains—from engineering to sales to creative design. These forums normalized experimentation while surfacing implementation patterns and boundary cases requiring policy attention. By creating social infrastructure around AI adoption, Microsoft accelerated legitimate integration while maintaining quality standards (Brynjolfsson & McElheran, 2016).
Implementing Procedural Justice in AI Deployment
Procedural justice—the perceived fairness of decision-making processes—proves critical for AI acceptance, particularly when automation threatens professional roles or autonomy. Employees experiencing AI deployment as consultative rather than imposed demonstrate higher adoption rates, greater system trust, and reduced resistance (Colquitt et al., 2001).
Evidence: Organizational justice research demonstrates that process fairness often matters more for employee attitudes than outcome favorability. When workers participate in decisions affecting their work, perceive genuine consideration of their input, and receive explanation for implementation choices, they respond more positively even to potentially threatening changes (Greenberg, 1990).
Procedural justice mechanisms for AI deployment include:
Co-design processes engaging affected professionals in determining where, how, and under what constraints AI integrates into workflows
Transparency about decision criteria explaining why certain tasks target automation while others remain human-performed
Appeal and modification channels allowing workers to contest or refine AI implementations that create unanticipated problems
Phased implementation with feedback loops enabling iterative adjustment based on user experience rather than one-time deployment
Siemens's approach to AI integration in manufacturing engineering illustrates procedural justice principles. Rather than centrally mandating AI tool adoption, Siemens established cross-functional teams including engineers, quality specialists, and production managers to identify automation opportunities collaboratively. These teams determined implementation priorities, developed usage guidelines, and maintained authority to modify or discontinue tools creating workflow problems. This consultative approach generated higher adoption rates and more contextually appropriate implementations than top-down mandates (Bessen, 2019).
Building Verification Capabilities and Trust Calibration
Scientists' reluctance to employ AI for core research despite recognizing its potential illustrates a broader challenge: many AI applications require output verification exceeding the cognitive cost of original task completion. Organizations addressing this challenge develop systematic verification capabilities while calibrating trust to AI's actual reliability boundaries.
Evidence: Research on automation trust identifies "calibration"—aligning trust levels with system capabilities—as critical for effective human-AI collaboration. Both overtrust (excessive reliance leading to uncaught errors) and undertrust (excessive verification negating efficiency gains) undermine performance (Lee & See, 2004; Parasuraman & Riley, 1997).
Verification and calibration strategies include:
Task-specific reliability profiling systematically documenting AI performance across different task types to guide appropriate reliance
Staged verification protocols implementing graduated checking intensity based on output stakes and demonstrated reliability
Hybrid verification systems where AI assists human verification (e.g., highlighting claims requiring fact-checking) rather than requiring complete manual review
Error pattern training educating users about characteristic AI failure modes to enhance detection efficiency
At the pharmaceutical company Recursion, computational biologists use AI extensively for data analysis and pattern identification but implement tiered verification based on downstream consequence. Exploratory hypothesis generation receives minimal checking, while findings informing experimental design undergo systematic validation, and results supporting clinical decisions require independent replication. This calibrated approach balances efficiency with scientific rigor while building appropriate trust boundaries (Fleming, 2018).
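By way of illustration only, and not a description of Recursion's actual systems, the short Python sketch below shows one way such a graduated checking policy might be encoded; the task categories, stakes tiers, and reliability thresholds are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Stakes(Enum):
    EXPLORATORY = 1      # e.g., hypothesis generation
    DESIGN = 2           # e.g., informs experimental design
    DECISION = 3         # e.g., supports clinical or publication decisions


@dataclass
class AIOutput:
    task_type: str       # e.g., "literature_summary", "statistical_analysis"
    stakes: Stakes
    content: str


# Hypothetical reliability profile: fraction of past outputs per task type
# that passed human review. In practice this is measured, not assumed.
RELIABILITY = {
    "literature_summary": 0.92,
    "statistical_analysis": 0.80,
    "data_extraction": 0.97,
}


def verification_plan(output: AIOutput) -> str:
    """Map output stakes and documented reliability to a checking intensity."""
    reliability = RELIABILITY.get(output.task_type, 0.0)  # unknown tasks get full review
    if output.stakes is Stakes.DECISION:
        return "independent replication"
    if output.stakes is Stakes.DESIGN or reliability < 0.85:
        return "systematic human validation"
    return "spot-check only"


if __name__ == "__main__":
    draft = AIOutput("literature_summary", Stakes.EXPLORATORY, "...")
    print(verification_plan(draft))  # -> spot-check only
```

The point of the sketch is simply that verification effort is a policy decision driven by documented reliability and downstream stakes, rather than a uniform rule applied to every AI output.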
Developing Role Evolution and Capability Building Programs
The widespread vision among general workforce participants of transitioning toward AI oversight roles—reported by 48%—creates organizational opportunities and obligations. Rather than waiting for displacement anxiety to crystallize, forward-thinking organizations proactively develop pathways toward evolved professional identities.
Evidence: Research on technology-induced occupational change demonstrates that proactive reskilling initiatives yield superior outcomes to reactive displacement responses. Organizations investing in capability building before automation threatens jobs experience lower turnover, higher morale, and more successful technology integration (Acemoglu & Restrepo, 2020; Autor, 2015).
Role evolution initiatives include:
AI fluency development building employee capacity to effectively collaborate with AI systems through prompt engineering, output evaluation, and iterative refinement
Oversight role creation establishing positions focused on managing AI systems, monitoring quality, handling exceptions, and continuous improvement
Specialization pathway mapping identifying domains where deep expertise becomes increasingly valuable as AI handles routine elements
Hybrid skill cultivation developing capabilities combining technical AI literacy with domain expertise and human-centered skills
Consulting firm BCG developed its "Consulting 2.0" initiative in recognition that AI would reshape consultant roles. Rather than viewing this as a threat, BCG created learning pathways helping consultants transition toward higher-value activities such as complex problem framing, client relationship management, and insight synthesis, while delegating research, analysis, and document production to AI. This proactive approach maintained morale while accelerating the firm's competitive positioning (Faraj et al., 2018).
Supporting Financial and Benefit Structures for Transitions
For professionals facing genuine displacement risk—such as voice actors observing "certain sectors of voice acting have essentially died due to the rise of AI"—organizational and policy responses require financial dimensions alongside capability building. While this study focused primarily on professionals experiencing integration rather than replacement, the voice actor's observation signals a trajectory some occupational segments will follow.
Evidence: Economic research on technological displacement demonstrates that adjustment support—including income bridges, retraining funding, and relocation assistance—substantially improves transition outcomes and reduces social resistance to beneficial technologies (Autor et al., 2013).
Financial transition supports include:
Transition income bridges providing salary continuation during retraining periods
Education and certification funding covering costs of acquiring credentials in adjacent or alternative domains
Entrepreneurship support enabling displaced workers to leverage domain expertise in new business models
Geographic mobility assistance helping workers relocate to regions where their skills remain in demand
Denmark's "flexicurity" model combines relatively permissive automation and displacement with robust transition support including generous unemployment benefits, comprehensive retraining programs, and active labor market policies helping workers find new positions. This approach maintains social cohesion while enabling technological advancement—a balance increasingly relevant as AI capabilities expand (Andersen & Svarer, 2007).
Building Long-Term AI Integration Capability
Establishing Adaptive Learning and Feedback Systems
Effective AI integration represents not a one-time implementation but a continuous learning process as both technologies and organizational needs evolve. The satisfaction-frustration coexistence observed across our sample suggests implementation challenges persist even among generally positive adopters, signaling opportunities for ongoing refinement.
Organizations building sustainable AI integration capabilities establish structured learning systems capturing user experiences, identifying friction points, and translating insights into improved implementations. This requires moving beyond deployment metrics (adoption rates, usage frequency) toward experience metrics (workflow fit, cognitive load, quality outcomes).
Continuous learning mechanisms include:
Systematic experience sampling regularly capturing how professionals experience AI integration through brief surveys, interviews, or usage diaries
Error and friction logging creating low-barrier channels for reporting problems, near-misses, or inefficiencies
Cross-functional learning networks enabling professionals in different domains to share implementation insights and adaptation strategies
Rapid iteration cycles translating feedback into refined implementations on timescales of weeks rather than months
Amazon Web Services established "mechanisms" teams charged with capturing customer AI implementation experiences and translating them into improved tooling, documentation, and best practice guidance. This institutionalized learning approach enabled AWS to refine AI services based on real-world usage patterns rather than engineering assumptions, accelerating effective adoption (Brynjolfsson & McAfee, 2014).
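To make the error and friction logging idea concrete, the following minimal Python sketch illustrates a low-barrier report structure and a simple aggregation that surfaces the most frequent friction points for review; the field names and categories are hypothetical rather than drawn from the study.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime


@dataclass
class FrictionReport:
    """One low-barrier report of an AI-integration problem or near-miss."""
    submitted_at: datetime
    team: str
    category: str        # e.g., "verification_overhead", "policy_ambiguity"
    description: str
    minutes_lost: int    # rough self-estimate


def top_friction_points(reports: list[FrictionReport], n: int = 3) -> list[tuple[str, int]]:
    """Rank friction categories by frequency to prioritize the next iteration cycle."""
    return Counter(r.category for r in reports).most_common(n)


if __name__ == "__main__":
    reports = [
        FrictionReport(datetime.now(), "marketing", "verification_overhead",
                       "Had to re-check every statistic in the draft.", 40),
        FrictionReport(datetime.now(), "legal", "policy_ambiguity",
                       "Unclear whether contract drafts may use AI.", 15),
        FrictionReport(datetime.now(), "marketing", "verification_overhead",
                       "Citations required manual checking.", 25),
    ]
    print(top_friction_points(reports))
```

The design choice worth noting is the low reporting barrier: a handful of fields and a self-estimated cost are enough to reveal recurring friction patterns, which is what feeds the rapid iteration cycles described above.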
Cultivating Distributed Expertise and Decision Authority
The diversity of professional contexts, task characteristics, and organizational constraints renders centralized AI implementation decisions suboptimal. A communications professional's observation that "my role will eventually become focused around prompting, overseeing, training and quality-controlling the models" points toward distributed intelligence models where domain experts maintain decision authority while collaborating with AI capabilities.
Effective organizations resist the temptation toward either complete centralization (IT departments mandating tools) or complete decentralization (every individual making independent choices). Instead, they establish federated models combining central standards with distributed adaptation authority.
Distributed governance structures include:
Domain-specific implementation teams with authority to adapt general AI capabilities to their professional context
Boundary-spanning roles connecting centralized AI expertise with distributed domain knowledge
Escalation and exception processes enabling local teams to surface cases requiring centralized policy attention
Practice sharing infrastructure allowing distributed innovations to diffuse across organizational units
Spotify's "squad" model exemplifies distributed authority principles. Cross-functional teams maintain autonomy over their domain's AI integration decisions while operating within company-wide principles and sharing learnings through community forums. This structure enables context-appropriate implementations while preventing fragmentation or duplication (Kniberg & Ivarsson, 2012).
Preserving Human Meaning and Professional Purpose
Perhaps the most profound challenge surfacing in our data—particularly among creative professionals—concerns meaning preservation as automation expands. A gamebook writer's observation that "there's rarely a point where I've really felt like the AI is driving the creative decision-making" despite using AI tools suggests the importance many professionals place on maintaining creative agency even when efficiency might favor greater automation.
Organizations successfully navigating this terrain recognize that work provides not only economic value but identity, purpose, and social connection. Sustainable AI integration preserves or enhances these dimensions rather than reducing professional experience purely to efficiency metrics.
Meaning-preservation strategies include:
Core-periphery task mapping systematically distinguishing identity-central activities from supporting tasks, protecting the former while automating the latter
Agency-by-design principles implementing AI tools that enhance human decision authority rather than displacing it
Purpose articulation and reinforcement explicitly connecting AI-enabled efficiency gains to expanded capacity for high-meaning activities
Community and belonging maintenance ensuring automation doesn't erode social connections that make work meaningful
At Pixar Animation Studios, AI tools assist with technical rendering and repetitive animation tasks, but core creative decisions—character development, story arc, emotional beats—remain firmly in human hands. This division reflects deliberate choices about what defines creative work at Pixar rather than purely technical feasibility boundaries. By protecting meaning-central activities, Pixar maintains creative community cohesion while capturing efficiency benefits (Catmull & Wallace, 2014).
Conclusion
This research introduces both a methodological innovation—AI-mediated large-scale qualitative interviewing—and substantive insights into how professionals across diverse domains experience AI integration into their work. The 1,250 interviews conducted by Anthropic Interviewer reveal a complex landscape characterized by simultaneous optimism and anxiety, productivity gains and implementation friction, social stigma and growing acceptance.
Several actionable insights emerge for organizations navigating AI integration:
First, productivity metrics alone inadequately capture AI's impact. While time savings and output volume increases prove substantial, professionals' experiences encompass identity, meaning, social legitimacy, and wellbeing dimensions requiring explicit organizational attention.
Second, the gap between self-reported augmentation emphasis (65%) and behavioral data showing near-equal augmentation-automation split (47%-49%) suggests implementation reality may diverge from professional self-conception. Organizations should investigate whether this reflects measurement artifacts or meaningful signals about how professionals maintain agency narratives amid increasing automation.
Third, the 55% reporting professional anxiety despite 86% reporting productivity gains points toward a dual-track organizational response requirement: simultaneously capturing efficiency benefits while proactively addressing displacement concerns through role evolution pathways, capability building, and, where necessary, transition support.
Fourth, domain differences matter profoundly. Scientists' trust limitations, creatives' identity concerns, and general workforce participants' supervisory role visions require tailored implementation approaches rather than universal solutions.
Fifth, social dynamics powerfully mediate adoption. The 69-70% reporting peer judgment concerns across samples indicates organizational culture work—normalizing appropriate use, establishing clear guidelines, enabling learning communities—may unlock substantial unrealized value.
Looking forward, the capability to conduct systematic qualitative research at scale opens new possibilities for understanding AI's evolving societal role. As these technologies advance and adoption deepens, continued investigation of professional experiences, adaptation strategies, and organizational practices will prove essential for navigating this transformation in ways that enhance human capability, preserve meaningful work, and support those facing genuine disruption. The professionals who generously shared their experiences through Anthropic Interviewer have illuminated not only current realities but pathways toward futures where AI integration serves human flourishing.
References
Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6), 2188–2244.
Andersen, T. M., & Svarer, M. (2007). Flexicurity—Labour market performance in Denmark. CESifo Economic Studies, 53(3), 389–429.
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.
Autor, D. H., Dorn, D., & Hanson, G. H. (2013). The China syndrome: Local labor market effects of import competition in the United States. American Economic Review, 103(6), 2121–2168.
Bessen, J. (2019). Learning by doing: The real connection between innovation, wages, and wealth. Yale University Press.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton.
Brynjolfsson, E., & McElheran, K. (2016). The rapid adoption of data-driven decision-making. American Economic Review, 106(5), 133–139.
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work. National Bureau of Economic Research Working Paper No. 31161.
Catmull, E., & Wallace, A. (2014). Creativity, Inc.: Overcoming the unseen forces that stand in the way of true inspiration. Random House.
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86(3), 425–445.
Coombs, C., Hislop, D., Taneva, S. K., & Barnard, S. (2020). The strategic impacts of intelligent automation for knowledge and service work: An interdisciplinary review. Journal of Strategic Information Systems, 29(4), 101600.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.
Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70.
Fleming, N. (2018). How artificial intelligence is changing drug discovery. Nature, 557(7707), S55–S57.
Glaveanu, V. P., & Kaufman, J. C. (2020). The creativity matrix: Spotlights and blind spots in our understanding of the phenomenon. Journal of Creative Behavior, 54(4), 884–896.
Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16(2), 399–432.
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
Kniberg, H., & Ivarsson, A. (2012). Scaling agile at Spotify. Retrieved from Spotify Labs.
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33(1), 126–148.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). Introducing Anthropic Interviewer: What 1,250 Professionals Tell Us About Working with AI. Human Capital Leadership Review, 29(2). doi.org/10.70175/hclreview.2020.29.2.4