HCL Review

Why AI Demands a New Breed of Leaders: The Case for Chief Innovation and Transformation Officers

Abstract: Artificial intelligence is reshaping organizational operations in ways that extend far beyond technical implementation. While 85% of IT leaders report that CIOs are becoming organizational changemakers, most continue to focus primarily on operational functions rather than the cultural and organizational transformations AI demands. This gap creates significant risks, as evidenced by high-profile failures at companies like Zillow and Air Canada. Research indicates that 91% of data leaders identify cultural challenges—not technology—as the primary barrier to data-driven transformation. This article examines why traditional technology leadership roles often lack the bandwidth and mandate to address AI's human and organizational implications, proposes an expanded leadership model combining technical expertise with organizational psychology and change management, and explores early examples of organizations successfully implementing this approach through roles that bridge innovation, transformation, and cultural change.

The integration of artificial intelligence into organizational operations represents more than a technological shift—it constitutes a fundamental reimagining of how work gets done, who does it, and what leadership means in the digital age. Yet despite AI's profound implications for workforce dynamics, organizational culture, and strategic direction, most companies continue to treat AI implementation as primarily a technical challenge (Davenport & Ronanki, 2018). This disconnect between AI's broad organizational impact and its narrow technical framing has led to predictable failures, from Zillow's $300 million loss on AI-driven home buying to Air Canada's chatbot missteps that compounded customer grief during bereavement travel (Hoque et al., 2025).


The stakes could not be higher. Organizations worldwide are investing billions in AI capabilities, yet research consistently shows that cultural and organizational factors—not technical limitations—represent the primary barriers to success. According to recent surveys, 91% of enterprise data leaders cite cultural challenges and change management as impediments to becoming data-driven, while only 9% point to technology issues (NewVantage Partners, 2023). Meanwhile, 61% of CIOs report having less time for strategic responsibilities than in previous years, even as their organizations expect them to lead transformational change (Foundry, 2024).


This article argues for a new leadership model that addresses AI's full spectrum of organizational implications. We examine why traditional technology leadership roles often fall short, explore emerging alternatives, and provide practical frameworks for organizations seeking to build the leadership capacity their AI ambitions require.


The AI Leadership Landscape

Defining Leadership for AI in Organizational Context


Leadership for AI implementation encompasses far more than technical oversight of machine learning models or cloud infrastructure. Effective AI leadership requires integrating technical knowledge with organizational psychology, change management expertise, strategic vision, and ethical judgment (Fountaine et al., 2019). This multidimensional leadership addresses three core domains:


  • Technical stewardship: Ensuring AI systems are built with appropriate architectures, security protocols, and performance standards while managing technical debt and integration challenges

  • Organizational transformation: Guiding cultural shifts, managing resistance, redesigning workflows, and developing new collaboration models between humans and AI systems

  • Strategic and ethical alignment: Connecting AI initiatives to organizational purpose, managing societal impacts, and ensuring responsible innovation that balances efficiency with human values


Traditional technology leadership roles such as CIO and CTO typically emphasize the first domain while giving insufficient attention to the latter two. This imbalance creates a leadership gap at precisely the moment when AI's organizational and ethical dimensions demand elevated attention.


State of Practice: The CIO's Expanding but Constrained Mandate


Chief information officers have evolved considerably from their origins as heads of data processing departments. Modern CIOs increasingly function as strategic business leaders rather than purely technical managers, with 85% of IT leaders reporting that CIOs are becoming organizational changemakers (Foundry, 2024). However, this evolution has occurred unevenly and faces significant constraints.


Despite recognition of the CIO's strategic importance, operational demands continue to dominate these leaders' time and attention. The Foundry survey reveals that only 28% of CIOs identify leading transformation as their top priority, while 61% report having less time available for strategic responsibilities compared to previous years. This compression of strategic bandwidth occurs even as organizations increasingly expect CIOs to drive AI-enabled transformation (Foundry, 2024).


The role's fundamental tension lies in balancing system maintenance and operational excellence with transformational leadership. CIOs must ensure that existing technology infrastructure runs reliably while simultaneously leading adoption of emerging technologies that may eventually replace those very systems. This dual mandate creates competing demands on attention, resources, and organizational capital.


Furthermore, traditional CIO roles often lack the explicit mandate to address the cultural, psychological, and workforce development dimensions of AI implementation. While some forward-thinking CIOs have expanded their portfolios to encompass these areas, such expansion typically occurs through individual initiative rather than formal role design. This ad hoc approach leaves critical leadership gaps that can derail AI initiatives.


Organizational and Individual Consequences of Leadership Gaps

Organizational Performance Impacts


When organizations fail to provide adequate leadership for AI's human and organizational dimensions, the consequences often appear first in financial results and operational metrics. High-profile failures illustrate the costs of treating AI as primarily a technical challenge.


Zillow's algorithmic homebuying venture, Zillow Offers, provides a sobering example. The company developed AI-powered valuation models (Zestimates) that it used as the basis for purchasing homes directly from sellers, intending to resell them at a profit. However, the initiative failed to account adequately for the complex human and market factors that influence real estate values beyond what algorithms could predict. The result was $300 million in losses, 2,000 layoffs, and a stock price decline exceeding 20% as investor confidence eroded (Hoque et al., 2025). While technical issues contributed to the failure, the fundamental problem was strategic: insufficient consideration of how AI predictions translate to real-world business models and inadequate human oversight of algorithmic decision-making.


Research by Boston Consulting Group found that companies appointing chief transformation officers experienced significantly higher total shareholder returns in the year following the appointment, with hiring of such roles increasing more than 140% from 2019 to 2021 (BCG, 2022). This correlation suggests that dedicated transformation leadership creates measurable business value, likely by reducing implementation failures and accelerating value realization from technology investments.


The financial software company Intuit discovered similar patterns through its own experience. The company invested heavily in AI capabilities for its flagship products (TurboTax, QuickBooks, and Credit Karma) but initially struggled with fragmented implementation efforts. After establishing more centralized innovation and transformation leadership that could coordinate across business units while addressing cultural change, Intuit saw faster adoption rates and higher customer satisfaction scores for AI-powered features (McKinsey, 2023).


Individual Wellbeing and Stakeholder Impacts


Leadership gaps in AI implementation affect not only business metrics but also the wellbeing of employees, customers, and other stakeholders. When organizations fail to thoughtfully manage the human dimensions of AI, the consequences can be deeply personal.


Air Canada's chatbot implementation illustrates how technical deployments without adequate human-centered leadership can harm vulnerable customers. The airline deployed a generative AI chatbot to assist travelers with bookings, but the system made errors about bereavement fares—the discounted rates airlines offer to passengers traveling due to a family death. Customers already coping with grief found themselves facing additional frustration and conflict with the airline. While the chatbot's technical failures were correctable, the deeper issue was inadequate consideration of the emotional contexts in which AI systems interact with humans and insufficient human-in-the-loop safeguards for sensitive situations (Hoque et al., 2025).


Research on workforce impacts of automation consistently shows that how organizations manage transitions matters as much as whether jobs are displaced. A study by Acemoglu and Restrepo (2020) found that automation's effects on workers varied significantly based on whether companies provided reskilling support, maintained transparent communication, and demonstrated genuine commitment to workforce development. Organizations that treated automation primarily as a technical efficiency initiative saw higher turnover, lower morale, and reduced organizational commitment among remaining employees compared to those that emphasized human-centered transformation.


California State University's experience with its AI Workforce Acceleration Board demonstrates the risks of insufficient stakeholder engagement. The university announced an ambitious initiative to integrate AI across all systems and services, led by representatives from ten major AI companies. Within one week, the initiative faced fierce opposition from faculty and students who objected both to the goals and to their exclusion from planning. The backlash forced the university to pause implementation (Hoque et al., 2025). While the initiative may have been technically sound, the failure to engage stakeholders in shaping the vision and addressing concerns about impacts on teaching, learning, and employment undermined its viability.


Evidence-Based Organizational Responses

Establishing Dedicated Innovation and Transformation Leadership


Organizations increasingly recognize that successful AI implementation requires leadership roles explicitly designed to bridge technical, strategic, and human dimensions. While titles vary—chief innovation officer, chief transformation officer, chief AI officer, or hybrid variations—the most effective roles share common characteristics: broad organizational scope, explicit accountability for cultural change, and integration of technical expertise with change management capabilities.


Research supports this leadership model. Organizations with dedicated transformation leadership report higher success rates for digital initiatives, faster adoption of new technologies, and better employee engagement during change (BCG, 2022). The key is creating roles with sufficient authority, clear mandates, and the right combination of technical and organizational development skills.


Effective approaches include:


  • Consolidating fragmented responsibilities: Rather than distributing AI leadership across multiple disconnected roles, organizations increasingly consolidate innovation, transformation, and technology oversight under integrated leadership that can coordinate across domains

  • Elevating organizational change to co-equal status with technical implementation: Successful organizations explicitly frame cultural change, workforce development, and stakeholder engagement as core leadership responsibilities rather than secondary concerns

  • Providing sufficient authority and resources: Transformation leadership requires the ability to work across organizational silos, challenge existing practices, and reallocate resources—capabilities that demand C-suite authority and CEO sponsorship


PepsiCo exemplifies this approach. In 2020, the company appointed Athina Kanioura as chief strategy and transformation officer, a role overseeing digitization across the entire business including AI implementation. Kanioura's portfolio explicitly combines technology strategy with organizational transformation, recognizing that technical capabilities only create value when integrated into changed business processes and supported by evolved organizational culture (PepsiCo, 2020). Under this leadership model, PepsiCo has accelerated AI adoption across supply chain optimization, consumer insights, and commercial operations while maintaining strong employee engagement through transparent communication about technology's role.


Standard Chartered Bank took a similar approach, appointing Roel Louwhoff as chief transformation, technology, and operations officer in 2021. Louwhoff's role integrates IT leadership with explicit accountability for operational change and transformation management. This structure enables the bank to coordinate technical implementation of AI systems for risk management, customer service, and operational efficiency with the organizational changes needed to realize value from those systems (Standard Chartered, 2021). The integrated leadership model helped Standard Chartered accelerate cloud migration, implement AI-powered fraud detection, and redesign customer-facing processes while managing workforce transitions.


TIAA demonstrates how this approach extends beyond traditional technology companies. Sastry Durvasula serves as chief operating, information, and digital officer, overseeing organizational, technical, and operational change affecting 60% of TIAA employees. Durvasula explicitly frames workforce transformation as central to his mandate, noting that "since change from AI is imminent at this point, it's important to manage the upskilling/reskilling and transition of our people into their next job post revolution" (Hoque et al., 2025). This perspective elevates human capital development to the same level as technical implementation, ensuring that AI initiatives consider workforce impacts from inception rather than treating them as downstream consequences.


Developing Cross-Functional AI Governance Structures


Effective AI leadership requires robust governance structures that bring together diverse perspectives and expertise. Technical decisions about AI systems have ethical, legal, workforce, and customer experience implications that demand input from multiple organizational functions.


Organizations implementing strong AI governance typically establish cross-functional bodies that combine technical experts with representatives from legal, compliance, HR, customer experience, and business leadership. These governance structures operate at multiple levels: strategic boards that set overall direction and ethical principles, tactical committees that review specific implementations and resolve conflicts, and operational teams that ensure day-to-day adherence to standards.


JPMorgan Chase's approach to AI governance illustrates these principles. Under the direction of Teresa Heitsenrether, global head of data and analytics, the bank established a comprehensive AI governance structure that explicitly connects AI implementation to organizational purpose and values. The governance framework includes ethics-focused review processes, specific protocols for AI decision-making in investment scenarios, and clear escalation paths when AI systems encounter situations requiring human judgment (JPMorgan Chase, 2023).


Effective governance approaches include:


  • Ethics review boards with diverse membership: Including ethicists, legal experts, customer advocates, and affected employee representatives ensures that AI implementations are evaluated from multiple perspectives

  • Clear decision rights and escalation paths: Defining who can approve different types of AI implementations, what circumstances require executive review, and how conflicts get resolved prevents bottlenecks while maintaining appropriate oversight

  • Transparent communication mechanisms: Regular reporting to stakeholders about AI initiatives, their purposes, expected impacts, and governance oversight builds trust and surfaces concerns early

  • Continuous monitoring and feedback loops: Governance extends beyond initial approval to include ongoing monitoring of AI system performance, impacts, and alignment with organizational values


Salesforce demonstrates cross-functional governance in practice through its Office of Ethical and Humane Use of Technology. The office brings together technical AI experts with ethicists, policy specialists, and stakeholder representatives to establish guidelines for responsible AI development and use. This governance structure has shaped both Salesforce's internal AI implementations and the ethical guidelines built into AI products sold to customers (Salesforce, 2023). The company's AI personas—agents designed to guide product decisions, training programs, and feature prioritization—operate within frameworks established by this cross-functional governance body.


Building Systematic Change Management Capabilities


AI implementation succeeds or fails based largely on how well organizations manage the human side of change. While technical implementation often receives the bulk of planning attention and resources, research consistently shows that change management capabilities predict AI initiative outcomes more strongly than technical sophistication (Davenport & Ronanki, 2018).


Effective change management for AI requires structured approaches that address multiple dimensions: transparent communication about AI's purposes and impacts, employee involvement in shaping implementations, systematic skill development, and attention to the psychological and cultural shifts AI demands.


Organizations with strong change management capabilities approach AI implementation as organizational transformation rather than technology deployment. They invest in helping employees understand not just what is changing but why, create opportunities for input and influence, provide resources for skill development, and demonstrate genuine commitment to supporting workforce transitions.


Research by Prosci indicates that projects with excellent change management are six times more likely to meet objectives than those with poor change management (Prosci, 2020). For AI initiatives specifically, change management effectiveness correlates with adoption rates, value realization, and employee satisfaction.


Effective approaches include:


  • Structured communication strategies: Regular, honest communication about AI initiatives, their strategic rationale, expected impacts, and timelines helps manage uncertainty and builds trust

  • Employee involvement and co-creation: Engaging employees in shaping AI implementations—identifying use cases, providing feedback on prototypes, participating in testing—creates ownership and surfaces practical insights that improve designs

  • Comprehensive skill development programs: Providing training not just in technical AI skills but in critical thinking, human-AI collaboration, and the judgment required to effectively work alongside intelligent systems

  • Psychological safety and support resources: Creating environments where employees feel safe expressing concerns, asking questions, and learning through experimentation rather than fearing mistakes


State Street Bank's recent advertisement for a chief transformation officer role within Global Technology Services exemplifies this comprehensive approach. The position's responsibilities explicitly include "cultural and organizational change efforts to embed agility, efficiency, and a customer-first mindset" alongside technical responsibilities for "automation, AI, blockchain, and cloud adoption" (State Street, 2025). This integration recognizes that technical capabilities only create value when supported by cultural and behavioral change.


Implementing AI Persona Management Frameworks


As organizations deploy increasingly autonomous AI agents, a new leadership challenge emerges: managing AI personas that interact with customers, process information, and make decisions with varying degrees of human oversight. AI persona management represents the intersection of technical AI development, brand management, customer experience design, and ethical governance.


AI personas are digital entities with specific traits, capabilities, and decision-making parameters designed to perform defined organizational roles. They may function as customer service agents, strategic advisers, process automation tools, or collaborative partners for human workers. Effective persona management ensures that these AI entities behave consistently with organizational values, communicate appropriately for their contexts, and escalate to human judgment when facing situations beyond their capabilities.


Organizations at the forefront of AI persona development are establishing governance frameworks that address persona design, behavior monitoring, and continuous improvement. These frameworks typically involve cross-functional teams combining AI technical expertise, customer experience insights, brand management, ethics review, and change management capabilities.


Salesforce's use of AI personas to guide internal decision-making illustrates this emerging practice. The company deploys AI agents that analyze product usage data, customer feedback, and market trends to inform decisions about product updates, training programs, and feature prioritization. These personas operate within defined parameters that specify what types of recommendations they can make autonomously versus what requires human review (Salesforce, 2023). The governance structure ensures that AI personas remain aligned with strategic goals while providing genuine decision support rather than creating automation without accountability.


Effective AI persona management approaches include:


  • Explicit persona design frameworks: Documenting each AI persona's purpose, capabilities, decision boundaries, communication style, and escalation protocols creates clarity about roles and expectations

  • Behavioral monitoring and quality assurance: Systematic review of AI persona interactions, decisions, and outcomes identifies drift from intended behaviors and surfaces improvement opportunities

  • Human-in-the-loop protocols for sensitive contexts: Defining circumstances that require human judgment—such as bereavement travel in Air Canada's case—and ensuring AI personas route these situations appropriately

  • Regular persona review and evolution: Treating AI personas as dynamic entities that require ongoing refinement based on performance data, stakeholder feedback, and changing business needs

  • Stakeholder transparency about AI interaction: Clearly indicating when customers or employees are interacting with AI personas rather than humans, and providing easy escalation to human assistance
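The design and escalation elements above can be sketched in code. The following is an illustrative Python sketch, not a reference to any vendor's framework: the `PersonaSpec` record, the intent and topic names, and the `route` function are all hypothetical, showing how a documented persona with explicit decision boundaries can enforce human-in-the-loop routing for sensitive contexts such as bereavement travel.

```python
from dataclasses import dataclass

@dataclass
class PersonaSpec:
    """Design record for one AI persona (hypothetical schema)."""
    name: str               # persona identifier
    purpose: str            # documented role and scope
    allowed_intents: set[str]   # requests it may handle autonomously
    sensitive_topics: set[str]  # contexts that always escalate to a human

def route(spec: PersonaSpec, intent: str, topics: set[str]) -> str:
    """Decide whether the persona handles a request or a human does."""
    # Human-in-the-loop protocol: sensitive contexts bypass the persona.
    if topics & spec.sensitive_topics:
        return "human_agent"
    # Anything outside the persona's documented decision boundary escalates.
    if intent not in spec.allowed_intents:
        return "human_agent"
    return spec.name

support_bot = PersonaSpec(
    name="support_bot",
    purpose="answer routine booking questions",
    allowed_intents={"booking_change", "baggage_policy"},
    sensitive_topics={"bereavement", "medical_emergency"},
)

print(route(support_bot, "booking_change", {"bereavement"}))  # human_agent
print(route(support_bot, "baggage_policy", set()))            # support_bot
```

Because the persona's boundaries live in an explicit, reviewable record rather than in model weights or prompt text, governance bodies can audit and evolve them as part of the regular persona review cycle described above.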


Investing in Leadership Development for AI-Age Capabilities


Organizations cannot simply hire their way to adequate AI leadership—they must also develop capabilities in existing leaders and high-potential employees. The combination of technical literacy, change management expertise, ethical judgment, and strategic vision required for AI-age leadership demands intentional development efforts.


Forward-thinking organizations are implementing leadership development programs that build AI literacy across the executive team while developing deep expertise in individuals positioned for dedicated innovation and transformation roles. These programs typically combine technical education, organizational change management training, ethics and governance exposure, and practical experience leading AI initiatives.


Research on leadership development effectiveness indicates that the most impactful programs combine multiple learning modalities: formal training, experiential learning through real projects, coaching and mentoring, and community learning through peer networks (Center for Creative Leadership, 2021). For AI leadership specifically, programs that integrate technical and organizational dimensions prove more effective than those focusing exclusively on either domain.


Effective development approaches include:


  • Executive AI literacy programs: Ensuring all senior leaders understand AI capabilities, limitations, and implications well enough to make informed strategic decisions and provide appropriate oversight

  • Rotation programs: Providing high-potential leaders with experiences across technical, operational, and change management roles builds the integrated perspective AI leadership requires

  • External partnership and learning networks: Engaging with academic institutions, industry associations, and peer companies accelerates learning and provides broader perspective than purely internal development

  • Action learning projects: Assigning leaders to real AI initiatives with defined deliverables and structured reflection processes develops capabilities more effectively than classroom-only approaches

  • Mentoring and coaching: Pairing developing leaders with experienced AI transformation leaders provides personalized guidance and accelerates development


Building Long-Term Organizational Capability for AI Integration

Establishing Cultural Foundations for Continuous Innovation


Successful AI integration over the long term requires more than episodic transformation initiatives—it demands cultural evolution toward continuous innovation, learning, and adaptation. Organizations that treat AI implementation as a one-time project inevitably fall behind as the technology evolves and competitive dynamics shift.


Building cultural foundations for continuous innovation involves several interrelated elements. First, organizations must cultivate psychological safety—the belief that taking interpersonal risks, asking questions, and acknowledging mistakes will not result in punishment (Edmondson, 2019). AI implementation involves experimentation and learning, which inherently includes failures alongside successes. Cultures that punish failure drive innovation underground or suppress it entirely.


Second, organizations need to embed learning into their operational rhythms. This means creating time and space for reflection, establishing feedback mechanisms that surface insights from AI implementations, and systematically capturing and sharing lessons across the organization. Too often, organizations rush from one initiative to the next without extracting learnings that could improve subsequent efforts.


Third, successful organizations balance what organizational theorist James March termed exploration and exploitation (March, 1991). Exploitation involves refining existing capabilities and improving current operations—the domain of incremental AI applications that automate processes or enhance existing products. Exploration involves searching for new opportunities and developing fundamentally new capabilities—the domain of transformative AI applications that enable new business models or create new value propositions. Organizations that over-emphasize either exploration or exploitation underperform those that maintain appropriate balance.


Organizations implementing these cultural foundations structure work to encourage experimentation, celebrate learning from both successes and failures, allocate resources specifically for exploratory innovation, and create career paths that value both operational excellence and innovation contributions. Leadership plays a critical role in modeling these behaviors, explicitly discussing failures and learnings, and reinforcing that innovation involves productive failure alongside success.


Developing Distributed Leadership and Empowerment Structures


While dedicated transformation leadership at the executive level provides strategic direction and coordination, sustainable AI integration requires leadership distributed throughout the organization. Frontline managers, team leaders, and individual contributors all need sufficient autonomy, resources, and capability to identify opportunities, implement improvements, and adapt AI systems to local needs.


This distributed leadership model contrasts with traditional command-and-control approaches where innovation directives flow top-down and implementation follows rigid plans. Research on digital transformation success consistently shows that organizations empowering distributed decision-making and innovation achieve better outcomes than those maintaining centralized control (Westerman et al., 2014).


Distributed leadership for AI implementation involves several key elements. First, organizations must develop AI literacy and capability broadly rather than concentrating expertise in specialized teams. When only data scientists and engineers understand AI, opportunities visible to frontline employees remain unidentified and implementations fail to incorporate practical insights about workflows and customer needs.


Second, governance frameworks must balance appropriate oversight with sufficient autonomy for local innovation. This often involves tiering AI initiatives by risk and impact: low-risk, localized applications can be implemented with minimal oversight, while high-risk or enterprise-wide initiatives require more extensive review and approval. The challenge is calibrating these tiers appropriately—too much centralization stifles innovation, while too little creates risk.
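The tiering logic just described can be made concrete. The sketch below is a minimal, hypothetical example of calibrating review tiers; the risk levels, scope labels, and tier names are illustrative assumptions, not a prescribed standard.

```python
def review_tier(risk: str, scope: str) -> str:
    """Map an AI initiative to a review tier (illustrative thresholds).

    risk: "low", "medium", or "high"
    scope: "team", "business_unit", or "enterprise"
    """
    # High-risk or enterprise-wide initiatives get the most extensive review.
    if risk == "high" or scope == "enterprise":
        return "executive_review"
    # Medium-risk work goes through the cross-functional committee.
    if risk == "medium":
        return "governance_committee"
    # Low-risk, localized applications proceed with minimal oversight.
    return "team_approval"

print(review_tier("low", "team"))         # team_approval
print(review_tier("medium", "team"))      # governance_committee
print(review_tier("low", "enterprise"))   # executive_review
```

The point of encoding the tiers, even this simply, is that the calibration becomes explicit and debatable: moving a threshold is a visible governance decision rather than an ad hoc judgment made differently by each approver.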


Third, organizations need to create support structures that enable distributed innovation without requiring every team to become AI experts. Centers of excellence, internal consulting capabilities, and platforms that simplify AI development all help democratize access to AI capabilities while maintaining quality standards.


Companies successfully implementing distributed AI leadership establish clear boundaries within which teams can innovate autonomously, provide accessible tools and platforms that simplify AI development, create networks connecting distributed innovators for learning and collaboration, and recognize and reward local innovation efforts. This approach enables organizations to simultaneously pursue numerous AI initiatives adapted to local needs while maintaining strategic coherence and appropriate risk management.


Creating Data and Model Stewardship Disciplines


AI systems depend fundamentally on data—for training models, making predictions, and monitoring performance. Organizations that fail to establish strong data stewardship disciplines find their AI initiatives constrained by data quality issues, governance gaps, and technical debt that accumulates as systems proliferate without coordination.


Effective data stewardship encompasses several dimensions. Data quality management ensures that data is accurate, complete, timely, and fit for intended purposes. Metadata management maintains clear documentation of data sources, definitions, transformations, and lineage. Access governance balances making data available for innovation with protecting privacy and security. And data architecture provides the technical infrastructure for efficiently storing, processing, and accessing data.


For AI specifically, model stewardship adds additional complexity. Organizations need systematic approaches for model development, validation, deployment, monitoring, and retirement. Model registries document what models exist, their purposes, performance characteristics, and dependencies. Model monitoring tracks ongoing performance and detects drift—when models' accuracy degrades due to changes in underlying patterns or data characteristics. Model governance establishes review processes for high-risk applications and ensures that models align with ethical standards.
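Drift monitoring, mentioned above, is often operationalized by comparing a model's live score distribution against its training-time baseline. One common statistic for this is the Population Stability Index (PSI); the sketch below uses ten equal-width bins on [0, 1] and the widely cited 0.2 alert threshold, both of which are rules of thumb rather than universal standards, and the sample score lists are invented for illustration.

```python
# Sketch of drift detection via the Population Stability Index (PSI).
# Bin count, sample data, and the 0.2 threshold are illustrative assumptions.
import math

def psi(expected, actual, bins=10):
    """PSI between two samples of scores assumed to lie in [0, 1]."""
    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline     = [0.1, 0.2, 0.2, 0.3, 0.5, 0.5, 0.6, 0.8]   # training-time scores
live_similar = [0.12, 0.22, 0.25, 0.33, 0.51, 0.55, 0.62, 0.81]
live_shifted = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.99]

print(psi(baseline, live_similar) < 0.2)   # True: stable, no alert
print(psi(baseline, live_shifted) >= 0.2)  # True: drifted, triggers review
```

A monitoring service can compute this statistic on a schedule and open a review ticket when the threshold is crossed, turning drift detection from an ad hoc investigation into a routine stewardship activity.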


Organizations building strong stewardship disciplines typically establish dedicated data and AI governance teams, implement comprehensive metadata and documentation requirements, create clear ownership and accountability for data and model assets, deploy automated monitoring and quality assurance tools, and integrate stewardship activities into development workflows rather than treating them as bureaucratic overhead. This systematic approach prevents the chaos that often emerges when organizations deploy numerous AI initiatives without coordinating their underlying data and model assets.


Conclusion

Artificial intelligence's transformational potential extends far beyond technical capabilities to encompass fundamental changes in how organizations operate, how work gets done, and how humans and machines collaborate. Realizing this potential requires leadership that integrates technical expertise with organizational psychology, change management capabilities, strategic vision, and ethical judgment.


Traditional technology leadership roles, while evolving, often lack either the bandwidth or the explicit mandate to address AI's full spectrum of organizational implications. The gap between AI's broad impact and narrow technical framing has led to predictable failures—from multi-hundred-million-dollar losses to damaged stakeholder relationships—that could have been prevented with more comprehensive leadership approaches.


Organizations are responding by creating new leadership models that bridge innovation, transformation, and cultural change. While titles vary, the most effective approaches share common characteristics: integrated authority over technical and organizational dimensions, explicit accountability for cultural change and workforce development, cross-functional governance structures that bring diverse perspectives to AI decisions, and systematic change management capabilities that address AI's human dimensions.


For organizations seeking to strengthen their AI leadership, several concrete actions emerge:


  • Assess current leadership capacity against AI's multidimensional requirements, identifying gaps in technical understanding, change management expertise, cross-functional coordination, and ethical governance

  • Consider organizational design options for addressing gaps, whether through creating dedicated innovation and transformation roles, expanding existing technology leadership mandates, or strengthening cross-functional governance without new executive positions

  • Invest in leadership development that builds AI-age capabilities in existing leaders while developing next-generation talent with the integrated perspectives successful AI leadership requires

  • Establish cultural foundations for continuous innovation, including psychological safety, systematic learning mechanisms, and appropriate balance between exploiting existing capabilities and exploring new possibilities

  • Build stewardship disciplines for data and AI models that enable innovation while managing risk and maintaining quality


The transition to AI-enabled operations represents one of the most profound organizational changes of our era. Success requires not just better technology but better leadership—leaders who understand both the promise and the peril of artificial intelligence, who can guide organizations through complex cultural and operational transformations, and who maintain focus on fundamentally human values even as machines take on increasingly sophisticated capabilities. Organizations that make this leadership transition will be well-positioned to capture AI's benefits while managing its risks. Those that treat AI as merely a technical challenge will continue to struggle with implementations that fail to achieve their potential—or worse, that create unintended harm.


References

  1. Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from U.S. labor markets. Journal of Political Economy, 128(6), 2188–2244.

  2. Boston Consulting Group. (2022). The new CxO: Chief transformation officer. BCG.

  3. Center for Creative Leadership. (2021). Leadership development that works: A guide for organizations. CCL Press.

  4. Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.

  5. Edmondson, A. C. (2019). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley.

  6. Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered organization. Harvard Business Review, 97(4), 62–73.

  7. Foundry. (2024). State of the CIO 2024. IDG Communications.

  8. Hoque, F., Davenport, T. H., & Nelson, E. (2025). Why AI demands a new breed of leaders. MIT Sloan Management Review, 66(3), 1–11.

  9. JPMorgan Chase. (2023). Annual report 2023. JPMorgan Chase & Co.

  10. March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.

  11. McKinsey & Company. (2023). The state of AI in 2023: Generative AI's breakout year. McKinsey Global Institute.

  12. NewVantage Partners. (2023). Data and AI leadership executive survey 2023. NewVantage Partners.

  13. PepsiCo. (2020). PepsiCo announces leadership changes to accelerate digital transformation. PepsiCo press release.

  14. Prosci. (2020). Best practices in change management—2020 edition. Prosci.

  15. Salesforce. (2023). Ethical and humane use of technology: Our approach to trusted AI. Salesforce.

  16. Standard Chartered. (2021). Annual report 2021. Standard Chartered PLC.

  17. State Street. (2025). Chief transformation officer position description. State Street Corporation.

  18. Westerman, G., Bonnet, D., & McAfee, A. (2014). Leading digital: Turning technology into business transformation. Harvard Business Review Press.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).

Suggested Citation: Westover, J. H. (2026). Why AI Demands a New Breed of Leaders: The Case for Chief Innovation and Transformation Officers. Human Capital Leadership Review, 29(1). doi.org/10.70175/hclreview.2020.29.1.3

Human Capital Leadership Review

eISSN 2693-9452 (online)
