HCL Review

Some Things Don’t Change — And That’s the Point

What the St. Charles and NIIT 2026 Rebuilding L&D Benchmark Confirms



I have spent more than three decades in this field, and if there is one thing I have learned, it is that the most important truths in learning and development tend not to be new. They tend to be old truths that we keep rediscovering — usually because the world has changed enough to make the stakes feel different.


In 1990, Peter Senge published The Fifth Discipline: The Art and Practice of the Learning Organization. At its core, the book made a single, arresting argument: in the long run, the only sustainable competitive advantage an organization possesses is its ability to learn faster than its competitors. Everything else — technology, capital, talent, scale — can be acquired or replicated. The capacity to learn, adapt, and evolve continuously is the one thing that cannot be simply purchased or copied.


That was a radical idea in 1990. Thirty-five years later, it is not radical. It is urgent.


What AI Changes — and What It Doesn’t

I am an enthusiast about artificial intelligence — not uncritically, but genuinely. I have watched it move from a peripheral curiosity in our field to a force that is reshaping the fundamental architecture of how people work, how organizations perform, and how capability is built and deployed. That shift is real, and it is accelerating.


But here is what AI does not change: Senge’s insight holds, and it may hold more forcefully now than at any point since he articulated it.


When AI becomes ubiquitous — available to every competitor, embedded in every platform, accessible at commodity cost — the differentiation it once provided evaporates. The organizations that will separate themselves are not the ones that deployed AI first. They are the ones whose people know how to work with it, adapt as it evolves, and apply judgment in the spaces where AI still cannot. The learning organization is not a nice-to-have in an AI-driven world. It is the whole game.


What has changed is the velocity and the consequences. When I started in this field, organizations could afford a certain tolerance for slow adaptation. Competitive cycles were longer. The cost of not learning fast enough was real, but it was often survivable. Today, with AI compressing timelines and raising the complexity ceiling on nearly every role, the margin for delay has narrowed dramatically.


The Data Confirms What Many of Us Have Felt

The 2026 Rebuilding L&D for an AI-Driven World benchmark, conducted by NIIT and St. Charles Consulting Group, puts empirical weight behind a diagnosis that many experienced learning leaders have sensed for years. When senior leaders were asked to identify their top priorities for the next 12 to 24 months, a consistent set of themes dominated: AI-enabled learning in the flow of work, evolving L&D into a genuine strategic partner, building a skills-based talent strategy, and integrating learning with broader HR systems. There is no ambiguity about the direction.


But when those same leaders assessed their organizations’ readiness to deliver on those priorities, a different picture emerged. The largest execution gaps sat directly beneath the most strategically important ambitions. The more transformational the priority, the less prepared the system behind it. The benchmark gave this a name: the Priority–Execution Gap. And it is not a small gap.


The defining tension of this moment is that ambition is rising faster than system readiness. AI intensifies that tension considerably. When learning moves into the flow of work — when guidance, prompts, and decision support are embedded directly into daily execution — weak governance, inconsistent skills definitions, and poor measurement do not stay hidden. They surface immediately, and at scale. In that environment, system readiness is not merely a good management concern. It becomes the binding constraint on everything else.


Strong Locally, Fragile Enterprise-Wide

One of the most clarifying findings in the benchmark is also one of the most familiar to anyone who has spent time working inside large organizations. Across the sample, design and development capabilities are relatively mature. Delivery innovation is common. Pilots launch quickly. Local teams produce results that look impressive in isolation.


What consistently lags are the components that require cross-functional integration: governance clarity, shared skills architecture, interoperable data, credible measurement, and coherent career pathways. The local machinery works. The enterprise plumbing does not.


This unevenness explains a cycle I have watched repeat throughout my career. Strategic priorities rise. Pilots launch and show promise. Attempts to scale expose integration gaps. Momentum slows, confidence erodes, and initiatives reset. The benchmark documents this cycle precisely, and it identifies where the diagnosis tends to go wrong: the stall gets attributed to change resistance or insufficient investment, when the structural explanation is more accurate. The system was never designed to absorb that level of pressure. Adding more pressure does not fix the design.


AI Amplifies Whatever Already Exists

There is a tendency to frame AI adoption as primarily a technology challenge. The benchmark data pushes back on that framing directly, and I think it is right to do so.


The research examined the interaction between AI learning readiness and overall skills and talent architecture — essentially asking whether organizations with high AI ambition but fragmented system readiness behave differently from those with both ambition and architectural coherence. The answer is unambiguous. Organizations with strong AI interest but weak infrastructure tend to produce rapid experimentation and localized success, paired with persistent difficulty generalizing results. AI exposes gaps faster. Content behaves inconsistently across contexts. Measurement struggles to separate signal from noise.


By contrast, organizations with stronger skills architecture, clearer governance, and integrated data experience AI as a multiplier. Shared standards provide context. Measurement infrastructure enables pattern recognition. Decision rights clarify what should be automated and what should not.


The benchmark puts it plainly: AI does not create coherence. It amplifies whatever already exists. That is either a warning or a competitive opportunity, depending on what you have already built.

The Measurement Problem Is Not What Most People Think It Is


Few topics frustrate learning leaders more than measurement, and the benchmark data clarifies why. Leaders expect learning to influence productivity, workforce capability, internal mobility, and execution quality — these are reasonable expectations. Yet the research consistently reveals a gap between measurement activity and decision influence. Organizations are not under-measuring. Many are over-measuring. And the additional measurement is not helping.


The benchmark identifies three conditions that distinguish measurement that earns credibility from measurement that merely produces data. First, it focuses on a small number of prioritized outcomes rather than attempting to quantify everything. Second, the logic connecting learning activity to those outcomes is explicit and defensible. Third, the evidence is embedded into governance routines and decision processes — it is not compiled into a report and distributed; it is built into how decisions get made.


Measurement is not a reporting problem. It is a system design problem. Learning becomes credible when it reduces uncertainty for business leaders — not when it produces more charts. This is a reframing that many L&D functions have resisted because it requires fewer metrics and more discipline, but the data is unambiguous: adding dashboards to a credibility problem makes the credibility problem worse.


Four Archetypes — and Why Most Organizations Stay Stuck in Theirs

One of the more provocative findings in the benchmark is the identification of four structural archetypes that emerge from the interaction of ambition, architecture, credibility, and operating model. The researchers are careful to note that these are not maturity stages. They are structural equilibria. Without deliberate intervention, organizations tend to remain in their archetype regardless of additional investment or effort.


The Ambition-Led, Architecture-Light organization has high strategic pressure and visible innovation, but the innovation is fragile at scale — impressive in the pilot, unreliable in the rollout. Fragmented Excellence describes organizations with strong pockets of performance that consistently fail to generalize; the best teams are excellent, but the enterprise cannot replicate what they do. Measured, Not Believed captures the paradox of heavy analytics investment that has not translated into decision influence — the data exists; the trust does not. And the Deliberate System Architect sequences intentionally, building architecture before accelerating, with initiatives that compound rather than reset.


The implication that I find most important is this: sequencing, not intensity, changes system behavior. Organizations that have been investing heavily in learning transformation without sustained results are often investing in the wrong sequence — acceleration before architecture, autonomy before integrity, scale before standards. The fix is not more investment in the same direction. It is reordering the work.


The Imperative Is Human, Not Technological

The organizations the benchmark identifies as Deliberate System Architects share a common characteristic: they treat learning as enterprise infrastructure. Not a department. Not a service function. Infrastructure — designed to scale, governed to protect integrity, and measured to inform decisions. That framing is not new. It is what Senge was pointing toward in 1990 when he described the learning organization as an enterprise built around the premise that continuous adaptation is a core operational capability, not a periodic event.


In 2026, that premise has become structural reality. The benchmark makes clear that operating models are shifting toward hybrid and federated structures, with leaders seeking local responsiveness and faster execution. But the research also documents what happens when decentralization outpaces the integrity of the underlying system: standards vary, measurement becomes inconsistent, and trust erodes. Autonomy is not inherently risky. Autonomy without integrity is.


Human resources — and I use that phrase deliberately, in its most literal sense — must now adapt and evolve almost continuously. Not every few years in response to a reorganization. Not annually in response to a performance review. Continuously, as the tools change, the roles shift, and the definition of value creation is rewritten in real time. The skills-based talent strategy that respondents identified as a top priority is not a program or an initiative. It is a recognition that the architecture of how organizations define, develop, and deploy capability must itself become a dynamic system.


The Oldest New Idea in the Room

I am sometimes asked, as someone who has been in this field for thirty-plus years and follows AI with genuine enthusiasm, whether those two perspectives create tension. They do not. They reinforce each other.


AI does not diminish the importance of organizational learning. It raises the price of its absence. Every capability that AI makes available to your competitors is available to you as well. The question is whether your organization can learn — individually, collectively, systemically — fast enough and coherently enough to turn that access into advantage. And the benchmark is clear about what coherent looks like: shared skills architecture as the integration layer connecting learning to workforce planning, mobility, and measurement; governance that enables speed rather than constraining it; measurement that earns decision influence rather than producing reporting volume.


Senge wrote that in the long run, the only sustainable source of competitive advantage is an organization’s ability to learn faster than its competitors. He wrote that before the internet, before the smartphone, before machine learning became a commodity input. The fact that the statement is more true today than when he wrote it is not a commentary on how far AI has come.


It is a commentary on how right he was.


The rebuild the benchmark calls for is structural and deliberate. The data tells us where most organizations currently sit: strong locally, fragile enterprise-wide; ambitious in direction, uneven in readiness; active in measurement, limited in influence. The path forward is not a mystery. It requires sequencing architecture before acceleration, building shared skills infrastructure before distributing decision rights, and earning credibility through focused evidence rather than comprehensive reporting.


For those of us who have spent careers in this field, the moment carries a certain weight. The work has always mattered. The conditions now exist for it to matter at the scale it always should have.


Larry Durham is a visionary in enterprise learning and talent development. Over the last 30 years, he has worked with large professional services firms and many Fortune 500 companies to co-create innovative talent development solutions that yield measurable business outcomes. Larry’s work has given him the opportunity to speak, consult, and facilitate training in more than 30 countries. Larry has been president of St. Charles Consulting Group for the last ten years. Before joining St. Charles in 2014, Larry spent ten years with PwC’s Human Capital Advisory Services, where he served as the Learning Practice Leader. Over the last two years, Larry has led numerous strategic engagements, such as 1) developing an AI strategy for the Talent and Learning function within a global organization, 2) transforming the leadership development program for under-represented groups to better align with diversity goals, 3) redesigning the firmwide Learning Operating Model, and 4) leading a multi-faceted initiative to fundamentally improve the functional capabilities and cultural alignment of learning within an Audit practice. Beyond client work and managing the Firm, Larry actively shares his insights and experiences in learning, and his thought leadership is broadly recognized. His primary focus is on the ways organizations define, support, and leverage the value of education within the enterprise (and beyond). Larry has written numerous articles on these and other topics in the talent development arena, and he serves as host of The HIVE podcast, which addresses insights and innovations in talent development. He has been quoted in numerous media outlets, sharing expertise on workforce trends and strategies.
In his most recent writing endeavor, Larry is a contributing author of the widely acclaimed The Talent-Fueled Enterprise - A Powerful Approach to Build Tomorrow’s Workforce, published in June 2024.

Human Capital Leadership Review

eISSN 2693-9452 (online)

