AI in Education: Building Learning Systems That Elevate Rather Than Erode Human Capability
- Jonathan H. Westover, PhD
- Nov 18, 2025
- 18 min read
Abstract: The integration of artificial intelligence into educational settings presents a fundamental challenge: how to harness powerful generative technologies without undermining the very cognitive capabilities required to use them wisely. This paper examines the pedagogical implications of AI adoption across educational institutions, drawing on cognitive science, instructional research, and emerging practice to propose evidence-based responses. Analysis reveals that 92% of British undergraduates now use AI tools, yet much of this usage exists in a zone of ambiguity that risks hollowing out critical thinking, domain expertise, and analytical reasoning. Rather than treating AI as either a threat requiring surveillance or a solution demanding wholesale adoption, this paper argues for a third path: embedding AI use within transparent, reflective frameworks that make technology a catalyst for deeper learning. Key recommendations include managing cognitive load through purposeful AI integration, explicitly teaching metacognition alongside AI literacy, celebrating intellectual risk-taking through collaborative sense-making, and redesigning assessment as ongoing conversation rather than one-time product evaluation. The evidence suggests that institutional success depends less on technological sophistication than on grounding innovation in longstanding principles of how humans actually learn—principles that become more rather than less essential as machine capabilities advance.
In December 2024, a survey revealed that 92% of British undergraduates reported using AI tools in their studies (Freeman, 2025). By August 2025, the technology had become "a habit, as ubiquitous on campus as eating processed foods or scrolling social media" (Bogost, 2025). Yet despite this near-universal adoption, most educational institutions remain caught between inadequate responses: either treating AI as a cheating epidemic requiring surveillance, or embracing it as an efficiency tool without examining what gets lost in the automation.
The stakes extend beyond immediate concerns about academic integrity. As Carr (2025) observed, "to automate learning is to subvert learning." The process is the purpose. When children write stories or draw pictures, the goal is not to supply the world with content but to develop literacy and reflective participation in society. Undergraduates do not write essays because the world needs more such things; the writing itself builds the cognitive architecture required for complex thought.
This tension—between AI's seductive power to simulate knowledge and the irreducible human work of actually acquiring it—defines education's present crisis. Freely available tools can complete conventional assignments with ease, potentially preventing students from gaining the very capabilities required to use AI adeptly: critical discernment, domain expertise, research skills, and analytical reasoning. The challenge is not whether to integrate AI into education, but how to do so in ways that raise rather than lower the cognitive bar.
The Wider Context
This moment bears comparison to previous technological disruptions in education. Yiu, Kosoy, and Gopnik (2024) argue persuasively that treating artificial intelligence as either miraculous or unprecedented represents a dangerous category error. Rather, AI joins a long lineage of cultural technologies—writing, print, libraries, the internet, language itself—that make information gathered or created by other humans useful in new contexts.
Yet the speed and scope of generative AI's capabilities introduce qualitatively new challenges. Students can now produce sophisticated-seeming analysis without understanding, eloquent prose without thought, and plausible arguments without reasoning. The gap between surface performance and actual learning has never been wider or easier to cross. For educators, the traditional signals of comprehension have become unreliable. For learners, the temptation to outsource cognition has never been more seductive or difficult to resist.
The educational responses emerging from this disruption divide roughly into three camps. The surveillance approach treats AI as contraband, deploying detection software, proctored exams, and handwritten requirements in an escalating arms race. The efficiency approach embraces AI as a productivity tool, automating grading and content delivery without examining pedagogical consequences. The elevation approach, which this paper advocates, treats AI as a context and catalyst for deeper learning—but only when bounded by clear pedagogical principles and transparent frameworks that preserve human agency.
The Educational Fundamentals Landscape
Defining Learning in the AI Context
Before examining AI's role in education, it is essential to establish what learning actually entails from a cognitive perspective. As Sweller (2006) noted, the brain functions as "a natural information processing system" that should not be considered in isolation from fundamental information-processing principles. Learning occurs when working memory successfully integrates new information with existing knowledge structures in long-term memory—a process easily overwhelmed by excessive cognitive load or poorly designed instructional environments.
This cognitive architecture has profound implications for AI integration. Systems that flood learners with information, provide answers without scaffolding understanding, or remove the productive struggle inherent in learning risk overwhelming working memory while simultaneously preventing the consolidation of knowledge in long-term storage. By contrast, well-designed AI applications can manage cognitive load by breaking complex tasks into digestible components, providing just-in-time support, and adapting difficulty to individual needs.
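To make this load-management idea concrete, the sketch below shows one way an adaptive system might select a level of scaffolding from recent performance, fading support as understanding consolidates. It is a minimal illustration under assumed thresholds and level names (worked_example, guided_practice, independent_practice), not a description of any particular product.

```python
# Illustrative sketch only: a simple rule for adjusting scaffolding so that task
# difficulty tracks a learner's demonstrated understanding, keeping working memory
# from being overwhelmed. Thresholds and level names are assumptions.
from dataclasses import dataclass

@dataclass
class LearnerState:
    recent_correct: int   # correct responses in the most recent practice window
    recent_attempts: int  # total responses in the same window

def choose_scaffold(state: LearnerState) -> str:
    """Pick a support level: more structure when accuracy is low, less when high."""
    if state.recent_attempts == 0:
        return "worked_example"        # no evidence yet: start with full modeling
    accuracy = state.recent_correct / state.recent_attempts
    if accuracy < 0.5:
        return "worked_example"        # high load: show the complete solution path
    if accuracy < 0.8:
        return "guided_practice"       # partial hints, just-in-time support
    return "independent_practice"      # fade support to preserve productive struggle

print(choose_scaffold(LearnerState(recent_correct=3, recent_attempts=4)))  # guided_practice
```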
State of Practice: How Students and Faculty Currently Use AI
The current landscape reveals widespread but often problematic AI adoption. Freeman's (2025) survey found that students primarily use AI tools to digest overwhelming digital resources, create study materials, explain difficult concepts, and—critically—complete assignments in ways that institutions have not sanctioned or even acknowledged. Initial research by the author at City St George's, University of London, uncovered a troubling pattern: students felt compelled to create their own educational materials using ChatGPT because institutional resources seemed poorly designed or overwhelming, yet this usage occurred in a shadow zone without guidance, recognition, or support.
Faculty usage presents a parallel complexity. Many instructors quietly use AI for grading papers, drafting feedback, and creating assessment materials, yet these applications often exist in the same ambiguous space as student usage. The result is mutual distrust: students assume faculty will punish AI use they themselves practice; faculty assume students cheat while they themselves struggle with how to integrate tools they find professionally useful.
This ambiguity exacts substantial costs. At workshops conducted by the author in November 2024 and May 2025, students consistently expressed frustration that institutions neither acknowledged their real needs—managing information overload, receiving meaningful feedback, achieving clarity about expectations—nor provided guidance for the tools they already used daily. Faculty echoed concerns about over-reliance, accuracy, and trust while simultaneously recognizing AI's potential as an explainer and adaptive tutor. The gap between these parallel anxieties and the absence of shared frameworks for addressing them represents education's most pressing challenge.
Organizational and Individual Consequences of AI Integration
Organizational Performance Impacts
The integration of AI into educational systems produces measurable consequences at institutional, programmatic, and classroom levels. Most immediately, the erosion of assessment validity creates cascading organizational challenges. When traditional assignments can be completed by AI, institutions face a crisis of credential legitimacy. Degrees whose attainment no longer reliably signals competence undermine the fundamental value proposition of educational institutions.
The surveillance response to this crisis imposes substantial costs. AI detection software produces unreliable results while consuming faculty time and damaging student-instructor relationships (MIT Sloan Teaching & Learning Technologies, n.d.). Increased proctoring requirements strain institutional resources. The shift toward handwritten exams and in-person assessments may restore some measure of control but often at the expense of assessing higher-order thinking skills that require extended engagement with complex materials.
Perhaps most damaging, the arms-race mentality prevents institutions from developing the very capabilities that would allow them to respond effectively. Faculty lack time and support for experimenting with AI integration. Professional development focuses on detection rather than pedagogy. Institutional policies emphasize prohibition rather than purposeful use. The result is organizational paralysis: institutions neither prevent AI use nor harness its potential, instead consuming resources in a defensive posture that satisfies no stakeholder.
Individual Wellbeing and Learning Impacts
For learners, the consequences extend beyond immediate academic outcomes. When students use AI to complete assignments without building understanding, they acquire neither domain knowledge nor the metacognitive skills required to learn effectively in AI-saturated environments. Research by Oakley and colleagues (2025) emphasizes that contextual knowledge—the rich web of associations, examples, and connections that distinguish expert from novice thinking—cannot be outsourced to external systems without cognitive cost.
The psychological dimensions prove equally significant. Students report feeling overwhelmed by poorly designed digital resources and unclear institutional expectations. They experience anxiety about both using and not using AI: using it risks academic misconduct charges; not using it risks falling behind peers who do. This double bind creates stress while preventing the development of transparent, thoughtful AI practices that would serve students well beyond graduation.
For faculty, the impacts manifest differently but with comparable intensity. The widespread availability of AI-generated assignments creates a crisis of trust and purpose. Many instructors report feeling demoralized when they suspect substantial portions of student work are AI-generated but lack reliable methods for verification or response. The additional labor of redesigning assessments, learning new technologies, and adapting pedagogy often arrives without corresponding reductions in other responsibilities. The result can be burnout, cynicism, or retreat to less effective but more defensible teaching approaches.
Yet these negative consequences are not inevitable. Emerging evidence from innovative programs suggests that when institutions provide clear guidance, supportive frameworks, and time for experimentation, both student and faculty experiences improve markedly. The distinction lies not in whether AI is present—it already is—but in whether its integration serves or subverts the fundamental purposes of education.
Evidence-Based Organizational Responses
Transparent Communication and Psychological Safety
The foundation for effective AI integration begins with honesty about what institutions can and cannot control, paired with explicit guidance about expectations and support. Rather than maintaining fictions about preventing AI use, leading institutions now acknowledge its ubiquity while providing frameworks for appropriate, transparent engagement.
Research on psychological safety in educational contexts demonstrates that students learn most effectively when they can acknowledge uncertainty and mistakes without fear of punishment (Gedikoglu, 2021). Applied to AI, this principle suggests that institutions should create environments where students can experiment with AI use, discuss challenges and limitations they encounter, and receive guidance rather than sanctions when initial applications prove problematic.
Several effective approaches have emerged:
Clear institutional policies that distinguish between supported and unsupported AI use, with rationales grounded in learning objectives rather than blanket prohibition
Explicit acknowledgment of student experiences and needs, including the reality that poorly designed resources drive students toward AI-generated substitutes
Transparent faculty modeling of thoughtful AI use, demonstrating how expert practitioners integrate tools while maintaining critical judgment
Regular opportunities for student-faculty dialogue about AI, treating usage as an ongoing negotiation rather than a policy to be enforced
Massachusetts Institute of Technology (MIT) exemplifies this approach through its RAISE (Responsible AI for Social Empowerment and Education) initiative, which embeds AI literacy across curricula while acknowledging that tools like ChatGPT are already integral to students' academic and professional lives. Rather than prohibiting use, MIT provides frameworks for recognizing hallucinations, identifying bias, and understanding the social context of AI systems—skills students need regardless of institutional policies (MIT, 2025).
Pedagogical Redesign Around Judgment
The most robust responses to AI availability involve restructuring tasks to foreground distinctively human contributions: synthesis, evaluation, ethical reasoning, and creative application. Rather than asking students to produce outputs AI can generate, innovative assignments ask students to work with AI outputs in ways that require and develop critical judgment.
Effective pedagogical redesign strategies include:
Requiring students to evaluate and critique AI-generated content, explaining strengths, limitations, and necessary revisions
Structuring assignments around comparative analysis, where students generate multiple AI responses and argue for which approach proves most insightful
Making AI usage explicit and reflective, requiring students to document prompts, evaluate responses, and justify choices
Designing tasks where AI provides starting points for human refinement, emphasizing iteration and improvement rather than initial output
Focusing assessment on the process of thinking rather than final products alone, using portfolios, reflections, and documented revision cycles
The Wharton School at the University of Pennsylvania has pioneered this approach in entrepreneurship courses, where students engage with AI in seven distinct roles: tutor, coach, mentor, teammate, tool, simulator, and student (Mollick & Mollick, 2023). Students maintain logs of their AI interactions alongside business plans, then submit reflections analyzing which machine-generated ideas they modified, retained, or discarded, and whether they identified hallucinations or biases. This approach treats AI not as a shortcut but as a collaborator whose contributions require human evaluation and integration—precisely the skill students need in AI-saturated professional environments.
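The logging-and-reflection workflow described above lends itself to a simple structured record. The sketch below is a hypothetical format rather than Wharton's actual system: each entry pairs a prompt and response with the student's decision and rationale, so the submitted reflection can be grounded in documented choices.

```python
# Hypothetical interaction log for a Mollick-style assignment: each entry records the
# prompt, what the AI returned, what the student did with it, and any problems spotted.
# Field names are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AIInteraction:
    prompt: str
    response_summary: str
    decision: str                                           # "retained", "modified", or "discarded"
    issues_noted: List[str] = field(default_factory=list)   # e.g., ["hallucinated citation"]
    rationale: str = ""                                      # the student's justification

def reflection_summary(log: List[AIInteraction]) -> Dict[str, int]:
    """Aggregate a log into the kind of evidence an instructor might review."""
    return {
        "interactions": len(log),
        "retained": sum(e.decision == "retained" for e in log),
        "modified": sum(e.decision == "modified" for e in log),
        "discarded": sum(e.decision == "discarded" for e in log),
        "issues_flagged": sum(len(e.issues_noted) for e in log),
    }
```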
Metacognitive Skill Building
Perhaps the most critical organizational response involves explicitly teaching metacognition: the capacity to monitor and regulate one's own thinking. As AI systems become more capable of generating surface-level competence, the ability to think about thinking—to recognize when understanding is genuine versus illusory, to identify gaps in knowledge, to adapt learning strategies—becomes increasingly valuable.
Evidence from educational psychology demonstrates that metacognitive skills predict academic success across domains and prove particularly crucial when learners face novel challenges (Muijs & Bokhove, 2020). In AI contexts, metacognition enables students to recognize when AI outputs align with or diverge from their understanding, to identify productive versus dependency-creating uses of technology, and to maintain agency in increasingly automated environments.
Approaches to building metacognitive capacity alongside AI literacy include:
Explicit instruction in prompt design as critical thinking practice, where crafting effective prompts requires clear articulation of goals, context, and evaluation criteria
Regular reflection on AI interactions, asking students to identify what they learned, where AI proved helpful or misleading, and how they adapted their approach
Teaching students to recognize and resist cognitive offloading, distinguishing AI use that builds capacity from use that creates dependency
Developing awareness of when and why to engage or disengage AI assistance, based on learning objectives and current capability levels
Creating opportunities for peer discussion of AI strategies, allowing students to learn from one another's successes and failures
Stanford University's HELM (Holistic Evaluation of Language Models) project exemplifies institutional commitment to making AI capabilities and limitations visible rather than mysterious, providing transparency that enables more sophisticated engagement (Stanford, n.d.). By helping students understand how language models actually function—their training processes, statistical nature, and systematic limitations—institutions can build the informed skepticism and strategic thinking required for effective AI use.
Assessment Redesign: From Products to Processes
The most transformative organizational response involves reconceptualizing assessment itself. Traditional approaches assume information scarcity, limited resources for generating analysis, and clear boundaries between individual work and collaboration. AI invalidates all three assumptions, requiring fundamental rethinking of what assessment measures and how.
The most promising approaches share several characteristics:
Process-based evaluation that values documented thinking over final outputs alone
Mastery-based progression where students demonstrate understanding through adaptive interactions rather than single performances
Portfolio approaches that track growth and development over time
Conversational assessment that treats evaluation as ongoing dialogue rather than isolated events
Integration of AI interaction records as evidence of learning, where logs of prompts, responses, and iterations reveal thinking processes
Multiple institutions have begun implementing these approaches:
Research by Wang and colleagues (2025) examined college students' use of generative AI for STEM education, distinguishing uses that scaffold learning (providing just-in-time support that students gradually learn to do without) from crutch applications that prevent skill development. Their findings suggest that when students understand this distinction and receive credit for demonstrating mastery through whatever means necessary—including thoughtful AI use—learning outcomes improve while dependency decreases.
Stanford University has piloted approaches where AI itself provides initial feedback on student work, with human instructors then building on this foundation to offer deeper, more contextual guidance. This division of labor—AI handles rapid, formative feedback on mechanical issues while faculty focus on higher-order thinking—allows more frequent feedback cycles while directing limited human attention where it matters most.
City St George's, University of London, in partnership with the author, has developed a prototype "cognitive co-pilot" that tracks student understanding through Socratic dialogue, awarding credit based on demonstrated mastery rather than time spent or content completed. By generating records of interactions that reveal thinking processes, the system creates assessment artifacts that resist gaming while providing rich data for both students and instructors about learning progress and needs.
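As a rough illustration of how dialogue-based mastery crediting might be represented, the sketch below keeps a per-concept estimate that is updated after each exchange and awards credit only once a threshold is crossed; the transcript doubles as the assessment artifact. The update rule, threshold, and names are assumptions for illustration, not the prototype's actual design.

```python
# Illustrative sketch (assumed names and thresholds): tracking per-concept mastery
# across Socratic dialogue turns and crediting demonstrated understanding rather
# than time spent or content covered.
from collections import defaultdict

MASTERY_THRESHOLD = 0.8   # assumed cut-off for awarding credit
LEARNING_RATE = 0.3       # how strongly each exchange moves the estimate

class MasteryTracker:
    def __init__(self):
        self.mastery = defaultdict(float)   # concept -> estimated mastery in [0, 1]
        self.transcript = []                # record of exchanges for later review

    def record_turn(self, concept: str, question: str, answer: str, correct: bool):
        """Update the mastery estimate for a concept after one dialogue exchange."""
        target = 1.0 if correct else 0.0
        current = self.mastery[concept]
        self.mastery[concept] = current + LEARNING_RATE * (target - current)
        self.transcript.append((concept, question, answer, correct))

    def credited_concepts(self) -> list:
        """Concepts where demonstrated mastery has passed the threshold."""
        return [c for c, m in self.mastery.items() if m >= MASTERY_THRESHOLD]
```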
Capability Building Through Collaborative Learning
A fifth critical response involves leveraging AI not as a substitute for human interaction but as a catalyst for richer peer-to-peer engagement. When AI handles foundational skill-building and content delivery, it can free time and cognitive space for the collaborative sense-making that research consistently identifies as crucial for deep learning (Hammond, 2014).
Effective approaches include:
Using AI to prepare students for substantive discussions, ensuring all participants arrive with baseline knowledge that allows more sophisticated dialogue
Structuring group work where students collectively evaluate AI outputs, developing shared critical frameworks
Employing AI as a discussion moderator or provocateur, introducing perspectives or challenges that spur human debate
Creating assignments where AI serves as a common dataset or resource, leveling access while preserving space for human interpretation
Building communities of practice where students share AI strategies and insights, learning from one another's experimentation
Breakout Learning exemplifies this approach through platforms that use AI not to replace discussion but to moderate and evaluate small-group conversations, providing individual feedback based on participation and reasoning quality while the substantive exchange remains human (Breakout Learning, n.d.). This positions AI as infrastructure supporting rather than substituting for the interpersonal skill development that proves increasingly valuable as routine cognitive tasks become automatable.
Building Long-Term Educational Resilience
Grounding Innovation in Cognitive Science
The long-term success of AI integration depends on anchoring technological choices in established understanding of how humans learn. Cognitive load theory, developed by Sweller (1988, 2006), emphasizes that working memory has limited capacity and is easily overwhelmed. Well-designed educational technology should therefore reduce extraneous load (distractions, poor interface design, unnecessary complexity) while managing intrinsic load (the inherent difficulty of material) and optimizing germane load (productive mental effort that builds understanding).
Applied to AI, these principles suggest several design imperatives:
Constrained rather than open-ended interfaces that provide cognitive landmarks and clear pathways rather than overwhelming possibility
Structured progression through material that scaffolds complexity incrementally
Integration with rather than separation from existing learning ecosystems, avoiding cognitive cost of context-switching
Explicit connection to learning objectives, ensuring students understand not just what AI does but why using it serves their learning
Support for spaced practice and interleaving (Smolen et al., 2016; Rohrer et al., 2015), using AI to generate varied practice opportunities distributed over time
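The spacing and interleaving item above can be made concrete with a small scheduling sketch: practice items are drawn from several topics in rotation rather than in blocks, and each successful review pushes the next one further out. The interval values and function names are assumptions for illustration; a real system would calibrate them empirically.

```python
# Minimal sketch of interleaved, spaced practice (assumed intervals and names).
from datetime import date, timedelta

SPACING_STEPS = [1, 3, 7, 14, 30]   # assumed review gaps in days

def interleave(topics: dict, n: int) -> list:
    """Rotate through topics so consecutive items come from different topics."""
    pools = [list(items) for items in topics.values()]
    sequence, i = [], 0
    while len(sequence) < n and any(pools):
        pool = pools[i % len(pools)]
        if pool:
            sequence.append(pool.pop(0))
        i += 1
    return sequence

def next_review(step: int, today: date) -> date:
    """Schedule the next review at an expanding interval after a correct answer."""
    gap = SPACING_STEPS[min(step, len(SPACING_STEPS) - 1)]
    return today + timedelta(days=gap)

print(interleave({"fractions": ["f1", "f2"], "ratios": ["r1", "r2"]}, 4))  # ['f1', 'r1', 'f2', 'r2']
print(next_review(2, date.today()))  # a date seven days from today
```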
The cognitive co-pilot prototype developed by the author implements these principles through a knowledge base derived from critical thinking textbooks, providing structured progression through concepts while adapting difficulty based on demonstrated understanding. Rather than offering unlimited exploration, the system guides learners through a defined curriculum, managing cognitive load while building both domain knowledge and metacognitive awareness.
Distributed Educational Design Capacity
Sustainable AI integration requires moving beyond centralized technology decisions to develop distributed design capacity across faculty. The most effective systems emerge not from top-down mandates but from educators experimenting within their disciplines, sharing insights, and collectively refining approaches. This represents what Laurillard (2012) terms "teaching as a design science": creating conversational frameworks where learners, instructors, and systems learn together through iterative cycles of design, implementation, and refinement.
Building this capacity requires several organizational commitments:
Protected time for faculty experimentation with AI integration, acknowledging that pedagogical innovation demands substantial investment before yielding returns
Communities of practice where educators share successes and failures, learning from one another's disciplinary contexts
Institutional support for risk-taking, including acceptance that some AI integration experiments will fail and should be celebrated for the learning they generate
Cross-disciplinary dialogue about AI use, recognizing that literature, engineering, and medicine may require fundamentally different approaches
Recognition and reward for teaching innovation, elevating pedagogical excellence to parity with research productivity
The consortium developing the cognitive co-pilot—bringing together Sage (publisher), City St George's (institution), Bond & Coyne (digital agency), Raspberry Pi Foundation (educational non-profit), Wolfram (computational thinking), and individual experts—exemplifies this distributed model. Each partner contributes distinct expertise: publishers curate content, institutions provide pedagogical context, technologists implement systems, and individual educators test and refine in practice. The iterative, interdisciplinary approach treats AI integration as an ongoing learning process rather than a solved problem.
Equity and Access Considerations
Long-term resilience also demands attention to how AI integration affects educational equity. While AI promises to democratize access to tutoring and personalized instruction, it risks exacerbating existing inequalities if deployment favors well-resourced institutions or technologically sophisticated students. Three particular concerns warrant attention:
Infrastructure access: Students lacking reliable internet, devices, or quiet study environments may struggle to benefit from AI tools that assume consistent connectivity and private space for interaction.
Cultural capital: Effective AI use often requires tacit knowledge about how to frame questions, evaluate outputs, and integrate results—skills that correlate with educational privilege and may disadvantage first-generation students or those from under-resourced educational backgrounds.
Institutional capacity: Well-funded institutions can afford experimentation, faculty development, and sophisticated AI integration, while under-resourced schools may face pressure to adopt AI primarily for efficiency gains that reduce rather than enhance learning quality.
Addressing these challenges requires intentional design choices:
Ensuring AI tools function across diverse technological environments, including offline capabilities and low-bandwidth options
Providing explicit instruction in AI literacy for all students, avoiding assumptions about prior technological sophistication
Directing resources toward institutions serving disadvantaged populations, recognizing that AI's greatest potential value lies in addressing educational gaps
Maintaining high-quality non-AI alternatives, ensuring students can succeed without dependence on technologies they may not access equitably
Studying differential impacts across student populations, tracking whether AI integration narrows or widens achievement gaps
Ethical Frameworks for AI Stewardship
The final pillar of long-term resilience involves developing explicit ethical frameworks for AI stewardship within educational contexts. This extends beyond academic integrity policies to encompass broader questions about the values institutions model through technology choices.
Key ethical dimensions include:
Transparency about AI capabilities and limitations: Students deserve honest information about what AI systems can and cannot do, how they work, and whose interests they serve. This includes transparency about training data, known biases, and commercial relationships.
Agency and consent: Educational AI should enhance rather than diminish student autonomy, with clear opt-in mechanisms and alternatives to AI-mediated experiences where students prefer direct human interaction.
Data stewardship: AI systems that track student interactions generate sensitive data about learning processes, struggles, and capabilities. Institutions must articulate clear principles about data use, retention, and sharing, ensuring student information serves learning rather than surveillance.
Environmental impact: Training and operating large language models consumes substantial energy with significant carbon footprints. Institutions should consider environmental costs in technology choices and prioritize efficiency.
Labor implications: AI integration affects faculty work and employment. Ethical deployment requires honest dialogue about how AI might augment versus replace human instruction, with commitments to preserving meaningful educational labor.
The Raspberry Pi Foundation's (2025) position paper on why children still need to learn programming in the AI age exemplifies this ethical commitment. Rather than treating AI as a substitute for learning fundamental computational skills, the paper argues that "even in a world where AI can generate code, we will need skilled human programmers who can think critically, solve problems, and make ethical decisions" (Colligan et al., 2025). This positions AI as a context that makes human capability more rather than less essential—precisely the ethical stance educational institutions should adopt across domains.
Conclusion
The integration of AI into education presents not primarily a technological challenge but a profoundly human one: how to preserve and strengthen the irreducible human work of learning while thoughtfully incorporating powerful new tools. The evidence reviewed across cognitive science, instructional research, and emerging practice converges on several clear principles.
First, effective AI integration must be grounded in established understanding of how humans learn. Approaches that manage cognitive load, scaffold complexity, provide meaningful feedback, and support metacognitive development succeed; those that overwhelm learners or automate away productive struggle fail regardless of technological sophistication.
Second, assessment must evolve from product-based evaluation toward process-based approaches that make thinking visible and credit demonstrated mastery over time. Traditional forms of examination increasingly measure skill at concealing AI use rather than capability; conversational assessment integrated into ongoing learning offers a more honest and effective alternative.
Third, institutional responses grounded in surveillance and prohibition prove both practically unsustainable and pedagogically counterproductive. Students already use AI ubiquitously; the question is whether institutions provide guidance for thoughtful use or drive this learning underground into zones of anxiety and ambiguity.
Fourth, the skills that matter most—critical judgment, metacognitive awareness, ethical reasoning, collaborative sense-making—become more rather than less essential as AI capabilities advance. Educational excellence in an AI age means developing distinctively human capacities that machines can support but never substitute.
Finally, sustainable change requires distributed design capacity across faculty, protected time for experimentation, and institutional cultures that reward pedagogical innovation. The most effective AI integration emerges not from top-down mandates but from educators experimenting within their disciplines, sharing insights, and collectively refining practice.
The ultimate measure of success will not be technological sophistication but human flourishing: graduates who think critically, learn continuously, collaborate effectively, and use powerful tools with wisdom and purpose. Achieving this requires treating AI neither as threat nor panacea but as a context for re-examining and strengthening educational fundamentals—an opportunity to build learning systems worthy of human potential.
References
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. Journal of the Learning Sciences, 4(2), 167–207.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (FAccT '21).
Bogost, I. (2025, August 17). College students have already changed forever. The Atlantic.
Breakout Learning. (n.d.). What is next pedagogy?
Carr, N. (2025, May 27). The myth of automated learning. New Cartographies.
Clark, A. (2025). Extending minds with generative AI. Nature Communications, 16, 4627.
Colligan, P., Griffiths, M., & Cucuiat, V. (2025). Why kids still need to learn to code in the age of AI. Raspberry Pi Foundation.
Doskaliuk, B., Zimba, O., Yessirkepov, M., Klishch, I., & Yatsyshyn, R. (2025). Artificial intelligence in peer review: Enhancing efficiency while preserving integrity. Journal of Korean Medical Science, 40(7), e92.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58.
Floridi, L. (2025, April 26). Distant writing: Literary production in the age of artificial intelligence. Social Science Research Network.
Freeman, J. (2025, February). Student generative AI survey 2025. HEPI Policy Note 61.
Gedikoglu, M. (2021). Social and emotional learning: An evidence review and synthesis. Education Policy Institute.
Hammond, Z. (2014). Culturally responsive teaching and the brain: Promoting authentic engagement and rigor among culturally and linguistically diverse students. Corwin.
Hattie, J. (2023). Visible learning: The sequel. A synthesis of over 2,100 meta-analyses relating to achievement. Routledge.
Laurillard, D. (2012). Teaching as a design science: Building pedagogical patterns for learning and technology. Routledge.
MIT. (2025). RAISE initiative. Responsible AI for social empowerment and education.
MIT Sloan Teaching & Learning Technologies. (n.d.). AI detectors don't work. Here's what to do instead.
Mollick, E. R., & Mollick, L. (2023, September 23). Assigning AI: Seven approaches for students, with prompts. The Wharton School Research Paper. Social Science Research Network.
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159–1168.
Muijs, D., & Bokhove, C. (2020). Metacognition and self-regulation: Evidence review. Education Endowment Foundation.
Oakley, B., Johnston, M., Chen, K., Jung, E., & Sejnowski, T. (2025, May 11). The memory paradox: Why our brains need knowledge in an age of AI. Social Science Research Network.
Rohrer, D., Dedrick, R. F., & Stershic, S. (2015). Interleaved practice improves mathematics learning. Journal of Educational Psychology, 107(3), 900–908.
Smolen, P., Zhang, Y., & Byrne, J. H. (2016). The right time to learn: Mechanisms and optimization of spaced learning. Nature Reviews Neuroscience, 17(2), 77–88.
Stanford. (n.d.). HELM leaderboards.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
Sweller, J. (2006). The worked example effect and human cognition. Learning and Instruction, 16(2), 165–169.
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.
Wang, K. D., Wu, Z., Tufts, L., Wieman, C., Salehi, S., & Haber, N. (2025). Scaffold or crutch? Examining college students' use and views of generative AI tools for STEM education. In Proceedings of EDUCON 2025.
Yiu, E., Kosoy, E., & Gopnik, A. (2024). Transmission versus truth, imitation versus innovation: What children can do that large language and language-and-vision models cannot (yet). Perspectives on Psychological Science, 19(5), 874–883.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). AI in Education: Building Learning Systems That Elevate Rather Than Erode Human Capability. Human Capital Leadership Review, 27(4). doi.org/10.70175/hclreview.2020.27.4.4