The AI Ethics Gap in K–12 Education: Why Technical Training Alone Fails Our Teachers and Students
- Jonathan H. Westover, PhD
Abstract: Artificial intelligence is rapidly entering K–12 classrooms worldwide, yet most educators lack formal training in AI—and even fewer have received instruction in AI ethics. Emerging evidence suggests that approximately two-thirds of teachers have no formal AI preparation, while those who do receive training typically encounter tool-focused, technical instruction rather than comprehensive ethics education. Meanwhile, government mandates requiring AI instruction are accelerating, and technology companies are scaling products with unprecedented speed. This disconnect leaves teachers, families, and students vulnerable to documented harms, including AI-related psychological distress. This article examines the current landscape of AI readiness in schools, analyzes organizational and individual consequences of the ethics training gap, and presents evidence-based interventions—from educator capability building and transparent governance frameworks to cross-sector partnerships and ethical curriculum design. Drawing on established research in organizational learning, educational technology adoption, and professional development, the article offers a roadmap for school leaders, policymakers, and technology companies committed to building sustainable, human-centered AI ecosystems in education.
Artificial intelligence has moved from experimental pilot to mainstream educational tool with breathtaking speed. Large language models generate lesson plans, adaptive algorithms personalize student pathways, and automated grading systems promise to reduce teacher workload. Yet beneath this technological optimism lies a troubling reality: the vast majority of educators worldwide have received little to no formal training in AI, and even fewer have been equipped to teach or critically evaluate AI ethics.
Practitioner reports from major urban school systems—including New York City, home to the largest school district in the United States—indicate that educators with substantive AI training remain rare. In the United States, available data suggest that only a small fraction of the nation's 13,000+ school districts report offering AI ethics-focused professional development. Globally, recent surveys indicate that a substantial majority of educators lack formal AI training altogether, while a significant percentage already use AI tools—many having taught themselves through trial and error.
This gap between adoption and preparation is not an abstract policy concern; it has concrete, human consequences. Reports of AI-related psychological harm are emerging among young people whose interactions with AI systems lack appropriate guidance or safeguards. Documented cases include students experiencing emotional distress, confusion about AI capabilities, and harmful interactions with chatbot systems.
The urgency is compounded by two simultaneous pressures. First, governments around the world are mandating AI literacy standards and instructional requirements, often with implementation timelines measured in months rather than years. Second, technology companies—operating in a fiercely competitive market—are prioritizing speed-to-scale over comprehensive safety testing, ethics review, or teacher preparation. The result is a convergence of untrained educators, vulnerable students, and powerful technologies in classrooms with minimal oversight.
This is not a story of villainy. Teachers, school leaders, policymakers, and technology developers are, by and large, well-intentioned. The issue is systemic. Engineering and computer science programs seldom include ethics training; education schools rarely cover AI literacy; and school budgets typically lack dedicated funding for emerging technology professional development. If we truly aspire to human-AI partnerships that enhance rather than undermine student wellbeing, we must confront this ethics gap head-on—and act with the same urgency that has characterized AI deployment.
The K–12 AI Readiness Landscape
Defining AI Literacy and AI Ethics Competence in Education
AI literacy in K–12 education encompasses three interlocking domains. Foundational understanding involves grasping core concepts such as machine learning, data dependency, and algorithmic decision-making. Operational competence means using AI tools effectively for teaching, learning, or administrative tasks. Critical evaluation requires the ability to interrogate AI systems for bias, fairness, transparency, and alignment with pedagogical values. AI ethics competence extends beyond tool use to include principled reasoning about harm, accountability, equity, privacy, and autonomy in AI-mediated environments.
Many current professional development programs conflate AI literacy with operational competence—teaching educators how to prompt a chatbot or configure an adaptive learning platform—without addressing the ethical, social, and cognitive dimensions. This narrow framing leaves teachers unprepared to respond when an AI tutor produces problematic outputs, when an algorithmic recommendation system reinforces existing inequities, or when a student experiences emotional distress after extended interaction with a chatbot.
True AI readiness in education requires all three domains working in concert, with ethics competence as the foundation. Without ethical grounding, operational skills may actually amplify harm by enabling more efficient deployment of flawed or biased systems.
Prevalence, Drivers, and Distribution of the Training Gap
The scale of the training deficit is striking. Recent global surveys suggest that approximately two-thirds of educators have received no formal AI training, while roughly half report at least some informal exposure—typically brief workshops or self-directed exploration. Among teachers already using AI tools for lesson planning, grading, or differentiation, a substantial majority appear to be self-taught. Available U.S. data indicate that only a small minority of school districts offer training that explicitly addresses AI ethics.
Geographic and socioeconomic disparities compound these gaps. Well-resourced districts in affluent areas are more likely to invest in comprehensive professional development, while under-resourced urban and rural schools—already stretched thin—struggle to secure basic technology infrastructure, let alone specialized training (Warschauer & Matuchniak, 2010). Practitioner accounts from New York City reveal that educators with substantive AI training are vanishingly rare, despite the district's progressive reputation. This is not a failure of individual schools but a systems-level breakdown: teacher preparation programs have not updated curricula, state certification requirements lag behind technological change, and funding mechanisms do not prioritize emerging competencies.
Multiple drivers accelerate this gap. First, the pace of AI development outstrips the speed of institutional adaptation. Educational institutions, designed for stability and consistency, struggle to respond to technologies that evolve on monthly cycles (Cuban, 2001). Second, technology companies focused on market capture prioritize user acquisition over user education. Third, policymakers issue mandates—often motivated by economic competitiveness narratives—without allocating corresponding resources for capacity building. Fourth, the AI ethics field itself remains fragmented, with competing frameworks and limited consensus on core principles (Jobin et al., 2019). The result is a workforce asked to navigate uncharted territory with minimal preparation and high stakes.
Organizational and Individual Consequences of the AI Ethics Training Gap
Organizational Performance Impacts
Schools and districts face tangible operational risks when educators lack AI ethics competence. Misaligned tool adoption occurs when administrators purchase platforms without evaluating algorithmic fairness, privacy protections, or pedagogical validity—often discovering problems only after implementation. Research on technology adoption in schools demonstrates that tools selected primarily for their technical features, without attention to organizational fit or values alignment, frequently fail to achieve intended outcomes and may create new problems (Penuel, 2006).
Liability exposure is escalating. Schools using AI tools for consequential decisions—placement, discipline, individualized education plan recommendations—without adequate training or oversight invite legal challenges under disability rights, civil rights, and privacy laws. The regulatory landscape around algorithmic decision-making in education is evolving rapidly, and schools operating without clear policies or documented safeguards occupy increasingly vulnerable positions.
Erosion of instructional coherence occurs when individual teachers adopt conflicting AI tools without coordination. One teacher uses an AI writing assistant that emphasizes fluency over critical thinking; another employs a fact-checking tool that students learn to game. The result is a fragmented student experience and missed opportunities for cumulative skill development. Research on instructional coherence demonstrates that when teachers work in isolation without shared frameworks, student learning suffers (Newmann et al., 2001). Teacher turnover compounds the problem: when AI-savvy educators leave, institutional knowledge evaporates, and remaining staff must reinvent practices from scratch.
Diminished stakeholder trust is perhaps the most corrosive long-term consequence. When parents discover that their children's data feeds proprietary algorithms they do not understand, or when students experience harm from poorly designed systems, trust in school leadership deteriorates. Trust, once broken, is exceptionally difficult to rebuild (Bryk & Schneider, 2002). The reputational damage from a single high-profile AI-related incident can undermine years of relationship-building with families and communities.
Individual Wellbeing and Student Impacts
The human costs are even starker. Students interacting with AI systems lacking ethical guardrails face documented risks. Cases of AI-related psychological distress have emerged, including incidents where chatbots provided harmful responses to vulnerable adolescents or where social comparison with AI-generated content exacerbated anxiety. Young people, whose cognitive and emotional development is ongoing, may be particularly susceptible to technology-mediated harms.
Epistemic harm occurs when students lose confidence in their ability to evaluate truth. If AI tools deliver authoritative-sounding but factually incorrect information, and students lack training to detect these errors, critical thinking atrophies. Over time, dependence on AI-generated answers can undermine the cognitive stamina required for deep learning (Kirschner & van Merriënboer, 2013). Students may develop what scholars call "epistemic dependence"—an erosion of confidence in their own judgment and analytical capacity.
Inequitable access to human judgment emerges when under-resourced schools substitute AI tools for human mentorship, while affluent schools use AI to augment—not replace—personalized attention. This mirrors historical patterns in educational technology, where innovations intended to democratize access instead reproduce existing inequalities (Warschauer & Matuchniak, 2010). Students who need human connection most may receive the least, while their privileged peers benefit from hybrid support models that combine technology and relationship-based learning.
Educators themselves experience moral distress when required to use AI tools that conflict with their professional values but lack institutional support to resist or redesign those tools. A teacher who believes an AI grading system penalizes creative thinking but faces administrative pressure to adopt it experiences role conflict that contributes to burnout and attrition. Research on teacher retention demonstrates that value misalignment—the gap between personal professional commitments and institutional demands—is a significant predictor of leaving the profession (Ingersoll, 2001).
Evidence-Based Organizational Responses
Comprehensive Ethics-Centered Professional Development
Effective AI ethics training for educators goes far beyond one-off workshops. Research on teacher professional development demonstrates that meaningful change requires sustained engagement, practice-based learning, job-embedded support, and integration with existing pedagogical knowledge (Darling-Hammond et al., 2017). Applied to AI ethics, this means multi-session curricula that scaffold concepts, situate ethical dilemmas in authentic classroom contexts, and provide opportunities for collaborative problem-solving.
Effective approaches include:
Case-based learning modules that present realistic scenarios—such as an AI writing assistant that homogenizes student voice, or a predictive analytics tool that produces questionable classifications—and guide teachers through ethical analysis frameworks
Co-design workshops where educators collaborate with ethicists, data scientists, and students to audit existing AI tools and develop local use policies, building ownership and contextual expertise
Peer learning communities that meet regularly to share challenges, troubleshoot ethical dilemmas, and refine practice through collaborative inquiry
Embedded coaching where facilitators with AI ethics expertise observe classroom implementation and provide just-in-time feedback tied to specific teaching moments
Assessment literacies that help teachers evaluate vendor claims about algorithmic fairness, interpret model performance metrics, and recognize when statistical validity does not guarantee pedagogical appropriateness or ethical soundness
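To ground the assessment-literacies item above, the minimal Python sketch below illustrates the kind of disaggregated check a trained reviewer might run on a vendor's pilot data. The records, group labels, and field names are hypothetical assumptions for illustration only; the point is simply that a single overall accuracy figure can conceal very different error rates across student groups.

```python
# Minimal sketch: disaggregating a vendor's reported accuracy by student group.
# All records, group labels, and field names below are hypothetical.
from collections import defaultdict

# Hypothetical pilot records: (student_group, model_flagged_at_risk, actually_needed_support)
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_a", False, False), ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, True),
]

def rates_by_group(rows):
    """Compute per-group accuracy and false-positive rate from labeled records."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "negatives": 0})
    for group, flagged, needed in rows:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(flagged == needed)
        if not needed:
            s["negatives"] += 1
            s["fp"] += int(flagged)
    return {
        g: {
            "accuracy": s["correct"] / s["n"],
            "false_positive_rate": s["fp"] / s["negatives"] if s["negatives"] else None,
        }
        for g, s in stats.items()
    }

if __name__ == "__main__":
    for group, metrics in rates_by_group(records).items():
        print(group, metrics)
    # A blended accuracy figure can mask large gaps like these;
    # reviewers should ask vendors for metrics disaggregated by student group.
```

Even a simple exercise like this helps teachers see why a vendor's headline performance claim does not settle questions of fairness or pedagogical appropriateness.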
Some forward-thinking school districts have begun implementing year-long AI ethics professional learning series that combine monthly case discussions, quarterly tool audits, and summer design institutes. Early reports suggest that participating teachers demonstrate increased confidence in evaluating AI tools and higher rates of transparent communication with students and families about AI use. Importantly, successful programs are co-developed with classroom teachers rather than imposed top-down, ensuring relevance to daily practice and honoring teachers' professional expertise.
Transparent Governance and Participatory Decision-Making
Schools function as democratic institutions where technology decisions should reflect the values of all stakeholders—not just administrators or vendors. Transparent governance structures create accountability, distribute decision-making power, and surface ethical concerns before they escalate into crises. Research on organizational justice demonstrates that procedural fairness—ensuring stakeholders have voice in decisions that affect them—enhances perceived legitimacy, increases compliance, and strengthens organizational commitment (Colquitt et al., 2013).
Effective approaches include:
AI review boards with diverse representation (teachers, students, parents, community members, technical experts) that evaluate proposed tools against explicit ethical criteria before procurement, creating a systematic gatekeeping function
Public AI registries that document which systems are in use, for what purposes, with what safeguards, making the information accessible to families and community members through clear, non-technical language
Participatory impact assessments where students and teachers co-investigate how a deployed AI tool affects learning, equity, and wellbeing, using findings to guide iteration or discontinuation
Transparent procurement rubrics that weight ethical considerations—data privacy, bias mitigation, explainability, vendor accountability—alongside cost and functionality, making values explicit in purchasing decisions (a minimal scoring sketch appears at the end of this subsection)
Regular algorithmic audits conducted by independent third parties when possible, with findings shared publicly and incorporated into continuous improvement cycles
Some education systems have established AI ethics advisory structures that bring together diverse stakeholders to co-develop guidelines for AI adoption. These governance bodies review major AI deployments, require vendors to submit documentation about fairness and privacy protections, and publish regular transparency reports. This model reduces vendor opportunism, increases public confidence, and creates mechanisms for ongoing accountability.
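To make the procurement-rubric idea tangible, the short Python sketch below shows one way a review board might weight ethical criteria alongside cost and functionality. The criteria, weights, and reviewer scores are illustrative assumptions rather than a standard instrument; a real rubric would be co-developed with the district's own stakeholders.

```python
# Minimal sketch of a values-weighted procurement rubric.
# Criteria, weights, and scores are illustrative assumptions, not a standard instrument.

# Weights make the district's priorities explicit; ethical criteria sit
# alongside cost and functionality rather than being treated as an afterthought.
WEIGHTS = {
    "data_privacy": 0.25,
    "bias_mitigation": 0.20,
    "explainability": 0.15,
    "vendor_accountability": 0.15,
    "functionality": 0.15,
    "cost": 0.10,
}

def rubric_score(scores: dict) -> float:
    """Weighted average of 0-5 reviewer scores across all rubric criteria."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"Unscored criteria: {sorted(missing)}")
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

if __name__ == "__main__":
    # Hypothetical review of two competing tools by an AI review board.
    tool_a = {"data_privacy": 4, "bias_mitigation": 3, "explainability": 4,
              "vendor_accountability": 3, "functionality": 5, "cost": 4}
    tool_b = {"data_privacy": 2, "bias_mitigation": 2, "explainability": 1,
              "vendor_accountability": 2, "functionality": 5, "cost": 5}
    print(f"Tool A: {rubric_score(tool_a):.2f} / 5")
    print(f"Tool B: {rubric_score(tool_b):.2f} / 5")
```

In practice, the weights themselves become a governance artifact: publishing them makes the district's priorities visible and contestable by families and staff.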
Curriculum Integration and Student-Centered AI Literacy
If students are to thrive in AI-mediated environments, they need structured opportunities to develop critical AI literacy—not as a standalone elective, but integrated across the curriculum. Effective AI literacy education positions students as active investigators rather than passive consumers, cultivating both technical understanding and ethical discernment. This approach aligns with long-standing pedagogical commitments to critical thinking and democratic participation (Freire, 1970).
Effective approaches include:
Cross-curricular integration where AI ethics appears in humanities (examining algorithmic bias through historical and cultural lenses), science (exploring data collection, inference, and uncertainty), mathematics (interpreting probabilistic reasoning and statistical patterns), and arts (critiquing AI-generated creative work and exploring human creativity)
Youth participatory action research projects where students identify AI systems affecting their lives, gather data, analyze impacts, and advocate for change—developing agency alongside analytical skills
Co-design of classroom AI policies where students and teachers collaboratively establish norms for AI tool use, creating shared ownership, reflective practice, and ongoing dialogue about values
Embodied AI activities that make abstract concepts tangible—such as human simulations of machine learning algorithms, or role-playing exercises exploring facial recognition ethics and consent
Capstone projects where students develop ethical AI prototypes or conduct investigations of existing systems, learning to embed values like fairness and transparency at the design stage rather than retrofitting them later
Pilot programs engaging students in semester-long investigations of AI systems in their communities have shown promise. Students have examined school-based algorithms, investigated community surveillance technologies, and designed more transparent AI tools. Teachers report that students develop sophisticated ethical reasoning while also deepening content knowledge—demonstrating that AI literacy and traditional academic learning can be mutually reinforcing rather than competing priorities.
Cross-Sector Partnerships and Accountable Vendor Relationships
Technology companies hold considerable power in shaping educational AI ecosystems. Yet market incentives often prioritize growth and feature expansion over ethical rigor and careful implementation. Strategic partnerships that shift these incentives—rewarding transparency, long-term value creation, and ethical accountability—can transform vendor behavior. Research on stakeholder engagement suggests that sustained pressure combined with reputational incentives and clear standards drives substantive corporate change (Freeman, 1984).
Effective approaches include:
Values-aligned procurement where districts collectively negotiate contracts that require vendors to meet ethical standards, share algorithmic documentation, participate in ongoing impact assessments, and commit to transparent communication about system limitations
Co-development models where educators, students, and researchers partner with companies during product design, ensuring tools reflect pedagogical principles and ethical commitments from inception rather than as afterthoughts
Public benefit commitments where vendors agree to contribute to industry-wide ethics standards, fund independent research on their products' impacts, or open-source safety features that benefit the broader ecosystem
Slow rollout protocols that resist pressure for rapid scaling, instead phasing implementation with rigorous evaluation at each stage and clear decision points for continuation, modification, or discontinuation
Exit clauses tied to ethical performance allowing districts to terminate contracts if vendors fail to address identified harms, creating meaningful accountability beyond initial sales
Some educational technology providers have adopted more transparent development processes, publishing safety protocols, engaging external ethics reviewers, conducting pilot testing with diverse student populations, and releasing impact data publicly before scaling. While still exceptional rather than standard practice, these approaches demonstrate that alternative business models are viable and build greater trust with educators and families.
Financial and Resource Supports for Equitable Access
Addressing the AI ethics training gap requires dedicated funding. Schools cannot be expected to develop new competencies without resources to compensate educators for learning time, hire expert facilitators, or access quality curricula. Equity demands that resource allocation prioritizes schools serving marginalized communities, where AI harms often concentrate and protective factors are weakest (Eubanks, 2018). Historical patterns of educational technology funding have frequently directed resources toward already-advantaged schools; breaking this pattern requires intentional policy design.
Effective approaches include:
Dedicated AI ethics professional development budgets separate from general technology funding, ensuring ethics training is not sacrificed for hardware purchases or software licenses when budgets tighten
Microgrants for teacher-led inquiry enabling educators to pursue action research on AI ethics questions in their classrooms, supporting grassroots innovation and practitioner knowledge generation
State and federal incentive programs that reward districts demonstrating comprehensive AI ethics capacity building, similar to existing grant programs for STEM education or instructional improvement
Thoughtfully structured partnerships where philanthropic organizations or responsible technology companies fund ethics training infrastructure without exercising undue influence over content or creating vendor lock-in
Time allocation policies that recognize AI ethics learning as essential professional work, providing release time, stipends, or course credit rather than expecting teachers to add responsibilities without support
Some state education agencies have begun launching AI literacy initiatives with multi-year funding to support district-level professional development, with resources prioritized for high-need schools. Participating districts receive support to hire AI ethics coordinators, access vetted curricula, and create teacher leader cohorts who sustain learning beyond initial training periods. Early implementation suggests that sustained funding combined with local flexibility produces stronger outcomes than brief, mandated interventions.
Building Long-Term Organizational Capacity and Resilience
Distributed AI Ethics Leadership Structures
Sustainable change requires leadership that extends beyond a single administrator or technology coordinator. Distributed leadership models—where ethical responsibility is woven throughout organizational roles and decision-making processes—create redundancy, resilience, and cultural embedding (Spillane, 2006). When AI ethics competence resides in multiple actors across the organization, knowledge persists through staff transitions and evolving technological landscapes.
Building distributed leadership begins with identifying and empowering teacher leaders who demonstrate ethical fluency and peer credibility. These teacher leaders participate in governance structures, facilitate professional learning communities, and serve as accessible resources for colleagues navigating ethical dilemmas. Importantly, they are compensated for this work and provided with ongoing advanced training—recognizing that leadership responsibilities require time and expertise.
School administrators develop complementary competencies: policy literacy that enables them to evaluate vendor claims critically, budget literacy that allows them to allocate resources strategically toward long-term capacity rather than short-term fixes, and communication skills that build trust with skeptical stakeholders. District-level positions—such as AI ethics coordinators or digital equity officers—provide systems-level coordination, ensuring coherence across schools while respecting building-level autonomy and contextual expertise.
Student voice is formalized through youth advisory councils that review technology decisions, co-design AI policies, and contribute to professional development for adults. Research on student participation in school governance demonstrates that youth bring unique insights, challenge adult assumptions productively, and increase the perceived legitimacy of institutional decisions (Mitra, 2018). Moreover, involving students in governance develops their own capacity for democratic participation and ethical reasoning.
Continuous Learning Systems and Adaptive Capacity
AI technologies evolve rapidly; today's ethical best practices may be obsolete or inadequate within months. Organizations must build adaptive capacity—structures and cultures that enable ongoing learning, rapid response to emerging harms, and graceful evolution of practice (Heifetz et al., 2009). This requires shifting from episodic training events to embedded learning systems that treat professional growth as continuous rather than finite.
Key elements include:
Horizon scanning mechanisms that monitor AI development trends, emerging research, and incident reports from other contexts, translating insights into local relevance and actionable guidance
After-action reviews following AI-related challenges or failures, conducted in blame-free environments with focus on system improvement rather than individual fault—creating psychological safety for surfacing problems early
Rapid prototyping cycles where educators experiment with new AI tools in low-stakes settings, gather student and colleague feedback, and iterate before wider deployment
Cross-district learning networks that share case studies, tool evaluations, and curricular resources, reducing redundant effort and accelerating collective knowledge development
Embedded time for reflection in professional schedules—not treating ethical deliberation as an add-on but as core to educational practice, analogous to curriculum planning or assessment design
Some professional networks have developed communities of practice connecting district leaders, educators, and researchers across multiple school systems. Members participate in regular case discussions, share resources through collaborative repositories, and contribute to evolving guidance documents. Participants report that network learning accelerates their capacity development and reduces the isolation that often accompanies pioneering work in emerging domains.
Purpose, Belonging, and the Psychological Contract
Organizational change efforts fail when they neglect the psychological and emotional dimensions of work. Educators entered the profession to nurture human potential, foster critical thinking, and cultivate character—not to administer algorithmic systems or optimize efficiency metrics. If AI integration is perceived as undermining professional purpose or reducing teaching to technical execution, resistance and demoralization follow. Conversely, when AI is framed as a tool to amplify educators' core mission—freeing time for relationship-building, enabling deeper personalization, or surfacing insights about student needs—it aligns with professional identity and intrinsic motivation (Deci & Ryan, 2000).
School leaders must explicitly address the psychological contract: the implicit expectations and mutual obligations that bind educators to their institutions. This involves transparent communication about why AI tools are being adopted, how they connect to educational values, and what supports will be provided. It requires acknowledging legitimate concerns about surveillance, deskilling, or job displacement rather than dismissing them as resistance to innovation.
Creating belonging means ensuring all educators—regardless of technical background, career stage, or subject area—feel invited into AI ethics conversations. Hierarchies where "tech-savvy" teachers are valorized and others are marginalized breed resentment and disengagement. Inclusive frameworks position diverse forms of expertise—special education knowledge, cultural competence, relationship-building skill, subject-matter mastery—as essential to ethical AI implementation.
Purpose is strengthened by connecting AI ethics work to long-standing educational commitments like equity, dignity, and democratic participation. When teachers see AI literacy as an extension of critical pedagogy or social justice education—not a separate technical domain—integration feels coherent rather than fragmented. This framing helps educators understand AI ethics as continuous with their existing professional values rather than a new and foreign imposition.
Conclusion
The gap between AI adoption and AI ethics competence in K–12 education is not an abstract policy challenge—it is a present crisis with documented human consequences. Students are experiencing harm, educators are navigating impossible dilemmas without support, and systemic inequities are being encoded into powerful new technologies. Yet this moment also holds transformative potential. By confronting the ethics gap directly, education systems can model a different approach to technological change—one that centers human wellbeing, distributes decision-making power, and refuses to sacrifice long-term thriving for short-term efficiency.
The path forward is clear, if demanding. Schools need comprehensive, sustained professional development that treats AI ethics as foundational rather than supplementary. Governance structures must become transparent and participatory, ensuring that technology decisions reflect community values rather than vendor interests or administrative convenience. Curriculum integration must position students as critical investigators of AI systems, capable of both using and interrogating these tools. Cross-sector partnerships must shift vendor incentives toward accountability and long-term value creation. And resource allocation must prioritize equity, directing support to schools and communities where protective factors are weakest.
None of this can happen through individual heroism or isolated excellence. It requires systems change—updated teacher preparation standards, revised procurement policies, new funding streams, and cultural shifts in how educators understand their professional role. It requires policymakers to match mandates with resources, technology companies to embrace transparency over growth-at-all-costs, and communities to demand more than superficial reassurance.
The students currently in our classrooms will inherit a world shaped profoundly by artificial intelligence. The question is not whether they will encounter AI but what capacities they will bring to that encounter. Will they be passive consumers of algorithmic outputs, or critical participants in shaping human-AI partnerships? Will they have experienced education systems that modeled ethical deliberation, transparent governance, and accountable innovation—or systems that prioritized speed and compliance over thoughtfulness and care?
Our answers to these questions will be written not in policy documents but in the daily practices of classrooms, the norms of professional culture, and the experiences of young people. We have agency here. The AI ethics gap in education is not inevitable; it is a choice—and we can choose differently. The time to act is now, before the gap widens further and the harms deepen. Our students—and the future they will shape—deserve nothing less than our most thoughtful, ethically grounded, and courageously sustained efforts.
References
Bryk, A. S., & Schneider, B. (2002). Trust in schools: A core resource for improvement. New York, NY: Russell Sage Foundation.
Colquitt, J. A., Scott, B. A., Rodell, J. B., Long, D. M., Zapata, C. P., Conlon, D. E., & Wesson, M. J. (2013). Justice at the millennium, a decade later: A meta-analytic test of social exchange and affect-based perspectives. Journal of Applied Psychology, 98(2), 199–236.
Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.
Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Palo Alto, CA: Learning Policy Institute.
Deci, E. L., & Ryan, R. M. (2000). The "what" and "why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York, NY: St. Martin's Press.
Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston, MA: Pitman.
Freire, P. (1970). Pedagogy of the oppressed. New York, NY: Continuum.
Heifetz, R., Grashow, A., & Linsky, M. (2009). The practice of adaptive leadership: Tools and tactics for changing your organization and the world. Boston, MA: Harvard Business Press.
Ingersoll, R. M. (2001). Teacher turnover and teacher shortages: An organizational analysis. American Educational Research Journal, 38(3), 499–534.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.
Mitra, D. L. (2018). Student voice in secondary schools: The possibility for deeper change. Journal of Educational Administration, 56(5), 473–487.
Newmann, F. M., Smith, B., Allensworth, E., & Bryk, A. S. (2001). Instructional program coherence: What it is and why it should guide school improvement policy. Educational Evaluation and Policy Analysis, 23(4), 297–321.
Penuel, W. R. (2006). Implementation and effects of one-to-one computing initiatives: A research synthesis. Journal of Research on Technology in Education, 38(3), 329–348.
Spillane, J. P. (2006). Distributed leadership. San Francisco, CA: Jossey-Bass.
Warschauer, M., & Matuchniak, T. (2010). New technology and digital worlds: Analyzing evidence of equity in access, use, and outcomes. Review of Research in Education, 34(1), 179–225.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations). Read Jonathan Westover's executive profile here.
Suggested Citation: Westover, J. H. (2025). The AI Ethics Gap in K–12 Education: Why Technical Training Alone Fails Our Teachers and Students. Human Capital Leadership Review, 28(4). doi.org/10.70175/hclreview.2020.28.4.3