By Jonathan H. Westover, PhD
Abstract: This article proposes a holistic, multi-pronged strategy for establishing an ethical workforce ready for the rise of AI. It begins by examining the current AI landscape and projections of job impacts to understand organizational preparedness needs. Key elements include developing an ethical framework through stakeholder engagement that outlines shared principles of responsibility; implementing comprehensive technical and soft-skills training programs; creating new dedicated ethics and data governance roles while expanding existing functions to oversee AI accountability; and integrating metrics and career pathways tied to assessing systems and processes through an ethical lens, driving cultural normalization. Continuous self-evaluation through qualitative and quantitative metrics supports transparency, justifies investment and improves the framework. Proactively cultivating stewardship abilities across all functions, through values clarification, tailored learning, distributed responsibilities and self-reflection, is positioned as exemplary leadership guiding technology's societal impacts amid workplace transformations driven by emerging technologies.
As Artificial Intelligence and automation increasingly pervade the workplace, organizations must prepare their workforce with the skills, knowledge and mindset to responsibly develop and work alongside AI technologies.
Today we will explore a research-based framework for companies to establish an ethical and competent workforce ready for the AI-powered future.
Understanding the AI Landscape
To prepare workers, leadership must first understand the current and projected impact of AI. Researchers predict widespread job disruption as many routine tasks become automated (Frey & Osborne, 2017). While some occupations will be replaced, many new roles will emerge to complement AI (World Economic Forum, 2018). Skill demand will shift towards the social and creative abilities in which humans excel, such as complex problem-solving, critical thinking and emotional intelligence (World Economic Forum, 2016).
At the same time, AI risks like bias, lack of explainability, privacy violations and job disruption threaten society if left unaddressed (Partnership on AI, 2018). For technology to benefit all, organizations must ensure its development and use uphold principles of fairness, safety, transparency and accountability (Jobin et al., 2019).
Establishing an Ethical Framework
An ethical framework provides shared values and guidelines for developing AI responsibly. Company leadership should engage internal and external stakeholders to understand their diverse perspectives. Based on this input, and benchmarking against industry best practices, leadership can craft a set of AI ethics principles tailored to the organization's unique culture and goals.
Some common elements of ethical frameworks include:
Respect for human autonomy, privacy, dignity, and diversity
Accountability and transparency of AI systems
Fairness, non-discrimination, and mitigating unintended harms
Safety and security of AI technologies
Explainability and opportunity for human oversight
Clearly communicating these principles establishes an ethical North Star to guide internal decision-making. Leadership must also incentivize and reward adherence to the framework over short-term profit goals that could compromise responsibility.
Developing an Ethical Training Program
A comprehensive training program teaches workers to develop and use AI consistent with the company's ethical framework. Curricula should cover technical aspects like model testing as well as the "soft skills" needed for human-AI collaboration.
Areas of focus include:
Data Responsibility: Workers learn best practices for collecting, managing and governing diverse, representative datasets that avoid harms from bias or privacy issues. They practice techniques for identifying and mitigating bias throughout the AI lifecycle, as illustrated in the sketch after this list.
System Accountability: Engineers hone skills for documenting system designs, testing models for accuracy and unwanted behaviors, and integrating explainability and control mechanisms. Non-technical roles gain literacy in recognizing and addressing accountability concerns.
Human Values in Design: All roles expand their perspective-taking abilities and appreciation for diverse human experiences. They apply a "human-in-the-loop" approach, prioritizing oversight, feedback mechanisms and designing AI as a collaborative tool rather than replacement.
Lifelong Learning Mindset: Given rapid technological change, the program fosters continuous learning habits and an interdisciplinary, collaborative mindset. It builds motivation for ongoing self-education to ensure responsible development keeps pace with advancing capabilities.
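To make the data responsibility focus area concrete, consider a minimal sketch of the kind of hands-on exercise such a program might include: checking a model's predictions for a demographic parity gap. The code below is illustrative only; the toy data, group labels and 0.2 threshold are assumptions for demonstration, not recommended values.

```python
# Minimal sketch: checking binary model predictions for demographic parity.
# All names, data and thresholds are illustrative assumptions.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Compute the positive-prediction rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy data: model predictions (1 = favorable outcome) and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_gap(preds, groups)
    print(f"Selection rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")

    # A team might document and escalate when the gap exceeds a threshold
    # chosen for the use case (0.2 here is purely illustrative).
    if gap > 0.2:
        print("Gap exceeds threshold; flag for bias review.")
```

In a training setting, participants might extend such an exercise to other fairness metrics, real datasets, or the documentation practices covered under system accountability.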
Implementing Responsible Roles
To operationalize its framework, a company establishes new roles and integrates responsibility into existing jobs. Dedicated positions like "AI Ethicist," "Data Steward," and "Model Auditor" monitor adherence and progress, while traditional roles expand to include accountability dimensions.
For example:
Product managers ensure features respect privacy and do not unintentionally harm users
Marketers evaluate marketing campaigns for potential bias or unfair targeting
IT specialists implement secure, transparent logging and control systems (see the sketch following this list)
HR uses AI tools fairly and avoids automation bias in important processes like hiring
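As one illustration of the IT example above, the sketch below records each model prediction as a structured, reviewable audit entry. The function names, record fields and file destination are hypothetical; a production system would add access controls, retention policies and tamper protection.

```python
# Minimal sketch of transparent audit logging around model predictions.
# The model, logger configuration and record fields are illustrative.

import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("model_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("model_audit.jsonl"))

def predict_with_audit(model, features, user_id):
    """Run a prediction and write a structured, reviewable audit record."""
    prediction = model(features)  # any callable standing in for a real model
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "features": features,
        "prediction": prediction,
        "model_version": getattr(model, "version", "unknown"),
    }
    audit_log.info(json.dumps(record))  # one JSON record per line
    return prediction

if __name__ == "__main__":
    toy_model = lambda feats: int(sum(feats) > 1.0)  # stand-in model
    print(predict_with_audit(toy_model, [0.4, 0.9], user_id="demo-123"))
```

Structured, append-only records like these give auditors and model reviewers a trail they can query without depending on the engineering team that built the system.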
Job descriptions, performance metrics and career advancement incorporate evaluating AI systems and work practices through an ethical lens. With accountability built into daily work, responsibility becomes a cultural norm rather than an add-on task.
Measuring Progress and Impact
Ongoing assessment measures the effectiveness of the organizational effort. Both qualitative and quantitative metrics evaluate workplace culture shifts, worker knowledge gains, and the real-world impact of deployed systems. This feedback also informs ongoing training improvements and framework refinement.
Example metrics include:
Participant satisfaction and learning assessments from training programs
Inclusion of responsibility reviews in projects, products and processes
Anonymous employee surveys on perceived priority of ethical values
Analysis of AI output for unintended harms or fairness issues
Customer or public reporting of issues, and a system for remediation (a simple tracking sketch follows this list)
Recognition in ethical benchmarks or awards for leadership in the field
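To illustrate the issue-reporting item above, here is a minimal, assumed sketch of how reported issues and their remediation status might be aggregated for a recurring leadership report. The fields, sources and categories are hypothetical placeholders.

```python
# Minimal sketch: summarizing reported AI issues and remediation status
# for a recurring metrics report. Fields and categories are illustrative.

from dataclasses import dataclass

@dataclass
class IssueReport:
    source: str      # e.g., "customer", "employee", "audit"
    category: str    # e.g., "bias", "privacy", "accuracy"
    remediated: bool

def remediation_summary(reports):
    """Aggregate issue counts and remediation rate for leadership reporting."""
    total = len(reports)
    remediated = sum(r.remediated for r in reports)
    by_category = {}
    for r in reports:
        by_category[r.category] = by_category.get(r.category, 0) + 1
    rate = remediated / total if total else 1.0
    return {"total": total, "remediation_rate": rate, "by_category": by_category}

if __name__ == "__main__":
    reports = [
        IssueReport("customer", "bias", remediated=True),
        IssueReport("employee", "privacy", remediated=False),
        IssueReport("audit", "bias", remediated=True),
    ]
    print(remediation_summary(reports))
    # {'total': 3, 'remediation_rate': 0.666..., 'by_category': {'bias': 2, 'privacy': 1}}
```

Even a simple summary like this gives leadership a consistent basis for the regular reporting described below.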
Regular reporting of metrics to leadership holds the organization transparently accountable and charts progress towards long-term responsibility goals. It also provides empirical justification for continued investment in the program.
Conclusion
As AI reshapes work, societies expect leadership from companies at the forefront of innovation. With foresight and commitment to responsible development, any organization can establish the conditions to ensure its workforce and technologies benefit all humanity. By operationalizing an ethics framework through targeted training, new roles and continuous improvement, companies nurture an ethical culture and competent workforce ready to realize AI's promise, while avoiding potential harms. Such proactive steps set the high-road example for others and build long-term competitive advantage in a future shaped by responsible, trustworthy AI.
References
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Partnership on AI. (2018). Tenets. https://www.partnershiponai.org/tenets/
World Economic Forum. (2016). The future of jobs: Employment, skills and workforce strategy for the Fourth Industrial Revolution. http://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf
World Economic Forum. (2018). Towards a reskilling revolution: A future of jobs for all. https://www.weforum.org/reports/towards-a-reskilling-revolution
Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Chair/Professor, Organizational Leadership (UVU); OD Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2024). Preparing the Workforce for Ethical, Responsible and Trustworthy AI. Human Capital Leadership Review, 13(4). https://doi.org/10.70175/hclreview.2020.13.4.7