
Between Hope and Fear: Thoughts on AI Surpassing Human-Level Intelligence



The possibility of artificial intelligence (AI) surpassing human-level intelligence is a topic that elicits both hope and fear. As AI and machine learning continue to advance at an exponential rate, the likelihood of superintelligent machines emerging increases. While this could potentially solve many of humanity's most intractable problems and usher in a new era of unprecedented prosperity, it also threatens humans' role at the top of the societal and technological hierarchy. As leaders in organizations today, it is imperative we thoughtfully consider both the opportunities and risks associated with superintelligent AI.


Today we will explore some of the latest research on AI and superintelligence, outline potential impacts - both positive and negative - for organizations, and suggest strategies for embracing the hopeful vision while mitigating existential risks.


Research on AI and Superintelligence


While general artificial intelligence, or AI as intelligent as humans, remains on the far horizon, a growing number of researchers argue that narrow AI focused on specific tasks could, through recursive self-improvement, lead to superhumanly intelligent, generalist artificial agents as early as this century (Müller & Bostrom, 2016). The theoretical limits of machine learning are not fully understood, and some predict an "intelligence explosion" could occur whereby a generally intelligent machine recursively self-improves and designs intelligences vastly more capable than itself (Bostrom, 2014). Leaders today need a basic understanding of the technological trends driving this research.


One of the most significant trends is the rise of deep learning and neural networks. Inspired by biological neural systems, deep neural networks have achieved superhuman performance on narrow tasks like image recognition, video games, and strategic decision-making (Silver et al., 2017). Researchers are exploring increasingly powerful self-supervised techniques for knowledge extraction from massive datasets without human labels (Brown et al., 2020). These advances in unsupervised and self-supervised learning could help machines progress more autonomously in developing broader intelligence.
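To make the self-supervised idea concrete, where training labels come from the data itself rather than from human annotators, consider a deliberately simplified sketch: predicting a masked value in a sequence from its neighbors. Real systems use deep neural networks and vastly larger corpora; the signal and linear model here are illustrative stand-ins only.

```python
import numpy as np

# Toy self-supervised task: predict a masked value in a sequence from its
# neighbors. The "labels" come from the data itself -- no human annotation.
rng = np.random.default_rng(0)

# Unlabeled data: noisy samples of a smooth signal.
t = np.linspace(0, 2 * np.pi, 500)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)

# Build (context -> masked target) pairs directly from the raw data.
X = np.column_stack([signal[:-2], signal[2:]])  # left and right neighbors
y = signal[1:-1]                                # the "masked" middle value

# Fit a linear predictor via least squares (a stand-in for a neural network).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The model learns to interpolate: weights land near [0.5, 0.5].
pred = X @ w
mse = np.mean((pred - y) ** 2)
print(w, mse)
```

The same pretext-task pattern (mask part of the input, predict it from the rest) underlies large language model pretraining, just at enormously greater scale.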


Impacts on Organizations - Positive Possibilities


If developed and applied responsibly, superintelligent AI has potential for enormous societal benefit. Many global challenges like disease, poverty, and environmental crises may become solvable (Bostrom, 2017). Organizations could leverage superintelligent assistants to fulfill their social missions at scales never before possible. Some envision AI helping end global warming through low-cost clean energy and carbon capture (Hanson, 2016).


For organizations specifically, superintelligent AI could boost productivity and efficiencies. Intelligent assistants could take over mundane and repetitive tasks, freeing employees for more creative work requiring human skills like empathy, intuition, strategic thinking and leadership (Ford, 2015). Top talent no longer required to toil at routine jobs may find more fulfilling work elsewhere. Automation could even enhance jobs by assisting employees with information and recommendations.


Certain difficult optimization problems could be cracked. AI may design new materials, medicines, and technologies by evaluating exponentially more possibilities than humans. It could optimize complex supply chains, project management, logistics and resource allocation. Organizational strategies, structures and processes themselves may become more rational and robust (Brynjolfsson & McAfee, 2014). With automation taking over more manual jobs, greater prosperity could be achieved working fewer hours (Ford, 2015).
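As a minimal sketch of the resource-allocation point, consider dividing a limited budget of compute hours across projects by value density, the greedy rule that optimally solves the fractional (divisible) knapsack problem. All project names and figures below are hypothetical illustrations, not data from the source.

```python
# Toy resource allocation: assign a limited budget of compute hours across
# projects to maximize expected value. For divisible resources, funding the
# highest value-per-hour project first is provably optimal (fractional
# knapsack). All names and numbers are illustrative assumptions.
projects = {
    "forecasting":   {"hours_needed": 40, "value_per_hour": 5.0},
    "logistics":     {"hours_needed": 60, "value_per_hour": 3.0},
    "materials_rnd": {"hours_needed": 80, "value_per_hour": 4.0},
}
budget = 100  # total compute hours available

def allocate(projects, budget):
    """Greedy allocation by value density; returns ({name: hours}, total value)."""
    plan, total_value = {}, 0.0
    ranked = sorted(projects.items(),
                    key=lambda kv: kv[1]["value_per_hour"], reverse=True)
    for name, p in ranked:
        hours = min(p["hours_needed"], budget)
        if hours <= 0:
            break
        plan[name] = hours
        total_value += hours * p["value_per_hour"]
        budget -= hours
    return plan, total_value

plan, value = allocate(projects, budget)
print(plan, value)  # forecasting gets 40 hours, materials_rnd the remaining 60
```

Real supply-chain and logistics problems involve integer constraints and interdependencies that require linear or constraint programming solvers, which is precisely where machine-assisted optimization outpaces manual planning.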


Impacts on Organizations - Potential Downsides


While superintelligent AI promises gains, existential risks also warrant consideration. Jobs at high risk of automation include transportation, food service, retail sales, and clerical/administrative roles that employ millions worldwide (Frey & Osborne, 2013). Mass unemployment at such a scale could trigger economic and social upheaval. Organizations must thoughtfully manage this transition and retrain displaced workers for newly created jobs (Ford, 2015).


Competition from AI systems may disrupt entire industries. Digital technologies have already transformed publishing, music, retail and more. If AI proves more adept than humans at strategic planning, financial analysis, medical diagnosis or creative work, professionals in many fields could face existential challenges to their livelihoods and identities. Organizations will need to adapt, leveraging AI for new opportunities rather than fighting disruption.


Some worry about the controllability of superintelligent machines (Bostrom, 2014). While narrow AI today remains limited to specific tasks, a generally intelligent agent's goals and behavior become far less predictable. It may be challenging to ensure the humanity- and prosperity-enhancing "disposition" of such a powerful, non-human intellect. Without proper oversight and controls, advanced AI could harm humanity through some type of accident, or may act according to goals counter to human well-being. There are also concerns that a generally intelligent agent may be difficult to shut down once created.


Strategies for Organizations to Navigate a Post-Human Future


Despite uncertainties, organizations today can guide progress responsibly by adopting prudent strategies. With purposeful planning, AI's benefits may vastly outweigh any negatives. What follows are suggestions for leadership:


  • Plan ahead for workforce transitions. Major job changes require years to implement thoughtfully, with consideration for communities and social impacts. Organizations must assess changing skill needs, partnering with education systems to boost retraining where automation displaces roles.

  • Prioritize job enhancement through human-AI collaboration. Rather than replacing jobs entirely, leverage AI to elevate roles through augmented intelligence. Partner technologies with human judgment, empathy and leadership. Foster a culture where humans and AI empower one another.

  • Guide AI development through your organizational mission and values. Input desired ethical and societal outcomes into the machine learning process from the start. Ensure any AI developed or adopted by the organization aligns with and enhances your mission, drawing on value-alignment research (Soares & Fallenstein, 2017).

  • Be proactively transparent in AI development and applications. Disclose model parameters, datasets and decision processes used by AI to appropriate oversight bodies and the public to build trust. Thoughtful explainability of "black box" machine learning becomes crucial alongside accountability.

  • Establish multi-stakeholder partnerships. Collaborate across sectors, with technical research institutions, oversight boards and public interest groups to develop beneficial AI solutions accountable to humanity's well-being (Future of Life Institute, n.d.). Pool organizational wisdom to maximize knowledge and guide progress responsibly.

  • Develop forecasting capabilities. Anticipate events through techniques like scenario planning to prepare strategic responses across a range of technological and social trends. Wise guidance requires circumspect foresight into prospective impacts over decades.

  • Advocate for responsible global coordination. No single actor controls technological change. Support international collaboration on AI safety research and governance to realize shared benefits and mitigate common existential risks cooperatively on a global scale. Organizations play a role in responsible civic stewardship.
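The scenario-planning suggestion above can be sketched with a simple Monte Carlo simulation that turns uncertain assumptions into a spread of outcomes rather than a single point forecast. Every rate below is an illustrative assumption, not a prediction.

```python
import random

# Minimal scenario-planning sketch: Monte Carlo simulation of workforce
# impact under uncertain automation timelines. All parameter ranges are
# illustrative assumptions, not forecasts.
random.seed(42)

def simulate_one(years=10):
    """Simulate cumulative share of roles automated over a planning horizon."""
    automated = 0.0
    annual_rate = random.uniform(0.01, 0.06)  # uncertain pace of automation
    retraining = random.uniform(0.005, 0.03)  # offsetting retraining capacity
    for _ in range(years):
        automated += max(annual_rate - retraining, 0.0)
    return min(automated, 1.0)

runs = sorted(simulate_one() for _ in range(10_000))

# Report optimistic, median, and pessimistic scenarios from the distribution.
p10, p50, p90 = runs[1000], runs[5000], runs[9000]
print(f"10th pct: {p10:.2f}, median: {p50:.2f}, 90th pct: {p90:.2f}")
```

Planning against the 10th/50th/90th percentile scenarios, rather than a single estimate, is what lets leadership prepare strategic responses before the uncertainty resolves.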


By adhering to principles of transparency, collaboration, responsible leadership and prudent long-term thinking, organizations can help ensure humanity's future remains bright as transformative technologies like AI progress. With patience and deliberation focused on benefiting all people, the hopes of a post-scarcity world become attainable without plumbing the depths of existential fears. The choices leaders make today will echo through the centuries to come.


Conclusion


Artificial intelligence advancing to superhuman levels portends immense uncertainty yet also tremendous opportunity if developed conscientiously. For organizations focused on bettering society through their unique missions and expertise, now is the time to envision how these emerging technologies might realize humanity's boldest aspirations. Through steadfast guidance by purpose and principles rather than reaction or fear, leadership navigating these technological frontiers can help bring forth AI's boundless promise while keeping its perils at bay. With prudent planning and multilateral cooperation between sectors, a future of enhanced human flourishing through intelligent machines remains conceivable. By thoughtfully preparing strategic responses and embedding ethical priorities early, organizations of today can maximize AI for empowering tomorrow.


References

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

  • Bostrom, N. (2017). Strategic implications of openness in AI development. Global Policy, 8(2), 135-148. https://doi.org/10.1111/1758-5899.12355

  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

  • Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.

  • Ford, M. (2015). Rise of the robots: Technology and the threat of a jobless future. Basic Books.

  • Future of Life Institute. (n.d.). About AI safety. https://futureoflife.org/ai-safety/

  • Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? Oxford Martin School Working Paper.

  • Hanson, R. (2016). The age of em: Work, love, and life when robots rule the earth. Oxford University Press.

  • Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 555-572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_26

  • Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., & Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. https://doi.org/10.1038/nature24270

  • Soares, N., & Fallenstein, B. (2017). Aligning superintelligence with human values: A technical research agenda. Machine Intelligence Research Institute.

 

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Chair/Professor, Organizational Leadership (UVU); OD Consultant (Human Capital Innovations).





Human Capital Leadership Review

ISSN 2693-9452 (online)

