Aligning AI with human values


Popular culture abounds with examples of artificial intelligence gone haywire, from the creepy HAL 9000 of 2001: A Space Odyssey to the more recent killer androids of Westworld (what a great series!). But you don’t need the Terminator for AI to become dangerous. It doesn’t even have to be self-aware to turn against us, just badly programmed. Imagine what a self-learning automated system trained on big data could do to the stock market.

With this in mind, the IEEE just released the first draft of Ethically Aligned Design, a document that offers ethical considerations for the creation of autonomous and intelligent technologies.

More than 100 experts in AI, law and ethics, philosophy, and politics contributed to the rough draft. The guidelines are still a work in progress, though, and everyone is invited to pitch in.

“By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future,” said Konstantinos Karachalios, managing director of the IEEE Standards Association.

The guidelines are centered around three core principles:


1. Embody the highest ideals of human rights.

2. Prioritize the maximum benefit to humanity and the natural environment.

3. Mitigate risks and negative impacts as artificial intelligence and autonomous systems (AI/AS) evolve as socio-technical systems.

To help technologists embed human norms and values into their systems, the document offers several approaches and methodologies, among them ways to make autonomous and intelligent systems accountable, transparent, and predictable.

“Recent breakthroughs in machine learning and perception will enable researchers to explore bottom-up approaches—in which the AI system learns about its context and about human values—similar to the manner in which a child slowly learns which forms of behavior are safe and acceptable. Of course a child can feel pain and pleasure, empathize with others, and has other capabilities that AI systems cannot presently imitate. Nevertheless, as research on autonomous systems progresses, engineers will explore new ways to either simulate these capabilities, or build alternative mechanisms that fulfill similar functions,” the report reads.

New and existing committees contributing to an updated version of Ethically Aligned Design will be featured at The IEEE Global Initiative’s face-to-face meeting at the Robert S. Strauss Center, University of Texas at Austin, on 5–6 June 2017.