ZME Science

How to make an ethically aligned artificial intelligence — IEEE publishes first guidelines

Top experts in A.I. are discussing how to embed human values in machines.

by Tibi Puiu
December 19, 2016
in News, Tech

Image credit: IEEE

Popular culture abounds with examples of artificial intelligence gone haywire, from the creepy HAL 9000 in 2001: A Space Odyssey to the more recent killer androids of Westworld (what a great series!). But A.I. doesn't need to reach Terminator levels to become dangerous. It doesn't even have to be self-aware to turn against us; badly programmed is enough. Imagine the havoc an automated trading system built on self-learning and big data could wreak on the stock market.

With this in mind, the IEEE has just released the first draft of Ethically Aligned Design, a document that offers ethical considerations for the creation of autonomous and intelligent technologies.

More than 100 experts in AI, law and ethics, philosophy, and politics contributed to the rough draft. The guidelines are still a work in progress, though, and everyone is invited to pitch in.

“By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future,” said Konstantinos Karachalios, managing director of the IEEE Standards Association, in a statement.

The guidelines are centered around three core principles:

1. Embody the highest ideals of human rights.

2. Prioritize the maximum benefit to humanity and the natural environment.

3. Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems.

To help embed human norms and values into their systems, the document offers technologists several approaches and methodologies, among them ways to make autonomous and intelligent systems accountable, transparent, and predictable.

“Recent breakthroughs in machine learning and perception will enable researchers to explore bottom-up approaches—in which the AI system learns about its context and about human values—similar to the manner in which a child slowly learns which forms of behavior are safe and acceptable. Of course a child can feel pain and pleasure, empathize with others, and has other capabilities that AI system cannot presently imitate. Nevertheless, as research on autonomous systems progresses, engineers will explore new ways to either simulate these capabilities, or build alternative mechanisms that fulfill similar functions,” the report reads.

New and existing committees contributing to an updated version of Ethically Aligned Design will be featured at The IEEE Global Initiative’s face-to-face meeting at The Robert S. Strauss Center at The University of Texas at Austin on 5-6 June 2017.

Tags: artificial intelligence

© 2007-2021 ZME Science - Not exactly rocket science. All Rights Reserved.
