
Stephen Hawking: You Should Support Wealth Redistribution

In July, Professor Stephen Hawking took the time to answer questions posed by Reddit users in an AMA (Ask Me Anything), addressing one of the less-discussed aspects of advancing technology and robotization: the distribution of wealth.

by Mihai Andrei
November 20, 2015 - Updated on January 5, 2017
in World Problems

In July, Professor Stephen Hawking took the time to answer questions posed by Reddit users in an AMA (Ask Me Anything), addressing one of the less-discussed aspects of advancing technology and robotization: the distribution of wealth. Here's the question, which is fascinating in itself, and Hawking's answer:

Q: “Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?”

Answer:

A: If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

You hear people and the media talk a lot about a potential war with robots or emerging conflicts with technology, but in my view this is a far more pressing issue. If we reach a point where machines produce all, or most, of what we need, then we won't really need so many people to work; and without work, under our current system, a huge income-inequality gap will be created (or rather accentuated). With this in mind, we need to rethink how we distribute wealth; the robots won't be the enemy, we will.

A potential solution is a system called "basic income", which revolves around the idea of giving every person a sum of money each month sufficient to live on, whether they work or not. You can read more about it here. Several cities in the Netherlands are already starting to implement it, and Finland is considering it on a national level. There are other ideas as well, but one thing seems certain: the increasing use of robots and machines can either create a world where wealth is distributed harmoniously among the population, or a divided world of the very rich and the very poor.

As for Professor Hawking, his entire AMA (which you can read on Reddit) was very insightful. Here are a couple more Q&As related to artificial intelligence:

Image via Flickr.

Q: Professor Hawking- Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Answer:


A: You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

Question:

Q: Hello Doctor Hawking, thank you for doing this AMA. I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Answer:

A: The latter. There's no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don't trust anyone who claims to know for sure that it will happen in your lifetime or that it won't happen in your lifetime. When it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let's start researching this today rather than the night before the first strong AI is switched on.

Tags: robotization, Stephen Hawking

Mihai Andrei

Dr. Andrei Mihai is a geophysicist and founder of ZME Science. He has a Ph.D. in geophysics and archaeology and has completed courses from prestigious universities (with programs ranging from climate and astronomy to chemistry and geology). He is passionate about making research more accessible to everyone and communicating news and features to a broad audience.

© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.
