

AI Experts Predict Machines Could Outthink Humans by 2040. But Some Are Betting on Even Sooner

The Singularity could be closer than you think.

Tibi Puiu
February 27, 2025 @ 7:19 pm


In the 1950s, mathematician Alan Turing — best remembered by many as the cryptography genius who led the British effort to break the German Enigma codes during WWII — posed a question that would haunt scientists for decades: “Can machines think?”

Turing proposed an “imitation game” to answer it. The game, now known as the Turing Test, is simple: a human interrogator exchanges a series of typed messages with two respondents, one made of flesh and the other of circuits, both concealed behind a partition. If, after a designated time, the interrogator can’t tell them apart, the computer effectively wins, suggesting that such a machine could be considered capable of thought.
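For readers who like to see the mechanics, here is a minimal sketch of the imitation game as a program. Everything in it is illustrative: machine_reply, human_reply, and naive_judge are hypothetical stand-ins for a chatbot, a person typing at a terminal, and a human interrogator, and a real test would involve free-form conversation rather than canned replies.

```python
import random

# Hypothetical stand-ins for the two respondents. In a real test,
# machine_reply would call a chatbot and human_reply would relay a
# person's typed answer.
def machine_reply(question: str) -> str:
    return "I suppose that depends on how you define the terms."

def human_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def imitation_game(questions, judge):
    """Run one round of the imitation game.

    The judge receives two anonymous transcripts, labeled A and B, and
    must guess which one the machine produced. Returns True if the
    machine fooled the judge.
    """
    respondents = [machine_reply, human_reply]
    random.shuffle(respondents)  # hide which label belongs to whom

    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in zip("AB", respondents)
    }

    guess = judge(transcripts)                       # "A" or "B"
    actual = "AB"[respondents.index(machine_reply)]
    return guess != actual  # the machine wins if the judge guesses wrong

# A placeholder judge that guesses at random, standing in for a human.
def naive_judge(transcripts):
    return random.choice("AB")

questions = ["Can machines think?", "What did you have for breakfast?"]
print("Machine fooled the judge:", imitation_game(questions, naive_judge))
```

The essential structure is all there: two hidden respondents, one judge, and a verdict based purely on the transcripts.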

In the age of AI, we can safely say that machines can pass the Turing Test with flying colors. Now that we have GPT-4, deepfakes, and OpenAI’s “Sora” text-to-video model that can churn out highly realistic video clips from mere text prompts, it seems like we’ve come closer to a thinking machine than ever before.

Artificial General Intelligence

Today, Turing’s question has evolved into a more urgent one. When will machines think in the genuine sense that human beings think? And what happens when they do?

Artificial General Intelligence (AGI), the point at which machines can perform any intellectual task as well as humans, has long been the stuff of science fiction. But, according to a sweeping analysis of predictions from 8,590 scientists, entrepreneurs, and AI researchers, AGI may be closer than we think. Surveys among these experts suggest a 50% chance it could arrive by 2040. Some even bet on the 2030s.

This timeline has shifted dramatically in recent years. Just a decade ago, many researchers believed AGI was a century away. But the rapid rise of large language models like GPT-4 has accelerated expectations — and sparked intense debate about what AGI really means, whether it’s achievable, and how it will reshape our world.

The road to AGI is paved with bold predictions — and a fair share of over-optimism. In 1965, AI pioneer Herbert A. Simon declared that machines would be capable of doing any human work within 20 years. In the 1980s, Japan’s Fifth Generation Computer project promised machines that could hold casual conversations by the 1990s. Neither materialized.

Yet today, the consensus among AI researchers is shifting.

Are we closer to the Singularity?

Surveys conducted between 2012 and 2023 reveal a growing belief that AGI is not only possible but probable within the next few decades. In 2023, a survey of 2,778 AI researchers estimated a 50% chance of achieving “high-level machine intelligence” by 2040. Entrepreneurs are even more bullish, with figures like Elon Musk and OpenAI’s Sam Altman predicting AGI could arrive somewhere between 2026 and 2035. However, tech leaders have an incentive to exaggerate the pace of AI progress, since hype can help them secure funding or boost their companies’ stock prices.

What’s driving this shift? The exponential growth of computing power, advances in algorithms, and the emergence of models like GPT-4, which demonstrate surprising generalist capabilities in areas like coding, law, and mathematics. Microsoft’s 2023 report on GPT-4 even sparked debate over whether the model represented an early form of AGI, noting that it matched human performance on math, coding, and law, though not quite at expert level.
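To get a feel for what “exponential growth” means in practice, here is a quick back-of-the-envelope sketch. The six-month doubling time is an assumption picked for illustration, not a measured figure.

```python
# Back-of-the-envelope: how compute compounds under an assumed doubling
# time. The 6-month doubling period is an illustrative assumption,
# not a measured figure.
DOUBLING_TIME_YEARS = 0.5

def growth_factor(years: float) -> float:
    """Total multiplier on available compute after a span of years."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

for years in (2, 5, 10):
    print(f"After {years} years: ~{growth_factor(years):,.0f}x the compute")

# After 2 years: ~16x the compute
# After 5 years: ~1,024x the compute
# After 10 years: ~1,048,576x the compute
```

Under that assumption, a single decade compounds into a roughly millionfold increase, which helps explain why timelines that once looked absurd now get taken seriously.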

Last year, in his latest book, The Singularity Is Nearer, futurist Ray Kurzweil argued that we’re just years away from human-level AI. Kurzweil popularized the term “singularity”, a point at which machines surpass human intelligence and begin improving themselves at an uncontrollable rate.

“Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time. And my five-year-out estimate is actually conservative: Elon Musk recently said it is going to happen in two years,” Kurzweil said.

Kurzweil doubled down and made another wild prediction. He said that by 2045, humans will be able to increase their intelligence a millionfold through advanced brain interfaces. These interfaces, according to Kurzweil, may involve nanobots non-invasively inserted into our capillaries, allowing for a seamless integration of biological and artificial intelligence.

But not everyone is convinced. Some researchers argue that human intelligence is too complex to replicate. Yann LeCun, a pioneer of deep learning, has called for retiring the term AGI altogether, suggesting we focus instead on “advanced machine intelligence.” Others point out that intelligence alone doesn’t solve all problems — machines may still struggle with tasks requiring creativity, intuition, or physical dexterity.

Do we really want the Singularity?

Science fiction has long explored the dangers of superintelligent machines, from Isaac Asimov’s “Laws of Robotics” to the malevolent HAL 9000 in 2001: A Space Odyssey. Today, these fears are echoed by some AI developers, who worry about the risks of creating systems smarter than ourselves.

A 2021 review of 16 articles from the scientific literature, ranging from “philosophical discussions” to “assessments of current frameworks and processes in relation to AGI”, identified a range of risks: AGI removing itself from the control of its human owners or managers; AGI being given or developing unsafe goals; the development of unsafe AGI; AGIs with poor ethics, morals, and values; inadequate management of AGI; and existential risks.

A self-improving AGI could revolutionize fields like medicine, climate science, and economics — or it could pose existential threats if misaligned with human values. This has spurred a growing field of “alignment research,” aimed at ensuring that intelligent machines act in humanity’s best interest.

As the race to AGI accelerates, so do the questions. Will quantum computing unlock new frontiers in machine intelligence? Can we overcome the limits of classical computing as Moore’s Law slows? And perhaps most importantly, how do we ensure that AGI benefits humanity rather than harms it?

Predicting the future of AI is a risky business. But perhaps the journey to AGI will prove to be as much about understanding ourselves as it is about building smarter machines.

