AI Experts Predict Machines Could Outthink Humans by 2040. But Some Are Betting on Even Sooner

The Singularity could be closer than you think.

Tibi Puiu
February 27, 2025 @ 7:19 pm

In the 1950s, mathematician Alan Turing — best remembered by many as the cryptography genius who led the British effort to break the German Enigma codes during WWII — posed a question that would haunt scientists for decades: “Can machines think?”

To answer it, Turing proposed an “imitation game,” now known as the Turing Test. The setup is simple: a human interrogator exchanges a series of typed messages with two respondents, a computer and a human being. Both respondents, one made of flesh and the other of circuits, are concealed behind a partition. If, after a designated time, the interrogator cannot tell which is which, the computer effectively wins, suggesting that such a machine could be considered capable of thought.
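
Turing’s setup is concrete enough to sketch in a few lines of code. Here is a minimal, hypothetical Python illustration (toy stand-ins of our own devising, nothing from Turing’s paper): the two respondents answer identically by construction, so the interrogator’s guess falls to a coin flip, which is exactly what “passing” the test means.

```python
import random

# A minimal sketch of the imitation game. The canned respondents and the
# coin-flip guess are illustrative stand-ins, not real models.

def human_respondent(question: str) -> str:
    return "Hmm, I'd say it depends on the context."

def machine_respondent(question: str) -> str:
    # A perfect mimic, by construction: it answers exactly like the human.
    return "Hmm, I'd say it depends on the context."

def play_imitation_game(num_questions: int = 5) -> bool:
    """Return True if the machine fools the interrogator."""
    # Conceal the respondents behind shuffled labels, as if behind a partition.
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        respondents = {"A": machine_respondent, "B": human_respondent}

    transcript = []
    for i in range(num_questions):
        question = f"Question {i + 1}: what do you think?"
        replies = {label: ask(question) for label, ask in respondents.items()}
        transcript.append((question, replies))

    # A real interrogator would study the transcript; here the replies are
    # identical, so the guess can be no better than chance.
    guess = random.choice(["A", "B"])  # "which label hides the machine?"
    return respondents[guess] is not machine_respondent  # wrong guess = fooled

if __name__ == "__main__":
    fooled = sum(play_imitation_game() for _ in range(1000))
    print(f"The machine fooled the interrogator in {fooled} of 1,000 games")
```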

In the age of AI, we can safely say that machines can pass the Turing Test with flying colors. Now that we have GPT-4, deepfakes, and OpenAI’s “Sora” text-to-video model that can churn out highly realistic video clips from mere text prompts, it seems like we’ve come closer to a thinking machine than ever before.

Artificial General Intelligence

Today, Turing’s question has evolved into a more urgent one. When will machines genuinely think, the way human beings do? And what happens when they do?

Artificial General Intelligence (AGI), the point at which machines can perform any intellectual task as well as humans, has long been the stuff of science fiction. But according to a sweeping analysis of predictions from 8,590 scientists, entrepreneurs, and AI researchers, AGI may be closer than we think. Surveys among these experts suggest a 50% chance it could arrive by 2040. Some even bet on the 2030s.

This timeline has shifted dramatically in recent years. Just a decade ago, many researchers believed AGI was a century away. But the rapid rise of large language models like GPT-4 has accelerated expectations — and sparked intense debate about what AGI really means, whether it’s achievable, and how it will reshape our world.

The road to AGI is paved with bold predictions — and a fair share of over-optimism. In 1965, AI pioneer Herbert A. Simon declared that machines would be capable of doing any human work within 20 years. In the 1980s, Japan’s Fifth Generation Computer project promised machines that could hold casual conversations by the 1990s. Neither materialized.

Yet today, the consensus among AI researchers is shifting.

Are we closer to the Singularity?

Surveys conducted between 2012 and 2023 reveal a growing belief that AGI is not only possible but probable within the next few decades. In 2023, a survey of 2,778 AI researchers estimated a 50% chance of achieving “high-level machine intelligence” by 2040. Entrepreneurs are even more bullish, with figures like Elon Musk and OpenAI’s Sam Altman predicting AGI could arrive sometime between 2026 and 2035. However, tech leaders have an incentive to exaggerate the pace of AI progress, as hype can help them secure more funding or lift their companies’ stock prices.

What’s driving this shift? The exponential growth of computing power, advances in algorithms, and the emergence of models like GPT-4, which demonstrate surprising generalist capabilities in areas like coding, law, and mathematics. Microsoft’s 2023 report on GPT-4 even sparked debate over whether it represented an early form of AGI, after the model matched human performance on math, coding, and law tests (though not quite at expert level).
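
To get a feel for what “exponential growth of computing power” actually implies, here is a back-of-the-envelope Python sketch. The six-month doubling time is an illustrative assumption chosen for the example, not a measured figure for AI compute.

```python
# Back-of-the-envelope: how a fixed doubling time compounds.
# The 6-month doubling period is an illustrative assumption, not a
# measured figure for AI training compute.
DOUBLING_MONTHS = 6

def growth_factor(months: int, doubling_months: int = DOUBLING_MONTHS) -> float:
    """Multiplier on available compute after `months` of steady doubling."""
    return 2.0 ** (months / doubling_months)

for years in (1, 5, 10):
    print(f"{years:>2} years -> {growth_factor(years * 12):,.0f}x the starting compute")
```

At that assumed rate, a decade multiplies compute a millionfold, which is part of why small changes in the estimated doubling time swing AGI forecasts so dramatically.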

In his latest book, The Singularity Is Nearer, published last year, futurist Ray Kurzweil argues that we’re just years away from human-level AI. Kurzweil popularized the term “singularity”: the point at which machines surpass human intelligence and begin improving themselves at an uncontrollable rate.

“Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time. And my five-year-out estimate is actually conservative: Elon Musk recently said it is going to happen in two years,” Kurzweil said.

Kurzweil doubled down and made another wild prediction. He said that by 2045, humans will be able to increase their intelligence a millionfold through advanced brain interfaces. These interfaces, according to Kurzweil, may involve nanobots non-invasively inserted into our capillaries, allowing for a seamless integration of biological and artificial intelligence.

But not everyone is convinced. Some researchers argue that human intelligence is too complex to replicate. Yann LeCun, a pioneer of deep learning, has called for retiring the term AGI altogether, suggesting we focus instead on “advanced machine intelligence.” Others point out that intelligence alone doesn’t solve all problems — machines may still struggle with tasks requiring creativity, intuition, or physical dexterity.

Do we really want the Singularity?

Science fiction has long explored the dangers of superintelligent machines, from Isaac Asimov’s “Laws of Robotics” to the malevolent HAL 9000 in 2001: A Space Odyssey. Today, these fears are echoed by some AI developers, who worry about the risks of creating systems smarter than ourselves.

A 2021 review of 16 articles from the scientific literature, ranging from “philosophical discussions” to “assessments of current frameworks and processes in relation to AGI,” identified a range of risks: AGI removing itself from the control of its human owners or managers; being given, or developing, unsafe goals; the development of unsafe AGI; AGIs with poor ethics, morals, and values; inadequate management of AGI; and existential risks.

A self-improving AGI could revolutionize fields like medicine, climate science, and economics — or it could pose existential threats if misaligned with human values. This has spurred a growing field of “alignment research,” aimed at ensuring that intelligent machines act in humanity’s best interest.

As the race to AGI accelerates, so do the questions. Will quantum computing unlock new frontiers in machine intelligence? Can we overcome the limits of classical computing as Moore’s Law slows? And perhaps most importantly, how do we ensure that AGI benefits humanity rather than harms it?

Predicting the future of AI is a risky business. Perhaps the journey to AGI will prove as much about understanding ourselves as about building smarter machines.
