

Google made an annoying AI that explains why jokes are supposed to be funny

The ultimate goal is to close the gap between machine and human reasoning.

Tibi Puiu
April 5, 2022 @ 10:56 pm


Don’t you just hate it when someone explains a joke? There’s probably a special place in hell for that. Luckily for them, machines have no souls. Google’s new algorithm is a special machine — it is powerful and clever enough not only to understand a joke it has never encountered before, but also to eloquently explain the gist of it. This is a lot harder than it sounds, and it is a huge step towards building algorithms that can understand human language, and perhaps human nature itself.

Artificial intelligence has come a long way in the past decade. Machine learning algorithms can now do many things that were previously thought impossible for a machine, including generating original creative work such as novels, screenplays, and even music. AIs can now diagnose tumors, read maps, and play games, often faster and more accurately than the best humans can. Now, AI is tackling humor.

The example below is revealing.

The new algorithm developed by Google Research is known as the Pathways Language Model, or PaLM. According to Google, PaLM has 540 billion parameters and can handle complex concepts and relationships that were thought to be out of reach of computers.

At the moment, the PaLM system scales training across 6,144 chips, using a combination of English and multilingual datasets drawn from books, Wikipedia, web documents, conversations, and even GitHub code.

The Pathways model is the current state of the art in the field. It aims to open a new chapter in AI by having a single system solve virtually any type of problem or task, rather than training thousands of individual algorithms, each designed to complete one narrow task.

“Imagine if, every time you learned a new skill (jumping rope, for example), you forgot everything you’d learned – how to balance, how to leap, how to coordinate the movement of your hands – and started learning each new skill from nothing,” Jeff Dean, Google Senior Fellow at Google Research, wrote in a blog post. “That’s more or less how we train most machine learning models today. Rather than extending existing models to learn new tasks, we train each new model from nothing to do one thing and one thing only (or we sometimes specialize a general model to a specific task). The result is that we end up developing thousands of models for thousands of individual tasks.”

“We want a model to have different capabilities that can be called upon as needed, and stitched together to perform new, more complex tasks – a bit closer to the way the mammalian brain generalizes across tasks,” Dean added.

Pathways isn’t an AI that explains jokes — that’s just one of its many capabilities. It’s essentially a natural language juggernaut that can distinguish between cause and effect and make sense of combinations of concepts in their appropriate context. Understanding and explaining jokes is simply a great way to demonstrate this capability, because humor often involves saying one thing while meaning another. Human communication follows no clear, explicit rules, so a conventional machine algorithm cannot capture the depth and richness of human language: you can never feed a machine enough examples to cover every possible communication scenario.

The same machine can also solve simple mathematical problems, explaining its reasoning step by step — something that Google calls “chain of thought prompting”.
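As a rough illustration of what chain-of-thought prompting looks like in practice, the idea is simply to show the model one worked example that spells out its reasoning before asking a new question. The sketch below is hypothetical Python: the generate function is a placeholder, not Google's actual PaLM interface, which is not publicly callable this way.

# Minimal sketch of chain-of-thought prompting.
# NOTE: generate() is a hypothetical stand-in for a large language model call;
# it returns a canned string so the sketch runs without any external service.

def build_prompt(question: str) -> str:
    # One worked example whose answer spells out the reasoning step by step,
    # followed by the new question we want answered in the same style.
    worked_example = (
        "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
        "How many tennis balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    return worked_example + "Q: " + question + "\nA:"

def generate(prompt: str) -> str:
    # Placeholder for an actual model call.
    return "(the model's step-by-step answer would appear here)"

if __name__ == "__main__":
    prompt = build_prompt(
        "The cafeteria had 23 apples. It used 20 and bought 6 more. "
        "How many apples are left?"
    )
    print(generate(prompt))

Because the worked example shows its arithmetic before stating the answer, the model tends to imitate that pattern and lay out its own reasoning for the new question instead of jumping straight to a guess.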

It can also write new code from a simple text prompt, translate code from one language to another, and fix compilation errors in existing code.

But the most impressive thing about PaLM is its natural language understanding and generation. Not only can it distinguish cause and effect and understand combinations of concepts, as demonstrated by its interpretation of comedy, but it can even guess a movie from an emoji.

PaLM is well on its path toward taking AI to the next level — closing the gap between machine learning and human learning. But there is still much work to do, especially in improving ethical considerations and data sourcing to mitigate potential biases that may lead to toxic stereotypes and other unwanted results.
