Don’t you just hate it when someone explains a joke? There’s probably a special place in hell for that. Luckily for them, machines have no souls. Google’s new algorithm is a special machine — it is powerful and clever enough to not only understand a joke it has never encountered before, but also to eloquently explain the gist of the joke. This is a lot harder than it sounds and it is a huge step towards building algorithms that can understand human language, and perhaps human nature itself.
Artificial intelligence has come a long way in the past decade. Machine learning algorithms can now do many things that were previously thought impossible for a machine, including generating original creative work such as novels, screenplays, and even music. AIs can now diagnose tumors, read maps, and play games, often many times faster and more accurately than the best humans can. Now, AI is tackling humor.
The new algorithm developed by Google Research is known as the Pathways Language Model, or PaLM. According to Google, PaLM is a model with 540 billion parameters, capable of grasping complex concepts and relationships that were thought to be out of reach of computers.
At the moment, the PaLM system can scale training across 6,144 chips, using a combination of English and multilingual datasets drawn from books, Wikipedia, web documents, conversations, and even GitHub code.
The Pathways model is the current state of the art in the field. It aims to open a new chapter in AI by having a single system solve virtually any type of problem or task, rather than training thousands of individual algorithms each designed to complete a narrow task.
“Imagine if, every time you learned a new skill (jumping rope, for example), you forgot everything you’d learned – how to balance, how to leap, how to coordinate the movement of your hands – and started learning each new skill from nothing,” Jeff Dean, Google Senior Fellow at Google Research, wrote in a blog post. “That’s more or less how we train most machine learning models today. Rather than extending existing models to learn new tasks, we train each new model from nothing to do one thing and one thing only (or we sometimes specialize a general model to a specific task). The result is that we end up developing thousands of models for thousands of individual tasks.”
“We want a model to have different capabilities that can be called upon as needed, and stitched together to perform new, more complex tasks – a bit closer to the way the mammalian brain generalizes across tasks,” Dean added.
Pathways isn’t an AI that explains jokes — that’s just one of its many capabilities. It’s essentially a natural language juggernaut that can distinguish between cause and effect and can make sense of combinations of concepts in their appropriate context. Understanding and explaining jokes simply happens to be a great way to demonstrate this capability, because humor often involves saying one thing but meaning another. Human communication has no evident, clear-cut rules, so a conventional, rule-based algorithm cannot capture the depth and richness of human language: you can never feed a machine enough examples to cover every possible communication scenario.
The same machine can also solve simple mathematical problems, explaining its reasoning step by step — something that Google calls “chain of thought prompting”.
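To make the idea concrete, here is a minimal sketch of chain-of-thought prompting in plain Python. Instead of asking only for a final answer, the few-shot exemplar spells out the intermediate reasoning steps, and the model is expected to continue in the same style. The exemplar, the helper names (`build_cot_prompt`, `extract_answer`), and the sample completion are all illustrative assumptions, not PaLM's actual prompts or API.

```python
# Illustrative chain-of-thought prompting: the exemplar shows worked
# reasoning ("5 + 6 = 11") before stating the answer, so the model
# imitates that step-by-step pattern for new questions.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model continues in the same style."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

def extract_answer(model_output: str) -> str:
    """Pull the final answer out of a '... The answer is X.' completion."""
    marker = "The answer is "
    return model_output.rsplit(marker, 1)[-1].rstrip(". \n")

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. How many now?"
)
# A chain-of-thought completion would look like this (hypothetical):
completion = "The cafeteria had 23 apples. 23 - 20 = 3. 3 + 6 = 9. The answer is 9."
print(extract_answer(completion))  # prints "9"
```

The point of the technique is that the reasoning steps themselves appear in the output, which is exactly what lets the model "explain its reasoning step by step" as described above.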
It can also write new code from a simple text prompt, translate code from one language to another, and fix compilation errors in existing code.
But the most striking feature is its natural language understanding and generation. Not only can it distinguish cause and effect and understand conceptual combinations, as demonstrated by its interpretation of comedy, but it can even guess a movie from an emoji.
PaLM is well on its path toward taking AI to the next level — closing the gap between machine learning and human learning. But there is still much work to do, especially in improving ethical considerations and data sourcing to mitigate potential biases that could lead to toxic stereotypes and other unwanted results.