What happens when two of Britain’s top neuroscientists and AI researchers sit down to talk about artificial intelligence? You don’t get the usual hype about machines taking over the world. Instead, you get a crash course in how technology rewires the human brain, why cats are more impressive than chess grandmasters, and what Aristotle got wrong about reading.
Steve Fleming (Professor of Cognitive Neuroscience, University College London) and Chris Summerfield (Professor of Cognitive Neuroscience, University of Oxford/Google DeepMind) aren’t Silicon Valley futurists. They’re researchers who spend their days studying how humans make decisions, reflect on themselves, and learn. That’s refreshing if you’re tired of hearing about AI hype. When they talk about AI, they don’t see it as an alien threat. They see it as the latest in a long line of technologies that humans adopt and adapt to, from clay tablets to smartphones. Just as writing once reprogrammed our brains to externalize memory, today’s neural networks are changing how we think about creativity, reasoning, and even consciousness itself.
ZME Science: I’d like to begin with a thorny question: Will AI make us dumb?
SF: I think the simple answer is we don’t know yet. One aspect that educational providers are grappling with at the moment is how people are using these technologies to think. And there are a couple of ways in which people are using them. One is as creative partners, to help you structure your thinking, structure whatever you’re producing for work or school, and so on. That, I think, is helpful and fits in with the broader pattern of us using external tools to support our cognition, going back to pen and paper. But then there’s another mode where people use them in a more mindless fashion and get them to produce content on their behalf. We don’t yet know what impact that might have on the capacity to develop more sophisticated critical thinking skills, but there is a potential danger that it could undermine them.
CS: Technology always changes the brain, right? Take a technology like reading and writing. You might think that’s ancient, and it is ancient, but in evolutionary terms it’s very new: only about 5,000 years old. That means our brains evolved in an era before reading existed. You can use that as a way of thinking about how technology can change the brain.
We actually have bits of the brain which, during development, become specialized for reading. And as I said, the brain didn’t evolve to read, because during the time when those evolutionary pressures were operating, there wasn’t any reading. So it’s not that evolution sculpted the brain for the technology; rather, the brain adapts to the technology during development. Reading is a technology that changes how we think, changes it a lot, because we are able to externalize things. Not everyone has thought that’s a good thing. Today everyone thinks reading is a good thing, but Aristotle famously thought reading was a really, really bad idea because it would impair everyone’s memory. So we have these shifts that happen because of technology, and usually there’s resistance. I think we’re seeing that resistance right now with digital technology. Generations that grow up with a technology then just think it’s perfectly normal and can’t imagine what all the fuss is about.
ZME Science: Let me ask about intelligence. How do we define it in relation to AI?
CS: We’ve always defined intelligence in terms of what we think we are good at, and the same goes for AI. Intelligence tests tend to privilege things that the makers of intelligence tests are good at, and the same has always happened in AI research. We used to think that if you built an AI that could play chess better than a human, then you’d basically solved AI. We achieved that in about 1997, and everyone said, well, hold on a minute, we’ve built an AI that can play chess, but we haven’t built an AI that is generally intelligent.
Then people said, well what about language? If we build an AI that can talk to us in language, then we’ll have solved AI. Now we have solved that problem, and clearly, the models we’ve built are not intelligent in other ways. I think it’s just because we think about the things that humans are good at. Humans are very good at chess, at least relative to cats, and we are the only species that can speak in sentences. So we think of those things as being about intelligence. We don’t think of the really hard things animals can do, like what your cat can do — jumping on the kitchen counter, chasing mice, navigating its environment. These things are actually really hard problems to solve. And in particular, the social things — a lot of species have very sophisticated social behaviors. The current models that we have, of course, don’t have any friends, so they’re not much good at that.
SF: Just to add, one way that we clearly diverge from AI is that we have bodies. We have multimodal sensory input. And the fact that, as babies, we need to develop ways of first controlling our bodies, developing fine motor control, and so on — that underpins a lot of things that we take for granted as part and parcel of being human.
Interacting with and navigating our world, stacking the dishwasher, cooking dinner, and so on: all of those things were not considered part of intelligence because we just took them for granted. As Chris says, the more intellectual aspects seem, in hindsight, easier to solve than the stuff that takes a much longer time to develop in childhood, which is all about being embodied and interacting with the world.
ZME Science: What about creativity?
SF: Creativity is another of those concepts that are hard to really pin down. In one sense the current generative AIs are very creative. The generative aspect underpins the capacity to sample from these huge models of human language and recombine it in novel ways, to generate new poems and new music. In that sense, yes, there’s a creative aspect to these technologies, perhaps surprisingly so. If we had imagined ten or twenty years ago what these systems might do, we wouldn’t necessarily have put the creative industries at the top of the list of those that were going to be disrupted.
CS: Yeah, so when we talk about creativity we mean two different things. One is cognitively definable, and that’s exactly as Steve said: being able to take different building blocks of knowledge and recombine them in novel ways. And there’s no doubt these models can do that, and they can do that in many ways better than we can, at least across a wide variety of domains. There’s another element of creativity, which is doing something special and different from everyone else. You can see this in psychological tests of creativity. They basically show you paintings, and if you like weird abstract art then you’re creative, and if you like paintings of horses in fields, then you’re less creative. That says nothing about the brain but a lot about our cultural conception of creativity. The models won’t be creative in that latter sense, because by definition they’ve been trained to be as human-like as possible — like the average human. They are creative in the first sense. You ask them for a recipe with five random ingredients from your cupboard and they’ll probably do at least as good a job as any family member.
ZME Science: There are so many misconceptions about AI. Which ones do you think matter most?
CS: That’s a hard question. There are so many. Misconceptions aren’t limited to the general public. There are enormous misconceptions among people who live and breathe AI every day. One is that AI is just parroting — literally copying what people do, regurgitating sentences. That’s wrong. The models do genuinely put things together in novel ways. At the other end of the spectrum, there’s the belief that AI is the solution to everything. That’s also wrong. It’s limited by computational power, data, and the algorithms we design. You shouldn’t ascribe it magic abilities to solve all of humanity’s problems.
SF: One misconception we’ve been studying in my lab is that people think of these systems as stereotypically machine-like: always right, always giving you the right answer. We’ve shown in studies that even when you show people identical performance from AI and from a human, people think the AI is more competent and are more willing to trust it. That comes from a general belief that these systems are robotic and not likely to fail. But the most powerful AI systems now are based on neural networks, which are more brain-like, probabilistic, and give slightly different answers each time. Understanding that helps you realize what you’re dealing with.
ZME Science: What about reasoning?
CS: I think it’s possible you’ll get powerful systems able to come up with actions that are different from what we expect, able to reason about problems. The Go system is a reasoning system. It generated a move no human had played, by working the game out to the end, and that was a really good move. As systems get better at reasoning we may see similarly novel behaviors in other domains. The one everyone hopes for is science, that AI will come up with a breakthrough no one thought of. But Go is a very well-structured game. Science is messy, noisy, and value-laden. Not all experiments are equally worthwhile. To be a good scientist, AI would have to understand culture, human values, and messy data. That’s much harder.
SF: And just to add to that, one lesson from doing science is that the hardest part is knowing what question to ask. Being aware of what you don’t know, knowing where the field should go, being able to have that perspective — that’s crucial. Now that we can interact with AI tools that can synthesize knowledge, the way we get the best out of them is by knowing what questions to pose. That’s still going to be a really hard problem. Perhaps AI can help us with the question-asking too.
ZME Science: Some worry about AI acting in nefarious ways. Could that happen?
SF: When you train these models, they’re trained largely from human data. They inherit our virtues but also our vices. Humans deviate from rationality, show biases, self-serving behaviors. Models will too. A subfield of AI has emerged to correct these undesirable behaviors: alignment research. The idea is to align the models to some idealized version of human behavior. The technical challenge is hard, but the conceptual challenge is even harder — knowing what values to align to. Different cultures, generations, and groups have different values. Increasingly, models are trained to have a plurality of values. The politics of one model may differ from another depending on the company.
CS: This is not a new question. For thousands of years we’ve debated how to aggregate diverse viewpoints. Democracy is one solution. What’s exciting about language-enabled AI is that it could help aggregate diverse views in language itself, not just in numbers. That could be an opportunity. At the same time, these systems will become more personalized. They’ll adapt to you based on your interactions. That could be beneficial: more tailored advice. But it could also reinforce the filtering of information, like we already see with social media.
ZME Science: One last question. What excites you most and what worries you most about AI?
CS: What worries me most is how AI systems will be connected together. Most challenges in society come from interconnectedness — communication channels, modes of exchange. At the moment, AI is mostly one user and one system. But our intelligence comes from networks. Alone, we’re limited. Together, we can put a man on the moon. What happens when we move to AI-to-AI interaction, where systems exchange information and make decisions? That cuts humans out of the loop and creates opportunities for collusion, misalignment, even the emergence of AI cultures. That worries me most.
SF: What worries me most is the effects on the next generation of children, who are growing up surrounded by systems that appear very human-like, with linguistic and multimodal competence. If they become embodied in the home as robotic devices, how will that impact kids’ interactions with parents, teachers, sources of information? It could be benign, but my worry is that like social media, it could filter their outlook on the world. We don’t yet have the research base to know the impact.
CS: On the positive side, knowledge is a good thing. Having instant access to a tool that knows almost everything is very useful. The challenge is to configure systems so that knowledge increases our ability to engage with the world and gives us greater agency. That’s possible. These systems could make us smarter and better able to solve problems if oriented that way.
SF: I fully agree. Beyond social benefits, I’m fascinated at an intellectual level. As these systems become part of daily life, how will they change our conception of being human? Will we start thinking we are more like AI and less like animals? What do they do to fuzzy concepts like consciousness and sentience? I think they’ll put strong pressure on those concepts. It may turn out consciousness isn’t as mysterious as we thought, once we build agents that look and sound like us. That will change how we think about ourselves. That will be fascinating to see.