Researchers at UC Santa Barbara built a simple neural circuit comprising 100 artificial synapses, which they used to classify three letters by their images, despite font changes and noise introduced into the images. The researchers claim the rudimentary yet effective circuit processes the text much the same way the human brain does. In other words, like you’re currently interpreting the text in this article. Even if you change the font, screenshot this article and splash it with the airbrush in MS Paint, you’ll still be able to read at least portions of it, because the human brain is so good at scaling patterns and abstracting symbols. This kind of research will hopefully usher in a new age of more refined, energy-efficient computing.
Don’t worry – while this is a big step for artificial intelligence, the circuit comes nowhere near the human brain, which has on the order of 10¹⁵ (one quadrillion) synaptic connections. Although technology has come a long way, computers are still rather dumb. Yes, you can achieve marvelous things with them, but they’re only tools – not thinking machines. Any traces of “smartness” you might find in a computer or software are actually human cleverness – you’re admiring the designer’s intent! Considering this kind of complexity, scientists have long been trying to mimic the way the brain processes information – not necessarily to create a sentient artificial intelligence, but rather to increase computational speed by orders of magnitude. The adult human brain needs only about 20 watts of power, while a conventional machine that could simulate the entire human brain would need an entire river’s course bent just to cool it!
The team led by Dmitri Strukov, a professor of electrical and computer engineering at UC Santa Barbara, was looking to build a simple yet effective device that could perform some of the tasks human brains handle with split-second decision making. Reading and interpreting visual symbols is one of them. They used their rudimentary artificial neural network to classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. The algorithm they used is akin to the way we pick our friends out from a crowd, or find the right key from a ring of similar keys. The findings were reported in the journal Nature.
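To make the task concrete, here is a minimal software sketch of this kind of noisy-letter classification, using a plain single-layer perceptron rather than memristor hardware. The 3×3 letter templates, network size, and training parameters are all illustrative assumptions – the team’s actual images and network were different:

```python
# Hypothetical 3x3 binary templates for "z", "v" and "n" -- illustrative
# stand-ins, not the images used in the UCSB study.
LETTERS = {
    "z": [1, 1, 1, 0, 1, 0, 1, 1, 1],
    "v": [1, 0, 1, 1, 0, 1, 0, 1, 0],
    "n": [1, 0, 1, 1, 1, 1, 1, 0, 1],
}
LABELS = list(LETTERS)

def train(epochs=50, lr=0.1):
    # One weight vector (plus a bias weight) per letter: a single-layer perceptron.
    w = {name: [0.0] * 10 for name in LABELS}
    for _ in range(epochs):
        for name, img in LETTERS.items():
            x = img + [1]  # append constant bias input
            for out in LABELS:
                target = 1.0 if out == name else 0.0
                y = 1.0 if sum(wi * xi for wi, xi in zip(w[out], x)) > 0 else 0.0
                # Perceptron learning rule: nudge weights toward the target output.
                w[out] = [wi + lr * (target - y) * xi for wi, xi in zip(w[out], x)]
    return w

def classify(w, img):
    x = img + [1]
    scores = {out: sum(wi * xi for wi, xi in zip(w[out], x)) for out in LABELS}
    return max(scores, key=scores.get)

w = train()
# Flip one pixel of "v" to simulate the noise added to the test images.
noisy_v = LETTERS["v"][:]
noisy_v[0] ^= 1
print(classify(w, noisy_v))
```

Training converges quickly because the three clean patterns are linearly separable; flipping a pixel mimics, in miniature, the “noise” the researchers introduced into their test images.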
“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat, part of the team at UC Santa Barbara.
“And, as more solutions to the technological challenges are proposed, the technology will be able to make it to the market sooner,” according to Gina Adam, another engineer on the team.
To build their artificial neural network, the engineers used memristors instead of the traditional semiconductor transistors your CPU or graphics card uses. A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it – in essence, a resistor with memory. Unlike traditional transistors, which rely on the drift and diffusion of electrons and holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate electrical signals.
“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov.
The ionic mechanism has several advantages over pure electron transfer.
“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”
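The charge-dependent behavior Strukov describes can be captured in a few lines of code. Below is a minimal sketch in the spirit of the linear ion-drift memristor model from Strukov’s earlier HP Labs work; the resistance values and drift constant are illustrative assumptions, not parameters of the device in this study:

```python
# Simplified linear ion-drift memristor model (a sketch, not the real device).
RON,ROFF = 100.0, 16_000.0  # fully doped / undoped resistances (ohms), assumed
K = 1e4                     # drift coefficient mu*RON/D^2, illustrative value

class Memristor:
    def __init__(self, x=0.5):
        self.x = x  # normalized position of the doped/undoped boundary, in [0, 1]

    @property
    def resistance(self):
        # Two resistive regions in series, weighted by the boundary position.
        return RON * self.x + ROFF * (1.0 - self.x)

    def apply_current(self, i, dt):
        # Ionic drift: the charge that flows (i*dt) moves the boundary,
        # which changes -- and stores -- the resistance.
        self.x = min(1.0, max(0.0, self.x + K * i * dt))

m = Memristor()
r0 = m.resistance
for _ in range(100):             # positive current pulses push the boundary...
    m.apply_current(1e-4, 1e-3)  # ...gradually lowering the resistance
r1 = m.resistance
print(r0, r1)
```

Once the current stops, the boundary position (and hence the resistance) stays put – that persistence is the “memory” in memristor: the device encodes the total charge that has flowed through it, and a continuum of boundary positions gives the analog memory states Strukov mentions.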
In other words, in this case at least, analog trumps digital: achieving the same brain-like functionality with conventional transistor technology would require an enormous machine.
“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”
This is merely the beginning. To scale up the number of artificial synapses and perform more complex tasks, many more memristors need to be added and interwoven. The team is also thinking of assembling a hybrid in which memristors and conventional transistor technology are merged, which would enable more complex demonstrations and allow this early artificial brain to do more complicated and nuanced things. Ideally, trillions of memristors would be stacked atop each other to perform computations far more efficiently than before. Many more small steps remain, however, before this kind of research is considered mature enough to attract billions of dollars.
Previously, researchers at Harvard School of Engineering and Applied Sciences (SEAS) built a transistor that behaves like a neuron, in some respects at least. Last year, the K supercomputer, with over 700,000 processor cores and 1.4 million GB of RAM, was used to simulate one second of human neural activity in 40 minutes.