
Advances in artificial intelligence research have often been fostered by advances in neuroscience. Indeed, the two fields have frequently borrowed ideas from each other, and there remain many promising opportunities to do so.
In a recent review article published in Science, Liqun Luo, professor of biology and neurobiology at Stanford University, summarizes our current understanding of neural circuits and how they fit into the overall architecture of the brain. The review also suggests additional opportunities for artificial intelligence to learn from neuroscience.
“I wanted to define what is known and what is unknown, to stimulate both neuroscience and AI researchers,” he says.
Luo’s message to AI researchers is this: Neuroscientists still have a long way to go in understanding the various circuit patterns and architectures in the brain and how they interact with one another, but the groundwork has been laid for AI researchers to draw on a greater variety of patterns and architectures than they currently do – and perhaps even to connect multiple circuit architectures together to create the kinds of synergies we see in the brain.
From neurons, to circuit patterns, to architectures
Luo compares the structure of the brain to the building blocks of language. If individual neurons are letters, then circuit patterns are the words they spell, and circuit architectures are the sentences created by a series of words. At each level, says Luo, AI researchers will benefit from a better understanding of how different parts of the brain connect and communicate with each other.
Patterns of synaptic connectivity – the ways in which neurons connect to other neurons – define the first level of general information-processing principles in the brain: circuit patterns. These include some of the most basic types of neural circuits, such as feedforward excitation, which were incorporated into some of the first artificial neural networks ever developed, including perceptrons and deep neural networks.
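As a rough illustration (not drawn from Luo's review), a feedforward-excitation step can be sketched in a few lines of Python with NumPy. The layer sizes, weights, and firing threshold below are arbitrary placeholders, not biological values:

```python
import numpy as np

# Minimal sketch of feedforward excitation: a layer passes a weighted,
# thresholded signal forward to the next layer, with no feedback connections.

def feedforward_layer(inputs, weights, threshold=0.5):
    """One excitatory feedforward step: weighted sum, then fire if above threshold."""
    drive = weights @ inputs                   # excitatory input to each downstream unit
    return (drive > threshold).astype(float)   # binary firing, perceptron-style

rng = np.random.default_rng(0)
x = rng.random(4)                       # activity of 4 "input" neurons
w = np.abs(rng.normal(size=(3, 4)))     # excitatory (non-negative) weights to 3 units
print(feedforward_layer(x, w))          # downstream firing pattern
```

Stacking several such steps, with learned rather than random weights, is essentially what a deep feedforward network does.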
But Luo also describes other patterns, including feedback inhibition, lateral inhibition, and mutual inhibition. While these patterns can emerge in AI systems that use unsupervised learning, in which weights are assigned and adjusted during training, Luo asks whether deliberately building these patterns into the architecture of AI systems could further improve their performance.
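To make one of these motifs concrete, here is a minimal, hypothetical sketch of lateral inhibition, in which each unit is suppressed in proportion to its neighbors' activity; the inhibition strength and input values are illustrative, not taken from the review:

```python
import numpy as np

# Sketch of lateral inhibition: suppressing each unit by its neighbors'
# summed activity sharpens the contrast between strong and weak responses.

def lateral_inhibition(activity, strength=0.3):
    """Subtract a fraction of the other units' summed activity from each unit."""
    total = activity.sum()
    inhibited = activity - strength * (total - activity)  # inhibition from all other units
    return np.clip(inhibited, 0.0, None)                  # firing rates cannot go negative

a = np.array([0.2, 0.9, 0.3, 0.8])
print(lateral_inhibition(a))  # peaks survive; weaker responses are driven to zero
```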
At a level above the circuit patterns, Luo says, are the “sentences” that these patterns create when organized together in specific brain architectures. For example, continuous topographic mapping is an architecture in which neighboring units of one layer of the brain are connected to neighboring units of the next layer. This approach has been incorporated into AI systems that use convolutional neural networks. Likewise, parallel processing is a type of neural circuit architecture that has been widely adopted in computing in general as well as in a variety of AI systems.
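The local-connectivity idea behind topographic mapping can be sketched as a one-dimensional sliding-window operation, which is essentially what a convolutional layer formalizes. The signal and kernel values below are made up for illustration:

```python
import numpy as np

# Sketch of continuous topographic mapping: each downstream unit receives
# input only from a small neighborhood of adjacent upstream units, so
# neighbors upstream stay neighbors downstream.

def topographic_map(inputs, kernel):
    """Each output unit pools a sliding window of neighboring input units."""
    k = len(kernel)
    return np.array([kernel @ inputs[i:i + k] for i in range(len(inputs) - k + 1)])

signal = np.array([0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0])
kernel = np.array([0.25, 0.5, 0.25])      # local weighting over three neighbors
print(topographic_map(signal, kernel))    # spatial structure of the input is preserved
```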
An additional important circuit architecture is dimensionality expansion, in which a layer with a small number of units feeds an intermediate layer with a much larger number of units, so that subtle differences in the input layer become more pronounced and easier for the output layer to distinguish. Recurrent networks, in which neurons connect back to themselves, often through intermediary neurons, are also important. The brain combines both dimensionality expansion and recurrent processing in a highly structured way across multiple regions, and understanding and exploiting the design principles governing these combinations of circuit patterns could help AI.
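A toy sketch, again with placeholder values, shows how the two motifs might be chained: a small input is projected into a much larger layer, and that expanded representation then evolves under weak recurrent connections. Nothing here models a specific brain region:

```python
import numpy as np

# Illustrative combination of two motifs: dimensionality expansion
# (few inputs projected onto many intermediate units) followed by
# recurrent dynamics within the expanded layer.

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 50                                   # few inputs, many intermediate units

W_expand = rng.normal(size=(n_hidden, n_in))             # expansion weights (random placeholders)
W_recur = 0.1 * rng.normal(size=(n_hidden, n_hidden))    # weak recurrent weights

x = np.array([0.2, 0.5, 0.1])
h = np.tanh(W_expand @ x)        # expanded representation of the small input
for _ in range(5):               # recurrent steps: the layer feeds back on itself
    h = np.tanh(W_expand @ x + W_recur @ h)
print(h.shape)                   # (50,) -- small input differences spread across many units
```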
In general, Luo says, “Using my linguistic metaphor, I would say that AI researchers tend to use letters and jump straight to articles without writing the words and sentences in between.” In essence, he says, without understanding these intermediate steps, researchers make things work through brute force and a great deal of computing power. Perhaps neuroscience can help AI researchers open that black box, Luo suggests.
Going forward: assembling multiple architectures
AI researchers should broaden their approaches, says Luo. While most AI systems rely on a single type of circuit architecture, the brain combines a variety of architectures that coexist and work together to generate general intelligence.
“Maybe if AI researchers explore the variety of architectures that exist in the brain, they will be inspired to design new ways to put multiple architectures together to build better systems than is possible with a single architecture,” he says.
Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.