
Summary: “Offline” periods during AI training mitigated “catastrophic forgetting” in artificial neural networks, mimicking the learning benefits that sleep provides to the human brain.
Source: UCSD
Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot is happening: heart rate, breathing, and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.
“The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, PhD, professor of medicine and sleep researcher at the University of California San Diego School of Medicine. “Sleep helps reorganize memories and present them in the most efficient way.”
In previously published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people, or events, and protects against forgetting old memories.
Artificial neural networks leverage the architecture of the human brain to improve many technologies and systems, from basic science and medicine to finance and social media. In some respects, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: when artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.
“In contrast, the human brain is constantly learning and integrating new data into existing knowledge,” Bazhenov said, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”
Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models can help mitigate the threat of catastrophic forgetting in artificial neural networks, enhancing their usefulness across a range of research interests.
The scientists used spiking neural networks that artificially mimic natural neural systems: instead of information being communicated continuously, it is transmitted as discrete events (spikes) at specific points in time.
They found that when the spiking networks were trained on a new task, but with occasional offline periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, the study authors said, the networks’ “sleep” allowed them to replay old memories without explicitly using old training data.
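As a rough illustration of this training schedule (a minimal sketch in Python, not the authors’ actual spiking model), new-task updates can be interleaved with short offline “sleep” bouts during which the network is driven only by noise and a local Hebbian-style rule, with no access to old task data. All function names, the toy tasks, and the update rules below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(W, x, y, lr=0.05):
    """One supervised update on the current task (toy delta rule)."""
    return W + lr * np.outer(y - np.tanh(W @ x), x)

def sleep_step(W, lr=0.005):
    """One offline 'sleep' step: spontaneous, noise-driven activity plus a
    Hebbian update that reinforces existing weight structure (no task data)."""
    x = rng.standard_normal(W.shape[1])   # spontaneous input
    y = np.tanh(W @ x)                    # spontaneous activity
    return 0.999 * W + lr * np.outer(y, x)

# Hypothetical toy data for two tasks learned sequentially.
W = 0.1 * rng.standard_normal((3, 8))
task_a = [(rng.standard_normal(8), rng.standard_normal(3)) for _ in range(100)]
task_b = [(rng.standard_normal(8), rng.standard_normal(3)) for _ in range(100)]

for x, y in task_a:                  # learn task A first
    W = train_step(W, x, y)

for i, (x, y) in enumerate(task_b):  # then task B, with interleaved "sleep"
    W = train_step(W, x, y)
    if i % 10 == 9:                  # every few updates, an offline sleep bout
        for _ in range(20):
            W = sleep_step(W)
```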
Memories are represented in the human brain by patterns of synaptic weight – the strength or magnitude of a connection between two neurons.
“When we learn new information,” Bazhenov said, “neurons fire in a specific order, and this strengthens the synapses between them. During sleep, the spiking patterns learned during our waking state are spontaneously repeated. This is called reactivation or replay.
“Synaptic plasticity, the capacity to be modified or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent memory, helping to prevent forgetting or to enable transfer of knowledge from old tasks to new ones.”
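The mechanism Bazhenov describes can be caricatured in a few lines: during a sleep phase the network receives only noise, groups of neurons that are already strongly connected tend to fire together, and a Hebbian rule nudges exactly those synapses upward. The sketch below is a hypothetical simplification, not the plasticity rule used in the paper; the threshold, learning rate, decay, and clipping range are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sleep_replay(W, steps=500, threshold=1.0, lr=0.002, decay=1e-4):
    """Hypothetical sketch of sleep replay in a binary spiking layer.

    Noise drives spontaneous spikes; strongly connected neuron groups tend to
    fire together, so the Hebbian term preferentially re-strengthens the weight
    patterns laid down during prior (awake) learning, while a weak uniform
    decay keeps the weights bounded.
    """
    n = W.shape[0]
    for _ in range(steps):
        noise = rng.standard_normal(n)
        pre = (noise > 0.5).astype(float)                    # spontaneous presynaptic spikes
        post = (W @ pre + noise > threshold).astype(float)   # resulting postsynaptic spikes
        W += lr * np.outer(post, pre)                        # Hebbian potentiation of co-active pairs
        W -= decay                                           # weak depression everywhere else
        np.clip(W, 0.0, 1.0, out=W)                          # keep weights in a plausible range
    return W

W = rng.uniform(0.0, 0.5, size=(20, 20))  # weights assumed to be learned while "awake"
W = sleep_replay(W)
```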
When Bazhenov and his colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.
“This meant that these networks could learn continuously, like humans or animals. Understanding how the human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.
“In other projects, we are using computational models to develop optimal strategies for applying stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be especially important when memory is not optimal, such as when memory declines with aging or in certain conditions like Alzheimer’s disease.”
Co-authors include: Ryan Golden and Jean Erik Delanois, both at UC San Diego; and Pavel Sanda, Institute of Informatics, Czech Academy of Sciences.
About this AI and learning research news
Author: Scott LaFee
Source: UCSD
Contact: Scott LaFee – UCSD
Image: Image is in public domain
Original research: Free access.
“Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation” by Maxim Bazhenov et al. PLOS Computational Biology
Abstract
Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation
Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously and generally learns best when new training is interspersed with periods of sleep for memory consolidation.
Here, we used a spiking network to investigate the mechanisms behind catastrophic forgetting and the role of sleep in preventing it.
The network could be trained to learn a complex foraging task, but exhibited catastrophic forgetting when trained sequentially on different tasks. In synaptic weight space, training on a new task moved the synaptic weight configuration away from the manifold representing the old task, leading to forgetting.
Interleaving new task training with periods of offline reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network’s synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing the old and the new tasks.
The study reveals a possible strategy of synaptic weight dynamics that the brain applies during sleep to prevent forgetting and optimize learning.