Artificial intelligence systems are modeled after the human brain, but a new branch of research at Columbia University in New York is examining whether developments in AI might contain clues as to how living brains work and how their function might be improved.
Columbia was one of seven universities the National Science Foundation selected to host new national AI research institutes, and the $20 million the school received will fund its AI Institute for Artificial and Natural Intelligence (ARNI). The goal is to conduct research ‘connecting the major progress made in AI systems to the revolution in our understanding of the brain.’
Richard Zemel, professor of computer science at Columbia, told Fox News Digital that the ambition is to bring top AI and neuroscience researchers together in a cross-training exercise that can benefit both AI systems and people.
‘The idea is that it’s going both ways,’ Zemel said. ‘AI has gotten inspiration from the brain and the neural nets have things that are loosely connected to the brain.’
One of the central ideas behind AI has been to mimic the brain’s structure in the hopes of creating something that approximates a thinking machine. Artificial neural networks modeled after the brain are composed of millions of processing nodes that help AI systems learn when they’re fed data.
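The layered structure of processing nodes can be sketched in a few lines of code. This is a toy illustration only; the layer sizes and random weights below are made up and do not describe any Columbia system:

```python
import numpy as np

def relu(x):
    # Each "node" sums its weighted inputs and passes the result on
    # only if the total is positive.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A toy two-layer network: 4 inputs -> 8 hidden nodes -> 2 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = relu(x @ W1)   # first layer of processing nodes
    return hidden @ W2      # output layer

x = rng.normal(size=(4,))
print(forward(x).shape)  # (2,)
```

Real systems stack many such layers and adjust the weight matrices automatically as data flows through, which is what ‘learning’ means here.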
The ‘transformer’ neural network that has been in use for the last five years or so is aimed at getting even closer to the human brain by focusing on the context of questions it is asked in order to arrive at a more precise answer. Zemel said transformers focus on the concept of ‘attention.’
‘It’s something they call the cocktail party effect,’ Zemel said. ‘You’re at a party, and you’re barely able to hear, but you hear your name even though there’s tons of conversations going on. But somehow your brain is able to pick up and attend to something.’
He said it’s this concept of ‘attention’ that is making generative AI output increasingly useful to people who query AI systems. This kind of work has opened the door to asking whether improvements in AI might help researchers better understand the brain.
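The ‘attention’ idea Zemel describes has a standard mathematical form in transformers, often called scaled dot-product attention: each query scores every input, and the scores become weights that decide what to focus on. A minimal sketch, using random placeholder matrices rather than a real model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query scores every key; softmax turns the scores into weights
    # that decide how much of each value to "attend" to -- the model's
    # version of picking out your name across a noisy room.
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (3, 4)
```

The softmax weights in each row sum to 1, so every output is a weighted blend of the inputs, dominated by whichever ones scored highest.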
‘By understanding these complicated neural networks, does that give us some hypotheses or new things to investigate in the brain?’ Zemel said.
Some of the big questions Columbia will look at include understanding the concept of ‘robust flexible learning.’ He said many AI systems so far can get good at a specific task but then don’t do as well when given another job to do, while the human brain shows more adaptability.
But AI has shown it can quickly develop language skills, and Zemel said that’s one example of an AI talent that might help researchers understand how to train the human brain more efficiently.
‘A lot of these new systems are quite good at picking up on new language tasks. With just an example or two, they learn something very quickly, in some ways faster than people do,’ he said. ‘Then it’s a question of, does this give us an idea on what we want to do differently for human training?’
Another area is continual learning, which gets into the issue of how and when both people and AI systems can forget information and how that information can be recalled.
‘AI does suffer sometimes from a lot of forgetting,’ Zemel said. ‘Both of them have problems in different ways, so this is a good area to study and try to figure out if there are some ways for getting both to help each other.’
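The forgetting Zemel mentions can be demonstrated with even the simplest model: train it on one task, then on a second, and its performance on the first collapses. A minimal sketch with an invented pair of toy tasks (machine learning researchers call this ‘catastrophic forgetting’):

```python
import numpy as np

w = np.zeros(2)

def sgd_step(w, x, y, lr=0.1):
    # One gradient step on squared error for a linear model.
    pred = w @ x
    return w - lr * (pred - y) * x

# Task A: learn to output +1 for this input; Task B: learn to output -1.
x_probe = np.array([1.0, 0.0])
task_a = [(x_probe, 1.0)] * 50
task_b = [(x_probe, -1.0)] * 50

for x, y in task_a:
    w = sgd_step(w, x, y)
err_a_before = abs(w @ x_probe - 1.0)   # small: task A is learned

for x, y in task_b:
    w = sgd_step(w, x, y)
err_a_after = abs(w @ x_probe - 1.0)    # large: task B overwrote task A

print(err_a_before < err_a_after)  # True
```

Continual-learning research looks for training schemes that pick up task B without erasing task A, much as people retain old skills while acquiring new ones.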
A third crossover issue affecting both people and AI systems is the principle of uncertainty.
‘A lot of AI systems that are out there now aren’t very good at knowing when they’re uncertain when they should be uncertain,’ he said. ‘And people aren’t very good at that either.’
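One common way to measure whether a model ‘knows when it’s uncertain’ is the entropy of its output probabilities: a model that spreads probability evenly across options is signaling doubt, while one that piles probability onto a single answer is signaling confidence. A brief sketch, with probability vectors invented for illustration:

```python
import numpy as np

def entropy(p):
    # Higher entropy = probability spread across many options,
    # i.e., the model is expressing uncertainty.
    p = np.asarray(p)
    return -np.sum(p * np.log(p + 1e-12))

confident = [0.97, 0.01, 0.01, 0.01]  # nearly sure of one answer
unsure    = [0.25, 0.25, 0.25, 0.25]  # no idea which answer is right

print(entropy(confident) < entropy(unsure))  # True
```

The research problem Zemel alludes to is calibration: models (and people) often report high confidence precisely when they should be unsure.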
Practical applications of this kind of cross-training between AI and human brains are already being developed and improved. One example is the kind of ‘brain-machine interfaces’ that are helping to build smarter prosthetic devices, such as mechanical arms for people who have lost the use of their own arms.
Zemel said ‘AI-assisted prosthetic devices’ are being developed that allow movement partly through the brain and partly through an AI interface.
He said the hope is that AI and neuroscience experts at Columbia can keep making these sorts of connections.
‘We’re trying to put these people together, put these people in the same room and get ideas to go back and forth and find things to test and things to explore,’ Zemel said.