Comparison of deep networks with the brain: can they ‘see’ just as well as humans?

BENGALURU: A new study from IISc’s Centre for Neuroscience (CNS) has investigated how well deep neural networks – machine learning systems inspired by the network of brain cells, or neurons, in the human brain – relate to the human brain when it comes to visual perception.
Noting that deep neural networks can be trained to perform specific tasks, researchers say they have played a critical role in helping scientists understand how our brains perceive what we see.
“Although deep networks have evolved significantly over the past decade, they are still nowhere near as good as the human brain at perceiving visual cues. In a recent study, S P Arun, associate professor at CNS, and his team compared several qualitative properties of these deep networks to those of the human brain,” IISc said in a statement.
Deep networks, while a good model for understanding how the human brain visualizes objects, work differently from the latter, IISc said, adding that while complex calculations are trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete.
“In the current study, published in Nature Communications, Arun and his team sought to understand which visual tasks can be performed naturally by these networks due to their architecture, and which require further training. The team studied 13 different perceptual effects and discovered previously unknown qualitative differences between deep networks and the human brain,” the statement read.
An example, IISc said, was the Thatcher effect – a phenomenon in which people find it easy to spot local changes in facial features in an upright image, but find this difficult when the image is turned upside down.
Deep networks trained to recognize upright faces showed a Thatcher effect, compared to networks trained to recognize objects. Another visual property of the human brain, called mirror confusion, was also tested on these networks. For humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. The researchers found that deep networks likewise show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
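The mirror confusion measurement can be pictured as comparing a network’s features for an image against the features of its two mirror images. The sketch below is a minimal, hypothetical illustration of that procedure only: the “network” is a fixed random projection standing in for a real trained model (the study used actual deep networks), so the similarity values it prints do not demonstrate the effect itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network's feature extractor (assumption):
# a fixed random linear map from 32x32 images to a 128-d feature space.
W = rng.standard_normal((128, 32 * 32))

def features(img):
    """Flatten the image and project it into the feature space."""
    return W @ img.ravel()

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

img = rng.random((32, 32))

# Mirror about the vertical axis (left-right flip) vs the horizontal axis
# (up-down flip), then compare each mirror's features to the original's.
sim_vertical = cosine(features(img), features(np.fliplr(img)))
sim_horizontal = cosine(features(img), features(np.flipud(img)))

print(f"vertical-mirror similarity:   {sim_vertical:.3f}")
print(f"horizontal-mirror similarity: {sim_horizontal:.3f}")
```

With a real trained network in place of `features`, mirror confusion would show up as the vertical-mirror similarity being consistently higher than the horizontal one.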
“Another phenomenon that is characteristic of the human brain is that it first focuses on coarser details. This is known as the global advantage effect. For example, in a picture of a tree, our brain would first see the tree as a whole before noticing the details of the leaves in it,” explains Georgin Jacob, lead author and PhD candidate at CNS.
Surprisingly, he said, neural networks showed a local advantage. This means that, unlike the brain, the networks focus on the finer details of an image first. Therefore, although these neural networks and the human brain perform the same object recognition tasks, the steps followed by the two are very different.
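One way to probe this is with hierarchical stimuli: a coarse shape built out of repeated fine elements, so the global shape and the local elements can be changed independently. The toy sketch below, again using a random projection as a hypothetical stand-in for a trained network, shows the comparison being made: is the feature distance larger when the global shape changes, or when the local elements change?

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fixed feature map standing in for a trained network (assumption).
W = rng.standard_normal((128, 24 * 24))

def features(img):
    return W @ img.ravel()

def render(global_mask, element):
    """Place a 4x4 element at every active cell of a 6x6 global mask."""
    img = np.zeros((24, 24))
    for r in range(6):
        for c in range(6):
            if global_mask[r, c]:
                img[4 * r:4 * r + 4, 4 * c:4 * c + 4] = element
    return img

# Two coarse (global) shapes and two fine (local) elements.
cross = np.zeros((6, 6), int); cross[2:4, :] = 1; cross[:, 2:4] = 1
frame = np.zeros((6, 6), int); frame[[0, -1], :] = 1; frame[:, [0, -1]] = 1
dot = np.zeros((4, 4)); dot[1:3, 1:3] = 1
bar = np.zeros((4, 4)); bar[:, 1:3] = 1

base = features(render(cross, dot))
d_global = np.linalg.norm(base - features(render(frame, dot)))  # global shape changed
d_local = np.linalg.norm(base - features(render(cross, bar)))   # local elements changed

print(f"distance after global change: {d_global:.1f}")
print(f"distance after local change:  {d_local:.1f}")
```

A global advantage would mean the global change moves the features more than the local change; the study reports that the deep networks it tested showed the opposite, a local advantage.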
Arun, the study’s senior author, says identifying these differences could bring researchers closer to making these networks more brain-like. Such analyses can help researchers build more robust neural networks that not only perform better, but are also immune to “adversarial attacks” that aim to derail them.
