Computer vision identifies objects in a way that differs substantially from human vision
A team of cognitive psychologists from the University of California has studied the current capabilities of deep learning networks. They set out to discover how closely this form of artificial intelligence matches the abilities of the human brain. They concluded that AI systems, although significantly improved in recent years, still have further progress to make.
The scientists asked one of the best deep learning networks, called VGG-19, to analyze a picture of a teapot with a golf-ball pattern. The AI concluded that there was only a 0.41% probability that the object was a teapot; its first choice was a golf ball, which is quite reasonable. VGG-19 also assigned a 0% probability that an elephant covered with a blue-and-red argyle sock pattern was really an elephant.
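Percentages like the 0.41% quoted above are typically the softmax probabilities an image classifier assigns to each label. As a minimal sketch of how raw network scores become such probabilities, here is a softmax and top-k ranking in plain Python; the label scores are invented for illustration and are not taken from the study.

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max score before exponentiating.
    m = max(scores.values())
    exps = {label: math.exp(s - m) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

def top_k(probs, k=5):
    # Rank labels by probability, highest first.
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical raw scores (logits) for the golf-ball-patterned teapot;
# these numbers are illustrative, not from the paper.
logits = {"golf ball": 6.0, "ping-pong ball": 3.5, "baseball": 3.0,
          "soap dispenser": 2.0, "teapot": 1.2}
probs = softmax(logits)
ranking = top_k(probs)
```

With these made-up logits, "golf ball" dominates the ranking while "teapot" receives a tiny probability, mirroring the kind of output the researchers describe.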
Moreover, VGG-19 and a second deep learning network, called AlexNet, were unable to identify glass figurines. With the images rendered in solid black silhouettes, both networks did better: the correct identification appeared among the networks' top five choices for nearly 50% of the objects.
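The "top five choices" figure is the standard top-5 accuracy metric: a prediction counts as correct if the true label appears anywhere in the model's five highest-ranked guesses. A minimal sketch, with made-up predictions rather than the study's data:

```python
def top5_accuracy(predictions, truths):
    # predictions: one ranked list of labels (best first) per image.
    # truths: the correct label for each image.
    hits = sum(1 for ranked, truth in zip(predictions, truths)
               if truth in ranked[:5])
    return hits / len(truths)

# Illustrative data only: 2 of 4 silhouettes have the correct label
# somewhere in the model's top five guesses.
preds = [["golf ball", "teapot", "cup", "vase", "bowl"],
         ["sock", "scarf", "glove", "hat", "shirt"],
         ["elephant", "rhino", "hippo", "boar", "ox"],
         ["ball", "orb", "sphere", "dome", "egg"]]
labels = ["teapot", "elephant", "elephant", "teapot"]
accuracy = top5_accuracy(preds, labels)  # 0.5, i.e. 50%
```

Under this metric, "nearly 50%" means the true object was somewhere in the top five guesses for about half of the silhouette images.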
The scientists found that the networks' method of identifying objects differs substantially from human vision. Humans see the entire object, while the artificial intelligence networks identify fragments of it. As a result, the machines make very different errors from humans. The results were published in the journal PLOS Computational Biology on December 7, 2018.
Philip Kellman, a distinguished professor of psychology and senior author of the study, concluded: “The machines have severe limitations that we need to understand.”
“This study shows these systems get the right answer in the images they were trained on without considering shape,” he added. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn't seem to be in these deep learning systems at all.”
Author: Alena Snezhnaya