Artificial Intelligence Systems Can Be Fooled

The experiments show the severe limitations of ‘deep learning’ machines

Despite all the benefits and ease that technology has brought, the fear still looms that new-age technologies such as artificial intelligence (AI), machine learning and robotics will displace human jobs. Some researchers, however, don’t agree that technology will take jobs away from humans anytime soon.

Researchers at the University of California, Los Angeles (UCLA) in the US conducted a series of experiments that show the severe limitations of ‘deep learning’ machines.

A Long Way To Go For AI

“How smart is the form of AI known as deep learning computer networks, and how closely do these machines mimic the human brain? They have improved greatly in recent years, but still have a long way to go,” reports a team of UCLA cognitive psychologists in the journal PLOS Computational Biology.

Supporters have expressed enthusiasm for using these networks to perform many individual tasks, and even entire jobs, traditionally done by people. However, the results of the study’s five experiments showed that the networks are easy to fool and that their method of identifying objects by computer vision differs substantially from human vision.

“The machines have severe limitations that we need to understand,” says Philip Kellman, a UCLA professor of psychology and a senior author of the study.

Networks Are Easily Fooled

Machine vision, he says, has drawbacks. In the first experiment, the psychologists showed one of the best deep learning networks, called VGG-19, color images of animals and objects. The images had been altered. For example, the surface of a golf ball was displayed on a teapot; zebra stripes were placed on a camel; and the pattern of a blue and red argyle sock was shown on an elephant. VGG-19 ranked its top choices and chose the correct item as its first choice for only five of 40 objects.
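
As a rough sketch of what such a probe might look like in code, the snippet below loads an ImageNet-pretrained VGG-19 through PyTorch’s torchvision library and prints its five top-ranked labels for an image. The file name is a placeholder and the study’s exact pipeline is not described here, so treat this as an illustration rather than the researchers’ actual method.

```python
import torch
from PIL import Image
from torchvision.models import vgg19, VGG19_Weights

# Load VGG-19 with ImageNet-pretrained weights (torchvision >= 0.13 API).
weights = VGG19_Weights.IMAGENET1K_V1
model = vgg19(weights=weights).eval()

# The weights object ships the matching preprocessing pipeline.
preprocess = weights.transforms()

# Hypothetical altered image, e.g. a teapot textured like a golf ball.
img = Image.open("golfball_teapot.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

# Print the network's five highest-ranked ImageNet classes.
top5 = torch.topk(probs[0], k=5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values, top5.indices):
    print(f"{labels[int(idx)]}: {p.item():.3f}")
```

On an altered image like the textured teapot, a run of this kind would reveal whether the network’s top-ranked labels track the surface texture rather than the object’s shape.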

“We can fool these artificial systems pretty easily,” says co-author Hongjing Lu, a UCLA professor of psychology. “Their learning mechanisms are much less sophisticated than the human mind.”

In the second experiment, the psychologists showed images of glass figurines to VGG-19 and to a second deep learning network, called AlexNet. VGG-19 performed better on all the experiments in which both networks were tested. Both networks were trained to recognize objects using an image database called ImageNet.

However, both networks failed to identify the glass figurines.
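
A side-by-side check of the two networks can be sketched the same way. The snippet below, again assuming torchvision’s ImageNet-pretrained weights and a hypothetical image file, runs one image through both VGG-19 and AlexNet and prints each network’s single highest-ranked label.

```python
import torch
from PIL import Image
from torchvision.models import vgg19, VGG19_Weights, alexnet, AlexNet_Weights

# Both networks use ImageNet-pretrained weights, mirroring the study's setup.
nets = {
    "VGG-19": (vgg19, VGG19_Weights.IMAGENET1K_V1),
    "AlexNet": (alexnet, AlexNet_Weights.IMAGENET1K_V1),
}

# Hypothetical photo of a glass figurine.
img = Image.open("glass_figurine.jpg").convert("RGB")

for name, (build, weights) in nets.items():
    model = build(weights=weights).eval()
    batch = weights.transforms()(img).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    p, idx = probs[0].max(dim=0)
    labels = weights.meta["categories"]
    print(f"{name} top-1: {labels[int(idx)]} ({p.item():.2f})")
```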

In the third experiment, the researchers showed 40 drawings outlined in black, with interiors in white, to both VGG-19 and AlexNet. These first three experiments were meant to discover whether the networks identified objects by their shape.

The researchers concluded that humans see the entire object, while the AI networks identify fragments of the object.

“This study shows these systems get the right answer in the images they were trained on without considering shape. For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all,” Kellman says.
