Artificial Intelligence also has illusory perceptions

  • Science Park
  • October 16th, 2020
 
Jesús Malo

A study on visual illusions in artificial neural networks, with the participation of the University of Valencia, reveals that artificial perception does not escape the subjectivities and biases of the human brain. That machines can be deceived in their perception of reality, just as people are, is one of the main conclusions of a paper recently published in the journal Vision Research.

Researchers from the Image Processing Laboratory (IPL) of the University of Valencia and the Department of Information and Communication Technologies (DTIC) of Pompeu Fabra University (UPF) have shown that convolutional neural networks (CNNs), a type of artificial neural network commonly used in detection systems, are also affected by visual illusions, just as the human brain is.

In a convolutional neural network, neurons are arranged in receptive fields in much the same way as they are in the visual cortex of a biological brain. Today, CNNs are found in a wide variety of autonomous systems, such as face detection and recognition systems and self-driving vehicles.
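As a rough illustration of this structure (a hypothetical sketch in PyTorch, not code from the study), each output unit of a convolutional layer responds only to a small window of the input, its receptive field, and the same weights are replicated across all spatial positions:

    import torch
    import torch.nn as nn

    # One convolutional layer: every output "neuron" sees only a 5x5
    # window of the input (its receptive field), and the same filter
    # weights are reused at every spatial position.
    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5, padding=2)

    image = torch.randn(1, 1, 64, 64)   # one 64x64 grayscale image
    responses = conv(image)             # shape: (1, 8, 64, 64)
    print(responses.shape)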

The study published in Vision Research analyses the phenomenon of visual illusions in convolutional networks and compares it with their effect on human vision. After training CNNs on simple tasks such as removing noise or blur, the scientists found that these networks are also susceptible to perceiving reality in a biased way, deceived by visual illusions of brightness and colour.
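The logic of that experiment can be sketched as follows (an illustrative toy; the stimulus, network size and training data are assumptions, not the study's actual setup): train a small CNN to denoise images, then show it a classic simultaneous-contrast stimulus, two physically identical grey squares on dark and light backgrounds, and check whether the network's responses to the two squares differ.

    import torch
    import torch.nn as nn

    # Toy stand-in for the experiment: a small CNN trained to remove noise.
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(500):                        # toy training loop
        clean = torch.rand(8, 1, 32, 32)           # stand-in for natural images
        noisy = clean + 0.1 * torch.randn_like(clean)
        loss = ((net(noisy) - clean) ** 2).mean()  # denoising objective
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Simultaneous-contrast probe: two physically identical grey squares,
    # one on a dark background, one on a light background.
    stim = torch.zeros(1, 1, 32, 64)
    stim[..., :, 32:] = 1.0                        # light half
    stim[..., 12:20, 12:20] = 0.5                  # grey square, dark side
    stim[..., 12:20, 44:52] = 0.5                  # identical square, light side
    with torch.no_grad():
        out = net(stim)
    # If the network is subject to the illusion, its responses to the two
    # identical squares will differ, mirroring human perception.
    print(out[..., 12:20, 12:20].mean().item(),
          out[..., 12:20, 44:52].mean().item())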

The article also notes that “some illusions of networks may be inconsistent with the perception of humans”. This means that the visual illusions that occur in CNNs do not necessarily coincide with biological illusory perceptions: these artificial networks can also fall for illusions that are different from, and foreign to, those of the human brain. “This is one of the factors that leads us to think that it is not possible to establish analogies between the simple concatenation of artificial neural networks and the much more complex human brain”, says Jesús Malo, professor of Optics and Vision Sciences and researcher at the Image Processing Laboratory of the University of Valencia.

The authors propose a paradigm shift

Along these lines, the team has just published another article in Scientific Reports that details the limits of and differences between the two systems, and whose results lead the authors to warn against using CNNs to study human vision. “CNNs are based on the behaviour of biological neurons, in particular on their basic structure formed by the concatenation of modules made up of a linear operation (sums and products) followed by a non-linear one (saturation), but this conventional formulation is too simple. In addition to the intrinsic limitations of these artificial networks when it comes to modelling vision, the non-linear behaviour of flexible architectures can be very different from that of the biological visual system”, sums up Malo, co-author of both articles at the University of Valencia.
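In code, the conventional formulation Malo describes boils down to stacking modules like the one below (an illustrative sketch, not the model from either paper): a linear stage of sums and products, i.e. a convolution, followed by a pointwise saturating non-linearity.

    import torch.nn as nn

    # One conventional module: a linear operation (convolution = weighted
    # sums and products) followed by a saturating non-linearity. Standard
    # deep CNNs are long concatenations of such modules.
    def linear_nonlinear_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # linear
            nn.Sigmoid(),                                      # saturation
        )

    deep_net = nn.Sequential(
        linear_nonlinear_block(1, 8),
        *[linear_nonlinear_block(8, 8) for _ in range(5)],
    )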

The text argues that artificial neural networks with intrinsically non-linear bio-inspired modules, rather than the usual excessively deep concatenations of linear + non-linear modules, not only emulate basic human perception better, but can also deliver higher performance in general-purpose applications. “Our results suggest a paradigm shift for both vision science and artificial intelligence”, concludes Jesús Malo.
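As one example of what an intrinsically non-linear module might look like, the sketch below implements divisive normalization, a classic non-linearity from vision science in which each response is divided by the pooled activity of its neighbours, so the non-linearity is built into the receptive field itself rather than appended after a linear stage. Whether this is the exact formulation the authors advocate is not specified in this article.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DivisiveNormalization(nn.Module):
        # Intrinsically non-linear module: each filter response is divided
        # by the pooled activity of its spatial neighbours, so the
        # non-linearity is part of the receptive field itself.
        def __init__(self, channels, pool_size=5, sigma=0.1):
            super().__init__()
            self.filters = nn.Conv2d(channels, channels, 3, padding=1)
            self.pool_size = pool_size
            self.sigma = sigma  # semi-saturation constant

        def forward(self, x):
            z = self.filters(x)
            pooled = F.avg_pool2d(z.abs(), self.pool_size, stride=1,
                                  padding=self.pool_size // 2)
            return z / (self.sigma + pooled)  # context-dependent gain

    # Usage: the response to a patch depends on surrounding activity.
    layer = DivisiveNormalization(channels=8)
    y = layer(torch.randn(1, 8, 32, 32))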

Reference 1

Alexander Gómez-Villa, Adrián Martín, Javier Vázquez-Corral, Marcelo Bertalmío, Jesús Malo (2020). “Color illusions also deceive CNNs for low-level vision tasks: Analysis and Implications”. Vision Research, advance online publication, September 2020. DOI: https://doi.org/10.1016/j.visres.2020.07.010

Reference 2

Marcelo Bertalmío, Alexander Gómez-Villa, Adrián Martín, Javier Vázquez-Corral, David Kane, Jesús Malo (2020). “Evidence for the intrinsically nonlinear nature of receptive fields in vision”. Scientific Reports, 10, 16277, October 1. DOI: https://doi.org/10.1038/s41598-020-73113-0
