How Does Artificial Intelligence See the World?

We have been living side by side with artificial intelligence for a long time. The little black bricks in our pockets, which we still naively call telephones, produce artificial images and artificial sounds. We live inside a synthetic garden, coexisting with artificial intelligence. It seems to us that this garden has a main character, and that this character is us. It is as if we are the conductors, overseers, and guides of these technologies. But each of these devices and sensors sees the world through its own kind of intelligence – and in a very different way.

Sometimes we are not the main character of the story, nor the main character in the frame. We are just an object. Perhaps we are being looked at from the outside, studied, or even ignored. We are not an integral part of this plot, because something else has taken that position instead of us.

Artificial Consciousness

Technology and humans have always worked together.

We can understand this intellectually, but it is hard for us not to project onto machines what we think they should be – and how we would like them to see us. Have you heard about the baby seal therapy robot that is so popular with the elderly? It calms them, but of course it cannot love them. Still, people believe that the seal loves them. It would certainly be easier to build machines that behave exactly the way we want – but then we never let them show what they could really do, to the full extent of their strength and capabilities.

In our research, we tried to find some of the oddities that appear in the work of machines – and to think about what these oddities can tell us about the future.

Deep Dreams

Are you familiar with Deep Dream images? It is a project created by researchers at Google. The algorithm is trained to recognize dogs in pictures. The researchers then asked it to look for dogs in pictures where no dogs are present and to amplify whatever it thought it saw. Sooner or later, the network begins to hallucinate dogs. It is extraordinary and entertaining. Artificial intelligence can do things that look to us like a kind of creative work. We sometimes call these kinds of things dreams.
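To give a rough idea of how this kind of "dreaming" works, here is a minimal sketch in Python. It assumes PyTorch, torchvision, and a pretrained classifier; the chosen layer, the step size, and the input file name are illustrative placeholders, not the original Deep Dream code.

```python
# Minimal Deep Dream-style sketch: amplify whatever a pretrained network
# "sees" in an image by gradient ascent on its activations.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.inception_v3(weights="DEFAULT").eval()

# Capture the activations of one intermediate layer via a forward hook.
activations = {}
def hook(_module, _inp, out):
    activations["feat"] = out
model.Mixed_5b.register_forward_hook(hook)   # layer choice is arbitrary here

preprocess = T.Compose([T.Resize((299, 299)), T.ToTensor()])
img = preprocess(Image.open("clouds.jpg")).unsqueeze(0).requires_grad_(True)

for _ in range(20):                        # a few gradient-ascent steps
    model(img)
    loss = activations["feat"].norm()      # "how strongly does this layer fire?"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
# Patterns the network associates with its training data (often dog-like
# shapes) gradually emerge in the image.
```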

Artificial intelligence is also prone to apophenia – seeing meaningful patterns where none exist. If AI not only processes the world in some way but also, in some sense, perceives it, then understanding machine vision becomes very important. Many of you have probably used the Google Translate tool. It has an augmented reality mode: you can point your phone at some text or a sign, select the source and target languages in the app, and the application will try to overlay a visual translation. The interesting point about this technology is that it works reasonably well in general, but not well enough to avoid bizarre errors.
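To make the idea concrete, here is a conceptual sketch of such an augmented reality translation pipeline in Python. It is not Google Translate's actual code: it assumes the pytesseract and Pillow libraries, and the translate() function is a deliberately empty placeholder standing in for a real machine translation model.

```python
# Conceptual sketch: detect text in a camera frame, translate it, and paint
# the translation back over the original text.
import pytesseract
from PIL import Image, ImageDraw

def translate(text: str, src: str, dst: str) -> str:
    # Placeholder: a real app would call a machine-translation model here.
    return text  # identity "translation" keeps the sketch self-contained

def translate_frame(path: str, src: str = "eng", dst: str = "fra") -> Image.Image:
    frame = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(frame)
    # 1. Detect text regions in the frame (OCR step).
    data = pytesseract.image_to_data(frame, lang=src,
                                     output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue
        # 2. Translate each detected word or phrase.
        replacement = translate(word, src, dst)
        # 3. Cover the original text and draw the translation in its place.
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        draw.rectangle([x, y, x + w, y + h], fill="white")
        draw.text((x, y), replacement, fill="black")
    return frame
```

Each of these stages can misfire – the OCR can see letters where there are none, and the overlay then cheerfully "translates" them – which is where the bizarre errors come from.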

When you teach a neural network to find a dog, you typically show it around 10,000 pictures, mark where there is a dog and where there is not, and the network becomes excellent at recognizing dogs. But its underlying idea of what a dog looks like is very different from ours. Even when recognition works well in practice, research shows that the principles behind it are completely different: the network's notion of the subject is not our notion.
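A minimal sketch of that supervised setup, assuming PyTorch, might look like this; the tiny network and the random tensors are stand-ins for a real labeled dataset of dog photos.

```python
# Show the network labeled examples ("dog here" / "no dog") and let it
# adjust its weights to separate the two classes.
import torch
import torch.nn as nn

model = nn.Sequential(                      # a deliberately tiny "dog detector"
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-ins for ~10,000 labeled pictures: 1 = "there is a dog", 0 = "no dog".
images = torch.rand(64, 3, 64, 64)
labels = torch.randint(0, 2, (64, 1)).float()

for epoch in range(5):
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The network becomes very good at the statistical task of telling the two
# classes apart, but its internal notion of "dog" need not resemble ours.
```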

Pattern Recognition

Neural networks can be used to solve the problem of pattern recognition in real time; a classic example is the Hopfield neural network. Used as an associative memory, a Hopfield network can accurately reconstruct the images it has been trained on when a distorted image is fed to its input. The network "remembers" the closest stored image – closest in the sense of a local energy minimum – and thereby recognizes it. This behavior can also be thought of as the sequential application of auto-associative memory.
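Here is a small illustrative sketch of that associative-memory behavior in Python with NumPy: store a couple of binary patterns, distort one, and watch the network settle back into the nearest stored pattern. The patterns are toy examples, not real image data.

```python
# Hopfield network sketch: Hebbian training plus iterative recall.
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: sum of outer products, with a zeroed diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                 # each p is a vector of +1/-1 values
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)             # no neuron feeds back into itself
    return W / patterns.shape[0]

def recall(W: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterate until the state settles into a local energy minimum."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                  # break ties consistently
    return s

# Store two 8-element patterns, then corrupt one and recover it.
stored = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                   [1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[0] = -1                          # flip one "pixel"
print(recall(W, noisy))                # converges back to the first pattern
```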

The Hopfield network is an example of a dynamic feedback system: the output of one full pass through the network serves as the input to the next. The network is fully connected, meaning the output of each neuron is connected to the inputs of all other neurons except its own. It is also single-layer: the same neurons serve simultaneously as inputs and outputs. Training vectors are stored in plain text files as sequences of ideal signal values. The values are normalized by signal amplitude so that amplitude does not influence the decision-making process.
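As a sketch of that data-preparation step, and assuming a hypothetical one-value-per-line text format, loading and amplitude-normalizing such a training vector could look like this:

```python
# Read one ideal signal value per line and remove the effect of amplitude
# before handing the pattern to the Hopfield network.
import numpy as np

def load_training_vector(path: str) -> np.ndarray:
    values = np.loadtxt(path)
    peak = np.max(np.abs(values))
    if peak > 0:
        values = values / peak         # amplitude no longer influences recognition
    # Threshold to the +1/-1 states the Hopfield network works with.
    return np.where(values >= 0, 1, -1)

# Example (hypothetical file name):
# pattern = load_training_vector("ideal_signal_01.txt")
```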

“AI Told Me”

Artificial intelligence not only helps people find good cafes near their homes, it also knows how to go crazy. The programmer and founder of the online art project AI Told Me tried to show how the "electronic consciousness" of a neural network changes. Using a generative adversarial network, or simply a GAN, the project drew a human face that has never existed in reality. To do this, the network processed and combined facial features learned from millions of photos and videos.

Individual parts of the network were responsible for the eyes, the shape of the head, the skin color, and the placement of the hair. Much as the human brain works through connections between neurons, the artificial intelligence used the connections between these fragments to build an entire face.
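A compact sketch of that generator-versus-discriminator setup, assuming PyTorch, is shown below; the sizes, the random "real" images, and the training loop are toy placeholders rather than the project's actual model.

```python
# GAN sketch: a generator turns a random latent vector into an image,
# while a discriminator learns to tell generated faces from real ones.
import torch
import torch.nn as nn

latent_dim = 64        # each latent coordinate loosely influences some feature

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32 * 3), nn.Tanh(),   # a 32x32 RGB "face"
)
discriminator = nn.Sequential(
    nn.Linear(32 * 32 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                        # real-vs-fake score
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, 32 * 32 * 3) * 2 - 1   # stand-in for real photos

for step in range(100):
    # Train the discriminator: real images -> 1, generated images -> 0.
    z = torch.randn(16, latent_dim)
    fake = generator(z).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(16, 1))
              + loss_fn(discriminator(fake), torch.zeros(16, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: make the discriminator call its output real.
    z = torch.randn(16, latent_dim)
    g_loss = loss_fn(discriminator(generator(z)), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Nudging individual latent coordinates changes individual traits of the
# generated face, much like the "parts responsible for eyes or hair" above.
```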

The author of the experiment not only created a spectacular video but also gave the project a deeper meaning. The programmer tried to show that people, too, see the world much like an AI whose connections are gradually being switched off. Neuroscientists tell us that deep neural networks resemble the visual system in certain respects, so this project is a unique opportunity to watch how the world changes inside someone's mind – even if that mind is artificial.

Final Word

By successfully delivering solutions to customers and developing our proprietary AI-powered platforms, FPT Software treats this technology as one of the most important factors in taking advantage of digital transformation while avoiding its potential risks. FPT Software is currently committed to four core artificial intelligence technologies for developing solutions that drive our clients' businesses:

  • Natural Language Processing (NLP)
  • User Behavior Prediction
  • Computer Vision & Pattern Recognition
  • Sequential Data Analytics