Artificial intelligence: how does it work?

By Laurie Raffin

Artificial intelligence (AI) is everywhere in the news these days: in debates, in newspapers, on the Internet and beyond. The whole world is talking about robots in news stories, commercials and TV shows, yet few of us really know how AI works.

What is artificial intelligence?

Artificial intelligence can be embodied in a physical machine, such as a robot, or in a virtual machine, such as a program. In either case, the machine exhibits so-called “intelligent” behavior: it can solve problems, recognize objects or voices, or even win a game. In 2016, for example, Lee Sedol, the world Go champion, was beaten by Google’s AlphaGo program.

To judge whether or not a machine is “intelligent”, the best-known, though least formalized, test is the Turing test. The principle is to have a person and a machine converse via a computer. If, at the end of the conversation, the person cannot tell whether they have been talking with another person or with a machine, the machine has passed the test and is deemed “intelligent”. This test cannot be formalized mathematically, however, since it is highly subjective: the outcome depends on the person and their criteria.

How do people see it?

Most of the time, people come to the question of AI with many preconceived ideas.

They may be opposed to AI for various reasons. One is spiritualism, which holds that in order to be intelligent, a being must have a soul or a consciousness. There is also the question of physical reality, since AI transcends the physical world. Likewise, some argue that subjectivity, like emotions, cannot be programmed. Some people even think we will never achieve true AI because the field is simply too complex.

Others are in favor of AI. Proponents of materialism, the opposite of spiritualism, argue that human beings are themselves machines and that AI can therefore have a consciousness. Functionalism sees consciousness as a computational process, a theory studied by David Chalmers: consciousness is not tied to any particular matter, neurons can be replaced by chips, and the system will produce the same conscious experiences as long as its functional organization is preserved. Moreover, like human beings, AI has easy access to databases and to all accumulated data, and thus to a virtually unlimited store of knowledge.

The question we can then ask is what kind of intelligence we should use as a reference point when talking about AI: we all have access to the same information, but the ways of thinking are quite different.

Did you know that there are two kinds of artificial intelligence?

At a recent meeting organized by the Wild Code School in Lyon, France, Olivier Georgeon, a researcher at the LIRIS laboratory at University Lyon 1, explained that two types of AI are being developed today: that which is based on a priori modeling and that which is not based on a priori modeling.

Although these types of AI may seem complicated at first glance, their basic principle and differences can be quite easily understood.


AI based on a priori modeling

This kind of AI covers most of the service robots we know today, which are programmed and coded with a defined purpose in mind. Cybedroid’s Leenby robot, for example, is programmed to shake hands and say “Nice to meet you, my name is Leenby” whenever its sensors detect that someone is touching its hand.
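To make this concrete, here is a minimal sketch of such pre-programmed stimulus-response behavior. The Robot class, the sensor callback and the method names are hypothetical illustrations, not Cybedroid’s actual code:

```python
# A priori behavior: every stimulus-response pair is written out in
# advance by the programmer. The robot does not learn the reaction;
# it simply executes it when the sensor fires.

class Robot:
    def __init__(self, name):
        self.name = name

    def on_hand_touched(self):
        # The reaction is fully specified ahead of time.
        self.shake_hand()
        self.say(f"Nice to meet you, my name is {self.name}")

    def shake_hand(self):
        print("[actuator] shaking hand")

    def say(self, sentence):
        print(f"[speech] {sentence}")

leenby = Robot("Leenby")
leenby.on_hand_touched()  # simulate the touch sensor firing
```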

Taking the example of the Chinese strategy board game Go, the machine operates in a closed field (the Go board) with fixed rules (those of the game). It takes in information and explores a predefined decision tree in order to win. The space of possibilities is mapped out in advance and, as the game unfolds, the machine searches for the move that maximizes its chances of victory.
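As a toy illustration of exploring a decision tree, here is a classic minimax search applied to a tiny Nim-like game, a far simpler closed field than Go. This is nothing like the Monte Carlo tree search and neural networks a real Go engine such as AlphaGo uses; the game and the function names are invented for the example:

```python
# Exhaustive exploration of a game's decision tree with minimax.
# Toy game: players alternately take 1 or 2 sticks from a pile;
# whoever takes the last stick wins.

def minimax(sticks, our_turn):
    """Return (value, move) from our point of view: +1 means the
    position is a forced win for us, -1 a forced loss."""
    moves = [m for m in (1, 2) if m <= sticks]
    if not moves:
        # No sticks left: the previous player took the last one and won,
        # so whoever is to move here has lost.
        return (-1 if our_turn else +1), None

    best_value, best_move = None, None
    for move in moves:
        value, _ = minimax(sticks - move, not our_turn)
        if (best_value is None
                or (our_turn and value > best_value)
                or (not our_turn and value < best_value)):
            best_value, best_move = value, move
    return best_value, best_move

value, move = minimax(5, our_turn=True)
print(f"With 5 sticks, take {move} (forced outcome for us: {value:+d})")
# -> With 5 sticks, take 2 (forced outcome for us: +1)
```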

With today’s image recognition and classification technology, we simply code the pixel patterns of the photo and define everything we want the system to recognize, because the system does not know the world. There are, however, certain limitations, notably confusion. Olivier Georgeon’s example was the mix-up between a poodle and fried chicken: since the system does not know what a dog is, it cannot make the distinction. The same problem affects translation: AI translates patterns (a word = a pattern) without understanding what it is translating, which is why proofreading is often required afterwards.
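A drastically simplified sketch of what matching pixel patterns means: a nearest-neighbour classifier that compares raw pixel values and nothing else. The four-pixel “images” and labels below are made up for the example, and real systems use neural networks over millions of pixels, but the failure mode is analogous:

```python
# Classification as pure pattern matching: pick the label of the most
# similar known pixel pattern. The system has no notion of what a dog
# is, only of how close two patterns are.

def distance(a, b):
    # Sum of squared pixel differences: raw pattern similarity.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, labelled_examples):
    # Return the label of the nearest known pattern; nothing more.
    return min(labelled_examples, key=lambda ex: distance(image, ex[0]))[1]

examples = [
    ([0.9, 0.8, 0.7, 0.8], "poodle"),         # light, curly texture...
    ([0.8, 0.7, 0.8, 0.9], "fried chicken"),  # ...and an almost identical pattern
    ([0.1, 0.2, 0.1, 0.2], "night sky"),
]

print(classify([0.88, 0.76, 0.74, 0.82], examples))  # -> poodle
# Nudge a couple of pixel values and the answer flips to "fried
# chicken": the two patterns sit side by side in pixel space.
```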

Artificial intelligence based on a priori modeling is the principle behind virtually every AI system in operation today.


AI not based on a priori modeling

Here, behavior is determined by an algorithm with innate preferences encoded into it. It is based on active perception, in other words on interaction. The machine learns the way a baby does: at first it does not know what it is doing, so it tries things out, for example hitting its hand against a wall. The baby gets feedback from the environment (it hurts) and thus learns that it does not like being hit and that a wall is an obstacle.

The algorithm works in a similar way. If its preferences state that hitting a wall is not good, then as events unfold (actions and feedback) it deduces which way to go in order not to bump into anything. Even though the starting algorithm is the same, as Olivier Georgeon explains in this video, the experiences and the learning are different every time, just as they are for a human being.
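Here is a highly simplified sketch of that loop: an agent with innate valences (“bumping is bad, moving is good”) but no model of the world, which adjusts its estimates from feedback alone. The corridor environment, the valence values and the learning rule are assumptions made for this illustration, not Olivier Georgeon’s actual algorithm:

```python
import random

# Innate preferences, coded in advance: the agent likes moving,
# dislikes bumping, and is indifferent to waiting.
VALENCE = {"move": +1, "bump": -1, "idle": 0}

def environment(position, action):
    """A one-dimensional corridor with a wall at position 3."""
    if action == "forward":
        if position == 3:
            return position, "bump"   # hit the wall
        return position + 1, "move"
    return position, "idle"           # the only other action: wait

estimates = {"forward": 0.0, "wait": 0.0}   # learned expected valence
position = 0

for _ in range(200):
    action = random.choice(list(estimates))     # try things out
    position, outcome = environment(position, action)
    feedback = VALENCE[outcome]                 # innate like/dislike
    # Nudge the estimate toward the feedback just received.
    estimates[action] += 0.1 * (feedback - estimates[action])

print(estimates)
# Once the wall is reached, 'forward' sinks toward -1 while 'wait'
# stays near 0: from interaction alone, the agent has learned that
# bumping into the wall is to be avoided.
```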


Artificial intelligence not based on a priori modeling is still in its infancy. Since it has no predefined aim, it is difficult today to know exactly how it could be useful. Olivier Georgeon sees it evolving toward animal-like robots that behave the way you train them to, like real animals. Yet the road ahead is long.

This type of intelligence also poses an ethical problem different from the ones we usually encounter. With a priori modeling (which includes autonomous vehicles), the ethics are the same as for any other machine: since every possible eventuality is coded in, human ethics serve as the basis. With technology not based on a priori modeling, however, it is harder to define the point at which we can consider such machines “living” or “sentient” beings, since two robots trained differently will react differently to the same situation. Now there’s something to think about.
