With such a title, one might expect a light read close to Minority Report and its vision of augmented humans, or Iron Man and his omniscient JARVIS assistant. But before venturing that far into futurology, it is essential to better understand the very concept of “Artificial Intelligence”: a catchy term, already overused, which nevertheless carries precise notions for understanding how this technology evolves and will continue to evolve.
AI: Artificial Intelligence
A Google Images search on the terms “artificial intelligence” is enough to see how strongly the concept is associated with anthropomorphic robots and the reproduction of human behaviors. While man and his brain can be considered a model or a source of inspiration, we must move beyond this analogy in order to understand artificial intelligence. It is also essential to distinguish the container from the content: a robot is only an envelope; it materializes artificial intelligence by giving it an interface (a voice like Siri, a computer-controlled character in a video game, etc.).
Artificial Intelligence Programming: The first scientific article to define AI
The artificial intelligence of a machine or computer program is what makes it capable of performing tasks that require learning, memory organization and reasoning. The idea is to allow a computer to answer a more or less open-ended problem by providing it with the right data and the software to interpret that data and return relevant information or actions.
The first steps of the very notion of artificial intelligence date back to the mid-twentieth century and to Alan Turing’s early research on the machine’s awareness and potential intelligence.
His idea was to create a machine capable of imitating a human so well that a third party could not tell whether they were talking to a computer or a person (the famous Imitation Game, which would later give its name to a film almost as brilliant as the man himself). This objective has never been fully achieved, but it remains a reference test and has shown the way for a great deal of work.
Development of Artificial Intelligence
Looking at the major works on artificial intelligence by the greatest representatives of the exact sciences (mathematics, physics, computer science, etc.) and the human sciences (sociology, psychology, etc.), two main categories of artificial intelligence have been defined since Turing’s precursor writings.
Artificial Narrow Intelligence – ANI (Weak Artificial Intelligence)
This is the most pragmatic approach to artificial intelligence: systems built by man, step by step, to meet one or more specific objectives. Such a system is capable of reproducing an action or responding to a problem according to its programming, without ever being able to go beyond the perimeter of its design.
The idea is to make the machine ever more efficient, and therefore more intelligent, in one particular field. These systems can enrich their knowledge base autonomously, but always in the same way and for the same purpose.
Take the example of voice recognition software: the more we use it, the better it recognizes the words we pronounce. Artificial intelligence allows it to acquire a sort of “experience” and to improve, but this improvement is restricted to one and only one domain: speech recognition.
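This idea of “experience restricted to a single domain” can be sketched with a toy example (purely illustrative — real speech recognition relies on acoustic and language models, and all names below are invented for the sketch): a word completer whose suggestions improve as it observes more usage, yet which can never do anything other than complete words.

```python
from collections import Counter

class ToyWordPredictor:
    """A narrow 'AI': it learns word frequencies from usage and
    predicts completions for a prefix. It improves with data, but
    its perimeter is fixed: it can only ever complete words."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, text):
        # Enrich the knowledge base from new usage data.
        self.counts.update(text.lower().split())

    def complete(self, prefix):
        # Return the most frequently seen word starting with the prefix.
        candidates = {w: c for w, c in self.counts.items()
                      if w.startswith(prefix.lower())}
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

predictor = ToyWordPredictor()
predictor.observe("artificial intelligence is artificial")
print(predictor.complete("art"))   # -> artificial
predictor.observe("artistic artistic artistic")
print(predictor.complete("art"))   # -> artistic (experience changed the answer)
```

More data changes its answers within its domain, but no amount of data will ever make it translate, reason, or do anything outside word completion — the hallmark of ANI.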
Gradually, these machines have become increasingly complex. Who today can explain the workings of Google’s algorithms, or why this or that person or celebrity is suggested to us on Twitter or Facebook?
These artificial intelligences, however, will always be limited by their initial function, and the capacities of each system are bounded accordingly.
Artificial General Intelligence – AGI (Strong Artificial Intelligence)
More difficult to grasp and explain, this is also the family of AI that sparks the most debate among philosophers, sociologists, ethnologists and others.
Strong artificial intelligence differs from its younger sibling in its ability to surpass its original function: to learn new fields of knowledge on its own, to reason and solve new problems, to draw on experience in order to evolve, and so on. In a word, to rise to the level of human intelligence.
“I’m sorry, Dave. I’m afraid I can’t do that.”
We are not yet technologically advanced enough to create this kind of artificial intelligence, but science fiction has been trying to imagine it for years. Take the famous computer HAL 9000 in Stanley Kubrick’s film 2001: A Space Odyssey. Originally programmed to assist the space travelers in their exploration mission, HAL develops its own reasoning about its raison d’être and its true mission, and ends up wanting to get rid of the very crew it was designed to watch over. Before being disconnected, it even shows fear, proof that it has evolved enough to feel human emotion. It is, in a way, the culmination of AGI.
Artificial Intelligence Applications
It is this form of artificial intelligence that unleashes passions in the highest spheres of science.
The day this technology matures, man’s place will be questioned in every field.
And tomorrow, artificial superintelligence? (Artificial Superintelligence – ASI)
Less documented for the moment, another form of artificial intelligence now occupies scientists, mainly around the concept itself and the associated ethics: artificial superintelligence (Artificial Superintelligence – ASI).
Unlike the human intellect, this artificial intelligence would have no a priori limit to its evolution (which would be exponential, if we refer to Moore’s Law) and could therefore theoretically surpass human intelligence, reaching levels beyond our understanding.
This superintelligence would go beyond what we are capable of imagining, far better than human intelligence in every area. It would be able to learn new subjects or themes at an exponential rate: once this level of intelligence had been reached, capacity would increase so rapidly that all our paradigms of artificial intelligence would no longer hold, and man might be led to disappear or to transcend himself. Visions on the subject range from the most pessimistic to the most optimistic (we will address this disturbing and/or exciting perspective in a future article).
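To give a feel for the exponential dynamic invoked above (a rough illustration of Moore’s-Law-style growth, not a prediction), a capacity that doubles every two years is multiplied by about a thousand after twenty:

```python
# Illustrative only: Moore's-Law-style growth, doubling every 2 years.
def capacity_multiplier(years, doubling_period=2):
    """Factor by which capacity grows after `years`,
    doubling once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 50):
    print(f"{years:>2} years -> x{capacity_multiplier(years):,.0f}")
# 10 years -> x32, 20 years -> x1,024, 50 years -> x33,554,432
```

The point of the sketch is the shape of the curve: on an exponential, the gap between each step dwarfs everything that came before, which is why such a takeoff would be hard for us to anticipate.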
AI: an incredible evolution, yet progress largely unsuspected by the general public
John McCarthy, one of the founding fathers of the discipline and coiner of the very term “artificial intelligence” in 1956, said that “as soon as something works, nobody calls it artificial intelligence anymore.”
This is a good illustration of the current state of affairs: the general public has little awareness of the advancement of scientific research on the subject, and artificial intelligence remains the preserve of films and a few wacky projects. Yet behind their screens, or in their pockets, artificial intelligences are already hidden that would make the greatest experts of the late twentieth century pale.
For many years, however, concrete applications have been emerging in many disciplines. Some are well known, such as IBM’s Watson, able to win at Jeopardy!, or Deep Blue (also from IBM), which beat the world’s best chess player. But most are diffuse: the military, banking, medicine, logistics, video games.
An excellent article on the subject, published on the equally excellent American popular-science site “Wait But Why”, illustrates this paradox between the incredible progress of artificial intelligence and man’s inability to grasp it, with two rather cheeky little diagrams:
For ordinary mortals, artificial intelligence has laboriously progressed from the level of an insect to that of an ape over recent decades; levels of intelligence that inspire amusement rather than admiration. In reality, the progress between the insect and ape levels is exponential, and the promising dynamic of recent years suggests an equally exponential evolution in the future.
To move away from the theoretical approach of this first part, the next article will focus on the first applications of artificial intelligence and on the main technologies we now use every day.
But by then, maybe an artificial intelligence will be able to write it for us?