One of the most challenging issues of the future is the management of the relationship between humans and machines. And if in the past our counterpart was more similar to a tool than to an interlocutor, in the future it may well become a vibrant, clever and active fellow. Perhaps one day we will be able to create an artificial intelligence that goes beyond the human one. But at that point, what will be the role of humanity in such an environment? And what will prevent robots from taking control of our world?
Since its inception, AI has tried to replicate human intelligence. Researchers’ ultimate goal is to reproduce the ability to learn from experience and to adapt to new situations using previously acquired knowledge, even in different environments. They want to create an intelligence capable of planning actions, predicting consequences and identifying cause-effect relationships.
That objective still seems distant. AI is currently limited to specific tasks, such as virtual assistance, self-driving cars and electronic trading. What is certain is that the massive use of AI technology will have far-reaching economic and social consequences that could totally change the way we conceive our society.
AI systems are capable of recognising patterns and processing data, and the application of this technology is disrupting traditional sectors. For instance, FICO, a data analytics company, has developed an AI-enhanced system that helps to detect transactional frauds. Similarly, Accenture uses AI to scan CVs and to select the best candidates. Furthermore, AI widens our possibilities in terms of understanding the world: “Google Accelerated Science”, a research division of Google, is applying machine learning and AI systems to accelerate progress in natural science. In the future, the use of AI will also be fundamental in medicine: Microsoft, through its “InnerEye” project, is developing assistive AI technology for cancer treatment.
However, the development of AI is not devoid of risks. Indeed, AI will bring unemployment: in the short run, specific jobs, especially repetitive and time-consuming ones, will be fully automated, but more complex tasks, such as the interpretation and manipulation of data, will also be affected. For instance, Morgan Stanley is already enhancing its financial advisers’ capabilities with AI: its algorithms suggest trades to customers and take over routine tasks. The overall impact on employment will be ambiguous, because if, on the one hand, some people will lose their jobs, on the other hand, new interesting, well-paid jobs will be created.
In part this process is already underway, and we can observe that AI not only replaces human jobs but also extends the scope of activities once performed by humans. For instance, in China, over 20 million AI-equipped street cameras were installed in 2017. Security is indeed one of the sectors that will be disrupted by AI. Algorithms are now capable of interpreting video and recognising faces and actions, which means that AI systems could potentially monitor everyone in every situation. The challenge will then be to manage the trade-off between security and privacy.
Privacy is not the only field that could be imperiled by AI. The technology can be applied to build military instruments such as automated aircraft or ground vehicles. We must not underestimate the rising power of intelligent weapons: the Kalashnikov Group, for instance, is developing combat robots that identify targets and make autonomous decisions. The sudden development of these technologies could alter the world’s power hierarchy and provoke dangerous political instability. Yet there is also a small upside to this evolution: we will probably see fewer and fewer human beings on the battlefield in the years to come.
Furthermore, in making decisions, machines can make mistakes, and in the worst scenarios this could cost human lives. Consider an autonomous airliner that misjudges its altitude during landing because of sensor interference. And what if, instead of an aircraft, the subject of this episode were the security system of a nuclear plant? A possible way to reduce these risks is to design systems with checks and balances, so that a single mistake does not cause the collapse of the entire system. From this perspective, the role of humans remains crucial, both as system designers and as overseers.
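The checks-and-balances principle can be made concrete with a minimal sketch: redundant sensors whose readings are fused by majority logic, so that one faulty reading cannot drive the outcome on its own. All names and numbers below are hypothetical, invented purely for illustration.

```python
# A minimal sketch of fault tolerance through redundancy: three independent
# altitude sensors are read, and the median value is used, so that a single
# corrupted reading (e.g. from interference) cannot dominate the decision.
# The sensor values are hypothetical illustration data.

def fused_altitude(readings):
    """Return the median of redundant altitude readings (in metres)."""
    if len(readings) < 3:
        raise ValueError("need at least three redundant sensors")
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]

# One sensor suffers interference and reports a wildly wrong value,
# but the fused estimate stays close to the true altitude.
print(fused_altitude([152.0, 150.8, -9999.0]))  # -> 150.8
```

The same idea scales up: aviation and nuclear-safety systems routinely vote among redundant channels precisely so that no single fault collapses the whole system.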
AI systems solve problems by maximizing utility functions; that is how algorithms manage to replicate human rationality. The hard problem is choosing the right utility function to maximize, so that AI systems do not come up with undesired solutions. For example, if we asked an AI-enhanced supercomputer to solve the most complex mathematical problem ever imagined, its response could be to exploit all of Earth’s resources to build ever more powerful supercomputers and gain more computational power. Hence, the focus should be on the shape of the utility function as well as on the constraints under which a specific problem is solved.
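The point about constraints can be illustrated with a toy maximizer. The actions, utility values and resource costs below are all invented for illustration; the only thing the sketch shows is that an unconstrained maximizer picks the extreme option, while a resource cap rules it out.

```python
# Toy illustration of utility maximization with and without constraints.
# Every action, utility and cost here is hypothetical.

actions = {
    "use existing computer":     {"utility": 5,    "resources_used": 1},
    "build a supercomputer":     {"utility": 50,   "resources_used": 40},
    "consume Earth's resources": {"utility": 1000, "resources_used": 10**9},
}

def best_action(actions, resource_budget=None):
    """Pick the action with maximum utility, optionally under a resource cap."""
    feasible = {
        name: a for name, a in actions.items()
        if resource_budget is None or a["resources_used"] <= resource_budget
    }
    return max(feasible, key=lambda name: feasible[name]["utility"])

print(best_action(actions))                       # unconstrained: extreme option wins
print(best_action(actions, resource_budget=100))  # constrained: sensible option wins
```

Shaping the utility function itself (penalizing resource use inside the objective) is an alternative to hard constraints; both approaches aim to rule out undesired solutions before the optimizer finds them.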
If machines one day overtook human intelligence, they could develop other machines even more powerful than their creators. This idea is called the “singularity”, and according to some authors the process would lead to an “intelligence explosion” that could end in human extinction. However, there is no expert consensus that machines’ capabilities will one day surpass human ones.
The idea of intelligent machines that escape the control of their creators and jeopardize the existence of humanity is a common theme in literature. For instance, in “2001: A Space Odyssey”, HAL 9000, an AI-equipped supercomputer, starts behaving erratically, leading to the death of an astronaut and to mounting tension between man and machine. A possible explanation for this literary trend is that machines are becoming more and more complex, and we are unable to understand how they work; authors exploit our fear of the unknown to create thrilling works.
Although the development of new technologies probably brings a growing number of risks, AI is not something we should fear. Technology has been embedded in humanity since its inception; it is our way of replicating nature and even going beyond it. Homo habilis made choppers because he had no claws; humans build computers because they lack sufficient computational skill. The capability to build tools in order to manipulate the world is our main gift. And if in the past those tools mostly compensated for our physical shortcomings, now, in the digital era, they are widening the possibilities of our brains.
In order to exploit the opportunities that AI offers without jeopardizing human rights, clear regulation is needed. Legislation should ensure an appropriate ethical and legal framework within which AI researchers and companies can operate, and it should outline a safe path for the evolution of this technology, so that society has time to adapt. The main goal is general, i.e. multisectoral, legislation, since the same algorithms can be applied to very different problems. Furthermore, the best option is probably to regulate AI at the international level, since innovation in this field spreads in real time.
Regulation must address several aspects. First of all, it is crucial that humans be able to distinguish AI bots from human interlocutors. This is particularly important given the role of social networks and the spread of ideas promoted by fake accounts that are, in reality, managed by AI systems. Second, if AI systems begin to populate our homes, they will gather enormous amounts of data, so it is very important to keep promoting transparency in data policy and to extend it to AI. Finally, the most challenging issue is probably to design systems that reach the designer’s goal without imperiling rights; rules are needed that ensure particular attention to this point.
In conclusion, since AI is about to radically change the way we conceive our society, a crucial change in the way we think about technology is needed. Legislation is how we can govern this process, and information is the key to a sound understanding of the dialectic between man and machine. The development of AI involves serious risks, but we possess the instruments to handle them. AI is a tool capable of broadening the horizons of our minds, but it is just another extension of us. In the end, we have always been cyborgs, ever since the days of choppers; perhaps “artificiality” is what makes us human.