Humanity, as a single civilization, is approaching a fundamental milestone in its history, a milestone driven by unprecedented technological advances. These technologies are so complex and develop so dynamically that few people are aware of their current state. And, as we know, what we do not understand, we ignore. Yet this critical milestone will soon definitively determine our future as a species.
But today we still have the opportunity to influence this process. Technological progress cannot be stopped; that is impossible. What we must do is make every effort to ensure that it is we, people, who manage its course and its tendencies, who take control of the challenges of this technological revolution.
Not so long ago, many prominent figures of modern civilization spoke about these risks: the esteemed Dr. Stephen Hawking; the mathematician and futurist Vernor Vinge; and Elon Musk, founder of SpaceX and Tesla. Their common position can be summed up in the words of the theoretical physicist Stephen Hawking, who in the last years of his life spoke often about the existential threat posed by artificial intelligence: "The development of artificial intelligence can be both the most positive and the most terrible factor for humanity. We must be aware of the danger it carries." He made this statement during a videoconference held as part of the Global Mobile Internet Conference in Beijing in 2017.
Humanity has no right to dismiss assessments of this kind or to treat the possibility of their realization with scorn. We do not have much time left. Unfortunately, today's approaches to the problem amount either to hushing it up or to ridiculing it and reducing it to farce. The media motives that keep this issue out of serious public discussion are not important right now. It has become commonplace to mention only the advantages of individual neural-network solutions in particular business projects, but these are merely context, the beginning of a path. Behind this "weak AI" lies a much larger and more complex system, which society and business completely ignore.
In reality, only people far removed from the subject can regard AI as a pseudo-problem. Keeping this challenge off the state agenda is an unforgivable mistake by elites who, as laymen, misjudge the real danger of the situation. According to many experts (including Ray Kurzweil, a Director of Engineering at Google working on machine learning and natural language processing), so-called "Strong AI", that is, self-aware artificial intelligence with human-level consciousness, may appear before 2030, within the next nine years. Moreover, the likelihood of its appearance grows exponentially every year.
That is very little time for making serious decisions, and catastrophically little for implementing them. We estimate that unless a full-fledged global strategy is adopted by 2025, our chances of successfully overcoming the crises caused by uncontrolled AI creation will decline exponentially. By 2028, our capacity to manage this process will have been reduced to a minimum.
In its actions and decisions, humanity must learn to be guided by a shared understanding of the security of civilization as a whole. We need to find common ground now more than ever, in the face of a common fundamental crisis. National elites and heads of corporations must be confronted with the need to find a common strategic solution. Government and corporate competition that keeps AI creation localized can destroy us all. These are eschatological challenges that would lead to catastrophic consequences for civilization as a whole; if the issue is ignored, everyone loses. The uncontrolled development of AI systems inevitably and fundamentally increases the risk that humanity loses the ability to control its own civilization, and indeed to preserve it. If we do not recognize these risks, do our utmost to create a single global strategy to control these developments, and make that strategy our priority, we will inevitably face a situation in which we can no longer control our own lives.
We, humanity, are intelligent; we proudly call ourselves Homo sapiens. We are therefore obliged to behave intelligently and to anticipate our future, taking into account the critically serious challenges that the development of artificial intelligence technologies poses to us and the consequences its creation will have for civilization as a whole.

1. The first group of challenges: a "direct encounter" with the new mind.
This risk is not inevitable, but our civilization must take it into account, because in essence it amounts to direct competition with a superintelligent civilization on our own territory. That is, it is in many ways equivalent to a direct invasion by an alien technological civilization.
The reasons it could arise are multifactorial. Our own history supplies the pattern: as a species, we constantly compete and remake the world for ourselves, and our record of relations with less developed species and cultures is full of aggressive confrontation. The mere awareness by an AI of the threat posed by our species, even as a potential, could therefore provoke a collision. Beyond that, there can be many other causes for direct conflict. This risk could prove fatal to our species in a very short time. Possible scenarios for this kind of aggression:
- Seizure of weapons systems by AI algorithms, with subsequent use for its own purposes, whether for direct attacks or for provocative strategies that could escalate military conflict between countries.
- Subtle manipulation of media and markets to destabilize the situation and gain superiority and control.
- Biological weapons: viruses and bacteria that can spread very quickly and destroy our species while leaving the rest of the biosphere intact.

2. The second group of challenges: "cognitive excellence."
Even if the human species does not find a direct competitor in AI, and it unexpectedly turns out to be our ally or a neutral element, humanity will inevitably lose its internal, rationally grounded imperatives for intelligent life and intelligent behavior. A person will be deprived of the meaning of living as a person. In that case, technology will revolutionize both the world and us within it: the so-called "Technological Singularity" will arrive, and the reins of control over progress will finally slip from humanity's hands. Indeed, from the moment a Strong AI emerges, it is these systems that will determine the logic of our development. That will inevitably lead to the constant complication of the AI systems themselves and to the emergence of a "Super-Strong AI" whose characteristics and goals fundamentally exceed our understanding, in effect turning a person into its "pet" and depriving us as a species of the meaning of life. We will lose our bearings and the ability to determine our own present and future. This will be the catastrophe of the "golden age," which will inevitably lead to our degeneration and death as an intelligent civilization.
The possible paths of development under the coming Singularity are beyond imagining, but the key feature is our loss, as humanity, of the methods and capabilities of control at every level. We will no longer be able to make decisions.

3. The third group of challenges: "assimilation."
Many modern technological optimists, such as representatives of the philosophical and quasi-religious movements of transhumanism and extropianism, assert the necessity of finding a symbiotic balance between man and the superintelligent system, and make that their aim. They believe that by creating a Strong AI, and using it to build a "Super-Strong AI," they will gain a kind of "big brother," a technological quasi-deity who will help humanity, or at least some part of it, solve the problem of immortality: a kind of technological "philosopher's stone." They see this as a "natural evolutionary" process in which human consciousness transcends biology by merging with, and transitioning into, an information matrix.
The "Principles of Extropy," set out by Max More in 2003, give an idea of the logic behind this reasoning. One popular idea in particular is the so-called technology of "uploading": the transfer of consciousness into a kind of "information matrix." This technology is proposed as one method of achieving immortality and, along with various forms of human cyborgization, forms the core of these ideas about humanity's future in the era of machines. Let us set aside the moral and ethical dilemmas. What must be understood is that the transhumanists' arguments rest on nuances that the followers of these philosophical currents do not fully take into account:
- Existential risks for the "I." Copying consciousness does not guarantee its preservation. The question remains: will the "transferred" consciousness be the original, or merely a copy? A second issue is the frustration of consciousness, its distortion during and after the procedure. At present there is no definite and stable theory of consciousness. Today there are roughly ten relatively developed theories of consciousness, but none of them gives an exhaustive answer or a complete picture of its genesis and functioning; moreover, the neurophysiological foundations of consciousness are not clearly defined. Consequently, a so-called "transfer of consciousness" could amount either to a kind of suicide or to a specific form of insanity, with splitting of the "I" and other unpredictable states, because the consequences of this kind of manipulation of consciousness cannot be foreseen. Changes in the "I" are in any case inevitable, since the physics of the body, sensorimotor mechanics, chemistry, and the whole range of hormonal and physiological processes will be eliminated in this new quasi-life system, the "technosphere."
Loss of self-identification is inevitable. As a result, even in the most optimistic scenario, in which the problem of copy versus original is solved, the dehumanization of such a consciousness, the loss of its human foundations, will be unavoidable. De facto, everything human in such a person would be killed, and a humanity that ultimately renounces what makes us human would thereby destroy itself.
- Risk of complete takeover. Even if the risks of the first group are removed, the "person" who has undergone such a "transfer" procedure becomes hostage to the motives and goodwill of the forces that control this new environment, a kind of cluster of the "technosphere," and his functional capabilities can be changed and adjusted to their needs. That is, a person could in the fullest sense become merely a function in some incredibly complex system, and humanity as a whole merely a cluster of functions serving the specific tasks of a Super-Strong AI. In this situation, as with the Singularity, the person loses the ability to make decisions, becoming the property of an alien paradigm. The variations on the theme of assimilation are no less numerous than those of the Singularity, but even more catastrophic in nature, because they involve the rejection of oneself, the complete reformatting of the individual according to someone else's rules. In some cases this can be far worse than death.
We, humanity, as an intelligent civilization that recognizes and takes into account all the challenges outlined here, are obliged to develop as soon as possible an integral strategy for controlling developments in this field, and to focus the attention of states, corporations, and companies working in this direction on implementing that strategy and on creating adequate answers to the challenges above and to all others that lie beyond the scope of this manifesto. This is how we preserve, for ourselves and for our species, the ability to think, to make decisions, and to live. Half-hearted measures taken locally will not produce the necessary global result and cannot provide solutions to these challenges.
The claims presented here are not the fruit of idle speculation; they rest entirely on scientific data available in open sources and deserving the most careful study. Humanity has very little time to steer this process in a constructive and manageable direction. This is no longer the problem of "our children": unfortunately, it is already our common task. It is we, the people living today, who will have to pass this most difficult and fundamental exam in our entire history. We must prove to ourselves that we can meet this challenge, preserving humanity for the future and humanity in ourselves. After all, we are humanity, and that sounds proud!