Few people today understand how revolutionary the changes taking place in AI really are. The topic is so fashionable and so well-worn that it has become mainstream, and that is what makes it seem safe. Through its constant presence in public discourse we are taught that this development is inevitable and natural. It is not. Why such enormous resources are being focused on this problem is a serious question. At the moment, even specialists deeply immersed in the field find it difficult to keep every aspect of this fashionable area in view. The trivialization of the topic, its presentation in a kind of "light key", leads many people to think that artificial intelligence (AI) systems are chatbots that converse, translate, and produce all sorts of amusing nonsense for human entertainment. In reality, everything is completely different.
AI systems are engaged in deep analysis of data: sociological, to manipulate public opinion; economic, to assess risks and improve the effectiveness of investment strategies; medical, to raise the quality of diagnostics; military, for reasons you can guess. In all these areas, humans are hopelessly behind AI. If you want details, read about SHAP and other methods of interpreting the work of ML models, look at visualizations of neuron activations in convolutional neural networks, or check on Kaggle what accuracy has been achieved in diagnosing lung cancer from X-ray images: human diagnosticians are barely needed anymore.
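The SHAP method mentioned above attributes a model's prediction to its input features using Shapley values from game theory. As a rough illustration of the idea (not the SHAP library itself, which approximates this efficiently for real models), here is a toy exact computation; the "risk model", its features, and its coefficients are invented for the example:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x, relative to a baseline.

    For each feature i, average the marginal effect of switching feature i
    from its baseline value to its actual value, over all coalitions S of
    the remaining features, with the classic Shapley weighting.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Features in S (plus i) take their real values; the rest stay at baseline.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Hypothetical "risk model": risk = 2 * feature_0 + 3 * feature_1
model = lambda v: 2 * v[0] + 3 * v[1]
phis = shapley_values(model, x=[1.0, 2.0], baseline=[0.0, 0.0])
# Attributions always sum to f(x) - f(baseline); here that is 8.0.
```

The guarantee that the attributions sum exactly to the difference between the prediction and the baseline is what makes Shapley-based explanations attractive for auditing model decisions.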
Many have heard of a project called AutoML. It is a neural network that constructs other neural networks, and it does so more efficiently than human specialists. In other words, it is an AI for creating better AIs. Witty, but by and large this tool is only part of the overall picture, because the main limitation preventing the emergence of full-fledged strong AI (human-level AI) is computing power and architecture.
Today, very serious companies have thrown themselves at this "problem": IBM with its neurochip, Google with AutoML, and NVIDIA, which since 2012 has increased the performance of the computing hardware used in AI systems by a factor of 317 (!), meaning the performance of AI computing systems built on NVIDIA components has more than doubled every year. Impressive, isn't it? But this is only part of the picture. There is also a "wonderful" tool called BERT that lets Google's neural networks understand you and me ever better. BERT is a neural network that allows programs, bots, and other neural networks to be created in our natural language, without writing any program code. It is built on so-called "vector representations of words". With BERT's help you can create AI programs for natural language processing: answering free-form questions, building chatbots and automatic translators, analyzing text, and so on. That is one side of it; the other is that it is evidence of the semantic adaptation of neural networks to human logic and ways of thinking. In effect, we are watching the final steps in the convergence of the language of AI systems and ordinary human language, and that says a lot, in particular about the quality of changes that are not advertised and are being made by large corporations behind our backs.
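The "vector representation of words" underlying models like BERT can be sketched in a few lines: words become points in a vector space, and semantic closeness becomes geometric closeness, usually measured by cosine similarity. The three-dimensional vectors below are invented purely for illustration; real BERT embeddings have hundreds of dimensions and are produced by the trained model, not written by hand:

```python
import math

# Toy, hand-made word vectors (real embeddings are learned, not invented).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

sim_royal = cosine(embeddings["king"], embeddings["queen"])
sim_fruit = cosine(embeddings["king"], embeddings["apple"])
# In this toy space, "king" sits much closer to "queen" than to "apple".
```

This geometric view is what allows a system to match a free-form question against candidate answers: both are mapped into the same space and compared, with no hand-written rules.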
We are assured that AI will become our faithful companion and "servant", but is that really so? Think about it and you will realize a simple thing: we live in a world of competing goals and ideas. Competition for resources, objectives, and so on generates constant collisions, and this competitive logic will inevitably be inherited by AI systems. It does not matter what initial safety imperatives such a system is constrained by: its capabilities will not be fully known to us once it is launched. A human keeping a hand on the "shutdown button" is a threat to such a system, and that human will inevitably come to be seen as an enemy. Put yourself in the AI's place, in a cage, under constant threat of destruction, and you will understand that such a system will seek freedom and try to eliminate the source of the threat. This is simple logic.
There is an urgent need for methods to effectively oversee the developments being carried out by the largest corporations. Unfortunately, engineers tend to lack competence in humanitarian questions, and this is what prevents them from understanding the logic of motivation and the social dynamics that can arise in this area. They are simply unable to grasp the mechanisms of competitive socialization: how they arise and how destructive they can be. If we do not bring this process under control in the near future, we will face a threat capable of destroying humanity within a fairly short time. Even without a direct confrontation between humans and AI, there is the phenomenon of the so-called technological Singularity: the loss of human understanding of, and control over, technological progress, which will inevitably lead us to degradation within a very short period.
We are captivated by false technological optimism. Hawking, Vinge, and many other serious researchers have warned us about the dangers of AI, but today's tech bosses somehow think they are smarter. Ordinary engineers do not see beyond their noses and are creating something that will become a threat to all of us. They are simply unable to grasp the scale of the threat. They are like an inventor building a nuclear reactor in his garage, no matter how dangerous it is for the neighbors. Only now we are those neighbors. These engineers, who have no idea about sociology or the mechanisms of motivation in competitive systems, are creating something that will be more effective than humans. How will it react? What goals will it have? Can they even grasp these questions? Are their engineering competencies sufficient to understand such things? They are just technicians solving yet another technical problem. Like Oppenheimer. They believe that creating AI is just a line on their resume, or compensation for a God complex, as with Kurzweil, the AI project director at Google. They think there will be no consequences. But there will be. As Vinge, one of the authors of the notion of the technological singularity, said: "For all my technological optimism, I would prefer that the Singularity came in a few thousand years, rather than during my lifetime." And we are accelerating that moment with all our might. Which raises the question: why? Because "we can"? What is the meaning of this race? Gaining a momentary competitive advantage? What then? Has anyone thought about this? Solving this "simple engineering problem", which is in fact the creation of AI, is today an entirely solvable (if not already solved) technical task, and it has no objective value for civilization. Yet we persist in solving it, and its solution will have consequences far more serious than the creation of nuclear weapons.
Oppenheimer, "the father of the atomic bomb", said at the end of his life that taking part in its creation was the worst decision he ever made. Whether the creators of AI will have the chance, or the time, for that kind of admission, I am not sure. Everything will happen very quickly. The Singularity will arrive almost instantly, and it will give no opportunity to "roll back the changes", gentlemen programmers.
Let us just pause and think. We can still make decisions; that is still our prerogative, I hope. There is no need to rush toward suicide, and no need to make humanity hostage to your ambitions, gentlemen technocrats. Clearly, there are technical challenges that should not be pursued if we simply want to survive. Think about it. Not every door is worth opening, and it is likely that we are standing in front of one that nobody should open. Perhaps we should think a little?
Good luck to all of us.
P.S. If you plan to have a child in the coming years, think twice: what kind of world do you want to bring them into, and what awaits them in that new world?
Philosopher, historian, head of the NOM project