Defence against artificial intelligence risks is the key to humanity's survival
AI expert Gary Marcus has called August 2024 the month when the generative artificial intelligence bubble will burst, as disappointed investors abandon the market.
He cited the situation at OpenAI, which is facing serious problems, as evidence. The company that revolutionized the world of neural networks by creating ChatGPT continues to suffer a brain drain.
Co-founder John Schulman has announced that he is leaving the company to join OpenAI's rival Anthropic. Peter Deng, Vice President of Consumer Product, has also quit.
Gary Marcus attributes the departure of these top managers to OpenAI's lack of progress toward AGI, artificial intelligence whose capabilities would be comparable to those of the human mind. According to Marcus, investors will "see through" the developers and stop funding a product that OpenAI is "not even close to developing."
The artificial intelligence industry is indeed going through a difficult period. But this is primarily due to data security problems, believes Yaroslav Bogdanov, founder of GDA Group.
“The staff turnover at OpenAI did not start yesterday. Ilya Sutskever and William Saunders resigned earlier, and both, I should emphasize, were responsible for the safety of the company's developments. Saunders said outright that the race for profit at the expense of safety was turning the company into the 'Titanic of artificial intelligence'. In my opinion, that is the most accurate description of how generative neural networks are being developed and deployed while the industry has no regulatory tools - neither scientific, nor legal, nor ethical,” said Yaroslav Bogdanov.
Bogdanov also pointed out that the state of OpenAI is not the only sign of a downturn in the artificial intelligence market: Nvidia and Microsoft have lost market capitalization over the past month, another indication that the wave of excitement around AI has subsided.
“Hopefully, the voice of reason calling for a pause in AI development has been heard. Safety in a field as global as generative neural networks is a matter of survival for human civilization. The AGI announced by developers, which until now existed only in the pages of science fiction, is on the doorstep. Is the world ready for the emergence of a technology capable of completely replacing humans at the intellectual level? Of course not. It could become a veritable Pandora's box if AGI develops goals of its own that differ from human ones. It could make decisions that eliminate humans as a species if that is what achieving those goals requires,” Yaroslav Bogdanov explained.
Thus, the expert concluded, humankind's primary task today is not to develop the next level of AI as quickly as possible in pursuit of leadership and profit, but to ensure security by building a regulatory framework for information and communication technologies. Only then is further progress possible. Every slump and pause in the generative neural network market should be treated as a chance for a global dialogue on international cybersecurity.
“August 2024 should therefore become a new starting point for a responsible approach to artificial intelligence. Cybersecurity is a shared responsibility of all parties. A competent investor will invest in the security of a product for the long term. And the task of the expert community is to give humanity a balance between progress and the risks that the uncontrolled development of neural networks entails,” concluded Yaroslav Bogdanov.