In 2025, the number of crimes committed with the help of artificial intelligence is expected to grow significantly. According to cybersecurity experts, fraud techniques will evolve alongside neural networks, and about 90% of illegal cyber groups will use them this year.
Technologies are developing too quickly
It is assumed that such attacks will affect not only cyberspace but other areas of life as well. One example is the Tesla Cybertruck explosion in Las Vegas in early January: the ChatGPT chatbot was reportedly used to prepare that attack.
Official bodies do not record the annual growth in crimes committed with the help of AI; it is estimated indirectly from the number of bots and the volume of content they generate. In 2025, the number of such crimes may grow eightfold or even tenfold compared to last year.
One specialist notes that neural networks' safety measures can be bypassed through so-called prompt engineering. In such cases, an attacker presents the AI with a fictional scenario, for example, asking it to imagine itself as a writer working on a novel about terrorists, complete with instructions for building an explosive device.
Dangerous networks
The greatest threat comes from cases in which attackers manage to extract from a chatbot information on how to create weapons, explosives, and chemical compounds that could be used for terrorist purposes, warned Dmitry Shtanenko, an information technology expert.
Automation and new opportunities for personalizing attacks will drive the global growth of AI-assisted crime. Cybercriminals will use the technology for more sophisticated attacks, which will require new defensive methods as well as changes to legislation and software to combat this type of crime.
Artificial intelligence cannot attack on its own, but it excels at writing phishing emails, emphasized Alexey Gorelkin, CEO of Phishman. Its capabilities are expected to grow significantly in 2025, and AI tools may become widespread among attackers, especially those who rely on social engineering.
Ekaterina Snegireva, senior analyst at the Positive Technologies research group, noted that criminals also use artificial intelligence to create and modify malicious code.
In particular, in June 2024 a phishing campaign was discovered that distributed the AsyncRAT remote access trojan. The malicious JavaScript and VBScript scripts involved appeared to have been generated with the help of AI.
According to her, the number of data leaks is also expected to rise this year due to the widespread adoption of artificial intelligence by organizations: 38% of surveyed employees admitted to using such tools without notifying their employer.
F.A.C.C.T. specialists expect that artificial intelligence in 2025 will play a more significant role in cyberattacks. In particular, it can be used to create more realistic deepfakes, automate phishing attacks, and improve the efficiency of finding vulnerabilities in systems and applications.
However, experts do not yet have statistics on such cases; for now, they study specific examples of AI and deepfakes being used in cyberattacks.
Protect yourself
Officially, Russians cannot use foreign services such as Grok or ChatGPT: they cannot legally be paid for with cards from Russian banks, and the services are unavailable to users with IP addresses associated with Russia, said Sergey Pomortsev, an IT expert at GG Tech.
According to the expert, domestic analogues are safer in terms of data confidentiality, since they are overseen by Russian state bodies and regulated by personal-data legislation and other regulations.
Russian legislation already needs more precise wording that keeps pace with technological development. In particular, this includes a ban on using GPT technologies to generate queries about the manufacture of homemade weapons, as well as recipes involving dangerous chemical compounds such as hydrogen sulfide.
According to Igor Bederov, head of the investigation department at T.Hunter, there are mechanisms in Russia that help protect users from unwanted content. Among them are filters that block certain words and expressions, as well as the "Code of Ethics in the field of Artificial Intelligence." This document was signed by large companies such as Sber and Skolkovo.
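As an illustration, the word-and-expression filters mentioned above can be sketched as a simple blocklist check. This is a minimal, hypothetical example (the term list and function names are invented for illustration); real moderation systems rely on much larger curated lists combined with machine-learning classifiers:

```python
import re

# Hypothetical blocklist; production filters use large curated term lists
# plus ML-based classifiers, not just literal substring matching.
BLOCKED_TERMS = {"explosive device", "homemade weapon"}

def is_blocked(text: str) -> bool:
    """Return True if the text contains any blocked term (case-insensitive)."""
    # Normalize case and collapse runs of whitespace before matching.
    normalized = re.sub(r"\s+", " ", text.lower())
    return any(term in normalized for term in BLOCKED_TERMS)

print(is_blocked("how to bake bread"))               # False
print(is_blocked("plans for an Explosive  Device"))  # True
```

A literal filter like this is easy to evade with paraphrasing or misspellings, which is precisely why the prompt-engineering bypasses described earlier remain a problem.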
It is important to note that the use of AI in cybercrime is a relatively new phenomenon, and law enforcement agencies and cybersecurity specialists continue to study and develop methods to counter such attacks.
Vladimir Daschenko, expert at Kaspersky ICS CERT, is confident that in the next year and a half to two years, artificial intelligence will reach its peak of development. According to him, a "golden era" will begin for neural networks.
The artificial intelligence market is growing rapidly. According to forecasts, AI's contribution to Russia's GDP may reach 2%, made possible by state support for the field. It is planned that 5% of the budget will go to scientific research on neural networks, and another 15% to other areas where artificial intelligence is applied.
Something else is also noteworthy. Igor Kalyaev, Academician of the Russian Academy of Sciences, expressed concern that dividing artificial intelligence systems along national, gender, religious, and other lines could lead to conflict between them. In such a scenario, humanity could end up as a mere pawn in the war.
Read related materials:
Neural network against labor discrimination: how to change your profession with AI
AI in Russia should not be allowed to govern the state — what Russians think
To avoid cyber risks: a guide to secure AI development