New tools to protect AI-training databases from "poisoning" developed at ISP RAS

This will increase resilience to cyberattacks

Russian specialists from the Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS) have developed tools to protect against "poisoning" of the data used to train AI systems. The innovation should reduce the likelihood of failures in such systems and strengthen their protection against cyberattacks.

Cybercriminals use data "poisoning" to disrupt the normal operation of AI systems, causing a neural network to produce incorrect results.

The director of the Institute for System Programming, Arutyun Avetisyan, said that specialists from the institute's Research Center for Trusted Artificial Intelligence, together with colleagues from RANEPA, created the SLAVA test dataset, which was used to validate a value-analysis algorithm.

In addition, tools for countering database "poisoning" attacks were developed. Avetisyan added that trusted versions of basic AI frameworks have also been created.
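The article does not describe how the ISP RAS tools work, so purely as an illustration, here is a minimal sketch of one well-known class of countermeasure: sanitizing a training set by dropping points whose label disagrees with the majority of their nearest neighbours. The dataset, function names, and parameters below are invented for the example; this is not the institute's implementation.

```python
import random

random.seed(1)

def make_data(n):
    # Toy 1-D data: class 0 clustered near 0.0, class 1 clustered near 5.0.
    return [(random.gauss(0.0, 0.5), 0) for _ in range(n)] + \
           [(random.gauss(5.0, 0.5), 1) for _ in range(n)]

def neighbour_labels(train, x, k, exclude=None):
    # Labels of the k training points closest to x (excluding the point itself).
    nearest = sorted((p for p in train if p is not exclude),
                     key=lambda p: abs(p[0] - x))[:k]
    return [y for _, y in nearest]

def filter_poison(train, k=5):
    """Keep only points whose label matches at least half of their k neighbours."""
    kept = []
    for p in train:
        labels = neighbour_labels(train, p[0], k, exclude=p)
        if labels.count(p[1]) * 2 >= len(labels):
            kept.append(p)
    return kept

train = make_data(100)
# Simulated attack: flip the labels of roughly 20% of the training points.
poisoned = [(x, 1 - y) if random.random() < 0.2 else (x, y) for x, y in train]
cleaned = filter_poison(poisoned)

flipped = sum(1 for a, b in zip(train, poisoned) if a[1] != b[1])
print(f"{flipped} labels flipped, {len(poisoned) - len(cleaned)} points removed")
```

Because a flipped label is usually surrounded by correctly labelled neighbours from the same cluster, the majority vote rejects most poisoned points while keeping most clean ones. Real defenses are more sophisticated, but the underlying idea of cross-checking each sample against the rest of the data is the same.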

Of course, AI errors and its malicious use are dangerous. But abandoning artificial intelligence is impossible: that would only mean falling behind. We need to build trusted AI systems that rely on advanced scientific research.
Arutyun Avetisyan


As a reminder, database "poisoning", also known as data poisoning, is a type of attack on machine learning systems in which an attacker deliberately adds false or malicious data to the training sample. The main goal of such an attack is to degrade the model's quality, forcing it to produce incorrect predictions or make erroneous decisions.
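The effect can be shown on a toy example: a 1-nearest-neighbour classifier trained on clean data versus the same data after an attacker flips a fraction of the labels. Everything here (the dataset, the classifier, the 40% flip rate) is an illustrative assumption, not taken from the article.

```python
import random

random.seed(0)

def make_data(n):
    # Toy 1-D data: class 0 clustered near 0.0, class 1 clustered near 5.0.
    return [(random.gauss(0.0, 0.5), 0) for _ in range(n)] + \
           [(random.gauss(5.0, 0.5), 1) for _ in range(n)]

def knn_predict(train, x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(knn_predict(train, x) == y for x, y in test) / len(test)

train, test = make_data(100), make_data(100)
clean_acc = accuracy(train, test)
print("clean accuracy:   ", clean_acc)

# Poisoning: the attacker flips the labels of roughly 40% of the training sample.
poisoned = [(x, 1 - y) if random.random() < 0.4 else (x, y) for x, y in train]
poisoned_acc = accuracy(poisoned, test)
print("poisoned accuracy:", poisoned_acc)
```

On clean data the classes are well separated and accuracy is near perfect; after poisoning, a test point's nearest neighbour often carries a flipped label, and accuracy drops sharply even though the feature values themselves were never touched.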

