New Tools to Protect Against Data "Poisoning" of AI Training Databases Created at ISP RAS

The development will make AI systems more resilient to cyberattacks

Russian specialists from the Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS) have developed tools to protect against "poisoning" of the data used to train AI systems. The innovation will reduce the likelihood of failures in such systems and make them more resistant to cyberattacks.

Cybercriminals use the data "poisoning" method to disrupt the normal operation of AI systems: as a result, a neural network produces incorrect outputs.

The director of ISP RAS, Arutyun Avetisyan, said that specialists from the institute's Research Center for Trusted Artificial Intelligence, together with colleagues from RANEPA, have created the SLAVA test dataset, which was used to verify a value analysis algorithm.

In addition, tools were developed to counter database "poisoning" attacks. Avetisyan explained that trusted versions of the basic frameworks for working with AI have also been created.
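The article does not say how the ISP RAS tools work internally. Purely for illustration, the sketch below shows one widely used anti-poisoning defense: sanitizing the training set with an outlier detector before fitting a model. The function name and the use of scikit-learn's IsolationForest are assumptions made for the example, not a description of the institute's actual tooling.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def sanitize_training_set(X, y, contamination=0.1):
        # Illustrative defense: drop training samples that an outlier
        # detector flags as anomalous before the model ever sees them.
        # 'contamination' is an assumed upper bound on the poisoned share.
        detector = IsolationForest(contamination=contamination, random_state=0)
        mask = detector.fit_predict(X) == 1  # 1 = inlier, -1 = outlier
        return X[mask], y[mask]

Filtering of this kind catches crude poisoning that shifts samples away from the clean data distribution; subtler attacks call for stronger measures such as data provenance tracking or robust training.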

Of course, AI errors and its malicious use are dangerous. But abandoning artificial intelligence is impossible: that would only lead to falling behind. We need to build trusted AI systems, relying on advanced scientific developments.
Arutyun Avetisyan


As a reminder, database "poisoning", also known as data poisoning, is a type of attack on machine learning systems in which an attacker deliberately adds false or malicious data to the training set. The main goal of such an attack is to degrade the model's quality, forcing it to produce incorrect predictions or make erroneous decisions.
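To make the mechanism concrete, here is a minimal sketch of label flipping, one of the simplest forms of data poisoning, using scikit-learn on synthetic data. All names are illustrative, and the example is not drawn from the ISP RAS work.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic binary classification data stands in for a real training database.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def poison_labels(y, fraction, rng):
        # Label-flipping attack: silently invert the labels of a random
        # subset of the training sample, as an attacker with write access
        # to the training database might do.
        y_poisoned = y.copy()
        idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]
        return y_poisoned

    rng = np.random.default_rng(0)
    for fraction in (0.0, 0.1, 0.3):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train, poison_labels(y_train, fraction, rng))
        print(f"poisoned fraction {fraction:.0%}: test accuracy {model.score(X_test, y_test):.3f}")

Even this crude attack measurably degrades test accuracy as the poisoned fraction grows, which is exactly the failure mode the article describes.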

Read more on the topic:

Hackers in Russia are targeting the Internet of Things

A neural network has been trained in St. Petersburg to identify vulnerabilities in text captchas

A domestic platform for training second-generation neural networks has appeared in Russia
