Avoiding cyber risks: a guide to secure AI development

It is primarily intended for developers, system administrators, and DevOps teams

Kaspersky Lab has presented a guide to the secure development and deployment of systems based on artificial intelligence. Its goal is to help organizations avoid the cyber risks associated with AI technologies.

The document offers detailed practical recommendations for preventing and mitigating technical problems and operational risks.

Created with input from leading experts, the guide covers key aspects of developing, deploying, and securing artificial intelligence systems. It pays particular attention to protecting AI models, especially those that rely on third-party components, which helps prevent data leaks and reputational damage.

<b>Informing about cyber threats and training</b>

As Kaspersky Lab notes, company executives should understand the potential risks of using artificial intelligence and regularly organize specialized training for their employees.

Employees should know the methods attackers use against neural-network systems, and training programs should be updated regularly so that the material stays current.

<b>Threat modeling and risk assessment</b>

Modeling potential threats makes it possible to detect and mitigate risks in advance, identifying and eliminating vulnerabilities at the early stages of AI development.

Kaspersky Lab experts advise using established threat-modeling and risk-assessment frameworks, such as STRIDE and the OWASP guidance, to identify potential threats associated with artificial intelligence.
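As an illustration of what a STRIDE walk-through might look like in practice, the sketch below enumerates one review question per category for a component of an AI pipeline. The threat descriptions are illustrative assumptions, not taken from the Kaspersky guide.

```python
# Minimal STRIDE checklist for an AI pipeline. The example threats are
# illustrative assumptions, not content from the Kaspersky guide.
STRIDE = {
    "Spoofing": "an attacker impersonates a trusted data source feeding the model",
    "Tampering": "training data is poisoned before ingestion",
    "Repudiation": "model decisions are not logged and cannot be audited",
    "Information disclosure": "model outputs leak memorized training records",
    "Denial of service": "adversarial inputs overload the inference service",
    "Elevation of privilege": "prompt injection makes the model call privileged tools",
}

def review(component: str) -> list[str]:
    """Return one review question per STRIDE category for a component."""
    return [
        f"{component}: is it protected against {category.lower()} ({threat})?"
        for category, threat in STRIDE.items()
    ]

for question in review("inference API"):
    print(question)
```

Walking each pipeline component (data ingestion, training, inference API, model registry) through the same six questions is one simple way to make the threat-modeling exercise systematic.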

<b>Cloud infrastructure security</b>

Artificial intelligence is often deployed in cloud environments, which demands robust security measures: encryption, network segmentation, and two-factor authentication.

Kaspersky Lab recommends applying the zero-trust principle, under which no user or device is trusted by default. It is also important to use secure communication channels and keep the infrastructure up to date to prevent compromise.
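A core zero-trust mechanic is authenticating every request rather than trusting network location. The minimal sketch below verifies an HMAC signature on each request body; the `SECRET` value and the JSON body are hypothetical placeholders, and a production system would use a proper key-management and token scheme rather than a hard-coded secret.

```python
import hashlib
import hmac

SECRET = b"rotate-me-regularly"  # hypothetical placeholder; never hard-code real keys

def sign(request_body: bytes) -> str:
    """Compute an HMAC-SHA256 signature for a request body."""
    return hmac.new(SECRET, request_body, hashlib.sha256).hexdigest()

def verify(request_body: bytes, signature: str) -> bool:
    # Zero trust: authenticate every request individually, never rely on
    # the request having arrived from a "trusted" network segment.
    return hmac.compare_digest(sign(request_body), signature)

body = b'{"prompt": "hello"}'
tag = sign(body)
print(verify(body, tag))         # True: valid, untampered request
print(verify(b"tampered", tag))  # False: body no longer matches signature
```

`hmac.compare_digest` is used instead of `==` so the comparison runs in constant time, which avoids leaking signature bytes through timing differences.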

<b>Supply chain and data protection</b>

Kaspersky Lab draws attention to the threats that can arise from using external components and third-party AI models, including data leakage and the subsequent sale of stolen data by attackers.

To prevent such incidents, privacy and security policies must be strictly enforced for every participant in the supply chain.

<b>Testing and verification</b>

To keep AI systems running reliably, their operation must be checked regularly. Kaspersky Lab advises routinely evaluating model performance and collecting vulnerability reports. This approach makes it possible to detect and fix problems caused by changes in the data the model sees, and to head off potential attacks.

To minimize risks, it is important to monitor whether data sets remain current and to validate the model's decision-making logic.
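One common way to check whether a model's input data has stayed current is a drift metric such as the Population Stability Index (PSI). The sketch below is a minimal pure-Python PSI, offered as an assumption about how such monitoring might be done rather than anything prescribed by the guide; the 0.2 alert threshold is a widely used rule of thumb, not a standard.

```python
import math
import random

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of a numeric feature.

    PSI above roughly 0.2 is a common rule-of-thumb signal of
    significant drift between training-time and serving-time data.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def hist(values: list[float]) -> list[float]:
        counts = [0] * bins
        for x in values:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(1000)]  # training-time feature
same  = [random.gauss(0, 1) for _ in range(1000)]  # serving data, no drift
drift = [random.gauss(2, 1) for _ in range(1000)]  # serving data, shifted mean
print(round(psi(train, same), 3), round(psi(train, drift), 3))
```

Running such a check per feature on a schedule, and alerting when PSI crosses the chosen threshold, gives an early signal that the model should be retrained or its inputs investigated.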

<b>Protection against threats specific to AI models</b>

Securing AI systems requires defenses against threats specific to them: in particular, prompt injection, training-data poisoning, and similar attacks must be prevented.

One way to reduce risk is to deliberately include adversarial or irrelevant examples in training so that the model learns to recognize them. It is also recommended to use anomaly-detection systems and knowledge-distillation methods; these tools make information processing more robust against manipulation.
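As a toy stand-in for the anomaly-detection systems mentioned above, the sketch below flags inference inputs that fall far outside the training-feature distribution using a z-score. The class name, the training values, and the threshold of three standard deviations are all illustrative assumptions; real deployments would use richer multivariate detectors.

```python
import math

class ZScoreDetector:
    """Flags inputs far from the training-feature distribution.

    A deliberately simple stand-in for the anomaly-detection systems the
    guide recommends; the 3-sigma threshold is an assumed default.
    """

    def __init__(self, training_values: list[float], threshold: float = 3.0):
        n = len(training_values)
        self.mean = sum(training_values) / n
        variance = sum((x - self.mean) ** 2 for x in training_values) / n
        self.std = math.sqrt(variance) or 1.0  # avoid division by zero
        self.threshold = threshold

    def is_anomalous(self, value: float) -> bool:
        """Return True when the value lies beyond the z-score threshold."""
        return abs(value - self.mean) / self.std > self.threshold

# Hypothetical training-time values for one numeric input feature.
detector = ZScoreDetector([10.0, 11.0, 9.5, 10.5, 10.2, 9.8])
print(detector.is_anomalous(10.3))  # False: a typical input
print(detector.is_anomalous(50.0))  # True: a likely manipulation attempt
```

Inputs flagged this way can be rejected outright or routed to logging and human review before they ever reach the model.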

<b>Regular updates</b>

To keep artificial intelligence systems secure and efficient, libraries and frameworks must be updated regularly so that identified vulnerabilities are patched promptly.

To improve reliability, it is recommended to participate in bug bounty programs, which reward the discovery of vulnerabilities. Cloud-based AI models should also be updated regularly, given how quickly they evolve.
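Dependency checks of this kind can be partly automated. The sketch below compares installed package versions against hypothetical organizational minimums using only the standard library; the `MINIMUM` pins are placeholders, and real tooling should rely on `pip list --outdated` or the `packaging` library, since the numeric version comparison here is deliberately naive.

```python
from importlib import metadata

# Hypothetical minimum versions an organization might pin after a
# security advisory; replace with your own policy.
MINIMUM = {"pip": "20.0"}

def version_tuple(version: str) -> tuple[int, ...]:
    """Naive numeric version parse; real code should use packaging.version."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def outdated(minimums: dict[str, str]) -> list[str]:
    """List installed packages that fall below their pinned minimum."""
    stale = []
    for name, floor in minimums.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # not installed here; nothing to flag
        if version_tuple(installed) < version_tuple(floor):
            stale.append(f"{name} {installed} < {floor}")
    return stale

print(outdated(MINIMUM))
```

Run in CI, a non-empty result can fail the build, turning "update regularly" from a policy statement into an enforced check.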

<b>Compliance with international standards</b>

Adhering to international standards, applying modern working practices, and verifying that AI systems comply with legal norms allow companies to uphold ethical principles and data privacy. This, in turn, strengthens trust and makes business more transparent.

The growing use of artificial intelligence-based tools makes security no longer merely desirable, but mandatory. We are taking part in a multilateral dialogue in this area to develop standards that will help implement innovations safely and protect against new cyber threats.
Yulia Shlychkova, Vice President for Government Relations at Kaspersky Lab 

Earlier, K2 Cybersecurity and Kaspersky Lab conducted a study of how cyber threats affect Russian business. It found that 22% of large companies had at least once encountered critical cyber threats that led to data leaks, and 15% of the surveyed companies suffered serious problems after such incidents.

In the second quarter of 2024, 35 cyberattacks were recorded in the industrial sector worldwide, 16% more than in the previous quarter. Many companies, it emerged, fail to update their security systems in time and struggle to identify cyber threats.
