Masquerading as ChatGPT: fake neural networks threaten the data security of Russians

Fraudsters are creating fake chatbots and websites promising access to popular neural networks

MTS AI reported that more than 10 million Russians risk exposing their confidential data by using fake versions of foreign neural networks such as ChatGPT, DALL-E, Midjourney, and other large language models (LLMs). Users who access these services through third-party platforms may not only receive low-quality models but also expose themselves to the threat of personal data leakage.

The reason is that, given the restrictions Russians face in accessing international services and the difficulties with payment, fraudsters create fake chatbots and websites that promise simplified access to popular neural networks. Instead of a full-fledged model, users may receive only a stripped-down version of the product, whose quality is significantly lower.

When interacting with artificial intelligence, digital literacy and adherence to security principles are essential. In particular, it is important not to provide personal data or trade secrets to third-party services, to use only company-approved solutions for working with corporate information, and to double-check information received from neural networks against independent sources, as generative models may rely on outdated data.

Related articles:

Fake AI application for changing voices steals Russians' personal data

Russians' fear of courts helps hackers – how a new fraud scheme works

Sources
RIAN