In 2025, the volume of sensitive information flowing from Russian companies into publicly accessible AI services increased 30-fold compared with 2024. Solara researchers report that employees are actively uploading confidential data to neural networks, including presentations, strategy documents, analytical tables, and code fragments.
According to analysts, 46% of confidential files and instructions pass through ChatGPT. Employees upload source code, financial reports, legal documents, and customer databases to chatbots in an effort to simplify everyday tasks such as data analysis, resume writing, or coding, and in doing so may be leaking information without even realizing it.
The analysis, based on traffic from 150 Solara client organizations, including companies in the public sector, finance, industry, retail, e-commerce, telecommunications, and IT, showed a 30-fold increase in data leaks from Russia to AI services.
About 60% of Russian organizations still lack clear policies governing work with AI, which creates significant risks for business, Solara notes.
Read more on the topic:
- Nearly 50 major data leaks — hackers stole millions of Russians' phone numbers and addresses
- Russians were urged not to share personal data with DeepSeek
- Demand for data leak protection systems in Russia increased by 40%