For the first time, Russia has decided to seriously address the regulation of artificial intelligence. The Ministry of Digital Development of Russia has released a draft law that sets the rules of the game for everyone — from developers to users of neural networks.
The main idea is to divide AI systems into categories. Domestic models will receive priority and support, and "trusted" systems will be allowed to operate in the public sector and critical infrastructure. To qualify, however, they will have to pass inspection, including vetting by the FSB of Russia.
New requirements are also being introduced: AI-generated content will need to be labeled, risks will need to be managed, and rights to generated materials will need to be defined in advance.
The most notable part is the restrictions on foreign solutions. Popular neural networks such as ChatGPT, Claude, and Gemini may fall under them. The reason is simple: they send user data abroad. The draft allows such services to be partially restricted or even banned, but it offers no specifics yet.
Open models like Qwen or DeepSeek, by contrast, look like a "safer" alternative: they can be deployed inside the country without sending data abroad.
Additionally, services with large audiences will be required to store user data in Russia, and the supporting infrastructure will have to be built within the country.
For now, this is only a draft, and it will still be revised. But if it is adopted as written, the rules will take effect in 2027, and the AI market in Russia may change significantly.