The Research Center for Artificial Intelligence at Lomonosov Moscow State University, one of the winners of the third wave of the competition, is developing a solution to combat hallucinations in large language models, which would allow AI to be used safely in business, banking, and government services. This was announced by the center's director, Grigory Bokov.
Chatbot hallucinations are the tendency of language models to invent facts when answering user questions. The Coordination Center of the Russian Government announced the winners of the third wave of selection of research centers in the field of AI at an event organized by Deputy Prime Minister Dmitry Chernyshenko. Fifteen universities participated in the competition, and the selected centers will now receive development grants. The work of the Moscow State University division was presented at the event.
Together with industrial partners, the center will implement a project to eliminate so-called AI hallucinations and improve the safety of using large language models in business, banking, and government services.
Earlier, www1.ru reported on typical mistakes users make when working with artificial intelligence. Users often treat neural networks as simple search engines, overlooking their ability to generate complex, context-aware answers. To get accurate results, it is important to phrase the request correctly.
Read more on the topic:
Every second Russian uses neural networks
Made by AI: labeling of products generated by neural networks proposed for introduction in Russia
Hacks of AI accounts in Russia up 90% — users are losing control over personal data