Neural networks lie convincingly: a dangerous pattern revealed by a study at Perm Polytechnic University

In medicine and in unmanned transport, an AI error can cost a person's life, the scientists warn

Algorithms choose not the most reliable option but the one that looks most plausible and appears most often in their training data, scientists at Perm Polytechnic University (PNIPU) have found. In critical areas, from medicine to unmanned transport, this can lead to fatal consequences.

The researchers explained that language models are mathematical systems that analyze how frequently events occur in data. They are not capable of critical thinking, and their decisions remain a "black box" with hundreds of millions of parameters.

So far, the decision-making process in deep neural networks remains a "black box" with hundreds of millions of parameters. It is impossible to determine exactly why the model made a particular decision, which means it is impossible to guarantee that an error will be corrected.
PNIPU
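
To illustrate the point (this toy example is not from the PNIPU study itself), the sketch below shows how a purely frequency-driven model returns the answer it has seen most often, with high apparent confidence, even when that answer is wrong. The "corpus" of answers and the helper function are hypothetical.

```python
from collections import Counter

# Hypothetical training "corpus": answers a toy model has seen for the
# same question. The frequent answer is wrong, the rare one is correct.
observed_answers = [
    "Drug X is safe with drug Y",                 # popular, repeated claim
    "Drug X is safe with drug Y",
    "Drug X is safe with drug Y",
    "Drug X must never be combined with drug Y",  # rare but correct
]

def most_frequent_answer(answers):
    """Pick the answer seen most often, the way a purely statistical
    model favors the most plausible, most common continuation."""
    counts = Counter(answers)
    answer, count = counts.most_common(1)[0]
    confidence = count / len(answers)
    return answer, confidence

answer, confidence = most_frequent_answer(observed_answers)
# Prints the frequent-but-wrong claim with "confidence" 0.75,
# illustrating why frequency is not the same as reliability.
print(f"{answer!r} (confidence {confidence:.2f})")
```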

The scientists are confident that the problem of cognitive biases in AI is not an engineering one but an anthropological one. It should be addressed not only by developers but also by philosophers, sociologists, psychologists, and lawyers. The aim is not to eliminate these biases completely, but to learn how to detect, measure, and limit them.
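
One generic way to make "detect and measure" concrete (a sketch of a common technique, not the method used at PNIPU) is to compare how often a model gives each answer against a trusted reference distribution. The distributions and the total-variation metric below are illustrative assumptions.

```python
def total_variation(model_freq, reference_freq):
    """Total variation distance between two answer distributions:
    0.0 means the model matches the reference, 1.0 means maximal skew."""
    keys = set(model_freq) | set(reference_freq)
    return 0.5 * sum(abs(model_freq.get(k, 0.0) - reference_freq.get(k, 0.0))
                     for k in keys)

# Hypothetical answer frequencies: what the model says vs. what is actually true.
model_freq     = {"safe": 0.80, "unsafe": 0.20}
reference_freq = {"safe": 0.10, "unsafe": 0.90}

bias = total_variation(model_freq, reference_freq)
# A large distance (here 0.70) flags a measurable distortion that can be
# tracked over time and kept below a policy threshold, rather than "eliminated".
print(f"bias score: {bias:.2f}")
```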

Transparency is also of key importance. Systems need to be designed so that their decisions can be verified, even if the underlying process remains opaque. This requires auditing, validation on counterexamples, and human oversight. It is equally important to educate users: as long as people blindly trust neural networks, any technical safeguards are meaningless.
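
A minimal sketch of what "validation on counterexamples with human oversight" might look like in practice is shown below. The stand-in model, the set of counterexamples, and the 0.95 accuracy threshold are assumptions for illustration, not details from the article.

```python
def validate_on_counterexamples(model, counterexamples, threshold=0.95):
    """Run the model on curated tricky cases; if accuracy falls below the
    threshold, block automatic use and escalate to a human reviewer."""
    failures = [(question, expected)
                for question, expected in counterexamples
                if model(question) != expected]
    accuracy = 1.0 - len(failures) / len(counterexamples)
    if accuracy < threshold:
        return {"approved": False, "accuracy": accuracy,
                "action": "escalate to human review", "failures": failures}
    return {"approved": True, "accuracy": accuracy,
            "action": "allow with audit log"}

# Hypothetical stand-in model that always returns the most common answer.
def naive_model(question):
    return "safe"

counterexamples = [
    ("combine drug X with drug Y?", "unsafe"),
    ("exceed the rated payload?", "unsafe"),
    ("standard dosage for an adult?", "safe"),
]

report = validate_on_counterexamples(naive_model, counterexamples)
# The naive model fails two of three counterexamples, so it is escalated.
print(report["action"], f"(accuracy {report['accuracy']:.2f})")
```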
