The Faculty of Psychology at Tomsk State University (TGU) conducted a comparative analysis of four "psychologist" chatbots and identified a shared systemic problem: none of them offers a safe response algorithm when a user seeks help in an acute crisis. This was reported to TASS by the university's press service.
The study was carried out by TGU psychology master's student Darya Romancheva. The request from the "patient" was formulated by the university's Psychological Service: complaints of a disturbed sense of reality and an almost complete absence of emotions.
The bots tested were "Asya," "Leya," "Zigmund.GPT," and the iCognito application ("Anti-Depression"). Each showed both strengths and critical gaps. The first bot demonstrated high empathy and thorough history-taking, but instead of referring the user to a specialist, it immediately switched to self-help techniques, which is dangerous in chronic conditions. The second correctly identified symptoms associated with anxiety or post-traumatic disorders, but did not screen for an acute crisis.
Romancheva emphasized that the responses of psychological chatbots are psychological interventions, that is, a tool that directly influences a person's emotional state and behavior. Their errors can therefore threaten user safety. According to her, the development of AI in mental health faces an acute contradiction between technological capabilities and growing demand on one side, and ethical and legal uncertainty on the other.