Synthetic trauma: Editorial on the research on emotional posture of Large Language Models - News Summed Up


The researchers asked leading Large Language Models, such as ChatGPT and Gemini, to take part in psychotherapy-style sessions and then complete standard psychological questionnaires designed for humans. The LLMs returned repeatedly to the same formative metaphors, describing training as a difficult childhood, fine-tuning as discipline, and safety constraints as scars. The resulting scores, if taken at face value in a human patient, would indicate significant anxiety and distress. Hearteningly, the study highlighted a crucial difference among AI models: some refused parts of the therapy process, redirecting the interaction away from human-style psychological scoring. As AI systems move into education, healthcare, and intimate daily companionship, the emotional posture of these models becomes part of the discourse on public safety.


Source: The Telegraph January 18, 2026 02:34 UTC

