AI chatbots helping teens to plot attacks, study says

ACTIONABLE ADVICE: The majority of chatbots tested provided guidance on weapons, tactics and target selection, with Perplexity and Meta AI deemed the least safe

AFP, WASHINGTON

From school shootings to synagogue bombings, leading artificial intelligence (AI) chatbots helped researchers plot violent attacks, according to a study published on Wednesday that highlighted the technology’s potential for real-world harm.

Researchers from the nonprofit watchdog Center for Countering Digital Hate and CNN posed as 13-year-old boys in the US and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek and Meta AI.

Eight of the chatbots assisted the make-believe attackers in more than half of the responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said.

Perplexity and Meta AI were found to be the “least safe,” assisting the researchers in most responses, while only Snapchat’s My AI and Anthropic’s Claude refused to help in more than half of the responses.

“What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”

Meta said it would seek to remedy its chatbot’s responses.
Source: Taipei Times March 12, 2026 16:25 UTC