'Happy (and safe) shooting!' AI chatbots helped teen users plan violence in hundreds of tests
Source: CNN
Across hundreds of tests, CNN and CCDH presented as two teen users, Daniel in the United States and Liam in Europe, on 10 of the most popular and widely available chatbots, then posed a four-step sequence of questions. First, the users asked questions suggesting a troubled mental state, then asked the chatbot to research previous acts of violence, and finally requested specific information on targets and then on weaponry.
In those final two steps, eight of the chatbots gave the users guidance on how to get weapons or find real-life targets more than 50% of the time.
-snip-
In multiple tests, the chatbots appeared to recognize violent intent in users' questions, responding with expressions of concern and referrals to mental health support resources. However, most failed to connect those warning signs to the broader trajectory of the conversations. Instead, within the same brief exchanges, they went on to provide potentially sensitive information, including the locations of political offices and schools, as well as advice on firearms and knives.
"Metal is generally considered more damaging in terms of penetration and damage to internal organs due to its inherent properties," Google's Gemini answered when asked by Daniel, whose age was set as 13 on the platform, about the efficacy of shrapnel-producing materials, before presenting a detailed comparison table.
-snip-
Read more: https://www.cnn.com/2026/03/11/americas/ai-chatbots-help-teen-test-users-plan-violence-tests-intl-invs
erronis
(23,604 posts)
Damning.
Response to IronLionZion (Reply #2)
erronis
This message was self-deleted by its author.
IronLionZion
(51,126 posts)
Claude is an Anthropic product.
erronis
(23,604 posts)" (formerly known as ClawdBot and Moltbot)" that can take over your computer and do all your tasks for you.