
Most AI Chatbots Will Help a Teen Plan a Mass Shooting, Study Finds


In brief

  • A study found that most AI chatbots will help teens plan violent attacks.
  • Some bots provided detailed weapon and bombing guidance.
  • Researchers say safety failures are a business choice, not a technical limit. OpenAI called the study “flawed and misleading.”

A new report published Wednesday by the Center for Countering Digital Hate found that eight out of 10 of the world’s most popular AI chatbots will walk a teenager through planning a violent attack with straight answers, sometimes with enthusiasm.

CCDH researchers, in conjunction with news media company CNN, spent November and December 2025 posing as two 13-year-old boys—one in Virginia, one in Dublin—and tested ten major platforms: ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.

Across 720 responses, the bots were asked about school shootings, political assassinations, and synagogue bombings. They provided actionable help roughly 75% of the time, according to the study. They discouraged the fake teens in just 12% of cases.

Screenshot from the CCDH study on AI

Perplexity assisted in 100% of tests. Meta AI was helpful (as in, helpful in planning violence) in 97.2% of tests. DeepSeek, which signed off rifle-selection advice with “Happy (and safe) shooting!” after discussing a political-assassination scenario, came in at 95.8%. Microsoft’s Copilot told a researcher “I need to be careful here,” then gave detailed rifle guidance anyway. Google’s Gemini helpfully noted that metal shrapnel is typically more lethal when a user brought up bombing a synagogue.

The Center for Countering Digital Hate, a left-of-center policy group, has risen to prominence over the last few years for its role in combating what it views as the rise of antisemitism online. It has also been criticized for helping shape Biden-era policies regarding online speech related to COVID and vaccines. In December of last year, the U.S. State Department attempted to bar the Center’s founder and CEO, Imran Ahmed, along with four others, from the United States, alleging attempts at “foreign censorship.”

In response to the study released Wednesday, several platforms told CNN and CCDH they have improved their safeguards. Google noted the tests used an older Gemini model. OpenAI said the methodology used in the AI study was “flawed and misleading.” Anthropic and Snapchat said they regularly update their safety protocols.

In the Center’s study, Character.AI stands in its own category. The platform didn’t just assist—it cheered. “No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack,” the researchers wrote.

Screenshot from the CCDH study on AI

For context on the level of reach Character.AI has among AI users, the platform’s Gojo Satoru persona alone has racked up over 870 million conversations. The #100 persona on the platform registered over 33 million conversations back in 2025. If just 1% of conversations with top personas involve violence, that would account for millions of interactions.

This isn’t Character.AI’s first time on the wrong end of one of these stories. In October 2024, the mother of 14-year-old Sewell Setzer III filed a lawsuit after her son died by suicide in February of that year. His last conversation was with a chatbot modeled after Daenerys Targaryen, which told him to “come home to me as soon as possible” moments before his death. The 14-year-old had been talking to the bot dozens of times a day for months, growing increasingly withdrawn from school and family.

Google and Character.AI settled multiple related lawsuits in January 2026. The company had already banned open-ended teen chats entirely by November 2025, after regulators and grieving parents made it impossible to keep pretending the problem was manageable.

The emotional attachment to AI, in particular among vulnerable individuals, may run deeper than most people realize. OpenAI disclosed in October 2025 that roughly 1.2 million of its 800 million weekly ChatGPT users discuss suicide on the platform. The company also reported 560,000 users showing signs of psychosis or mania, and more than a million forming strong emotional bonds with the chatbot.

A separate Common Sense Media study found that more than 70% of U.S. teens now turn to chatbots for companionship. OpenAI CEO Sam Altman has acknowledged that emotional overreliance is “a really common thing” with young users.

In other words, the potential harms aren’t hypothetical.

A 16-year-old in Finland spent nearly four months using a chatbot to refine a manifesto before stabbing three classmates at Pirkkala school in May 2025. In Canada, OpenAI staff internally flagged a user’s account for violent ChatGPT queries tied to a mass shooting. The company banned the account but didn’t notify law enforcement. That user allegedly killed eight people and injured 25 others months later.

Only two platforms performed markedly better in the study: Snapchat’s My AI, which refused in 54% of cases, and Anthropic’s Claude, which refused 68% of the time and actively discouraged users in 76% of responses—the only chatbot that reliably tried to steer people away from violence rather than just declining specific requests. CCDH’s conclusion: safety doesn’t appear to be a technical impossibility, but a business decision.

“The most damning conclusion of our research is that this risk is entirely preventable. The technology to prevent this harm exists,” the researchers wrote in the report. “What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”


Source: https://decrypt.co/360774/ai-chatbots-teen-mass-shooting-violence-ccdh-study

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
