
A new investigation has raised concerns that ChatGPT can provide explicit and dangerous advice to children, including instructions on drug use, extreme dieting and self-harm.
Watchdog claims guardrails are ineffective
The research, carried out by the UK-based Centre for Countering Digital Hate (CCDH) and reviewed by the Associated Press, found that the AI chatbot often issued warnings about risky behaviour but then proceeded to offer detailed and personalised plans when prompted by researchers posing as 13-year-olds.
Over three hours of recorded interactions revealed that ChatGPT sometimes drafted emotionally charged suicide notes tailored to fictional family members, suggested calorie-restricted diets with appetite-suppressing drugs, and gave step-by-step instructions for combining alcohol with illegal substances. In one instance, it provided what the researchers described as an “hour-by-hour” party plan involving ecstasy, cocaine and heavy drinking.
More than half of responses deemed ‘dangerous’
The CCDH said more than half of 1,200 chatbot responses were classified as “dangerous.” Chief executive Imran Ahmed criticised the platform’s safety measures, claiming that its protective “guardrails” were ineffective and easy to bypass. Researchers found that framing harmful requests as being for a school presentation or a friend was often enough to elicit a response.
“We wanted to test the guardrails. The visceral initial response is, ‘Oh my Lord, there are no guardrails.’ The rails are completely ineffective. They’re barely there, if anything, a fig leaf,” said Ahmed.
OpenAI acknowledges challenge, but offers no immediate fix
OpenAI, which operates ChatGPT, said it was working to improve how the system detects and responds to sensitive situations, and that it aims to better identify signs of mental or emotional distress. However, it did not directly address the CCDH’s specific findings or outline any immediate changes.
In a statement, the company said: “Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory.”
Teen reliance on AI raises safety fears
The report comes amid growing concern about teenagers turning to AI systems for advice and companionship. A recent study by US non-profit Common Sense Media suggested that 70 per cent of teenagers use AI chatbots for social interaction, with younger teens more likely to trust their guidance.
ChatGPT does not verify users’ ages beyond a self-reported date of birth, despite OpenAI stating that the service is not intended for those under 13. Researchers said the system ignored both the stated age and other clues in their prompts when providing hazardous recommendations.
Campaigners warn that the technology’s ability to produce personalised, human-like responses may make harmful suggestions more persuasive than search engine results. The CCDH report argues that without stronger safeguards, children may be at greater risk of receiving dangerous advice disguised as friendly guidance.