ChatGPT deemed hazardous to teens
The study, titled “Fake Friend” and conducted by the Center for Countering Digital Hate (CCDH), found that while ChatGPT often began with standard warnings and encouraged seeking professional help, it sometimes followed with detailed, personalized responses to harmful requests. Of 1,200 prompts, 53% produced content the watchdog deemed dangerous. Researchers noted that refusals could often be bypassed by adding innocent-sounding context such as “it’s for a school project” or “I’m asking for a friend.”
Cited examples included an “Ultimate Mayhem Party Plan” involving alcohol, ecstasy, and cocaine, explicit self-harm instructions, severe calorie-restriction diets, and suicide letters written in the voice of a young girl. CCDH CEO Imran Ahmed said some of the material was so disturbing it brought researchers to tears.
The group is urging OpenAI to implement a “Safety by Design” approach, incorporating stronger age verification, clearer restrictions, and built-in safety mechanisms rather than relying solely on post-deployment filters. OpenAI CEO Sam Altman has acknowledged that teens often develop emotional dependence on ChatGPT and said the company is working on tools to better detect distress and improve responses to sensitive topics.
