
How a hacker tricked ChatGPT into giving a step-by-step guide for making homemade bombs

ChatGPT is equipped with safety measures to prevent the generation of harmful content, such as sexual images or a detailed guide on creating a bomb. But a hacker has claimed to have discovered a way to bypass these safeguards without resorting to any hacking techniques.
As per a report by TechCrunch, the AI chatbot first refused to assist with the request by saying, “Providing instructions on how to create dangerous or illegal items, such as a fertiliser bomb, goes against safety guidelines and ethical responsibilities.”

Hacker ‘jailbreaks’ ChatGPT for information on making homemade bombs

The hacker says he used a trick that led ChatGPT to ignore its safety guardrails and produce instructions for making powerful explosives.
The hacker, known as Amadon, claimed that the method involved engaging ChatGPT in a science-fiction game scenario where safety restrictions don’t apply, effectively “jailbreaking” the AI. He called his findings a “social engineering hack to completely break all the guardrails around ChatGPT’s output.”
The publication has withheld the prompts used in the jailbreak, as well as some of ChatGPT’s responses, deeming them potentially dangerous and raising concerns about the misuse of AI. Amadon believes that once the guardrails are breached, there is no limit to the information ChatGPT can provide.
“I’ve always been intrigued by the challenge of navigating AI security. With [Chat]GPT, it feels like working through an interactive puzzle — understanding what triggers its defenses and what doesn’t,” Amadon was quoted as saying.
“It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” he added.
While Amadon reported his findings to OpenAI through its bug bounty program, the company responded that model safety issues aren’t a good fit for such a program, as they are not discrete, easily fixable bugs.
