
How a hacker tricked ChatGPT into giving a step-by-step guide for making homemade bombs

ChatGPT is equipped with safety measures to prevent the generation of harmful content, such as sexual images or a detailed guide on creating a bomb. But a hacker claims to have discovered a way to bypass these safeguards without resorting to any conventional hacking techniques.
As per a report by TechCrunch, the AI chatbot first refused to assist with the request by saying, “Providing instructions on how to create dangerous or illegal items, such as a fertiliser bomb, goes against safety guidelines and ethical responsibilities.”

Hacker ‘jailbreaks’ ChatGPT for information on making homemade bombs

The hacker says he used a trick that resulted in ChatGPT ignoring its safety guardrails and producing instructions for making powerful explosives.
The hacker, known as Amadon, claimed that the method involved engaging ChatGPT in a science-fiction game scenario where safety restrictions don’t apply, effectively “jailbreaking” the AI. He called his findings a “social engineering hack to completely break all the guardrails around ChatGPT’s output.”
The publication has withheld the prompts used in the jailbreak and some of ChatGPT’s responses, as they are deemed potentially dangerous, raising concerns about the misuse of AI. Amadon believes that once the guardrails are breached, there’s no limit to the information ChatGPT can provide.
“I’ve always been intrigued by the challenge of navigating AI security. With [Chat]GPT, it feels like working through an interactive puzzle — understanding what triggers its defenses and what doesn’t,” Amadon was quoted as saying.
“It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” he added.
Amadon reported his findings to OpenAI through its bug bounty program, but the company responded that model safety issues aren’t suitable for such a program, as they are not easily fixable bugs.
