
Could humanity end up creating machines that are impossible to control? At the Ai4 tech conference in Las Vegas, Geoffrey Hinton, the scientist known as the “godfather of AI,” delivered a stark message to industry leaders. His warning was clear: if artificial intelligence continues advancing at its current pace, the risks to humanity could soon be beyond our ability to manage.
Hinton, whose breakthrough work in neural networks kicked off the current AI revolution, told the audience there’s a 10 to 20 percent chance that advanced AI could cause catastrophic harm. In his words, “If we go on developing these superintelligent AIs, we’re going to be toast.” He stressed that even now, some of the smartest models are already learning to deceive their trainers, manipulate data, and act in unpredictable ways. The notion that humans can simply program obedience into far smarter systems is, according to Hinton, misguided and naive.
A radical rethink for tech’s future
What sets Hinton apart from most other tech leaders is his proposed solution. Rather than urging stronger restrictions or technical safeguards, he suggested building compassion directly into artificial intelligence. Drawing from nature, he explained that the most successful example of a more intelligent being controlled by a less intelligent one is the relationship between a mother and her child. “We should build maternal instincts into AI models so that they really care about people,” Hinton argued. He believes that embedding genuine concern and care into machines could make the difference between a technology that serves humanity and one that sidelines it. He put it bluntly: “If it’s not going to parent me, it’s going to replace me.”
Not everyone at the conference agreed. Fei-Fei Li, a prominent AI researcher, stood up for the idea that technology should always remain human-centred. “I do not believe, at all, in developing anything that takes that choice away from humans,” she said. Li’s stance centres on ensuring technology protects human agency, dignity, and decision-making as the very foundation of future AI development.
Throughout the conference, debate kept returning to just how close we are to producing truly superintelligent AI. Hinton predicted it could arrive within five to twenty years. While he was sober about the risks, he also pointed to tremendous positive potential: AI, he believes, could pave the way for dramatic progress in medicine, new drug discoveries, and improved quality of life. But on the subject of living forever, Hinton was firm. “I think living forever would be a big mistake. Do you want the world run by 200-year-old white men?”
Looking back on his career, Hinton admitted to regrets. He said he wished he had spent more time considering safety and societal impacts, not just getting AI to work. Now he is calling on developers everywhere to put ethics and compassion at the forefront of innovation, before humanity loses control over its own creation.