Aravind Srinivas, the chief executive of AI search startup Perplexity, has said that artificial intelligence systems are still fundamentally dependent on humans to define what problems are worth solving, arguing that curiosity and question framing remain beyond the reach of machines.
Speaking on a recent podcast with writer and entrepreneur Prakhar Gupta, Srinivas stressed that while AI excels at solving, optimising and verifying solutions, it does not independently identify meaningful problems. The episode was released earlier this week.
“AI could help humans solve an existing problem but it is very different from AI solving it autonomously,” Srinivas said. “I think the edge would lie with the humans because it was a human who identified the problem in the first place.”
‘The spark remains human’
Srinivas challenged the idea that AI systems possess genuine curiosity, describing it as a distinctly human trait that drives scientific breakthroughs and intellectual progress.
“Did AI pose a question and try to go solve it? No,” he said. “It was the curiosity of the human that led to even considering that the conjecture was important to think about.”
According to Srinivas, no AI system to date has demonstrated the ability to ask fundamental questions purely out of curiosity, a limitation he believes defines the current boundary between artificial and biological intelligence.
He added that while AI can outperform humans in specific tasks, recognising what truly matters remains a human advantage.
On-device AI could threaten data centres
One of the most striking parts of the discussion focused on the future of AI infrastructure. Srinivas suggested that advances in locally run AI systems could pose a significant challenge to the dominance of large-scale data centres.
“The biggest threat to a data centre is if the intelligence can be packed locally on a chip that’s running on the device, and then there’s no need to run inference on all of it on one centralised data centre,” he said.
Responding to a question about potential hardware breakthroughs, Srinivas outlined a scenario where AI models capable of high-quality on-device inference reduce reliance on centralised computing infrastructure. Such a shift, he argued, could disrupt the economics behind billions of dollars currently being invested in data-centre construction worldwide.
This transition could also encourage a more decentralised AI ecosystem, lowering barriers for individuals and smaller organisations to deploy advanced systems without depending on cloud-based platforms.
Human brains still outperform AI on efficiency
Srinivas also highlighted the stark contrast between human and artificial intelligence when measured in terms of energy efficiency. He noted that the human brain operates at a fraction of the power required by modern data centres to perform comparable tasks.
He attributed this efficiency not only to biology but also to the way human intelligence is shaped by curiosity, intuition and the ability to challenge assumptions, qualities he said current AI models lack by design.
AI, work and widening access
Looking ahead, Srinivas suggested that personalised and widely available AI tools could reshape how people work and learn, much like smartphones did over the past decade.
He argued that AI could help level the playing field between individuals and large institutions by giving more people access to powerful tools, regardless of age or background.
Key Takeaways
- Human curiosity drives the identification of meaningful problems, a trait AI currently lacks.
- Advances in local AI systems could disrupt the dominance of centralised data centres, leading to a more decentralised AI ecosystem.
- AI excels at solving defined problems but cannot autonomously identify or frame those problems.