Most of us assume that people we think are smarter than we are must be right about everything. We also tend to believe that experts in a field are always spot-on about everything in that field. Nowhere is this clearer than in AI debates. One of the hottest claims right now is that we’re on the verge of artificial general intelligence (AGI), and that it could doom humanity because these AIs might rewrite their own rules and turn against us.
Versions of this argument come from big names like Elon Musk, Sam Altman, Geoffrey Hinton, and other AI insiders. But remember, Musk and Altman are businessmen who gain from drumming up AI alarmism: it draws massive attention and funding. Still, I suspect they buy into it at least somewhat, since so many Silicon Valley leaders worship the Singularity, that hypothetical tipping point where AI outsmarts humans entirely. We figure it must be true because, hey, Elon launches reusable rockets, so he must be right about the Singularity too.

The Singularity has basically turned into a religion for a lot of folks in Silicon Valley. That’s reason enough to approach it skeptically, but there are also solid scientific and philosophical grounds to doubt it’ll ever happen. AI won’t achieve true human-level intelligence. For one, we’ve been chasing AI with human-like smarts for over 60 years. Some of the brightest minds have poured themselves into it, yet we’re not much further along than we were in the 1960s. And no, ChatGPT isn’t even in the ballpark of real human intelligence.
Compare that to other fields we’ve cracked in 60 years or less—like aeronautics, electricity, or computing. But AI keeps stumping us, and I’ll argue it always will. That doesn’t mean it won’t drive huge prosperity; as economist and tech thinker George Gilder says, it just won’t become human.
Elon is spot-on about plenty, but he’s off-base here, and here’s why:
AI Can’t Think About Thinking
One of the things that separates human intelligence from computers is meta-cognition, or the ability to think about thinking. Computers are limited because they require an “oracle”: an external source of insight or programming that lies outside the machine’s own system. This means machines are forever dependent on their human creators for their rules, data, and goals.
They can’t generate truly novel, outside-the-box intelligence on their own. An AI has no access to knowledge or truths beyond its programmed limits, so it can’t create an entity smarter than its human designers, who fill the “oracle” role. Machines, therefore, will never create new intelligence that supersedes humans.
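To make the “oracle” point concrete, here’s a toy sketch in Python (my own illustration, not anything from Gilder or Musk): a learner that fits a line by gradient descent. Everything that gives the loop a purpose, the data, the loss function, the step size, the stopping rule, comes from outside the machine’s own system.

```python
# Toy illustration of the "oracle" argument: the objective, the data,
# and the stopping rule are all supplied from outside the learning loop.
# The machine only adjusts one parameter inside that externally defined
# frame; it never originates a goal of its own.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # chosen by the human "oracle"

def loss(w):
    """Objective defined by the designer, not discovered by the machine."""
    return sum((w * x - y) ** 2 for x, y in data)

w, lr = 0.0, 0.01          # starting point and step size: also external choices
for _ in range(200):       # even "when to stop" is decided for it
    # Numerical gradient of the human-specified loss.
    grad = (loss(w + 1e-5) - loss(w - 1e-5)) / 2e-5
    w -= lr * grad         # the only move the machine ever makes

print(round(w, 3))  # ~2.0: the answer the designer's data implied all along
```

However clever the update rule gets, the goal being optimized was chosen by the designer, and that dependence is exactly what the “oracle” argument points at.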
AI Relies on Pattern Recognition from Past Data, Not Wisdom
Systems like ChatGPT are what we call consensus engines. They perform pattern recognition on historical data from the internet, projecting past patterns forward without generating new insights. George Gilder argues this isn’t wisdom or thinking; it’s regurgitation. He says, “All those answers are programmed from the past. They’re pattern recognition from digital feeds of data that happen to dominate the internet. And that’s not wisdom. That’s the ability to see patterns in the past and project them into the future.” He dismisses claims of AI achieving superior prose or poetry as overhyping narrow capabilities.
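To see what “projecting past patterns forward” looks like in miniature, here’s a deliberately tiny sketch: a bigram Markov chain, nothing like the scale or architecture of ChatGPT, but it makes the structural point. It can emit fluent-seeming text, yet every word transition it produces was already present in its training data.

```python
import random
from collections import defaultdict

# A toy "consensus engine": a bigram model that can only ever emit
# word transitions it has already observed in its training data.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which word followed which in the historical data.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    """Project past patterns forward: each next word is sampled from
    transitions seen in the corpus. Nothing outside it can appear."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: no past pattern left to project
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

A large language model recombines its training patterns with vastly more sophistication, but Gilder’s critique targets the same structure: the output is a remix of the past, not an insight about the future.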
Human Consciousness and Creativity Are Non-Computable
Human traits like consciousness, the true source of thinking, don’t emerge from computation and cannot be replicated in silicon. The Singularity is impossible in principle because there’s no bridge from narrow AI (specialized tasks like protein folding, writing, or chess) to general intelligence that matches human versatility. AI is already transforming jobs by handling rote tasks, but it can’t “think” because real intelligence isn’t computable. It’s uniquely biological and non-algorithmic.
