In the whirlwind years since ChatGPT’s launch, it’s been hard to escape the euphoria, or the dread, surrounding artificial intelligence. Tech CEOs have certainly contributed to this fever pitch, painting pictures of a world on the brink of profound, immediate disruption. But as someone who has been building AI for over a decade and observing the industry closely, I can tell you that it’s crucial to separate the signal from the noise, and to recognize when the emperor isn’t wearing as many clothes as we once thought. When it comes to AI, identifying the charlatans (or at least the disingenuous claims) is more vital now than ever.
The Astonishing Claims That Fueled the Fire
For a long time, the narrative was driven by astonishing claims from AI’s biggest players:
- Dario Amodei (Anthropic CEO): Claimed AI had advanced from a “smart high school student” to a “smart college student,” raising fears of massive white-collar job disruption.
- Sam Altman (OpenAI CEO): Compared AI to the Manhattan Project, suggesting it was a “completely new, not-human-scale kind of power” that would reshape the world.
- Mark Zuckerberg (Meta CEO): Declared that “developing super intelligence is now in sight,” even hinting that AI systems could improve themselves.
This drumbeat of CEO hype created the expectation that unfathomable disruption was imminent.
GPT-5: The Needle Scratch on the Hyped-Up Tune
Expectations for GPT-5 were sky-high. Altman himself bragged about the model, suggesting it would be the next leap toward AGI. But the reality was sobering:
- Better at small tasks (like creating Pokémon chess scenarios).
- Worse at practical outputs (like YouTube thumbnails or birthday invitations).
- Users online expressed “pointed disappointment,” calling it the “biggest piece of garbage even as a paid user.”
Gary Marcus summed it up best: “it’s not what people expected or hoped it would be… there ain’t no humpback whale there.” GPT-5 was the needle scratch on the hyped-up tune the industry had been playing, forcing many to ask: “What if this is as good as AI gets for now?”
The Myth of Current AI Job Disruption
There has also been significant misinformation about AI’s real-world economic impact:
- Ed Zitron (Tech Analyst): Points out the myth that companies are replacing workers with AI. Layoffs at Amazon and Microsoft were due to pandemic overspending, not AI.
- Media headlines: “AI is wrecking an already fragile job market” exaggerated the situation by conflating economic contraction with AI disruption.
- Economic reality: Current models are “very expensive to run” due to compute and electricity costs. The industry’s $34-40 billion revenue is dwarfed by $560 billion in AI spending.
Zitron concludes: “We’re in the silliest time in tech in history.”
The Strange Death of the Scaling Law
The heart of the problem lies in the scaling law. Nvidia’s Jensen Huang summed up the old assumption: more data, bigger models, and more compute → more capability. This worked spectacularly for GPT-3 and GPT-4, leading to hopes of AGI through endless scaling.
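The assumption has a concrete published form. As a reference point (this is the Chinchilla fit from Hoffmann et al., not a claim made in this piece), language-model loss was modeled as a power law in parameter count and training tokens:

```latex
L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

where $N$ is the number of parameters, $D$ the number of training tokens, $E$ an irreducible loss floor, and $\alpha, \beta$ small positive exponents (roughly 0.3 in the Chinchilla fit). The power-law shape is exactly why the bet was seductive, and why it could quietly fail: each equal step down in loss demands a multiplicative jump in compute, so returns diminish even while the formula technically keeps working.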
But then… the scaling stopped working:
- OpenAI’s GPT-5 “Orion” project produced only minor gains despite massive resources.
- Meta’s “Behemoth” and Musk’s Grok-3 also flopped relative to expectations.
- Ilya Sutskever admitted: “The 2010s were the age of scaling. Now we’re back in the age of wonder and discovery once again.”
The consensus: throwing more compute at models won’t create a digital god.
The Pivot to Post-Training: Incremental Gains
With scaling stalling, companies pivoted to post-training techniques:
- Reinforcement Learning with Human Feedback (RLHF): Fine-tuning to align with human preferences.
- Synthetic Data Generation: Feeding models AI-created training data.
- Inference-time compute: Letting models “think longer” on harder problems.
This has produced some incremental wins in coding and math, but the grand AGI vision is on pause. As I put it: “We left the path that was supposed to lead us to AGI. Now we’re just souping up Camrys and calling them supercars.”
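Of the three levers above, inference-time compute is the easiest to make concrete: sample several candidate answers and keep whichever one a verifier scores highest (best-of-n). The `toy_model` and its scoring below are hypothetical stand-ins for an LLM call and a learned verifier, not any vendor’s API; this is a minimal sketch of the technique, not a production recipe:

```python
import random

def toy_model(prompt: str) -> tuple[str, float]:
    """Stand-in for one sampled LLM completion plus a verifier score
    in [0, 1]. Purely illustrative: real systems would call a model
    and score the answer with a reward model or unit tests."""
    score = random.random()
    return f"candidate answer (score={score:.2f})", score

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend n times the inference compute: draw n candidates and
    keep the one the verifier likes best."""
    candidates = [toy_model(prompt) for _ in range(n)]
    best_answer, _ = max(candidates, key=lambda pair: pair[1])
    return best_answer

print(best_of_n("Prove that the sum of two even numbers is even."))
```

The point of the sketch is the economics, not the code: quality rises with n, but so does cost per query, linearly. That is an incremental tuning knob, not a new scaling law.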
Separating the Signal from the Noise
This saga highlights a crucial lesson: critical thinking is the antidote to hype. The leap from GPT-2 to GPT-4 was real. But when scaling failed, companies waved their hands, hoping investors wouldn’t notice. Now, post-training is dressed up as world-changing progress, when in truth it’s incremental tuning.
I’ve seen this before in blockchain. Grand promises, little substance, and products that solved problems no one cared about. With AI, we must stay focused on practical, narrow applications instead of falling for the sales pitch of digital gods.
What the Future Actually Holds
Instead of apocalyptic AGI scenarios, the reality is likely:
- Steady, gradual advances in narrow tasks (summarization, drafting, coding).
- Specific professions (like copywriting or voice acting) feeling sharper impact.
- Moderate disruption, not collapse of the job market.
- Alternative AI paths (like neurosymbolic AI) holding long-term promise in the 2030s.
This reprieve gives us valuable time to build ethical frameworks, regulation, and integrated education that match AI’s actual trajectory, not its hype cycle.