Quick Facts
- Category: Programming
- Published: 2026-05-01 07:06:43
Breaking: AI Giants Bet Billions on Transformers, Experts Question Path to AGI
The artificial intelligence industry is pouring tens of billions of dollars into pre-trained transformer models, betting that scaling them will achieve human-level general intelligence. But prominent AI researcher Ben Goertzel warns this concentration is a massive gamble that risks squandering resources on a single, possibly flawed approach.

“The commercial AI industry is just betting everything on copying GPT in various permutations, which in my view is a waste of resources,” Goertzel told Fast Company. “All these LLMs are kind of doing about the same thing.”
The Transformer Bet: Scale at Any Cost
Leading labs—including OpenAI, Google DeepMind, and Anthropic—are dedicating nearly all their R&D and capital expenditure to transformer architectures trained via backpropagation. The strategy has delivered steady intelligence gains as models grow, but each leap in performance now demands exponentially more compute.
Training a single frontier model can cost hundreds of millions of dollars, with ongoing operational expenses rivaling those of small countries. While returns have so far justified the outlay, the trajectory raises a stark question: will diminishing returns eventually make scale alone unsustainable?
Background: The AGI Debate and Transformer Limits
The term “AGI” was popularized by Goertzel in his 2005 book co-written with DeepMind co-founder Shane Legg. He argues that today’s transformers lack a core capability: continual learning from new experiences. Their parameters are frozen once training ends, so every interaction starts from the same fixed state; unlike humans, they cannot truly adapt in real time.
“Scale is not enough without the right underlying algorithms,” Goertzel emphasized. In his view, the reliance on static training data and backpropagation may be a fundamental barrier to the kind of open-ended generalization AGI requires.
What This Means: A Fork in the AI Road
The industry’s single-minded focus leaves little room for exploring radically different architectures. Yet several labs are quietly investigating alternatives. Google DeepMind, Microsoft, and Ilya Sutskever’s Safe Superintelligence are probing neural networks capable of continual learning and more flexible reasoning.
“DeepMind has incredible diversity within their AI team,” Goertzel noted, pointing to a “deep bench” of experience with paradigms beyond transformers. The result may be a bifurcated future: one path dominated by ever-larger LLMs, another by novel architectures that could unlock true AGI within a few years.
Goertzel remains optimistic that human-level AGI could emerge soon—but only if the industry diversifies its bets. “Concentrating all resources on one method is risky. The next breakthrough might come from somewhere else entirely,” he warned.