IMD business school for management and leadership courses



How a 30-Year-Old Chart Explains Intel’s Fall, Nvidia’s Rise, and DeepSeek’s Disruption
Timeless Lessons From Clay—And a Gift to the World
When the world is throwing question marks at you, having a theory is the only way to see past the noise.
—Clayton Christensen, the late Harvard Business School professor and author of The Innovator’s Dilemma
Clayton Christensen, father of disruptive innovation, would have seen it coming. The patterns he identified decades ago—patterns that drove Intel’s decline and that now shape Nvidia’s trajectory—are the same forces propelling DeepSeek’s breakthrough.
On January 20, 2025, as news broke of DeepSeek—a Chinese AI startup that had achieved near-GPT-4 performance—I found myself transported back to his classroom.
Christensen had passed away in that same month five years earlier. But his theory of disruptive innovation has been playing out with precision in the world of artificial intelligence.
I can almost hear his voice explaining why.
“Why do great companies fail?” he asked. The question hung in the air of the doctoral seminar room.
Clay, as we would call him, stood at the blackboard. He was a giant of a man with chalk in hand. I was a doctoral student, still unsure of how businesses worked, let alone why they failed. He began to unpack his theory of disruptive innovation. It would be a theory that changed how I saw the world.

🔑 Key Takeaways:
- Disruptive innovation provides a timeless framework for understanding market shifts.
- Intel’s decline and Nvidia’s success illustrate classic disruptive dynamics.
- DeepSeek’s unexpected rise reveals how constraints can spark innovation from the low end.
Christensen’s Chalkboard
Clay’s insight was deceptively simple: The very things that make great companies successful—listening to customers, investing in high-margin products, and focusing on profitable markets—become their undoing.
His theory isn’t just about understanding the past. It’s about anticipating the future. Once you grasp the principles, I have come to realize, you start to recognize patterns that repeat themselves.
In one seminar, we asked him, “What exactly is disruptive technology?”
He answered us in his quintessentially patient style:
“A disruptive technology is not a breakthrough that makes better products for existing customers. It’s an offering that is initially inferior. Yet it’s cheaper, simpler, more accessible.”
“Inferior?” someone asked.
“Yes, it grows from the fringes—where customers or users have been overlooked—and steadily improves. Then, almost without warning, it displaces the established leaders who remain fixated on the high-end of the market.”
It’s a pattern Clay had observed in disk drives and steel mills, then in everything from automobiles to education. In the subsequent years, when I was writing my dissertation, I never once tired of hearing his parables.
Now, in 2025, we’re watching similar disruption unfold in artificial intelligence as DeepSeek’s models seem to rival OpenAI’s capabilities at a fraction of the cost.
What matters most, though, is not DeepSeek’s performance—what matters is how they achieved it.
30 Years of Disruption Framework

The Innovator’s Dilemma Right Before AI
What made Clay unique, beyond his theory’s clarity, was his empathy for executives.
“I don’t think they’re stupid,” he often said. “They’re trying to do the right thing: serve their best customers, deliver higher margins, maintain brand prestige.”
But ironically, that very logic is what subjects them to low-end assaults.
I remember how the hairs on my arms stood up when Clay first mentioned “asymmetric motivation.” I didn’t fully grasp it then, but oh, how real it feels now. The resources and processes of large organizations rarely reward those who explore the fringes.
And that is exactly how Intel fell—a textbook example of how a company’s very strengths can become its weaknesses in the face of disruption.
Intel’s Downfall and the Trap of Success
When mobile computing first emerged, Intel clung to its high-performance but power-hungry processors. Intel’s integrated operations—designed to control everything from research to manufacturing—once made it the king of Silicon Valley. Ironically, that very completeness also became its prison.
The overall market, meanwhile, was shifting away from PCs toward more efficient designs for smartphones.
Despite this shift, Intel couldn’t resist the siren song of high margins from its existing business. It didn’t confront the threat from ARM’s design (ARM, short for “Advanced RISC Machines,” is known for its low-power chip architectures).
Those simpler, more energy-efficient chips would come to power the mobile revolution. By 2023, ARM-based chips held over 90% market share in smartphones, a market Intel had entirely ceded.
I remember Clay drawing diagrams on the blackboard, showing how companies often see disruption coming but still can’t change course.
Clay’s Pattern of Disruption:
- Success breeds focus on high-end customers.
- High margins reinforce existing strategies.
- New players target overlooked segments.
- Established leaders can’t respond effectively.
When veteran Pat Gelsinger returned to Intel as CEO in 2021, he harbored dreams of restoring American semiconductor supremacy. The government had pledged billions to support the company’s manufacturing expansion. But the “risk-averse bureaucratic culture”—as board member Lip-Bu Tan would later describe it—proved too entrenched for even a charismatic CEO to turn around.
A classic case of disruption.
Nvidia’s High-Stakes AI Strategy
Intel wouldn’t be the last to fall into this trap. Today, Nvidia stands at a similar crossroads.
CEO Jensen Huang has built an empire on high-end GPUs and the CUDA software platform that has become indispensable for AI.
GPUs, or “Graphics Processing Units,” are specialized chips originally designed for rendering images. Like Intel before it, Nvidia excels at serving its most demanding customers with ever-more-powerful chips.
The H100, commanding prices up to four times higher than AMD’s competing offerings, embodies this premium strategy. And just as Intel’s focus on high-performance processors blinded it to the threat of ARM, Nvidia’s dominance in the high-end GPU market could make it vulnerable to disruption from below.
Within this fissure, DeepSeek opens up a radically different future—a surprising yet classic move in the disruption playbook.
Just as ARM’s processors consume less energy for mobile devices, DeepSeek’s approach achieves comparable AI performance using far fewer, less powerful GPUs.
It doesn’t rely on a huge cluster of top-of-the-line GPUs. DeepSeek couldn’t; it had to make do with far fewer GPUs, and with the “nerfed” versions designed to comply with U.S. export restrictions, Washington’s attempt to hobble Beijing’s AI ambitions.
These inferior chips (H800s), with their curtailed memory bandwidth (around 70% of the H100), had been a handicap. And yet that hardware constraint turned out to be a catalyst for low-end disruption.
DeepSeek’s engineers devised novel techniques for load balancing on the software side. They dared to predict multiple tokens ahead—a bold programming maneuver. Most audaciously, they embraced lower-precision calculations, a path considered almost heretical in the high-stakes world of AI.
For non-tech readers, they essentially slashed the cost of AI training—dividing the work, forecasting multiple words at once, and using clever shortcuts that save time without sacrificing much accuracy.
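To make the lower-precision idea concrete, here is a minimal NumPy sketch (an illustration only, not DeepSeek’s actual code; DeepSeek reportedly works at even lower 8-bit precision on GPUs, while NumPy only goes down to 16-bit). The same matrix multiplication is performed in full 32-bit precision and in reduced 16-bit precision: half the bits means half the memory and bandwidth per number, at the cost of only a small numerical error.

```python
import numpy as np

# Toy illustration of reduced-precision arithmetic.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

full = a @ b  # 32-bit baseline
low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# Half the memory per value...
print(a.astype(np.float16).nbytes / a.nbytes)  # 0.5

# ...and only a small relative error versus the full-precision result.
rel_err = np.abs(full - low).max() / np.abs(full).max()
print(rel_err)
```

The trade-off on real GPUs is the same in spirit: fewer bits per number means more numbers per second through the same memory pipes, which is exactly what a bandwidth-capped H800 needs.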
By open-sourcing their model, moreover, they empower developers to create powerful DIY derivative models, even if training an R1 model from scratch would still demand significant GPU resources.
You may ask, “How did DeepSeek achieve this breakthrough?”
The answer lies in a fundamental principle of disruption that Clay often emphasized: Sometimes the most innovative solutions emerge when traditional market pressures are removed.
Inside DeepSeek’s Lab Unleashing Innovation
Here is DeepSeek’s unique operating environment. First, their lab faces no pressure to commercialize its breakthroughs. Although DeepSeek offers an API that charges a nominal fee—just enough to recoup training costs—they are not driven by revenue targets and likely don’t maintain a traditional sales team.
This freedom allows the team to rapidly iterate and incorporate the latest open research without the constraints of commercial expectations.
Second, DeepSeek has been running its own data center since around 2019—long before ChatGPT and the era of export controls.
Rooted in its origins as a quant fund (a model common among such funds), this in-house setup maximizes speed and efficiency. It enables DeepSeek to fine-tune everything from network bandwidth to load balancing, and—as detailed in their recent R1 paper—they even work a layer below CUDA, going deep into assembly language to extract every ounce of GPU performance.
Finally, unburdened by commercial pressures, DeepSeek is committed to openness, making it arguably the most open-source AI model on the market today.
The result is the V3 model, a marvel of efficiency that reportedly cost a mere $5.6 million to train.
In contrast, estimates for training GPT-4 range from $63 million to over $100 million.
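The gap is easy to sanity-check with the article’s own figures (a back-of-the-envelope sketch; the underlying cost estimates are reported, not audited):

```python
# Reported/estimated training costs cited above, in USD.
deepseek_v3_cost = 5.6e6
gpt4_low, gpt4_high = 63e6, 100e6

# How many times more expensive was GPT-4's training run?
low_ratio = gpt4_low / deepseek_v3_cost
high_ratio = gpt4_high / deepseek_v3_cost
print(f"roughly {low_ratio:.0f}x to {high_ratio:.0f}x")  # roughly 11x to 18x
```

Even at the conservative end of the GPT-4 estimates, that is an order-of-magnitude cost difference.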
Here is the thing: No matter your industry or role, having a basic grasp of disruption can give you clarity amid the daily barrage of headlines.
Next time you read about a scrappy startup in your industry—or even outside it—ask yourself:
Are they serving an overlooked segment with a simpler, cheaper product?
If so, you might be witnessing early-stage disruption in real time.
Or when you see that a small startup has blindsided a mega-corporation, you’ll now understand why.
If Clay were here, he might also point out that Nvidia’s greatest strength—its tight integration of high-end GPU and CUDA software—could now be a vulnerability.
“The future has no data,” Clay would remind us. But a good theory lets us see the contours of what might come.

“I Don’t Have an Opinion—But My Theory Does”
The cost disparity between ChatGPT and DeepSeek has profound implications, particularly for developers in emerging markets.
Sure, you lose some polish, but if you are a developer in Asia, Africa, or Latin America lacking deep pockets for GPU clusters, “efficiency” is more valuable than “best in class.”
Not to mention that running with fewer chips means lower energy requirements, too. You don’t end up like Google and Amazon, which have had to explore nuclear energy to power their planned data centers.
Meanwhile, Western incumbents are making colossal bets to run faster.
The recent unveiling of Stargate—a jaw-dropping $500 billion joint venture among Oracle, SoftBank, and OpenAI—aims to erect massive data centers in Texas. The ribbon-cutting ceremony featured Sam Altman, Larry Ellison, and Masayoshi Son, with President Trump praising the project. It’s a made-for-TV illustration of established giants doubling down on the largest, most lucrative AI workloads.
I can imagine Clay’s voice noting how the established AI giants are funneling billions into giant leaps forward. “That’s a sensible strategy,” they say. “We have to remain at the top.” Meanwhile, small players or newly emerging markets can’t pay those big premiums. Instead, they’ll embrace a model that’s 90% as good but 50% the cost.

Dario Amodei, CEO of Anthropic, the maker of Claude, which competes with ChatGPT, has offered a reality check: “DeepSeek produced a model close to the performance of US models 7-10 months older, for a good deal less cost. … DeepSeek-V3 is not a unique breakthrough or something that fundamentally changes the economics of LLM’s.” But in disruptive terms, “good enough” is sometimes more than enough.1
Then venture capitalist Marc Andreessen, in a post on X (formerly Twitter), stated, “DeepSeek … as open source, [is] a profound gift to the world.” Meta’s Chief AI Scientist, Yann LeCun, added, “DeepSeek has profited from open research and open source. Because their work is published and open source, everyone can profit from it.”
What struck me was how DeepSeek’s leaders talk about “merely being an enabler” for local entrepreneurs. They look to a different, underserved audience—developers who can’t afford a $100 million training run but can scrape together $450 for some compute credits. It’s exactly this open-source approach that let Linux eat away at proprietary operating systems and open-source web servers dethrone the old guard.
Cheaper, simpler, widely accessible.2

Shifting the Power of Global Innovation
Corporate leaders are not shortsighted or clueless. It’s the gravity of existing success that pulls them deeper into the status quo. And once you appreciate this, the decisions of Intel, Nvidia, or even OpenAI become more comprehensible, even if they’re not always defensible.
That’s the power of a simple theory. It can suddenly make sense of the seemingly chaotic world.
Of course, one cannot discuss DeepSeek without acknowledging the geopolitical undercurrents at play, where the irony is thick enough to cut with a knife.
In a roundabout way, U.S. export controls, designed to curb China’s AI ambitions, have accelerated the rise of a formidable competitor—one that has taken a page not from the playbook of Silicon Valley giants, but from the hallowed pages of The Innovator’s Dilemma.
Disruption isn’t a one-time event—it’s an ongoing process.
Ask yourself, “Am I ready to spot opportunity in every challenge?” Stay curious, embrace change, and be prepared to ride the next wave of innovation.
When was the last time a new idea completely shifted your perspective?
Clay’s Last Lesson
Clay passed away in January 2020. Throughout his career, he had written about health care, higher education, and even personal life choices. You might not run the show like a CEO. But you can choose to align your career, your skill-building, and your team’s small projects toward the pivot that will inevitably come.
His lesson wasn’t merely about how companies succeed or fail—it was about the power of theory to guide our perspectives. Once you see how seemingly random market moves follow predictable cycles, disruption theory can bring a sense of calm. Calm emerges not from avoiding waves, but from learning to read the currents.
This has happened before, and here’s how it unfolds.
During his final years, Clay still taught. He was battling leukemia yet insisted on returning to campus. Sometimes he’d simply sit in on his own MBA elective, taught by a colleague. From his seat, he’d still ask incisive questions. The class remained oversubscribed; students crowded the hallways to hear his voice.
“When you can see how things are connected,” he used to say, “you don’t get sucked into the day’s random twists and turns.”
He urged us to see farther, dream bigger, and remain open to the unexpected rises from the low end—because “disruption is a process, not an event.”
By understanding disruptive innovation—whether it’s Intel’s past, Nvidia’s present, or DeepSeek’s emerging future—we gain more than insight.
We develop what Clay called “theories of causation” that bring clarity to seemingly chaotic market shifts.
The next wave is always out there, waiting, silent until it isn’t.
Will you be watching—and are you ready to ride the rise?
What Else Is Howard Reading and Writing?
- DeepSeek’s strength doesn’t stem from a groundbreaking new architecture. Instead, it comes from pushing engineering to its absolute limits by applying techniques such as key‑value caching, mixture of experts (MoE), reinforcement learning with human feedback (RLHF), and chain‑of‑thought (CoT) reasoning. These methods are focused on maximizing efficiency rather than representing fundamental research breakthroughs like those pursued by OpenAI, Anthropic, or DeepMind. DeepSeek’s achievement lies in optimizing every component of the AI pipeline to achieve state‑of‑the‑art performance with far fewer resources—it makes the best use of existing tools without inventing new mathematics, cognitive paradigms, or entirely new AI frameworks. ↩︎
- OpenAI CEO Sam Altman acknowledged the DeepSeek challenge this week, saying on Reddit that the company is considering releasing some model weights—settings that guide how the AI responds—publicly. “I personally think we’ve been on the wrong side of history here and need to figure out a different open source strategy,” he wrote. ↩︎