Nvidia (NVDA) is the AI king. Its share of the global AI chip market is estimated at anywhere between 70% and 90%. Its high-powered graphics processors, which are perfect for training AI models and putting them to work, are in such demand that getting them is a task all its own.
In June, with the AI craze in full swing, Nvidia’s market cap eclipsed the $1 trillion mark. And on Friday, shares of the company hit an all-time high of $549.91.
It’s not just Nvidia’s hardware that helps it stay ahead of its rivals. The company’s CUDA software, which developers use to create AI platforms, is just as important to Nvidia’s staying power.
“Software continues to be Nvidia’s strategic moat,” explained Gartner VP analyst Chirag Dekate. “These … turnkey experiences enable Nvidia to be at the forefront of mindshare, as well as adoption.”
Nvidia’s lead didn’t happen overnight. It’s been working on AI products for years, even as investors questioned the move.
“Nvidia, to its credit, started about 15 years ago working with universities to find novel things that you could do with GPUs, aside from gaming and visualization,” explained Moor Insights & Strategy CEO Patrick Moorhead.
“What Nvidia does is they help create markets and that puts competitors in a very tough situation out there, because by the time they’ve caught up, Nvidia is on to the next new thing,” he added.
But threats to Nvidia’s reign are rising. Rivals Intel (INTC) and AMD (AMD) are marshaling their forces to grab their own slice of the AI pie. In December, AMD debuted its MI300 accelerator, which is designed to go head-to-head with Nvidia’s own data center accelerators. Intel, meanwhile, is building out its Gaudi3 AI accelerator, which will also compete with Nvidia’s offerings.
It’s not just AMD and Intel, though. Hyperscalers, which include cloud service providers Microsoft (MSFT), Google (GOOG, GOOGL), and Amazon (AMZN), as well as Meta (META), are increasingly turning to their own custom chips, known as application-specific integrated circuits, or ASICs.
Think of AI graphics accelerators from Nvidia, AMD, and Intel as jacks of all trades. They can handle a wide range of AI-related tasks, so whatever a company needs, the chips can do it.
ASICs, on the other hand, are masters of a single trade. They’re built specifically for a company’s own AI needs and are often more efficient at those tasks than the general-purpose graphics processing units from Nvidia, AMD, and Intel.
That’s a potential problem for Nvidia, since hyperscalers are among the biggest spenders on AI GPUs. As they shift more work to their own ASICs, they may have less of a need for Nvidia’s chips.
That said, on the whole Nvidia’s technology remains well ahead of its competitors’.
“They have a … long-term research pipeline to continue driving the future of GPU leadership,” Dekate explained.
One other thing to keep in mind when it comes to AI chips is how they’re used. The first use is teaching AI models with large amounts of data, which is simply called training. The second is running those trained models so people can use them to generate the output they want, whether that’s text, images, or something else entirely. That’s called inferencing.
OpenAI runs inference for ChatGPT, while Microsoft does the same for Copilot. Every time you send a request to either program, AI accelerators perform inference to generate the text or image you asked for.
Over time, inferencing is likely to become the primary use case for AI chips as more companies seek to take advantage of different AI models.
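The distinction between those two phases can be sketched in a few lines of code. This is a minimal, hypothetical illustration — not anything tied to Nvidia’s, OpenAI’s, or Microsoft’s actual systems — fitting a toy one-parameter model: training repeatedly adjusts the model’s weights from examples, while inferencing runs the finished weights on new input without changing them.

```python
# Toy illustration of the two phases an AI accelerator handles.
# Training: adjust model weights from example data.
# Inferencing: apply the fixed, trained weights to new input.

def train(weight, examples, lr=0.1, epochs=100):
    """Fit y = weight * x by gradient descent (the training phase)."""
    for _ in range(epochs):
        for x, y in examples:
            pred = weight * x
            grad = 2 * (pred - y) * x   # derivative of squared error
            weight -= lr * grad         # weights change during training
    return weight

def infer(weight, x):
    """Run the trained model on new input (the inference phase)."""
    return weight * x                   # weights stay fixed

w = train(0.0, [(1.0, 2.0), (2.0, 4.0)])  # learn the rule y = 2x
print(round(infer(w, 3.0), 2))            # prints 6.0
```

Training is the compute-heavy, one-time (or occasional) job; inference is the lighter job that runs every time a user sends a request — which is why inference volume scales with adoption.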
Still, the AI explosion is only beginning. And the vast majority of companies that will benefit from AI have yet to get into the game. So even if Nvidia’s market share takes a hit, its revenue will continue to increase as the AI space booms.