Wall Street seems confident that Nvidia Corp. will post a “beat-and-raise” earnings report this week. The key question now is how much upside the chip giant can deliver.
While other companies are upbeat about their ability to capture artificial-intelligence demand down the road, Nvidia has a different problem: Demand for its AI hardware is seemingly already so high that the company has run up against limited supply.
Nvidia made heads turn on Wall Street back in May when it forecast revenue of $10.78 billion to $11.22 billion for the fiscal second quarter. That top-line performance would vastly exceed the company’s previous quarterly revenue record of $8.29 billion, and the forecast came in well ahead of the $7.17 billion average estimate that analysts had at the time.
Analysts surveyed by FactSet expect fiscal second-quarter revenue of $11.19 billion and adjusted earnings of $2.08 a share.
The outlook “was the largest single increase in one quarter in semiconductor history,” Morgan Stanley’s Joseph Moore wrote recently. Yet he also noted that “less than half of the current demand is being met,” suggesting the prospect of a further data-center ramp once supply improves.
He gave Nvidia’s stock an overweight rating with a $500 target price.
“It is widely assumed at this point that NVDA is posting a beat and raise,” Evercore ISI’s C.J. Muse wrote ahead of Nvidia’s fiscal second-quarter earnings, which are due out after Wednesday’s closing bell. The company is expected to have seized on demand for generative AI, and now “the primary focus” is “centered on magnitude,” he said.
“The ongoing AI infrastructure build and continued Hopper ramp will be the primary drivers, with demand across both training and inference applications resulting in Data Center GPU supply falling well short of demand ([chip on wafer on substrate] + [high bandwidth memory]) and strong visibility of at least 6-9 months,” Muse noted.
Another topic will be the company’s expectations for China given the prospect of additional export restrictions.
“On China, Nvidia previously commented that they do not see any material impact from potential new US restrictions near-term; however, over the long term, if sales of data-center GPUs to China are restricted, it would impact Nvidia’s sales to China, which have historically been in the 20%-25% range of data-center sales,” Citi Research analyst Atif Malik, who has a buy rating and a $520 target on the stock, said recently.
While Nvidia’s report is expected to bring record revenue, Jefferies analyst Mark Lipacis sees another way the company’s latest quarter could mark a milestone. He estimates that Nvidia’s data-center GPU revenue will exceed Intel’s and AMD’s data-center CPU revenues for the first time in history.
“This would officially mark the 4th Tectonic Shift to a Parallel Processing Computing Era that we originally argued for in 2017,” Lipacis wrote.
Read: Nvidia gets more good news from Big Tech, even as AI spending ‘may not lift all boats’
Lipacis expects Nvidia to capture 80% of the data-center market over time with the rise of AI, based on the assumption that “every computing era is 10x the previous era in size” and “one ecosystem typically captures 80% of the value generated in each computing era.” Additionally, he noted that the ecosystem capturing 80% of value “is typically delivered by a single, vertically integrated company that delivers the chip, the hardware and software.”
He said this pattern rang true with International Business Machines Corp. in the mainframe era, DEC in the minicomputer era, Nokia Corp. at the start of the cellphone era and Apple Inc. in the smartphone era. The lone exception, Lipacis noted, was the PC era, which was horizontal rather than vertically integrated.
“Since Nvidia is also a vertically integrated ecosystem company that has consistently taken 5 points of share a year for the past 6 years vs Intel and AMD in data center, we forecast that it will continue to take 5 points of share until it hits 80% of the market over the next 5 years,” Lipacis said.
Read from March: Nvidia CEO expects AI revenue to grow from ‘tiny, tiny, tiny’ to ‘quite large’ in the next 12 months
Nvidia recently unveiled its next-generation AI chipset, the DGX GH200 Grace Hopper Superchip, designed for use with large-memory generative-AI models like OpenAI’s ChatGPT and meant “to scale out the world’s data centers.” It is expected to become available in the second quarter of 2024.
The chipset appears to be a response to Advanced Micro Devices Inc., Nvidia’s closest rival in the AI-chip space, which introduced its MI300X data-center GPU in June. AMD Chief Executive Lisa Su recently predicted that there would be “multiple winners” in the AI race.
See also: Nvidia ‘should have at least 90%’ of AI chip market with AMD on its heels
This story originally appeared on MarketWatch.