You didn’t think Nvidia Corp. would let the lead AI data-center product of Advanced Micro Devices Inc., its closest rival in the artificial-intelligence data-center chip market, go unanswered, did you?
At the 2023 Special Interest Group on Computer Graphics and Interactive Techniques, or SIGGRAPH, conference, Nvidia NVDA delivered its answer.
In his keynote address Tuesday, Chief Executive Jensen Huang introduced the next-generation GH200 Grace Hopper Superchip, built for large-memory generative-AI models like OpenAI’s ChatGPT, which is backed by Microsoft Corp. MSFT.
Nvidia shares, which had been down about 1% ahead of the announcement, dropped as much as 3% afterward to an intraday low of $440.56. AMD shares traded down 4%, near a session low of $111.41. Meanwhile, the PHLX Semiconductor Index SOX fell 2.5%, the S&P 500 SPX declined 0.9% and the tech-heavy Nasdaq Composite COMP dropped 1.2%.
In a press conference ahead of the announcement, Nvidia’s head of hyperscale and high-performance computing, Ian Buck, told reporters the GH200 packs more memory and more bandwidth than the company’s H100-based data-center system. The GH200 pairs Nvidia’s Hopper GPU with its Arm Ltd. architecture-based Grace CPU, and carries 141 GB of HBM3e memory with 5 TB per second of bandwidth.
The GH200 can also be doubled up in a dual configuration connected over NVLink, increasing memory capacity 3.5 times and tripling bandwidth. Both versions will be available in the second quarter of 2024; Nvidia did not comment on pricing.
Buck said the vast majority of AI training and inference is done on Nvidia’s current HGX systems. The GH200 gives inference customers a new option for AI workloads at twice the performance per watt, as cloud-service providers look to build out capacity without significantly increasing their energy costs.
In addition to Microsoft’s Azure, other companies with hyperscale capacity, such as Amazon.com Inc.’s AMZN Amazon Web Services, are among the cloud-service providers looking to expand that AI capacity.
Read: Chip-equipment suppliers rally after Lam says AI servers will drive growth
On Friday, AMD broke with the broad tech selloff to finish higher on the week as analyst support gathered for the chip maker’s AI position following its earnings report, in which AMD Chair and CEO Lisa Su forecast “multiple winners” in the AI race. Nvidia reports its earnings after the market close on Aug. 23.
AMD introduced its MI300X data-center GPU at its AI product launch in June. The chip maker is regarded as a distant second to Nvidia in AI data-center hardware market share.
Read: Nvidia gets more good news from Big Tech, even as AI spending ‘may not lift all boats’
Year to date, AMD shares have gained more than 73%, while Nvidia shares have soared more than 200% and the SOX index has rallied 44%. The S&P 500 has advanced 17% and the Nasdaq has grown 32% in the same time frame.
Read: Nvidia ‘should have at least 90%’ of AI chip market with AMD on its heels