Lisa Su, CEO of Advanced Micro Devices (AMD), delivered a keynote at Computex 2024 in Taiwan on June 3, 2024. During the event, AMD announced a new artificial intelligence chip, the Instinct MI325X, aimed directly at Nvidia's data center graphics processing units (GPUs). The Instinct MI325X is slated to begin production before the end of 2024.
If developers and cloud giants come to view AMD's AI chips as a viable alternative to Nvidia's, it could put pricing pressure on Nvidia, which has enjoyed heavy demand and roughly 75% gross margins over the past year. Demand for advanced generative AI models such as OpenAI's ChatGPT has created an opening for more companies to supply AI chips.
Nvidia dominates the data center GPU market, with AMD in second place. AMD is now looking to capture a significant share of a market it estimates will be worth $500 billion by 2028. Su said that demand for AI continues to grow and that the rate of investment is increasing across the board.
With the launch of the MI325X, AMD plans to release new chips annually to compete more effectively with Nvidia and capitalize on the growing AI chip market. The MI325X succeeds the MI300X, which was released late last year. AMD's future chips, the MI350 in 2025 and the MI400 in 2026, are part of this accelerated product schedule.
The rollout of the MI325X will bring it into competition with Nvidia’s upcoming Blackwell chips, set to ship in significant quantities early next year. A successful launch of AMD’s newest data center GPU could attract investors interested in companies benefiting from the AI boom.
AMD's stock fell 4% following the announcement, while Nvidia's shares rose about 1%. AMD's main obstacle to gaining market share is Nvidia's CUDA programming language, which has become the industry standard among AI developers. To address this, AMD has been improving its ROCm software stack so that developers can move their AI models to AMD's accelerators more easily.
AMD has positioned its AI accelerators as most competitive for inference, that is, using a trained model to generate content or predictions, rather than for training on large amounts of data. This is due in part to the advanced memory on AMD's chip, which allows it to outperform Nvidia chips in certain applications, such as serving Meta's Llama AI model faster.
While AI accelerators and GPUs are in the spotlight, AMD's primary business remains the central processing units (CPUs) that power most servers globally. For the June quarter, AMD reported over $2.8 billion in data center sales, with AI chips accounting for about $1 billion of that. AMD holds around 34% of total spending on data center CPUs, though Intel still leads the market with its Xeon line of chips.
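As a quick back-of-the-envelope check on those figures (using the rounded numbers reported above), AI chips made up a bit over a third of AMD's data center revenue for the quarter:

```python
# Rounded figures from AMD's June-quarter report, as cited above.
data_center_revenue = 2.8e9   # total data center sales, USD
ai_chip_revenue = 1.0e9       # AI chip sales, USD

share = ai_chip_revenue / data_center_revenue
print(f"AI chips: {share:.0%} of data center revenue")  # → AI chips: 36% of data center revenue
```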
To compete with Intel, AMD announced a new line of CPUs, 5th Gen EPYC, ranging from low-cost 8-core chips to high-performance 192-core processors for supercomputers. These CPUs are also well-suited to AI systems, since nearly every GPU requires a CPU in the same system to boot the machine.
Su emphasized that today's AI workloads still rely heavily on CPU capability, especially in data analytics applications. AMD aims to strengthen its position in the CPU market with its latest offerings while continuing to innovate in the AI chip space to challenge Nvidia's dominance.