Chinese "AI Crown Competition": AMD launches super chip MI300 to challenge Nvidia
Source: The Paper
Reporter Shao Wen
AMD has launched the long-awaited Instinct MI300, a data-center APU (accelerated processing unit) designed to help data centers handle AI-related data traffic and to challenge Nvidia's monopoly in this fast-growing market.
However, after the market closed on June 13, local time, AMD's stock fell 3.61% to $124.53 per share. The same day, Nvidia's stock rose 3.9%, and its market value again closed above $1 trillion.
As Nvidia reaps the rewards of the AI boom, with its market value topping $1 trillion, longtime rival AMD is stepping up to meet the growing demand for AI computing. Still, investors appeared lukewarm about its latest product announcement.
In the early hours of June 14, Beijing time, AMD shared more details and updates on the long-awaited Instinct MI300, a data-center APU first previewed last June that aims to help data centers handle AI-related data traffic and to challenge Nvidia's monopoly in this fast-growing market.
At a presentation in San Francisco, AMD CEO Lisa Su said the Instinct MI300 series will include the MI300X, a GPU (graphics processing unit) that can accelerate the generative AI workloads behind chatbots such as ChatGPT.
"We are still at a very early stage in the life cycle of artificial intelligence." Su Zifeng said that by 2027, the total potential market value of artificial intelligence accelerators in data centers will increase fivefold to more than 150 billion US dollars.
Still, the presentation didn't seem to dazzle investors, whose expectations for AI-driven growth were already high; AMD's stock has risen 91.8% this year. Ahead of the event, analysts at US institutions had voiced optimism about AMD, with Piper Sandler analyst Harsh Kumar raising his price target on the stock from $110 to $150. But after the market closed on June 13, local time, AMD's shares fell 3.61% to $124.53, while Nvidia's rose 3.9% and its market value once again closed above $1 trillion.
AI "super chip" and "AMD version" CUDA
Executives from Amazon Web Services (AWS) and Meta joined Su on stage to talk about using the new AMD processors in their data centers. AMD also announced the general availability of the latest version of its EPYC server processors -- a new variant called Bergamo.
Bergamo is billed as the industry's first x86 cloud-native CPU (central processing unit). It packs up to 128 Zen 4c cores (the Zen architecture, first launched by AMD in 2017, underpins all current AMD processors) and 256MB of L3 cache, and is built on TSMC's 5nm process. Optimized for compute-dense applications, Bergamo is positioned against the emerging Arm-based data-center SoCs (systems-on-chip) from the likes of Ampere, Amazon, Google and Microsoft.
AMD says its new MI300X chip and its CDNA architecture are designed for large language models and other cutting-edge AI models. "I like this chip," Su said. "The core is the GPU, and the GPU is what is powering generative AI."
"Models are getting bigger and bigger, and you actually need multiple GPUs to run the latest large language models." Su Zifeng pointed out that with the increase of AMD chip memory, developers don't need so many GPUs. Su Zifeng demonstrated with the Hugging Face AI model, a large language model that wrote a poem about San Francisco on the spot. A single MI300X can run a model with 80 billion parameters, which is the first time such a large model can be run on a single GPU.
AMD also said it will offer the Infinity architecture, combining eight MI300X accelerators in a single system. Nvidia and Google have developed similar systems that combine eight or more GPUs for AI applications.
Another newly announced product is the MI300A, an APU that combines GPU and CPU in one package; it is already sampling. The MI300X and the Instinct Platform will begin sampling in the third quarter of this year and officially launch in the fourth quarter.
The specifications of the AMD Instinct MI300 were actually revealed as early as the beginning of 2023. The MI300 is the first product on the market to integrate CPU, GPU and memory in a single package. It packs 146 billion transistors, more than the 80 billion of Nvidia's H100, and is the largest chip AMD has produced to date.
According to estimates by Huatai Securities, the MI300's performance approaches that of Nvidia's Grace Hopper chip. Although AMD has not published a direct comparison between the MI300 and Grace Hopper, relative to the previous-generation MI250X the MI300's AI compute (TFLOPS) is expected to increase eightfold, and its energy efficiency (TFLOPS per watt) to improve fivefold. Used to train very large AI models such as those behind ChatGPT and DALL-E, the processor could cut training time from months to weeks, saving millions of dollars in electricity.
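As a rough back-of-envelope illustration of what those relative factors mean for a fixed training job, here is a minimal sketch; only the 8x and 5x factors come from the estimate above, while the baseline duration and energy figures are purely hypothetical placeholders.

```python
# Back-of-envelope sketch: only the 8x (AI TFLOPS) and 5x (TFLOPS/watt) factors come
# from the Huatai estimate above; the baseline numbers are hypothetical placeholders.
AI_THROUGHPUT_GAIN = 8.0   # MI300 vs. MI250X, per the estimate above
EFFICIENCY_GAIN = 5.0      # TFLOPS per watt, MI300 vs. MI250X

baseline_days = 90.0          # hypothetical: a training run that previously took ~3 months
baseline_energy_mwh = 5000.0  # hypothetical total electricity for that run, in MWh

# Holding total compute constant (and ignoring I/O and scaling losses),
# time scales inversely with throughput and energy inversely with efficiency.
new_days = baseline_days / AI_THROUGHPUT_GAIN
new_energy_mwh = baseline_energy_mwh / EFFICIENCY_GAIN

print(f"time:   {baseline_days:.0f} days -> {new_days:.1f} days")
print(f"energy: {baseline_energy_mwh:.0f} MWh -> {new_energy_mwh:.0f} MWh")
```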
Su told investors on an earnings call last month that the MI300 would start generating sales in the fourth quarter.
In addition, AMD said at the event that it has its own AI chip software, called ROCm, analogous to Nvidia's CUDA. One reason AI developers have long preferred Nvidia chips is CUDA, which dramatically lowers the barrier to using GPUs: where GPU programming once required the highly specialized OpenGL graphics language, CUDA lets programmers call the GPU from common languages such as C++ or Java. That is what made GPUs practical for deep learning.
"AI God" Wu Enda commented that before the emergence of CUDA, there may not be more than 100 people in the world who can use GPU programming. After CUDA, using GPU has become a very easy thing.
"While it's been a long journey, we've made very good progress in building a robust software stack that works with models from an open ecosystem of models, libraries, frameworks and tools." AMD President Victor Peng )express.
**The "second pick" to benefit from the AI wave? **
Nvidia dominates the AI computing market with an 80% to 95% market share, according to analysts. Last month, Nvidia released an extremely eye-catching financial report, and its market value once touched $1 trillion. The company previously said it expected a sharp jump in revenue after securing new chip supplies to meet surging demand.
Meanwhile, as early as the release of this year's first-quarter results, Su said AI had been made the company's top strategic priority. With investors betting that AMD will be the "second pick" to benefit from the AI wave, the company's market value has climbed past $200 billion this year, well above Intel's $137.9 billion, though still far short of the trillion-dollar valuation of Nvidia, the top chip stock.
Going forward, the Instinct MI300 will compete head-on with Nvidia's Hopper-series AI chips. Su has said candidly that the Instinct MI300 can help the company capture market share, as it meets the full requirements of the AI and HPC/supercomputing ecosystems.
Morgan Stanley analyst Joseph Moore offered optimistic guidance, saying AMD has seen "stable orders" from customers and that the company's AI-related revenue in 2024 is expected to reach $400 million, and could reach as much as $1.2 billion; that forecast is more than 12 times higher than his previous estimate.
In a recent interview with foreign media, Su also made no secret of her ambition to challenge Nvidia's monopoly.
AMD's competitive strategy is to build on its CPU strengths and focus on the APU (accelerated processing unit), differentiating itself from Nvidia's core A100/H100 products. Su is also taking on Nvidia through acquisitions and other moves, such as the $48.8 billion purchase in 2022 of Xilinx, which makes programmable processors that help accelerate tasks such as video compression. As part of that deal, former Xilinx CEO Victor Peng became AMD's president and head of its AI strategy.
The demand for chips for artificial intelligence has pushed Nvidia's shares to near all-time highs, with a forward price-to-earnings ratio of about 64 times, almost double that of AMD. "That's why investors are looking at AMD. Because they want an 'Nvidia replacement,'" said Bernstein analyst Stacy Rasgon.
"Super Scholar" and "Problem Boy"
Interestingly, according to media reports from Taiwan, Nvidia CEO Jensen Huang is a distant relative of Su's (Su's grandfather and Huang's mother are siblings). Both were born in Taiwan. Su moved to the United States at age 3 with her father, who was studying at Columbia University; Huang left Taiwan for Thailand at age 5 and moved to the United States at 9.
Su's path reads more like that of a "super scholar." After attending the Bronx High School of Science in New York (which has produced 6 Nobel Prize winners and 6 Pulitzer Prize winners), she went on to study engineering at MIT and, at 24, earned a doctorate in electrical engineering (EE), reputed at the time to be the toughest major at MIT. Huang went through a "problem boy" phase before studying electrical engineering as an undergraduate at Oregon State University and then earning a master's degree in electrical engineering at Stanford University.
Huang's first job was as a chip designer at AMD, and he founded Nvidia at 30. Su worked her way through marquee companies such as Texas Instruments and IBM before joining AMD, and in 2014, with the company in deep crisis, she became the first female CEO in AMD's history. Under her leadership, AMD staged a remarkable turnaround, going from a semiconductor maker on the verge of bankruptcy to a stock that has risen nearly 30-fold in under a decade. Huang, for his part, made Nvidia the first chip company in the world to cross the $1 trillion market-value mark. Both have had a soft spot for "breaking the rules" over their careers, and within a month of each other the two even received awards named after Intel co-founder Robert N. Noyce.