The idea of customizing high bandwidth memory (HBM) has only recently emerged, but expect to see it in the mainstream in just a few years.
“We strongly believe that custom HBM will be the majority portion of the market towards the ’27-28 time frame,” said In Dong Kim, vice president of product planning at Samsung Semiconductor, in a video interview with the Six Five at Marvell Analyst Day earlier this month, where Marvell, Micron Technology, Samsung and SK hynix announced a collaboration to accelerate the development of custom HBM solutions.
Sunny Kang, vice president of DRAM technology at SK hynix, had a similar outlook. “Usually in the DRAM industry, when we launch a new product, it takes just one or two years to be mainstream,” he said. “That means around the ’29 timeframe, it is going to be a mainstream product in the HBM market. I’m pretty sure about that.”
It’s an abrupt shift for an industry that adheres closely to standards. But, like nearly every other technology, memory is being disrupted by AI. HBM fits inside of AI accelerators to store data close to the processors. High-end accelerators today have up to 192GB within the chip package and the total is soon expected to rise to 288GB.
Likewise, memory bandwidth demands have been rising rapidly. Early HBM memory designs relied on a 1024-bit wide connection (128 x 8) per HBM device. HBM4, the most current finished standard, has doubled this to a 2048-bit wide connection (32 channels at 64 bits each) per device. With HBM5, the interface is expected to double again to 4096 bits per device, notes Will Chu, senior vice president of custom compute and storage at Marvell, and high-end AI accelerators can contain six stacks of HBM.
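To see why the widening interface matters, here is a rough back-of-the-envelope sketch of peak per-stack bandwidth. The bus widths come from the figures above (1024 bits for early HBM, 2048 for HBM4, an expected 4096 for HBM5); the per-pin data rates are illustrative assumptions for the sake of the arithmetic, not figures from this post.

```python
def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s: width x per-pin rate, bits to bytes."""
    return bus_width_bits * pin_rate_gbps / 8

# Per-pin rates below are assumed for illustration only.
generations = {
    "early HBM":       (1024, 2.0),
    "HBM4":            (2048, 8.0),
    "HBM5 (expected)": (4096, 8.0),
}

for name, (width, rate) in generations.items():
    per_stack = stack_bandwidth_gbps(width, rate)
    # The post notes that high-end accelerators can carry six stacks.
    print(f"{name}: {per_stack:.0f} GB/s per stack, "
          f"{6 * per_stack:.0f} GB/s across six stacks")
```

Even holding the per-pin rate constant, doubling the bus width doubles peak bandwidth, which is why each HBM generation widens the interface.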
Custom designs alleviate the physical and thermal constraints faced by chip designers by dramatically reducing the size and power consumption of the interface and HBM base die. Early results show that up to 25% of the real estate inside the chip package can be recovered via customization.¹
Another factor: the standards process isn’t moving fast enough for AI.
“We don’t have the HBM5 definition at the moment,” said Kang. “What else could be the solution at the moment? I would say it’s custom HBM.”
Please enjoy the video.
1. Marvell estimates.
This blog contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this blog. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the “Risk Factors” section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.
Tags: custom computing, ASIC
Copyright © 2024 Marvell, All rights reserved.