HBM: Explained

AMD recently launched its first Fiji-based graphics cards. One of the chip's headline features is HBM, or High Bandwidth Memory, a new memory technology that promises to solve the ever-increasing power consumption and form factor problems of computer memory.

It all started in 2008, when AMD began research and development of HBM. Among other things, AMD developed a procedure to solve die-stacking problems. The team was led by Senior AMD Fellow Bryan Black. They didn't do all the work alone: AMD teamed up with partners Hynix, UMC, Amkor, and ASE from the memory, interposer, and packaging industries respectively. HBM was adopted as an industry standard by JEDEC in October 2013 under the name JESD235, following a proposal by AMD and Hynix in 2010.

HBM achieves higher bandwidth while using less power and a much smaller form factor than DDR4 and GDDR5. This is accomplished by stacking up to eight DRAM dies, plus an optional base logic die, interconnected by through-silicon vias (TSVs) and microbumps.

HBM's interface is very wide compared to other DRAM types such as DDR4 and GDDR5. A stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of 8 channels and a 1024-bit-wide interface. Since the Fury X carries four 4-Hi stacks, its total bus width is 4096 bits, and that is only half of what the technology allows with 8-Hi stacks. In comparison, GDDR5 buses on even the highest-end cards top out at 512 bits, eight times narrower.
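The width arithmetic above can be sketched in a few lines. The channel counts come straight from the article; the 500 MHz DDR clock (1 Gbit/s per pin) is an assumption based on first-generation HBM, used here only to show how bus width translates into peak bandwidth:

```python
# Bus-width arithmetic for the Fury X's HBM configuration.
CHANNELS_PER_DIE = 2    # each DRAM die exposes two 128-bit channels
CHANNEL_WIDTH = 128     # bits per channel
DIES_PER_STACK = 4      # a 4-Hi stack, as used on the Fury X
STACKS = 4              # the Fury X carries four stacks

stack_width = CHANNELS_PER_DIE * CHANNEL_WIDTH * DIES_PER_STACK  # 1024 bits
total_width = stack_width * STACKS                               # 4096 bits

# Assumption: first-generation HBM runs its pins at 1 Gbit/s
# (500 MHz double data rate). Peak bandwidth in GB/s is then
# width (bits) * rate (Gbit/s per pin) / 8 (bits per byte).
PIN_RATE_GBPS = 1
peak_gb_per_s = total_width * PIN_RATE_GBPS / 8

print(stack_width, total_width, peak_gb_per_s)  # 1024 4096 512.0
```

With those numbers the arithmetic lands on the Fury X's advertised 512 GB/s of peak memory bandwidth.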

To round things off, HBM is a faster, smaller type of memory that achieves all this at lower power consumption. And given that Fiji ships with a cut-down version of the 8-Hi maximum, I think you should be looking forward to what the future holds for memory and for all the products that use DRAM.
