Samsung has just held its Memory Tech Day, an event focused specifically on the memory chip segment. There it presented several new developments in this field, and although the clear star was its HBM3E (Shinebolt) memory, there was also an opportunity to preview details of the upcoming GDDR7 memory.
Shinebolt. That is the code name of the new evolution of HBM (High Bandwidth Memory) technology, which since its launch has offered fantastic features for graphics cards thanks to its high bandwidth, but with one problem: it is more expensive than traditional GDDR memory. Its evolution has nevertheless been notable, and now those benefits pay off for a very particular industry: artificial intelligence, a voracious consumer of bandwidth.
Who uses this memory. Last August we learned that NVIDIA had presented its new AI superchips, the GH200. These very complex components combine a powerful CPU and GPU, but the novelty was that they also integrated HBM3E memory, making NVIDIA the first manufacturer in the world to use this type of memory chip.
And why. The key is not so much the capacity of these chips (which also grows: the maximum per stack rises from 24 GB in HBM3 to 36 GB in HBM3E) but their greater bandwidth: in HBM3 (Icebolt) the maximum bandwidth per pin is 6.4 Gb/s, but in HBM3E (Shinebolt) it reaches a theoretical 9.8 Gb/s, roughly 50% more. That is exceptional, allowing each stack to move up to 1.225 TB/s compared with the 819.2 GB/s of its predecessor, and far more than the 256 GB/s of the original HBM2 memory.
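As a sanity check, the per-stack figures follow directly from multiplying the per-pin rate by the interface width. Here is a minimal sketch in Python, assuming the standard 1,024-bit interface per HBM stack (the small gap with Samsung's quoted 1.225 TB/s comes from rounding in the per-pin figure):

```python
# Per-stack bandwidth = per-pin rate (Gb/s) * number of pins / 8 bits per byte.
# A 1,024-bit interface per HBM stack is assumed here.
PINS_PER_STACK = 1024

def stack_bandwidth_gbps(per_pin_gbits: float, pins: int = PINS_PER_STACK) -> float:
    """Return per-stack bandwidth in GB/s given a per-pin rate in Gb/s."""
    return per_pin_gbits * pins / 8

print(stack_bandwidth_gbps(6.4))  # HBM3 (Icebolt): 819.2 GB/s
print(stack_bandwidth_gbps(9.8))  # HBM3E (Shinebolt): 1254.4 GB/s, about 1.25 TB/s
```

The same formula reproduces HBM3's 819.2 GB/s exactly, which is why the per-pin rate is the number that matters when comparing generations.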
Similar efficiency. Although Samsung boasts that Shinebolt is 10% more efficient than Icebolt (it consumes 10% less energy for each bit transferred), these memories also run 25% faster, which ends up eroding that advantage. So although it is more efficient per bit, when running "accelerated" at its native speed it ends up consuming basically the same as (or even more than) its predecessor.
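The trade-off described above is simple arithmetic: power draw is energy per bit times bits per second. A quick sketch using the 10% and 25% figures quoted above:

```python
# Power ~= energy-per-bit * bits-per-second.
# Shinebolt: 10% less energy per bit, but 25% more bits per second (figures
# quoted in the article, relative to Icebolt).
energy_per_bit = 0.90   # relative to Icebolt
throughput = 1.25       # relative to Icebolt

relative_power = energy_per_bit * throughput
print(f"{relative_power:.3f}")  # 1.125 -> about 12.5% more power at full speed
```

In other words, the per-bit saving is real, but at full native speed the chip can draw around 12% more total power than its predecessor, which is what the paragraph above means by "basically the same (or even more)".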
Big news for AI. These memories are not aimed at the consumer market or the graphics cards we use to play, but are a more ambitious and expensive option aimed, as we said, at the field of artificial intelligence. In this segment, both the bandwidth and the capacity of memory chips are factors that limit progress, and these improvements are great news for those working, for example, on large language models.
GDDR7 in sight. In addition to the HBM3E memory, which is already in production and will hit the market in 2024, Samsung gave more information about GDDR7, the new generation of graphics memory. Among the notable novelties is the use of PAM3 encoding (which allows more information to be sent per cycle) in 16 Gbit modules that will offer 33% more bandwidth than GDDR6 chips. Not only that: these chips will also be 20% more efficient than their predecessors. We are therefore looking at the next stars of our gaming graphics cards, though they will also end up being used in AI solutions and even in the automotive field.
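To see where PAM3's gain comes from: binary (NRZ) signaling carries 1 bit per symbol, while PAM3 uses three voltage levels, which in theory carries log2(3) bits per symbol and is typically mapped as 3 bits over 2 symbols. A minimal sketch of that per-cycle gain (the 3-bits-per-2-symbols mapping is the commonly described scheme, an assumption here rather than a Samsung-confirmed detail):

```python
import math

# NRZ (binary) signaling: 1 bit per symbol.
# PAM3: three voltage levels, so log2(3) bits of information per symbol in
# theory, usually mapped in practice as 3 bits per 2 symbols.
nrz_bits_per_symbol = 1.0
pam3_theoretical = math.log2(3)  # ~1.585 bits per symbol
pam3_practical = 3 / 2           # 1.5 bits per symbol (3 bits over 2 cycles)

print(f"theoretical gain: {pam3_theoretical / nrz_bits_per_symbol:.2f}x")  # 1.58x
print(f"practical gain:   {pam3_practical / nrz_bits_per_symbol:.2f}x")    # 1.50x
```

The 33% figure quoted for the modules is an end-to-end number: the final bandwidth depends on both the per-cycle signaling gain and the clock rate the chips actually run at.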
The future in 10 nm. Although it is normal to talk about 5 nm processors, and we will soon see 3 nm ones, memory chips are a step behind in photolithographic processes. Until now the protagonists were 14 nm and 12 nm nodes, but Samsung is already working on its next generation (11 nm) and even on future 10 nm technology which, according to the manufacturer, will allow 100 Gbit modules when 16 Gbit modules are currently the norm. More capacity and more efficiency suggest the future of this type of component is assured.