Discussion about this post

Schauer, Dr. Floyd

I was thinking that 12-hi HBM will be outdated by the time this concept can be realized. I'd love to see the numbers for at least 16-hi or even 20-hi HBM stacks...

Neural Foundry

Thermal bottlenecks at the HBM-logic interface are becoming the defining constraint for next-gen AI accelerators, and imec's approach is practical given packaging realities. Halving frequency sounds brutal, but it's a tradeoff between peak throughput and sustained performance; if the chip throttles anyway due to thermal limits, then slower-but-steady can actually win. I've seen similar dynamics in hyperscale deployments where underclocking servers improved TCO, because you could run more stable workloads per watt without hitting emergency shutdowns.
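The slower-but-steady point can be sketched with a back-of-envelope calculation. All numbers below are illustrative assumptions, not imec's figures: we assume a full-clock part that thermal limits force to throttle heavily, versus a half-clock part that runs continuously.

```python
# Hypothetical comparison: peak clock with thermal throttling vs. half clock sustained.
# Frequencies and duty cycles are made-up illustrative values.

def sustained_throughput(freq_ghz: float, duty_cycle: float) -> float:
    """Effective throughput when the chip runs at freq_ghz but is
    thermally active only duty_cycle fraction of the time."""
    return freq_ghz * duty_cycle

# Full clock, but thermal limits force ~55% of cycles to be lost to throttling.
peak = sustained_throughput(2.0, 0.45)   # 0.9 effective GHz

# Half clock, cool enough to run at 100% duty cycle.
steady = sustained_throughput(1.0, 1.0)  # 1.0 effective GHz

print(steady > peak)  # under these assumed numbers, half clock wins
```

The crossover obviously depends on the actual duty cycle the thermal envelope allows; the point is only that effective throughput is frequency times residency, not frequency alone.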


