
I'm actually more concerned about AMD's strategy for catching up with Nvidia.

A report by Citigroup research analyst Christopher Danely indicates that NVIDIA will capture "at least 90%" of the AI chip market, with AMD in second place.

The gap between their AI chips is not just in hardware performance; the core difference lies in the software ecosystem. NVIDIA's CUDA, developed over more than 10 years, dominates with around 4 million developers and is typically the first platform that new AI applications support. In contrast, AMD's ROCm is still catching up: it only recently added Windows support and is extending support to Radeon gaming cards, moving from the high-end market toward mainstream graphics cards.

author

It's worth asking how many of those NVIDIA developers are actually writing CUDA, versus some higher-level language or ML framework such as PyTorch or TensorFlow. Part of the appeal of NVIDIA is that there are already a lot of optimized libraries for it - written in MLIR or PTX, not CUDA. CUDA is a great way to start learning, but for peak performance it's not usually the layer of choice. Often people just want to buy and use 'what works', what's already out there, or a service built on whatever hardware it was built on. For most of those cases today, that's NVIDIA.
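To illustrate the point (a minimal sketch, not from any particular codebase): most of that developer base writes framework-level code like the snippet below, which never touches CUDA directly. On an NVIDIA build of PyTorch it dispatches to CUDA/cuBLAS kernels; on a ROCm build, the same torch.cuda calls map to AMD GPUs via HIP.

```python
import torch

# The framework hides the GPU backend: "cuda" here means "whatever GPU
# backend this PyTorch build was compiled for" (CUDA on NVIDIA, HIP/ROCm
# on AMD). The application developer writes no CUDA at all.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy workload: one large matrix multiply, the kind of op served by
# vendor-tuned libraries (cuBLAS on NVIDIA, rocBLAS on AMD) rather than
# hand-written kernels from the application developer.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(device, c.shape)
```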

But consider this. NVIDIA has one main product family: graphics. OK, there's some networking in there as well, but it's mostly graphics. With 26,000 employees, that means more than half are dedicated to high-performance coding on NVIDIA hardware. Now consider AMD - a similar number of employees, but a much larger product portfolio (CPUs, GPUs, FPGAs, networking) across many different markets. NVIDIA simply has more software engineers and more FAEs to go and optimize for NVIDIA. The downside is the 52+ week lead time if you want to buy a system, compared to ~26 weeks for an AMD system. Do you have the staff to monetize an AMD system and get to market before the NVIDIA system even arrives (and you start using it)? Azure is already saying it's significantly cheaper to run LLM inference on AMD hardware, and I actually spoke to a company today building fine-tuning as a service on AMD that said exactly the same thing. They had to rewrite the GPU driver to get there, but it works.


Thank you for analyzing this from the market and ecosystem perspectives. Delivery speed may be the key competitive factor that determines market share.


Okay, but spending almost all the questions on AI is a waste of time.

You should have asked about future 3D CPUs and/or the features of new Radeon graphics cards.

author

Any question about future products is always met with 'we don't speak about future products'. Having sat through more than a decade of these Q&As, I can tell you they don't answer those questions.


Hi Ian, do you have any info, or could you inquire, about SODIMM support on Strix Point? I saw in a video from Hardware Canucks that they were told it will not be supported - any chance you can confirm that?

author

I'm pretty sure it's supported, but AMD hasn't gone into details yet. I can ask / look for DDR5 versions at Computex this week.
