I dunno what you’re talking about, but all AMD has to do is this:
Pick up the phone.
Tell their OEMs VRAM restrictions are lifted.
Put it down.
…That’s it.
The OEMs would make separate SKUs with double the VRAM. AMD doesn’t have to spend a cent.
That… really isn’t how things work at all.
But also? That extra VRAM costs money (especially if you want it to be high performance), and you more or less need to produce it in bulk for it to be viable. So if AMD makes a bunch of “AI Accelerators” and nobody buys them because they would rather have Nvidia (which the video talked about)? It’s a massive flop AND it means AMD is no longer “the best bang for your buck” option and is instead competing directly with Nvidia for consumer mindshare.
That said? I could actually see them cannibalizing what little market share Intel got. The Intel GPUs are… moving on. But they have support for codecs that video editors and transcoders REALLY benefit from, and a not-insignificant part of the influencer and editor space actually has those cards in their editing or capture PCs. Tweaking the silicon to better support those use cases and selling higher-memory versions of the Radeons could carve out a “productivity” niche that justifies the added cost, with knock-on sales from people who just want even more Chrome tabs open while they play Fortnite. And… it might lead the more CS side of the ML world to actually realize it isn’t that hard to run PyTorch on an AMD card.
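(For what it’s worth, that last point is less hypothetical than it sounds: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda API, so most existing code runs unchanged. A minimal sketch, assuming a ROCm build of PyTorch is installed; nothing here is AMD-specific beyond the version check.)

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs show up through the familiar
# torch.cuda API (HIP is mapped onto it), so existing CUDA code mostly
# runs unchanged. torch.version.hip is set on ROCm builds, None otherwise.
if getattr(torch.version, "hip", None):
    print("ROCm/HIP build:", torch.version.hip)
elif torch.version.cuda:
    print("CUDA build:", torch.version.cuda)
else:
    print("CPU-only build")

if torch.cuda.is_available():               # True on a working ROCm install too
    dev = torch.device("cuda")              # "cuda" addresses the AMD GPU here
    print("Device:", torch.cuda.get_device_name(0))
    a = torch.randn(4096, 4096, device=dev)
    b = torch.randn(4096, 4096, device=dev)
    print("Matmul OK:", (a @ b).shape)      # plain matmul, nothing AMD-specific
else:
    print("No GPU visible to PyTorch.")
```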
Yes, it is:
https://www.amd.com/en/products/graphics/workstations/radeon-pro/w7900.html
https://dramexchange.com/
16Gb GDDR6 ICs are averaging about $10 each, and the clamshell PCB is already made. So the cost of doubling up the VRAM on a clamshell-configuration 7900 XTX (like the W7900) is on the order of $100 (roughly a dozen extra ICs; quick back-of-envelope below), and this draws on a separate memory supply from the HBM the datacenter accelerators use. But AMD literally tells its OEMs they are not allowed to sell such clamshell configs of its cards, as they have in the past.
The ostensible business reason is to justify the actual ‘workstation’ cards, which are laughingstocks in that space at those prices.
Hence, AMD is left scratching its head wondering why no one is developing for the MI325X, when devs have literally zero incentive to buy AMD cards to test on.
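(The back-of-envelope, assuming a 384-bit bus populated with 16Gb GDDR6 packages in clamshell, and treating the ~$10/IC figure from the DRAMeXchange link as a spot price rather than a contract price:)

```python
# Rough BOM delta for a clamshell 7900 XTX / W7900-style 48GB config.
# Assumptions: 384-bit bus -> 12 x 16Gb (2GB) GDDR6 packages stock,
# 24 packages in clamshell; ~$10 per IC is a spot price, not contract.

GDDR6_IC_GBIT = 16            # density per package, gigabits
BUS_WIDTH_BITS = 384          # 7900 XTX memory bus
BITS_PER_IC = 32              # GDDR6 package interface width
SPOT_PRICE_PER_IC = 10.00     # USD, rough spot price

ics_stock = BUS_WIDTH_BITS // BITS_PER_IC      # 12 packages
ics_clamshell = ics_stock * 2                  # 24 packages, two per channel
extra_ics = ics_clamshell - ics_stock

gb_per_ic = GDDR6_IC_GBIT / 8                  # 2 GB each
print(f"Stock VRAM:     {ics_stock * gb_per_ic:.0f} GB ({ics_stock} ICs)")
print(f"Clamshell VRAM: {ics_clamshell * gb_per_ic:.0f} GB ({ics_clamshell} ICs)")
print(f"Extra memory BOM: ~${extra_ics * SPOT_PRICE_PER_IC:.0f}")
# -> 24 GB stock, 48 GB clamshell, extra BOM around $120 at spot prices.
```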
Well, seeing how backordered the Strix Halo Framework Desktop is (even with its relatively mediocre performance), I don’t think the “nobody buys them” scenario is a big concern.
There is a huge market dying to get out from under Nvidia here. AMD is barely starting to address it with a 32GB model of the 9000 series, but it’s too little, too late. That’s not really worth the trouble over a 4090 or 5090, but that calculus changes if the config could be 48GB on a single card like the 7900.
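(Rough illustration of why 48GB changes the calculus, as a weights-only estimate that ignores KV cache and runtime overhead, so real requirements are higher; the parameter counts and quantization widths are just illustrative round figures.)

```python
# Crude weights-only VRAM estimate: params * bytes-per-weight.
# Ignores KV cache, activations, and runtime overhead, so treat these
# as lower bounds. Model sizes are illustrative round numbers.

def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GiB needed just to hold the weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

for params in (13, 34, 70):
    for bits in (16, 8, 4.5):        # fp16, int8, ~4-bit quant formats
        gb = weights_gb(params, bits)
        fits_32 = "yes" if gb <= 32 else "no"
        fits_48 = "yes" if gb <= 48 else "no"
        print(f"{params:>3}B @ {bits:>4} bits: ~{gb:5.1f} GiB  "
              f"fits 32GB: {fits_32:>3}  fits 48GB: {fits_48:>3}")
```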
Yes… for an individual, those are the prices (if only there were some 3-hour YouTube video about adding more memory to cards…).
The issue is that even a downstream partner isn’t buying $100 worth of VRAM; they need to buy it in bulk. And then they need to retool their factories to support that configuration. And if they can’t sell enough of those units to justify the retooling and the purchases?
I mean… look at EVGA.
And then you have the marketing/brand implications, which I already spoke to.