Micron Says AI-Driven Memory Crunch is ‘Unprecedented’
Submitted 3 weeks ago by themachinestops@lemmy.dbzer0.com to technology@lemmy.world
Comments
helpImTrappedOnline@lemmy.world 2 weeks ago
Easy fix: “Nvidia, no one has the supply you’re asking for; you can wait for your order just like anyone else.” Imagine a company ordering the world’s supply of paper and being told “yes, we’ll divert all stock straight to you, and everyone else can have the leftovers for 8x the cost.”
frongt@lemmy.zip 2 weeks ago
Nvidia: “we will pay you three times your asking cost”
Mfrs: “yes sir, your chips, right away sir”
t00l@lemmy.world 2 weeks ago
The plans are all part of the company’s commitment to bring 40% of its DRAM manufacturing onto US soil, a goal enabled by a $6.2 billion Chips Act award the company clinched in 2024, and the ability to tap into a now-35% tax credit while construction is ongoing.
So nice that taxpayers are funding their own shortages now.
sirboozebum@lemmy.world 2 weeks ago
Such innovation
SharkAttak@kbin.melroy.org 2 weeks ago
The shortage may be "unprecedented" but you can't fool me into thinking it was unforeseen... Fuck you Micron.
ZILtoid1991@lemmy.world 2 weeks ago
Easy fix: OpenAI doesn’t get any more chips until they’ve found a use for their current inventory.
rimu@piefed.social 3 weeks ago
Investing 1.8 bn just before China invades…. Ok then.
FiniteBanjo@feddit.online 3 weeks ago
Good news is that when it crashes there’s gonna be so much surplus.
ZILtoid1991@lemmy.world 3 weeks ago
A lot of that memory is ECC-enabled at best, or HBM at worst, so it won’t be…
FiniteBanjo@feddit.online 3 weeks ago
ECC might be slower, but if a ton of it floods the market all at once it could still be a good 2x64 GB purchase. Plus, it’ll be great for self-hosters even if not for gamers.
tal@lemmy.today 3 weeks ago
There might be some way to make use of it.
Linux apparently can use VRAM as a swap target:
wiki.archlinux.org/title/Swap_on_video_RAM
So you could probably take an Nvidia H200 (141 GB memory) and set it as a high-priority swap partition, say.
Normally, a typical desktop is liable to have problems powering an H200 (600W), but that’s with all the parallel compute hardware active, and I assume that if all you’re doing is moving stuff in and out of memory, it won’t use much power.
That being said, it sounds like the route on the Arch Wiki above uses vramfs, a FUSE filesystem, which means it runs in userspace rather than kernelspace and probably carries more overhead than is really necessary.
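For the curious, the Arch Wiki approach above can be sketched roughly like this. This is a hedged sketch, not a tested recipe: it assumes the vramfs FUSE tool is built and working on an OpenCL-capable GPU, that you have root, and that your system's swapon refuses files on FUSE directly (hence the loop device). Mount point and sizes are placeholders.

```shell
#!/bin/sh
# Sketch of swap-on-VRAM via vramfs (per the Arch Wiki article), run as root.
mkdir -p /tmp/vram
vramfs /tmp/vram 4G &              # expose 4 GiB of VRAM as a FUSE filesystem
sleep 2                            # give the FUSE mount a moment to come up
truncate -s 2G /tmp/vram/swapfile  # allocate a 2 GiB file inside VRAM
# swapon can't use a file on a FUSE filesystem directly,
# so attach it to a loop device first
LOOPDEV=$(losetup --find --show /tmp/vram/swapfile)
mkswap "$LOOPDEV"
swapon --priority 100 "$LOOPDEV"   # higher priority than any disk swap
```

The `--priority 100` flag makes the kernel prefer this swap target over lower-priority disk swap, which is the point of the "high-priority swap" idea above.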
MalReynolds@slrpnk.net 3 weeks ago
You’re not wrong, but when/if a significant surplus appears (joyously; apparently it’s often more profitable to destroy things for the tax break than to sell them), adapters or new motherboards will appear fairly soon. Even things like H200s can probably be made into co-processors (hopefully running at a sane wattage for home users); as u/tal says, there are already ways to integrate them into the Linux kernel as (very fast) RAM, and I doubt the compute will be left on the table for long.
H200 PCIe5 x 16 card anyone?
AmbiguousProps@lemmy.today 3 weeks ago
ECC these days is decent, I wouldn’t hate it even in my gaming PC. It’s the HBM that I’m worried about.
just_an_average_joe@lemmy.dbzer0.com 3 weeks ago
Bruh, it’s good to have some hope, but I’m sure they’ll find a way to screw us anyway. Economy goes up, the rich get richer. Economy goes down, the rich get richer.