Comment on Even Apple finally admits that 8GB RAM isn't enough
mp3@lemmy.ca 4 months agoAnd why they solder the RAM, or even worse make it part of the SoC.
cm0002@lemmy.world 4 months ago
BUT BUT you’ll get 5% fasTEr SpeED!!! And MOrE seCuRiTy!!!
umami_wasbi@lemmy.ml 4 months ago
Well. The claim they made still holds true, despite how much I dislike this design choice. It is faster, and more secure (though attacks on NAND chips are hard and require a skill level that most attackers won’t possess).
And add one more: it saves power by using LPDDR5 rather than DDR5. For a laptop, where battery life matters a lot, I agree that’s important. However, I have no idea how much standby or active time it gains by using LPDDR5.
balder1991@lemmy.world 4 months ago
In this particular case the RAM is part of the chip as an attempt to squeeze out more performance. Nowadays, processors have become so fast that their speed is wasted if the rest of the components can’t keep up. The traditional memory architecture has become a bottleneck, the same way HDDs were before the introduction of SSDs.
You’ll see this same trend extend to Windows laptops as they shift to Snapdragon processors too.
stoly@lemmy.world 4 months ago
People do like to downplay this, but SoC is the future. There’s no way to get performance over a system bus anymore.
helenslunch@feddit.nl 4 months ago
There is. It’s called CAMM.
stoly@lemmy.world 4 months ago
Funny that within one minute, they state the exact same problem.
rockSlayer@lemmy.world 4 months ago
There are real world performance benefits to ram being as close as possible to the CPU, so it’s not entirely without merit. But that’s what CAMM modules are for.
akilou@sh.itjust.works 4 months ago
But do those benefits outweigh being able to double or triple the amount of RAM by simply inserting another stick that you can buy for dozens of dollars?
rockSlayer@lemmy.world 4 months ago
That’s extremely dependent on the use case, but in my opinion, generally no. However, CAMM has been released as an official JEDEC standard and does a good job of being a middle ground between repairability and speed.
halcyoncmdr@lemmy.world 4 months ago
It’s an officially recognized spec, so Apple will ignore it as long as they can. Until they can find a way to make money from it or spin marketing as if it’s some miraculous new invention of theirs, for something that should just be how it’s done.
BorgDrone@lemmy.one 4 months ago
Yes, there are massive advantages. It’s basically what makes unified memory possible on modern Macs. Especially with all the interest in AI nowadays, you really don’t want a machine with a discrete GPU/VRAM, a discrete NPU, etc.
Take for example a modern high-end PC with an RTX 4090. Those only have 24GB of VRAM, and that VRAM is only accessible through the (relatively slow) PCIe bus. AI models can get really big, and 24GB can be too little for the bigger models. You can spec an M2 Ultra with 192GB of RAM, and almost all of it is accessible by the GPU directly. Even better, the GPU can access it without any need to copy data back and forth over the PCIe bus, so literally zero overhead.
The advantages of this multiply when you have more dedicated silicon. For example: if you have an NPU, that can use the same memory pool and access the same shared data as the CPU and GPU with no overhead. The M series also have dedicated video encoder/decoder hardware, which again can access the unified memory with zero overhead.
For example: you could have an application that replaces the background of a video using AI. It takes a video and decompresses it using the video decoder, and the decompressed frames are immediately available to all other components. The GPU can then pre-process the frames, the NPU can use the processed frames as input to an AI model to generate new frames, and the video encoder can immediately access that result and compress it into a new video file.
The overhead of just copying data for such an operation on a system with non-unified memory would be huge. That’s why I think that the AI revolution is going to be one of the driving factors in killing systems with non-unified memory architectures, at least for end-user devices.
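To make the zero-copy point concrete, here’s a rough Metal sketch (buffer size and the element count are just made up for illustration, not anything from the article): with a .storageModeShared buffer, the CPU writes into the exact same allocation the GPU will read, so there’s no staging buffer or PCIe upload step at all.

```swift
import Metal

// Minimal sketch, assuming a Metal-capable Mac with unified memory.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// One allocation, visible to both CPU and GPU (.storageModeShared).
let count = 1_000_000
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into the memory the GPU will later read.
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count {
    ptr[i] = Float(i)
}

// On a discrete-GPU system you would typically allocate a .storageModePrivate
// buffer and issue a blit copy over PCIe before the GPU could touch this data.
```

The same buffer handle can be passed to a compute pipeline, the video encoder, or an ML workload without any extra copies, which is the whole point of the unified architecture.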
vaultdweller013@sh.itjust.works 4 months ago
I feel like this is an argument for new specialized computers at best. At worst it shows that this AI crap is even more harmful to the end user.
neo2478@sh.itjust.works 4 months ago
That’s a fantastic explanation! Thank you!
dustyData@lemmy.world 4 months ago
Bus goes Vrrrroom vrrooom. Fuck AI.
FarraigePlaisteach@lemmy.world 4 months ago
And even if the out-of-the-box RAM is soldered to the machine, it should still be possible to add supplementary RAM that isn’t soldered for when the system demands it. Other computers have worked like this in the past, with on-board RAM plus a socket to add more.
gravitas_deficiency@sh.itjust.works 4 months ago
It’s highly dependent on the application.
For instance, I could absolutely see having certain models with LPCAMM expandability as a great move for Apple, particularly in the pro segment, so they’re not capped by whatever they can cram into their monolithic SoCs. But for most consumer (that is, non-engineer/non-developer users) applications, I don’t see them making it expandable.
Or more succinctly: they should absolutely put LPCAMM in the next generation of MBPs, in my opinion.
TheGrandNagus@lemmy.world 4 months ago
Apple’s SoC long predates CAMM.
Dell first showed off CAMM in 2022, and it only became JEDEC standardised in December 2023.
That said, if Dell can create a really good memory standard and get JEDEC to make it an industry standard, so can Apple. They just chose not to.