
New 'DRAM+' memory designed to provide DRAM performance with SSD-like storage capabilities, uses FeRAM tech

171 likes

Submitted 5 weeks ago by simple@lemm.ee to technology@lemmy.world

https://www.tomshardware.com/pc-components/dram/dram-memory-designed-to-provide-dram-performance-with-ssd-like-storage-capabilities-uses-feram-tech

Comments

  • carl_dungeon@lemmy.world 5 weeks ago

    Oooh! Two “disruptive”’s and a “game changer”!

  • brucethemoose@lemmy.world 5 weeks ago

    Yeah, it’s a solution in search of a problem.

    Is it cheaper than DRAM? Great! But until then, even if it’s lower power due to not needing the refresh, flash is just so cheap that it can be scaled up much better.

    And I dunno what they mean by AI workloads. How would non-volatility help at all, unless it’s starting to approach SRAM performance?

    • just_another_person@lemmy.world 5 weeks ago

      This type of memory creates a new kind of state capability for HPC and huge core-scaled workloads, so maybe that’s why you’re confused.

      HP basically created the physical use case a while back with something called The Machine. They got to the point of having all the hardware pieces functional, and even built a Linux-ish OS, but then needed customers before tackling the memory portion. Hence why this type of memory tech exists.

      We’re in a bit of a weird time right now with computing in general, where we’re sort of straddling the line between continuing projects on traditional computing and computers, or spending the time and effort to adapt certain projects to quantum computing. This memory is just one hardware path forward for traditional computing to keep scaling outward.

      Where it makes the most sense: huge HPC clusters. Where it doesn’t: everywhere else.

      I assume the author mentions “AI” because you could load an entire data set into this type of memory and have many NPU cores or clusters working off of the same address space without it being changed. Way faster than disk, and it eliminates the context-switching problem if you’re sure its state stays static.
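      A rough sketch of that idea, using an ordinary memory-mapped file as a stand-in for non-volatile memory (the file name, sizes, and checksum workload below are made up for illustration):

```python
import mmap
import os
import tempfile

def make_demo_dataset(path, size=4096):
    """Write a small stand-in dataset file (hypothetical data)."""
    with open(path, "wb") as f:
        f.write(bytes(i % 256 for i in range(size)))

def worker_checksum(path, offset, length):
    """Map the dataset read-only and process one slice of it.

    Every worker that maps the same file shares the same physical
    pages; because the mapping is read-only and the data never
    changes, nothing is copied or invalidated -- a rough analogue
    of many cores working off one static address space.
    """
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return sum(mm[offset:offset + length])

path = os.path.join(tempfile.mkdtemp(), "dataset.bin")
make_demo_dataset(path)
# Two "workers" reading disjoint slices of the shared mapping.
print(worker_checksum(path, 0, 256), worker_checksum(path, 256, 256))
```

      With true non-volatile memory the mapping would also survive power loss, which is the part a plain file-backed mmap can only approximate.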

      • brucethemoose@lemmy.world 5 weeks ago

        How is that any better than DRAM though? It would have to be much cheaper/GB, yet reasonably faster than the top-end SLC/MLC flash Samsung sells.

        Another thing I don’t get… in all the training runs I see, dataset bandwidth needs are pretty small. Like, streaming images (much less something like 128K tokens of text) is a minuscule drop in the bucket compared to how long a step takes, especially with hardware decoders for decompression.
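        As a back-of-envelope check of that claim (batch size, image size, and step time are assumed numbers, not from any real run):

```python
# Rough dataset-bandwidth estimate for an image training run.
batch_size = 256            # images per optimizer step (assumed)
jpeg_bytes = 150 * 1024     # ~150 KiB per compressed image (assumed)
step_seconds = 0.5          # wall time per step (assumed)

bytes_per_step = batch_size * jpeg_bytes
bandwidth_mb_s = bytes_per_step / step_seconds / 1e6
print(f"~{bandwidth_mb_s:.0f} MB/s")  # tens of MB/s, far below NVMe throughput
```

        Under those assumptions the run needs well under 100 MB/s of dataset bandwidth, while a single NVMe SSD already delivers several GB/s.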

        Weights are an entirely different beast, and stuff like Cerebras clusters do stream them, but they need the speed of DRAM.

    • muusemuuse@lemm.ee 5 weeks ago

      Didn’t Intel try this with 3D XPoint or something like that? Then it failed and was repurposed into Optane, which also flopped?

      • brucethemoose@lemmy.world 5 weeks ago

        Yes, because ultimately it just wasn’t good enough.

        That’s what I was trying to argue below. Unified memory is great if it’s dense and fast enough, but that’s a massive if.

  • surph_ninja@lemmy.world 5 weeks ago

    That’s cool, but does it have the same life cycle limitations as an SSD? Is this RAM gonna burn out any sooner?

  • ag10n@lemmy.world 5 weeks ago

    Ahh yes, Optane redux

    • IndustryStandard@lemmy.world 5 weeks ago

      Optane 2 electronic boogaloo

  • dojan@lemmy.world 5 weeks ago

    I’m here for the shade thrown in the comments, haha.

  • RememberTheApollo_@lemmy.world 5 weeks ago

    I wonder if this would improve suspend or sleep features on devices. Last state is held in memory, ready to go.
