• 2 Posts
  • 22 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • The model was run (and I think trained?) on very modest hardware:

    The computer used for this paper contains an NVIDIA Quadro RTX 6000 with 22 GB of VRAM, 200 GB of RAM, and a 32-core Xeon CPU, courtesy of Caltech.

    That’s a double-VRAM Nvidia RTX 2080 Ti plus a Skylake Intel CPU, an aging circa-2018 setup. With room for a batch size of 4096, no less! Though they did run into a preprocessing bottleneck in CPU/RAM.

    The primary concern is the clustering step. Given the sheer magnitude of data present in the catalog, without question the task will need to be spatially divided in some way, and parallelized over potentially several machines.
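    For a sense of what that spatial division might look like, here’s a minimal sketch that tiles the catalog on a coarse RA/Dec grid and clusters each tile on a separate worker process. The column names, tile size, and DBSCAN parameters are illustrative assumptions, not details from the paper:

    ```python
    # Hypothetical sketch: tile a sky catalog on a coarse RA/Dec grid,
    # then cluster each tile independently so tiles can be farmed out
    # to separate workers (or machines). All parameters are illustrative.
    from concurrent.futures import ProcessPoolExecutor

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_tile(coords: np.ndarray) -> np.ndarray:
        """Cluster one tile's (ra, dec) positions; returns per-row labels."""
        return DBSCAN(eps=0.001, min_samples=5).fit_predict(coords)

    def cluster_catalog(ra: np.ndarray, dec: np.ndarray, tile_deg: float = 5.0):
        coords = np.column_stack([ra, dec])
        # Assign each source to a coarse tile; a real pipeline would add
        # overlapping margins and stitch clusters that straddle tile edges.
        tile_id = (ra // tile_deg).astype(int) * 1000 + (dec // tile_deg).astype(int)
        tiles = [coords[tile_id == t] for t in np.unique(tile_id)]
        with ProcessPoolExecutor() as pool:
            labels = list(pool.map(cluster_tile, tiles))
        return tiles, labels
    ```

    The boundary stitching is presumably where the cross-machine coordination cost shows up: clusters that span two tiles have to be merged after the fact.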

  • brucethemoose@lemmy.world to Technology@lemmy.world · Adobe Gets Bullied Off Bluesky

    OP’s being abrasive, but I sympathize with the sentiment. Bluesky is algorithmic just like Twitter.

    …Dunno about Bluesky, but Lemmy feels like a political purity test to me. Like, I love Lemmy and the Fediverse, but at the same time, mega-upvoted posts/comments like “X person should kill themself,” the expulsion of nuance on specific issues, politics leaking into every community, and such are making me step back more and more.


  • It’s not theoretical; it’s just math. Removing 1/3 of the bus paths, and also removing the need to constantly keep RAM powered

    And here’s the kicker.

    You’re supposing it’s (given the no-refresh bonus) 1/3 as fast as DRAM, with similar latency, and cheap enough per gigabyte to replace most storage. That’s a tall order, and it would be incredible if it hit all three of those. I find it highly improbable.

    Even DRAM is starting to become a bottleneck for APUs specifically, because making the bus wide is so expensive. This applies at the very top (the MI300A) and the very bottom (smartphone and laptop APUs).

    Optane, for reference, was a lot slower than DRAM and a lot more expensive/less dense than flash, even with all the work Intel put into it and the buses built into then-top-end CPUs for direct access. And they thought that was pretty good. It was good enough for a niche when used in conjunction with DRAM sticks.
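    For a sense of scale, here’s a back-of-envelope comparison using rough, order-of-magnitude figures for DDR4, Optane DC PMem, and NAND flash. All numbers are ballpark assumptions for illustration, not benchmarks:

    ```python
    # Back-of-envelope comparison of memory/storage tiers.
    # All figures are rough order-of-magnitude assumptions, not benchmarks.
    tiers = {
        #               read latency (ns)  bandwidth (GB/s)  cost ($/GB)
        "DDR4 DRAM":   {"lat_ns": 100,     "bw": 25.0,       "cost": 7.0},
        "Optane PMem": {"lat_ns": 350,     "bw": 7.0,        "cost": 5.0},
        "NAND flash":  {"lat_ns": 80_000,  "bw": 7.0,        "cost": 0.1},
    }

    dram = tiers["DDR4 DRAM"]
    for name, t in tiers.items():
        print(f"{name:12s} latency {t['lat_ns'] / dram['lat_ns']:7.1f}x DRAM, "
              f"bandwidth {t['bw'] / dram['bw']:.2f}x, "
              f"cost {t['cost'] / dram['cost']:.2f}x")
    ```

    Even sitting between the tiers on every axis, Optane only found a niche; a memory that wins on all three axes at once would be unprecedented.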


  • You’re talking theory.

    A big reason that supercomputers moved to networks of “commodity” hardware is that it’s cost-effective.

    How would one build a giant unified pool of this memory? CXL, presumably, but what does that look like physically? Maybe you get a lot of bandwidth in parallel, but how would it come even close to the latency of the “local” DRAM buses on each node? Is that setup truly more power-efficient than banks of DRAM backed by infrequently touched flash? If your workload needs fast random access to memory, even at scale the only advantage seems to be some fault tolerance, at a huge speed cost; and if you just need bulk, high-latency bandwidth, flash has you covered for cheaper.

    …I really like the idea of a nonvolatile single pool backed by caches, especially at scale, but ultimately architectural decisions come down to economics.
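    To put rough numbers on the “pool backed by caches” idea, here’s the usual average-memory-access-time arithmetic with illustrative latencies (local DRAM ~100 ns, CXL-attached pool ~400 ns; both are assumptions, not measurements):

    ```python
    # Average memory access time (AMAT) for a local-DRAM cache in front
    # of a CXL-attached shared pool. Latencies are illustrative, not measured.
    def amat_ns(hit_rate: float, local_ns: float = 100.0,
                pool_ns: float = 400.0) -> float:
        """Expected access latency given the local-tier hit rate."""
        return hit_rate * local_ns + (1.0 - hit_rate) * pool_ns

    for hit_rate in (0.50, 0.90, 0.99):
        print(f"hit rate {hit_rate:.0%}: ~{amat_ns(hit_rate):.0f} ns average")
    ```

    The pooled tier only looks DRAM-like when hit rates are very high, i.e. when each node effectively has enough local DRAM anyway, which is exactly the economics question.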