Comment on Mozilla lays off 60 people, wants to build AI into Firefox
mellowheat@suppo.fi 8 months ago
lol rip
raspberriesareyummy@lemmy.world 8 months ago
yeah, CPU vendors will love the increased sales thanks to an even more resource-hogging shitty web browser
venoft@lemmy.world 8 months ago
What if you don’t have a decent graphics card? Wait 5 minutes for your URL completion to finish?
gentooer@programming.dev 8 months ago
Running an LLM is quite fast, especially if it’s optimised to run on normal hardware.
cley_faye@lemmy.world 8 months ago
Decent models are huge; an average one needs 8GB kept in memory (better models require something like 40 to 70 GB), and most currently available engines are extremely slow on a CPU and require dedicated hardware (even a relatively powerful GPU needs a few seconds of “thinking” time). It is unlikely that these requirements can easily be squeezed into current computers; more likely, dedicated hardware will be required.
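Rough back-of-the-envelope math behind figures like those (weights only; the quantization levels below are illustrative assumptions, not exact figures for any particular model):

```python
# Back-of-the-envelope: weight memory ≈ parameter count × bytes per weight.
# Activations and KV cache add more on top of this.
def weight_memory_gib(params_billions: float, bits_per_param: float) -> float:
    return params_billions * 1e9 * (bits_per_param / 8) / 2**30

print(f"7B  @ 8-bit: {weight_memory_gib(7, 8):.0f} GiB")   # ~7 GiB  ("average" model)
print(f"70B @ 5-bit: {weight_memory_gib(70, 5):.0f} GiB")  # ~41 GiB
print(f"70B @ 8-bit: {weight_memory_gib(70, 8):.0f} GiB")  # ~65 GiB ("better" models)
```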
barsoap@lemm.ee 8 months ago
I don’t think any inference engines have actually been optimised to run on CPUs. You’re stuck with 32-bit floats, but OTOH that just means you can do gigantic Winograd transformations with the excess precision, needing far fewer fmuladds in total, and CPUs are better at dealing with the memory access patterns that come with transforming the convolution. Most people have at least around 1 TFLOP of compute in their CPU (e.g. a Ryzen 3600 has that much) that’s never seeing the light of day. That’s about a fifth of what an RX 570 has; it’s a difference, but not an order of magnitude, and you can run SDXL on that class of card (maybe not the 570 itself, dunno about its software support, but a 5500 works, despite AMD’s best efforts to cripple ROCm).
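A ballpark sketch of the peak-FLOPS arithmetic behind that comparison (unit counts and clocks are approximate, not measured numbers):

```python
# Theoretical peak fp32 throughput: units × SIMD lanes × 2 (fused multiply-add) × clock.
def peak_gflops(units: int, fma_per_unit: int, fp32_lanes: int, clock_ghz: float) -> float:
    return units * fma_per_unit * fp32_lanes * 2 * clock_ghz

# Ryzen 3600: 6 cores, 2×256-bit FMA units per core (8 fp32 lanes each), ~4 GHz.
ryzen_3600 = peak_gflops(6, 2, 8, 4.0)      # ≈ 768 GFLOPS
# RX 570: 2048 stream processors, 1 fp32 FMA each, ~1.25 GHz.
rx_570 = peak_gflops(2048, 1, 1, 1.25)      # ≈ 5120 GFLOPS
print(f"Ryzen 3600 ≈ {ryzen_3600:.0f} GFLOPS, RX 570 ≈ {rx_570:.0f} GFLOPS")
```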
Also, from what I gather they’re more or less building a summarybot for your browsing history; that’s not a ChatGPT- or Llama-style giant model you can talk with.
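For a sense of how small a task-specific summarizer can be, here’s a minimal CPU-only sketch with Hugging Face transformers (the model choice is just an example, not anything Mozilla has announced):

```python
from transformers import pipeline

# A distilled summarization model (~0.3B params), small enough to run on CPU.
summarize = pipeline("summarization",
                     model="sshleifer/distilbart-cnn-12-6",
                     device=-1)  # -1 = CPU

history = (
    "Visited: Mozilla lays off 60 people, wants to build AI into Firefox. "
    "Visited: Running quantized 7B language models on consumer CPUs. "
    "Visited: Firefox's offline translation models are about 17MB per language pair."
)
print(summarize(history, max_length=40, min_length=10)[0]["summary_text"])
```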
Also, to all those people complaining: there’s already AI in Firefox; the translation models are about 17MB per language pair, gzipped.
__matthew__@lemmy.world 8 months ago
Sorry, but has anyone in this thread actually tried running local LLMs on a CPU? You can easily run a 7B model at varying levels of quantization (e.g. 5-bit quantization) and get a general-purpose, promptable LLM. Yeah, of course it’s going to take ~4GB of RAM (which is mem-mapped and paged into memory), but you can also fine-tune smaller, more specific models (like the translation one mentioned above) and get surprising intelligence at a fraction of the resources.
Take, for example, phi-2, which performs as well as 13B-param models but with only 2.7B params. Yeah, that’s still going to take ~1.5GB of RAM, which Firefox wouldn’t reasonably ship, but many lighter-weight specialized tasks could easily use something like a fine-tuned 0.3B model with quantization.
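If you want to try the quantized-7B-on-CPU claim yourself, a minimal sketch with llama-cpp-python (the model path is a placeholder for whatever GGUF file you have locally):

```python
from llama_cpp import Llama

# Load a 5-bit quantized 7B model in GGUF format; weights are mmap'd by default,
# so only the pages actually touched stay resident in RAM.
llm = Llama(
    model_path="./models/mistral-7b-instruct.Q5_K_M.gguf",  # placeholder path
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads
)

out = llm("Explain in one sentence why quantization shrinks LLM memory use:",
          max_tokens=64)
print(out["choices"][0]["text"])
```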
zwaetschgeraeuber@lemmy.world 8 months ago
You can run a 7B model on a CPU really fast, even on a phone.