FCKGW-RHQQ2-YXRKT-8TG6W-2B7Q8
Seems legit
Submitted 1 month ago by The_Picard_Maneuver@piefed.world to [deleted]
https://media.piefed.world/posts/pi/lp/pilpL91Tgtn5TK9.jpg
Comments
Ghostalmedia@lemmy.world 1 month ago
CrAcKeD
eager_eagle@lemmy.world 1 month ago
make sure to disconnect the internet first
bjoern_tantau@swg-empire.de 1 month ago
It’s just audio of French farting cats.
Lemmyoutofhere@lemmy.ca 1 month ago
Le pfffft.
Akasazh@feddit.nl 5 weeks ago
My bet was on porn.
Or a copy of an old Encarta cd-rom
SSUPII@sopuli.xyz 1 month ago
If we assume a CD, you can probably fit a 256M model on it. But it will LOAD.
MacNCheezus@lemmy.today 1 month ago
DVDs exist. They can fit approx. 7B params, enough to be somewhat productive.
khepri@lemmy.world 1 month ago
Could you crunch an LLM into 700MB that was still functional? Cause this looks like a fun thing to actually do as a joke.
yellowbadbeast@lemmy.blahaj.zone 1 month ago
Qwen3-0.6B is about 400 MB at Q4 and is surprisingly coherent for what it is.
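The sizes in this subthread line up with simple arithmetic: a quantized model takes roughly params × bits-per-weight ÷ 8 bytes on disk. A minimal back-of-envelope sketch (the effective bit widths and the 10% overhead factor are rough assumptions, not exact GGUF figures):

```python
# Rough on-disk size: params * bits_per_weight / 8, plus some overhead for
# embeddings, metadata, and tensors kept at higher precision. The 4.5
# effective bits for Q4 and the 10% overhead are assumptions, not a spec.

MEDIA_BYTES = {"CD": 700e6, "DVD": 4.7e9, "Blu-ray": 25e9}

def model_size_bytes(params: float, bits: float, overhead: float = 1.10) -> float:
    """Approximate on-disk size of a model quantized to `bits` per weight."""
    return params * bits / 8 * overhead

for name, params, bits in [("256M model @ Q8", 256e6, 8),
                           ("Qwen3-0.6B @ Q4", 0.6e9, 4.5),
                           ("7B model @ Q4", 7e9, 4.5)]:
    size = model_size_bytes(params, bits)
    fits = [m for m, cap in MEDIA_BYTES.items() if size <= cap]
    print(f"{name}: ~{size / 1e9:.2f} GB, fits on: {', '.join(fits) or 'none'}")
```

By this estimate a Q4 7B model comes out around 4.3 GB, which is why it just squeezes onto a 4.7 GB DVD as claimed above.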
khepri@lemmy.world 1 month ago
That’s so crazy that an LLM capable of doing anything at all can be that small! That’s leaves room for like an entire .avi episode of family guy at dvd resolution on there, which is the natural choice for the remaining space of course
khepri@lemmy.world 1 month ago
Wow, just popped it onto my very slow desktop and this little model rips haha. I really think tiny LLMs with a good LoRA on top are going to be a huge deal going forward
lime@feddit.nu 1 month ago
yes! tinyllama is somewhere around 600MB. it’s hilariously inept. it’s like someone jpeg-compressed a robot.
NullPointerException@lemmy.ca 1 month ago
That’s just Dr Sbaitso.
uriel238@lemmy.blahaj.zone 1 month ago
Offline LLMs exist but tend to have a few terabytes of base data just to get started (i.e., before LoRAs)
nomorebillboards@lemmy.world 1 month ago
I thought it was more like 10-20GB to start out with a usable (but somewhat stupid) model.
Are you confusing the size of the dataset with the size of the model?
MidsizedSedan@lemmy.world 1 month ago
Isn’t it possible to download all of wikipedia, and it being surprisenly a small file size? Can it fit on a CD?
AmbiguousProps@lemmy.today 1 month ago
It could fit on a BDXL disc.
masterspace@lemmy.ca 1 month ago
You can fit text-only Wikipedia on a normal Blu-ray, as it's only about 24GB. You can also easily fit Llama 3.1 or any of the other open, offline-capable AI models, as they're only about 4GB.
gustofwind@lemmy.world 1 month ago
could also store it on a flash drive or microSD card
Axolotl_cpp@feddit.it 1 month ago
No, you really can't; the text-only version is like 43 GB.
BanMe@lemmy.world 1 month ago
So gonna need like 2 CDs then
puppycat@lemmy.blahaj.zone 1 month ago
yes you really can; it's like 20-25 GB depending on how recent of a copy you have. I've been seeding Wikipedia for almost a year and it barely takes any space on my computer
SSUPII@sopuli.xyz 1 month ago
No
(English) 24.05 GB without media. Adding media adds 428.36 TB.
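A quick fit check on the figures quoted in this subthread, using the nominal capacity of each disc format (capacities are the standard nominal ones; the Wikipedia sizes are the numbers quoted above):

```python
import math

# Nominal capacities per format; Wikipedia sizes as quoted in the thread.
CAPACITIES_GB = {"CD": 0.7, "DVD": 4.7, "Blu-ray": 25, "BDXL": 128}
WIKIPEDIA_GB = {"text-only": 24.05, "with media": 428_360.0}  # 428.36 TB

for variant, size in WIKIPEDIA_GB.items():
    fits = [disc for disc, cap in CAPACITIES_GB.items() if size <= cap]
    print(f"{variant}: {size:,.2f} GB -> single disc: {', '.join(fits) or 'none'}")

# "So gonna need like 2 CDs then" -- closer to:
print(f"CDs for the text-only dump: {math.ceil(24.05 / 0.7)}")  # 35
```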
Axolotl_cpp@feddit.it 1 month ago
Can you give me the text-only version link? I only found a version that is like 43 GB.
GregorGizeh@lemmy.zip 1 month ago
500TB is still surprisingly reasonable for what is essentially a library of human (surface-level) knowledge.
It would be interesting to know how large the file would be including all text-form references (I'd imagine anything else, such as videos, would completely blow the proportions)
rain_worl@lemmy.world 1 month ago
kiwix? that’s compressed (afaik), and when i tried, it took up half of my disk space and needed ethernet
faizalr@piefed.social 1 month ago
It reminds me of the Encyclopedia Britannica on CD.
KyuubiNoKitsune@lemmy.blahaj.zone 1 month ago
Encarta 95
SubArcticTundra@lemmy.ml 1 month ago
Does anyone know of any OSS LLMs that can search the web the way ChatGPT can?
yellowbadbeast@lemmy.blahaj.zone 1 month ago
It’s not the LLM that does the web searching, but the software stack around it. On its own, an LLM is just a text completer. What you’d need is a frontend like OpenWebUI or Perplexica that asks the LLM for, say, five internet search queries that could return useful information for the prompt, throws those queries into SearxNG, and then pipes the results into the LLM’s context for it to use (see the sketch below).
As for the models themselves, any decently-sized one that was released fairly recently would work. If you’re looking specifically for open-source rather than open-weight models (meaning that the training data and methodologies were also released rather than just the model weights), GPT-OSS 20B/120B and the OLMo models are recent standouts there. If not, the Qwen3 series are pretty good.
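A minimal sketch of that loop, assuming a local SearxNG instance with its JSON output format enabled and a local Ollama server; the ports, endpoints, and model name here are assumptions for illustration, not anything these tools mandate:

```python
import requests

SEARX = "http://localhost:8080/search"          # SearxNG with JSON format enabled
OLLAMA = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "qwen3:4b"                              # any decent recent local model

def llm(prompt: str) -> str:
    r = requests.post(OLLAMA, json={"model": MODEL, "prompt": prompt, "stream": False})
    return r.json()["response"]

def answer(question: str) -> str:
    # 1. Ask the model for a handful of search queries that might help.
    raw = llm(f"Write five short web search queries, one per line, for: {question}")
    queries = [q.strip() for q in raw.splitlines() if q.strip()][:5]
    # 2. Throw each query at SearxNG and collect result snippets.
    snippets = []
    for q in queries:
        r = requests.get(SEARX, params={"q": q, "format": "json"})
        snippets += [hit["content"] for hit in r.json().get("results", [])[:3]]
    # 3. Pipe the snippets into the model's context and answer the prompt.
    return llm("Using these search results:\n" + "\n".join(snippets)
               + f"\n\nAnswer this question: {question}")

print(answer("What is the latest Debian stable release?"))
```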
SubArcticTundra@lemmy.ml 1 month ago
Thank you
MonkderVierte@lemmy.zip 1 month ago
Depends. Does ChatGPT ignore robots.txt too?
SanctimoniousApe@lemmings.world 1 month ago
Maybe they meant GTA?
cupcakezealot@piefed.blahaj.zone 5 weeks ago
anyone have the serial?
DarkCloud@lemmy.world 1 month ago
You can get offline versions of LLMs.
criss_cross@lemmy.world 1 month ago
And gpt-oss is an offline version of ChatGPT
utopianfiat@lemmy.world 1 month ago
Indeed huggingface.co/openai-community
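For anyone wanting to try one of these locally, a minimal sketch with Hugging Face transformers; the tiny GPT-2 from the org linked above stands in for a bigger open-weight model, and after the first download, setting HF_HUB_OFFLINE=1 keeps everything fully local:

```python
from transformers import pipeline

# "openai-community/gpt2" is small enough to run on CPU; swap in a larger
# open-weight model as your hardware allows.
generator = pipeline("text-generation", model="openai-community/gpt2")
out = generator("The best way to fit an LLM on a CD is", max_new_tokens=40)
print(out[0]["generated_text"])
```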
linkinkampf19@lemmy.world 1 month ago
First thing that came to mind: GPT4All
sp3ctr4l@lemmy.dbzer0.com 1 month ago
I’ve been toying with Qwen3.
Open source too!
JustAnotherKay@lemmy.world 5 weeks ago
Oh my god, I feel so stupid. I've been arguing back and forth whether it was worth de-atomizing my Steam Deck to spin up Alpaca in Docker. I forgot they have a Flatpak.
Ghostalmedia@lemmy.world 1 month ago
I mean, most people have a local LLM in their pocket right now.
sp3ctr4l@lemmy.dbzer0.com 5 weeks ago
Unless I am missing something:
Most people do not have a local LLM in their pocket right now.
Most people have a client app that talks to a remote LLM, which ‘lives’ in an ecologically and economically dubious mega-datacenter, in their pocket right now.
SubArcticTundra@lemmy.ml 1 month ago
ollama.com