Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

SuspciousCarrot78@lemmy.world 3 weeks ago

No, it’s real. I’m running on a Quadro P1000 with 4GB VRAM (or a Tesla P4 with 8GB). My entire raison d’être is making potato-tier computing a thing.

openwebui.com/…/vodka_when_life_gives_you_a_potat…

That, and because fuck ChatGPT.

I refuse to believe in no win scenarios.
