Comment on Nation-state hackers deliver malware from “bulletproof” blockchains
finitebanjo@piefed.world 23 hours ago
Unfortunately, an LLM lies about 1 in 5 to 1 in 10 times: 80% to 90% accuracy, with a proven hard limit from OpenAI and DeepMind research papers, which state that even with infinite power and resources it would never approach human language accuracy. On top of that, the model is trained on human inputs which are themselves flawed, so its accuracy compounds with the average person's rate of being wrong.
In other words, you're better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit and you're going to be one of those idiot sloppers everybody makes fun of: you won't know jack shit and you'll be confidently incorrect.
null@piefed.nullspace.lol 22 hours ago
They just explained how to use AI in a way where “truth” isn’t relevant.
finitebanjo@piefed.world 21 hours ago
And I explained why that makes them a moron.
Cabbage_Pout61@lemmy.world 19 hours ago
How would I search for something I don't know about? As I explained, the AI is just responsible for telling me “hey, this thing X exists,” and after that I go look for it on my own.
Why am I a moron? Isn’t it the same as asking another person and then doing the heavy lifting yourself?
finitebanjo@piefed.world 19 hours ago
That was your previous example. You had a very specific thing in mind, meaning you knew what to search for from reputable sources. There are tons of ways to discover new, previously unknown things, all of which are better than being a filthy stupid slopper.
"Hey AI, can you please think for me? Please? I need it, idk what to do."
null@piefed.nullspace.lol 16 hours ago
No, you didn’t.