😆 I can’t believe how absolutely silly a lot of you sound with this.
An LLM is a tool. Its output depends on the input. If that’s the quality of answer you’re getting, then it’s user error. I guarantee you that LLM answers for many problems are perfectly adequate.
It’s like a carpenter saying the cabinets turned out shit because his hammer only produces crap.
someacnt@sh.itjust.works 5 days ago
Wdym, I have seen researchers using it to aid their research significantly. You just need to verify some stuff it says.
davidagain@lemmy.world 5 days ago
Verify every single bloody line of output. The top three to five are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.
People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.
someacnt@sh.itjust.works 5 days ago
It’s not that bad, the output isn’t random. From time to time it can produce novel stuff, like new equations for engineering. Also, verification doesn’t take that much effort. At least according to my colleagues, it’s great. It works well for coding well-known stuff too!
davidagain@lemmy.world 5 days ago
It’s not completely random, but I’m telling you it fucked up, it fucked up badly, time after time, and I had to check every single thing manually. Its correct runs never lasted beyond a handful. If you build something using some equation it invented, you’re insane and should quit engineering before you hurt someone.