Maybe the owners of LLMs need to be held responsible for the problematic software they release
Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids
michaelmrose@lemmy.world 2 weeks ago
Maybe people need to learn that AI hallucinates
zipzoopaboop@lemmynsfw.com 2 weeks ago
Melvin_Ferd@lemmy.world 2 weeks ago
There’s no problem here
Petter1@lemm.ee 2 weeks ago
Yea, I’m mind blown how, after 3 years, people still don’t know how to use LLMs effectively in the use cases where they bring value (by reducing work time):
- start a second chat and ask differently to verify
- if you use ChatGPT’s reasoning feature, read the reasoning output as well!
- it’s best to ask for verifiable things, like code that you can run, or similar
- if you use it for research, only trust the info if it used web search and you have also read the webpages it used to summarise, or use a traditional web search to verify based on the output
- it is great for reworking text until it sounds as desired (if you are not good at wording stuff anyway)
- plan what steps to do next in a project (like “I want to do xxx, have y, and need it to be z; make me a list of todos”)
- and of course it is great for generating simple Python scripts fast (I often use it as my Python writing slave)
Using AI like this has helped me enormously in work and life. Like, I learned a lot of C and C++, how Linux kernel modules work, and how PO/POT files work; it helped me with translations, introduced me to music production, and helped me set up appFlowy and with general Windows/Linux issues.
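The “verifiable things, like code” tip above can be made concrete: treat model-generated code as untrusted and run it against your own test cases before relying on it. A minimal sketch (the `slugify` function body stands in for hypothetical LLM output; the name and cases are illustrative, not from the thread):

```python
# Hypothetical function as an LLM might produce it (illustrative only).
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Verify the output yourself instead of trusting the model:
test_cases = {
    "Hello World": "hello-world",
    "  Extra   Spaces  ": "extra-spaces",
}
for given, expected in test_cases.items():
    assert slugify(given) == expected, f"model code failed on {given!r}"
print("all checks passed")
```

If an assertion fires, you paste the failing case back into the chat and iterate — which is exactly the “99% + verification” workflow discussed below.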
BakerBagel@midwest.social 2 weeks ago
So then what’s the use of the program if it uses a bunch of energy to just make shit up?
lime@feddit.nu 2 weeks ago
sometimes you need a machine that makes things up according to a given specification.
michaelmrose@lemmy.world 2 weeks ago
Because it makes up things that are 99% correct, and in some areas that 99% plus verification and expansion can be superior, time-wise, to the 100% manual route.
BakerBagel@midwest.social 2 weeks ago
What models are you seeing where things are 99% correct? Google’s search chatbot can’t even keep Windows vs. Mac hotkey commands straight.
desktop_user@lemmy.blahaj.zone 2 weeks ago
it’s pretty good at getting grammar correct.
surewhynotlem@lemmy.world 2 weeks ago
And when it hallucinates harmful things, protections need to be put on the output.
michaelmrose@lemmy.world 2 weeks ago
Ok, so explain specifically what this means
surewhynotlem@lemmy.world 2 weeks ago
If you have a service, and that service is generating things that harm people, you should have to stop it.
michaelmrose@lemmy.world 2 weeks ago
We value the gains, both immediate and presumed, more than the harm.
pyre@lemmy.world 2 weeks ago
you misspelled “is fucking wrong all the goddamn time”
michaelmrose@lemmy.world 2 weeks ago
It would be more accurate to say that, rather than knowing anything at all, they have a model of the statistical relationship between a series of tokens and subsequent tokens: which words are apt to follow other words. Because the training set contains many true things, the words produced in response to queries often contain true statements, and almost always contain statements that LOOK like true statements.
Since it has no inherent model of the world to draw on, only such statistical relationships, you should check anything important.
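The “statistical relationship between tokens” idea can be sketched with a toy bigram model — a vast simplification of a real LLM, but it shows why output can be fluent without being true (the corpus and words below are invented for illustration):

```python
import random
from collections import Counter, defaultdict

# Tiny corpus: count which word follows which.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "the" can be followed by cat/mat/dog/rug here -- the choice is purely
# statistical; nothing in the model checks whether the sentence is true.
print(next_word("the"))
```

A real LLM replaces this lookup table with a neural network over long contexts, but the output is still "words that are apt to follow other words", which is the point being made above.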
pyre@lemmy.world 2 weeks ago
you say more accurate but all I see is a very roundabout way of saying fucking wrong all the goddamn time
desktop_user@lemmy.blahaj.zone 2 weeks ago
it produces things that appear to be coherent sentences. there is no reason to assume a sentence is correct.