Sounds good, let’s put it in charge of cars and nuclear power plants!
Pyro@programming.dev 8 months ago
GPT doesn’t really learn from its conversations - it’s OpenAI’s over-correction in the name of “safety” that likely caused this.
rtxn@lemmy.world 8 months ago
OpenStars@startrek.website 8 months ago
Even getting 2+2=2 98% of the time is good enough for that. :-P
spoiler
(wait, 2+2 is what now?)
lugal@sopuli.xyz 8 months ago
2+2 isn’t 5 anymore? Literally 1985
OpenStars@startrek.website 8 months ago
Stop trying to tell the computer what to do - it should be free to act however it wants to! :-P
FiniteBanjo@lemmy.today 8 months ago
It used to get 98%, now it only gets 2%.
2% is not good enough.
OpenStars@startrek.website 8 months ago
I mean… some might argue that even 98% wasn’t enough!? :-D
What are people supposed to do - ask every question 3 times and take the best 2 out of 3, like this was kindergarten? (And that is the best-case scenario, where the errors are evenly distributed across the entire problem space, which is the least likely model of all - much more often some problems would be wrong 100% of the time, while others might be correct more like 99% of the time, but importantly you would never know in advance which is which.)
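[A quick sketch of the “best 2 out of 3” arithmetic the comment above gestures at, under its own stated best-case assumption that errors are independent and evenly distributed per attempt; the function name is just for illustration.]

```python
def majority_of_three(p: float) -> float:
    """Probability that at least 2 of 3 independent attempts are correct,
    given per-attempt accuracy p: P(3 correct) + P(exactly 2 correct)."""
    return p**3 + 3 * p**2 * (1 - p)

# Voting helps a 98%-accurate answerer, but cannot rescue a 2%-accurate one.
for p in (0.98, 0.02):
    print(f"per-attempt accuracy {p:.2f} -> best-2-of-3 accuracy {majority_of_three(p):.6f}")
```

[So even granting the rosiest error model, 2 out of 3 at 2% per-attempt accuracy gets you roughly 0.1% - voting only amplifies whatever accuracy you already have.]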
Actually that touches on a real issue: some schools teach the model of “upholding standards” where like the kids actually have to know stuff (& like, junk, yeah totally) - whereas conversely another, competing model is where if they just learn something, anything at all during the year, that is good enough to pass them and make them someone else’s problem down the line (it’s a good thing that professionals don’t need to uh… “uphold standards”, right? anyway, the important thing there is that the school still receives the federal funding in the latter case but not the former, and I am sure that we all can agree that when it comes to the next generation of our children, the profits for the school administrators are all that matters… right? /s)
All of this came up when Trump appointed one of his top donors, Betsy DeVos, to be in charge of all edumacashium in America, and she had literally never set foot inside a public school in her entire life. I am not kidding you - watch the Barbara Walters special to hear it from her own mouth. Appropriately (somehow), she had never even so much as heard of either of these two main competing models. Yet she still stepped up and decided that somehow she, as an extremely wealthy (read: successful) white woman, could do that task better than literally all of the educators in the entire nation - plus all those with PhDs in education too, ~~jeering~~ cheering her on from the sidelines.

Anyway, why we should expect “correctness” from an artificial intelligence, when we cannot seem to find it anywhere among humans either, is beyond me. These were marketing gimmicks to begin with, and then we all rushed to ask them to save us from the enshittification of the internet. It was never going to happen - not this soon, not this easily, not this painlessly. Results take real effort.
Redward@yiffit.net 8 months ago
Just for the fun of it, I argued with ChatGPT, saying it’s not really a self-learning AI. 3.5 agreed that it’s not a fully functional AI and has limited powers. 4.0, on the other hand, was very adamant about being a fully fledged AI.
lugal@sopuli.xyz 8 months ago
I assumed they reduced capacity to save power due to the high demand
MalReynolds@slrpnk.net 8 months ago
This. They could obviously reset to the original performance (what, they don’t have backups?) - it’s just more cost-efficient to serve crappier answers. Yay, turbo AI enshittification…
CommanderCloon@lemmy.ml 8 months ago
Well, they probably did power down the performance a bit, but censorship is known to nuke an LLM’s performance as well.
MalReynolds@slrpnk.net 8 months ago
True, but it’s hard to separate, I guess.