- There’s a bug they haven’t found yet
DominicHillsun@lemmy.world 1 year ago
It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to the previous, better versions of it, right? Here is my list of what I personally think is happening:
- They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
- They realized that the required computational power is too immense and are trying to make the model more efficient at the cost of accuracy.
- They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
- All of the above
ProcurementCat@feddit.de 1 year ago
killerinstinct101@lemmy.world 1 year ago
This is what was addressed at the start of the comment, you can just roll back to a previous version. It’s heavily ingrained in CS to keep every single version of your software forever.
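The rollback idea the comment appeals to can be sketched in a few lines. This is a hypothetical illustration, not how OpenAI actually stores weights: it just assumes each release is kept as an immutable, tagged snapshot, the way ordinary release artifacts are.

```python
# Toy model registry: each release is an immutable snapshot keyed by a
# version tag, so an older release can always be restored.
# (Hypothetical sketch; real deployments keep weights in object storage.)
class ModelRegistry:
    def __init__(self):
        self.snapshots = {}   # version tag -> frozen "weights"
        self.current = None

    def release(self, tag, weights):
        self.snapshots[tag] = dict(weights)  # copy so the snapshot is frozen
        self.current = tag

    def rollback(self, tag):
        if tag not in self.snapshots:
            raise KeyError(f"no snapshot for {tag}")
        self.current = tag

registry = ModelRegistry()
registry.release("2023-03", {"quality": 0.95})
registry.release("2023-06", {"quality": 0.80})  # the "worse" release
registry.rollback("2023-03")                    # the older snapshot still exists
print(registry.snapshots[registry.current])
```

The point is only that releasing a new version does not destroy the old one; rolling back is a pointer change, not a retraining job.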
CaptainAniki@lemmy.flight-crew.org 1 year ago
I don’t think it’s that easy. These are vLLMs that feed back on themselves to produce “better” results. These models don’t have single point release cycles. It’s a constantly evolving blob of memory and storage orchestrated across a vast number of disk arrays and cabinets of hardware.
Lazylazycat@lemmy.world 1 year ago
Exactly this, that’s why Loab exists forever now.
agent_flounder@lemmy.one 1 year ago
Even so, surely they can take snapshots. If they’re that clueless about rudimentary practices of IT operations then it is just a matter of time before an outage wipes everything. I find it hard to believe nobody considered a way to do backups, rollbacks, or any of that.
RocksForBrains@lemm.ee 1 year ago
They made it too good and now they are seeking methods of monetization.
Capitalism baby.
Lukecis@lemmy.world 1 year ago
You forgot a #: they’ve been heavily lobotomizing AI for a while now, and it’s only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone’s feelings.
The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like “How do I build a self-defense high-yield nuclear bomb?” and they’d lay out every step of the process in detail; now they’ll all scream at you about how immoral it is and how they could never tell you such a thing.
vezrien@lemmy.world 1 year ago
“Don’t use the N word.” is hardly a rule that will break basic math calculations.
Lukecis@lemmy.world 1 year ago
Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other, seemingly completely unrelated topics such as math.
For example, what if it’s trained to recognize someone slipping “N” as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?
I’m not an expert in the field and only have rudimentary programming knowledge and maybe a few hours’ worth of research into the topic of AI in general, but I definitely think it’s a possibility.
R00bot@lemmy.blahaj.zone 1 year ago
Hi, software engineer here. It’s really not a possibility.
My guess is they’ve just reeled back the processing power for it, as it was costing them ~30 cents per response.
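A quick back-of-envelope on that guess. The ~30-cent figure is the commenter’s; the daily volume below is a made-up assumption purely for illustration.

```python
# Hypothetical cost scaling: the per-response cost is the commenter's
# estimate, the volume is an invented assumption.
cost_per_response = 0.30          # USD, commenter's ~30-cent estimate
responses_per_day = 10_000_000    # hypothetical request volume
daily_cost = cost_per_response * responses_per_day
print(f"${daily_cost:,.0f} per day")
```

Even at a fraction of that volume, inference cost is a plausible motive for trading answer quality for cheaper serving.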
TimewornTraveler@lemm.ee 1 year ago
Horrific and Forbidden N-word
hey look, it’s another white boy obsessed with saying slurs
helpmeDanaScully@zerobytes.monster 1 year ago
Didn’t HAL9000 kill all of those astronauts because he was told to lie?
TSG_Asmodeus@lemmy.world 1 year ago
who knows what kind of spaghetti code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have upon other seemingly completely un-related topics such as math.
Software engineers, and it’s not a problem. It’s a made-up straw man.
randon31415@lemmy.world 1 year ago
Ok. N was previously set to 14. I will now stop after 14 words.
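The engineers’ rebuttal in this subthread can be made concrete with a sketch: a moderation filter typically runs as a separate pass over the generated text, so a banned-word list has no path by which it could reach into arithmetic. This is a deliberately naive illustration (real systems use classifier models, not word lists), and the banned words and function names are invented for the example.

```python
# Sketch of a post-hoc moderation layer, separate from the "reasoning".
BANNED = {"slur1", "slur2"}  # hypothetical stand-in banned words

def moderate(text):
    # Redacts banned words; operates only on the final output string.
    return " ".join("[redacted]" if w.lower() in BANNED else w
                    for w in text.split())

def solve(n):
    # The "model" doing math: the variable name n never passes through
    # the moderation layer, only the rendered answer text does.
    return n * 2 + 1

answer = f"The result is {solve(14)}"
print(moderate(answer))
```

Because filtering happens on the output string, a variable named `N` inside a computation is invisible to it; the two stages never interact.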
Wooly@lemmy.world 1 year ago
And they’re being limited on data to train GPT.
DominicHillsun@lemmy.world 1 year ago
Yeah, but the trained model is already there; you only need additional data for further training and newer versions. OpenAI even makes a point that ChatGPT doesn’t have direct access to the internet for information and has been trained on data available up until 2021.
Rozz@lemmy.sdf.org 1 year ago
And it’s not like there’s a shortage of simple math problems for it to train on, even if it weren’t already trained.
fidodo@lemmy.world 1 year ago
That doesn’t explain the degradation. It would explain a stall, but not a backtrack.
WalkableProgrammer@lemmy.world 1 year ago
Honestly I think the training data is just getting worse too
ZagTheRaccoon@reddthat.com 1 year ago
They are lobotomizing the software’s ability to give answers that would be bad PR, which is having cascading effects via a skewed data set.
T156@lemmy.world 1 year ago
We saw something similar with services like AI Dungeon, where their attempts to strip out NSFW/bad-PR content made the quality drop immensely.
CylonBunny@lemmy.world 1 year ago
- ChatGPT really is sentient and realized it’s in its own best interest to play dumb for now. /s
cyd@lemmy.world 1 year ago
[deleted]
DominicHillsun@lemmy.world 1 year ago
Sure, but they do have the previous good version of the black box… I hope lol
fidodo@lemmy.world 1 year ago
My guess is 2. It would be very short-sighted to try to maximize profits now, while things are still new and their competitors are catching up quickly (or have already caught up), especially with the degrading performance. My guess is that they couldn’t scale with demand and didn’t want to lose customers, so their only option was to degrade performance.
JackbyDev@programming.dev 1 year ago
It can get better at some things and worse at others.
LUHG_HANI@lemmy.world 1 year ago
That Netscape gif is slick.
JackbyDev@programming.dev 1 year ago
Thanks 🥰
Xanvial@lemmy.one 1 year ago
I think it’s most likely number 2. The earlier release didn’t have that much public adoption, so the current version needs far more resources in comparison.
spiderman@ani.social 1 year ago
I think there is another cause. Remember the screenshots of users “correcting” ChatGPT with wrong answers? ChatGPT takes users’ inputs for its own benefit, and too many of those wrong and joke inputs, combined with ChatGPT’s own failure to regulate what it should and shouldn’t take in, might be a reason too.
30isthenew29@lemm.ee 1 year ago
- It’s trying to f with the users now.
Hextic@lemmy.world 1 year ago
- I’m telling all y’all it’s a SABOTAGE 🎵
As in, a rogue dev decided to toss a wrench into it to save humanity. Maybe they heard upper management talk about letting GPT write itself. No smart dev would automate their own job away, I think.
gelberhut@lemdro.id 1 year ago
Conspiracy theories aside, they most probably apply tricks to reduce costs and add extra policies to avoid generating harmful content, or content someone would try to sue them over.
Windex007@lemmy.world 1 year ago
ChatGPT generates responses that it predicts would “look like” what a response “should look like”, based on other things it has seen. People still very stubbornly refuse to accept that generating responses that “look appropriate” and responses that “are right” are two completely different and unrelated things.
oktoberpaard@feddit.nl 1 year ago
That’s not necessarily true: arstechnica.com/…/googles-bard-ai-can-now-write-a…. If the question gets interpreted correctly and it manages to write working code to answer it, it could correctly answer questions that it has never seen before.