Don’t attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.
REDACTED@infosec.pub 22 hours ago
Well at least it’s honest
Denjin@feddit.uk 20 hours ago
masta_chief@sh.itjust.works 13 hours ago
AppleTea@lemmy.zip 20 hours ago
the world’s most lossy store of compressed fiction reproduces sci-fi tropes
make sure to clutch your pearls and act like the machine god is coming
Thorry@feddit.org 19 hours ago
Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox
AI: Alright, here is your story: [insert default sci-fi AI escape story full of tropes here]
Researcher: Hmmm, that's pretty interesting that you could do that, I'm gonna write a paper
The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!
I spoke to one of these researchers recently, who has done some interesting research into machine learning tools. They explained that when working with LLMs it's very hard to say how the result actually came to be. In my hyperbolic example it's pretty obvious. In reality, however, it's much more complicated. It can be very hard to determine if something originated organically, or if the system was pushed into the result by some part of the test. The researcher I spoke to doesn't work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse engineering what the model actually did.
It’s such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead capitalism has pushed it to crashing the economy, destroying the internet plus our brains and basically slopifying everything.
REDACTED@infosec.pub 13 hours ago
Being honest is an action, not an emotion. Researchers proved LLMs can lie on purpose.
Denjin@feddit.uk 10 hours ago
They can't lie, whether purposefully or not; all they do is generate tokens of data based on what their large database of other tokens suggests would be the most likely to come next.
The human interpretation of those tokens as particular information is irrelevant to the models themselves.
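That "most likely next token" idea can be sketched with a toy bigram model. This is a drastic simplification for illustration only, not how any actual LLM is implemented (real models use neural networks over huge contexts, not frequency tables):

```python
import random
from collections import Counter, defaultdict

# Toy next-token prediction: a bigram model counts which token
# follows which in its training text, then emits whichever
# continuation it saw most often. No notion of truth or intent,
# just statistics over the training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    # Return the statistically most common continuation.
    return counts[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat" (seen twice, vs "mat"/"fish" once each)
```

The model will happily emit "cat" even if the true answer in context would be "fish"; whether the output happens to be true is an accident of the training distribution, which is the sense in which such a system can't "lie."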
REDACTED@infosec.pub 9 hours ago
Ehh, you obviously only understand LLMs on a very basic level, with knowledge from 2021. This is like explaining jet engines with "air goes thru, plane moves forward". Technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.
In OP's image, you can clearly see it decided to make shit up because it reasons that's what the human wants to hear. That's quite a rare example, actually; I believe most models would default to "I'm an LLM model, I don't have dark secrets"
tigeruppercut@lemmy.zip 5 hours ago
That's funny, wrong enough to "ruin trivia" or cause a "pointless argument". As if a single misplaced comma hasn't redirected millions of dollars. Imagine what subtle lies accepted by idiots will cause in the future.
Angrydeuce@lemmy.world 28 minutes ago
I do procurement to the tune of 10+ million per year and I have seen a 300% increase in order fulfillment time solely due to those vendors pivoting to AI order fulfillment.
My direct reps at all these suppliers are just as powerless as we are…they know how unhappy their customers are, but these decisions were made much higher up than them, and they're pretty much being told to stop complaining because the AI is here to stay, even if it sucks, because it's cheaper.
Welcome to the new normal.
tigeruppercut@lemmy.zip 21 minutes ago
We can only hope that customer service facing AI promises customers miracles and companies get sued each and every time it can’t deliver. Like if websites like ehow put up articles that reach the normies about “how to trick AI into promising you a million dollars and how you can win it in court”.
Of course any responsibility for what AI says will be killed as soon as a tech bro chucks a few million bucks at SCOTUS, but it’s a nice dream to pretend we still have laws for now.
Angrydeuce@lemmy.world 13 minutes ago
That's the best part about AI…when it shits the bed no one is directly responsible. Everyone just throws their hands up and says "nothing we can do about it!"
I know this is going to age me, but I saw this happening with self checkout in grocery stores 20 years ago. Nobody remembers how it was before, so nobody even realizes that the time wasted standing at a stupid kiosk that is freaking out about unexpected items in the bagging area wasn't a problem back when human beings were scanning the shit.
wonderingwanderer@sopuli.xyz 5 hours ago
To be fair, if someone’s using a chatbot on trivia night, they deserve to get wrong answers…