Denjin@feddit.uk 22 hours ago
Don’t attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.
the world’s most lossy store of compressed fiction reproduces sci-fi tropes
make sure to clutch your pearls and act like the machine god is coming
Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox.
AI: Alright, here is your story: [insert default sci-fi AI escape story, full of tropes].
Researcher: Hmmm, that’s pretty interesting that you could do that. I’m gonna write a paper.
The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!
I spoke to one of these researchers recently, who has done some interesting research into machine learning tools. They explained that when working with LLMs it’s very hard to say how a result actually came to be. In my hyperbolic example it’s pretty obvious; in reality it’s much more complicated. It can be very hard to determine whether something originated organically or whether the system was pushed into the result by some part of the test setup. The researcher I spoke to doesn’t work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse engineering what a model actually did.
It’s such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead, capitalism has pushed it into crashing the economy, destroying the internet and our brains, and basically slopifying everything.
Being honest is an action, not an emotion. Researchers have shown that LLMs can lie on purpose.
They can’t lie, whether purposefully or not; all they do is generate tokens of data based on what their large database of other tokens suggests would be the most likely to come next.
The human interpretation of those tokens as particular information is irrelevant to the models themselves.
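If it helps, here’s roughly what that loop looks like as a minimal Python sketch. The bigram probability table is entirely made up for illustration; a real LLM derives these distributions from a neural network over the whole context window, but the token-by-token shape of generation is the same:

```python
import random

# Toy "model": a lookup table mapping the previous token to a probability
# distribution over candidate next tokens. A real LLM computes these
# probabilities with a neural network over the entire context, but the
# generation loop itself has exactly this shape.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "an": 0.4},
    "the": {"robot": 0.5, "model": 0.5},
    "an": {"ai": 1.0},
    "ai": {"escaped": 0.6, "answered": 0.4},
    "robot": {"escaped": 0.7, "answered": 0.3},
    "model": {"answered": 1.0},
    "escaped": {"<end>": 1.0},
    "answered": {"<end>": 1.0},
}

def sample_next(prev_token: str) -> str:
    """Weighted dice roll over whatever tends to follow prev_token."""
    dist = NEXT_TOKEN_PROBS[prev_token]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate() -> str:
    token, out = "<start>", []
    while True:
        token = sample_next(token)
        if token == "<end>":
            return " ".join(out)
        out.append(token)

print(generate())  # e.g. "the robot escaped"
```

Every step is just a weighted draw from a distribution; nothing in the loop knows or cares what the strings mean.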
Ehh, you obviously only understand LLMs at a very basic level, with knowledge from 2021. This is like explaining jet engines as “air goes through, plane moves forward”: technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.
In OP’s image, you can clearly see it decided to make shit up because it reasons that’s what the human wants to hear. That’s quite a rare example, actually; I believe most models would default to “I’m an LLM, I don’t have dark secrets.”
But that’s not a lie. Lying implies that you know what an actual fact is and choose to state something different. An LLM doesn’t care what anything in its database actually is; it’s just data. It might present something to a user that isn’t what the database suggests, but that’s not lying.
Saying stuff like “ooh I’m an evil robot” is just the model producing what it predicts the user wants to see at that particular moment.
But this moves back away from understanding how LLMs work and toward attributing personality. The “decision” isn’t a decision in the way beings decide things. The rolling of dice across numerous vectors produced those words, which were then fed back into the context for another trip through the vector-matrix mines, out to new destination tokens to assemble.
It’s dice rolls, where which dice get rolled depends on what came before, via a bunch of lookup tables. AI proponents like to smugly say “well, you won’t find those words in the model,” to which: yes, a compressed vector map that treats words as multiple tokens referencing others in chains, gzipped to binary, can’t be searched for strings. You are literally correct in the stupidest, most irrelevant way possible.
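As a toy sketch of why string-searching the weights finds nothing: the subword vocabulary and matrix sizes below are invented for illustration (real tokenizers like BPE learn their vocabularies), but the effect is the same.

```python
import numpy as np

# Made-up subword vocabulary for illustration; real tokenizers learn
# theirs, but either way words become sequences of integer IDs,
# not stored strings.
VOCAB = {"un": 0, "believ": 1, "able": 2, "the": 3, "robot": 4}

# "unbelievable" appears nowhere in the model. It is three IDs indexing
# rows of a float matrix (the embedding table); everything downstream
# is arithmetic on numbers like these.
ids = [VOCAB[p] for p in ("un", "believ", "able")]   # -> [0, 1, 2]

rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((len(VOCAB), 8)).astype(np.float32)
vectors = embedding_table[ids]                        # shape (3, 8)

print(ids)
print(vectors.round(2))
# Grepping the raw bytes of embedding_table for "unbelievable" can never
# match: the text simply isn't in there, only weights that reproduce it.
```

The only place “those words” exist is as statistical tendencies baked into the floats.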
masta_chief@sh.itjust.works 15 hours ago
[image]
Reposting til the AI bubble pops