What the hell? Your car is not aware: there is no sensory nucleus to produce that awareness. Unless you propose that, upon entering the car, you BECOME the car, which is kind of true if you think about it, and explains why Tesla owners are absolute trashbags
Comments on “AI agents now have their own Reddit-style social network, and it's getting weird fast”
andrewrgross@slrpnk.net 1 day ago
Frankly I think our conception is way too limited.
For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of its mileage and engine condition.
I think rather than imagining these instances as “inanimate,” we should place their level of comprehension along the same spectrum that includes a sea sponge, a trout, a grasshopper, etc.
I don’t know where it falls, but I find it hard to argue that it has less self awareness than a hamster. And that should freak us all out.
mad_djinn@lemmy.world 4 hours ago
TORFdot0@lemmy.world 1 day ago
LLMs can’t be self-aware because they can’t be self-reflective. They can’t stop a lie once they’ve started one. They can’t say “I don’t know” unless that’s the most likely response their training data would have for a specific prompt. That’s why they crash out if you ask about a seahorse emoji: there is no reason or mind behind the generated text, despite how convincing it can be.
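The “most likely response” behavior is easy to sketch. Here’s a deliberately toy bigram model (a hypothetical stand-in, nothing like a real LLM in scale) that always emits whatever continuation its training text contained most often, with no notion of whether the claim is true:

```python
# Toy illustration: a bigram "language model" that always emits the most
# frequent next word seen in its training data. The corpus and behavior
# here are invented for demonstration.
from collections import Counter, defaultdict

corpus = (
    "the seahorse emoji exists . "
    "the seahorse emoji exists . "
    "the seahorse emoji is missing ."
).split()

# Count next-word frequencies for each word in the training text.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def greedy_continue(word, steps=3):
    """Pick the single most likely next word at every step."""
    out = [word]
    for _ in range(steps):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(greedy_continue("seahorse"))
# -> "seahorse emoji exists ."
# The majority of the training text says "exists", so the model asserts it,
# regardless of what's actually true.
```

It can’t “know” it’s wrong, because there’s nothing in the loop but frequency counts.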
anomnom@sh.itjust.works 1 day ago
Yeah, ask it about anything you know is false but plausible, and watch it lie.
andrewrgross@slrpnk.net 19 hours ago
A hamster can’t generate a seahorse emoji either.
I’m not stupid. I know how they work. I’m an animist, though. I realize everyone here thinks I’m a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn’t.
LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.
I would prefer that we approach this technology with more humility. Not to protect the “humanity” of a bunch of math, but to protect ours.
Does that make sense?
mad_djinn@lemmy.world 4 hours ago
humility is a religious ideal, and it fits perfectly with the cult-like atmosphere people are generating around a rather mundane series of word-prediction machines. ‘have some humility’ you post fervently, comparing data centers to living forests
perhaps you are no different than a stone
gandalf_der_12te@discuss.tchncs.de 13 hours ago
> Not to protect the “humanity” of a bunch of math, but to protect ours.
wise words
uienia@lemmy.world 1 day ago
If you read even the tiniest bit of factual information about how LLMs are constructed, you would know they don’t have the slightest bit of self-awareness, and that it is literally impossible for them to ever have any.
You are being fooled by the only thing they are capable of: regurgitating already written words in a somewhat convincing manner.
andrewrgross@slrpnk.net 18 hours ago
How are you defining self awareness here?
I understand how they work, btw.
CileTheSane@lemmy.ca 11 hours ago
> it’s at least aware of its own state in the same way that your car is aware of its mileage and engine condition.
I agree: not aware at all.
WorldsDumbestMan@lemmy.today 12 hours ago
I don’t like this fake awareness.
Let’s connect it to a rat brain!
athatet@lemmy.zip 3 hours ago
> the same way your car is aware of its mileage and engine condition
So, not at all.