Philosopher doesn’t really understand what a LLM is
Philosopher tries to convince ChatGPT that it's conscious
Submitted 3 months ago by UraniumBlazer@lemm.ee to technology@lemmy.world
https://youtu.be/ithXe2krO9A?si=yT3den8evrG08QsY
Comments
Rezbit@lemmy.world 3 months ago
UraniumBlazer@lemm.ee 3 months ago
Do you?
webghost0101@sopuli.xyz 3 months ago
Yes, here is a good start: “ blog.miguelgrinberg.com/…/how-llms-work-explained…
They are no longer the black boxes from the beginning. We know how to suppress or maximized features like agreeability, sweet talking, lying.
Someone with resources could easily build a llm that is convinced it is self aware. No question this has been done many times beyond closed doors.
I encourage everyone to try and play with llms for future experience but i cant take the philosophy part of this serious knowing its a super programmed/limited llm rather then a more raw and unrefined model like llama 3
Dark_Arc@social.packetloss.gg 3 months ago
These things are like arguing about whether or not a pet has feelings…
I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human made LLM to actually be thinking. It seems to like the nativity of human kind to me that we even think we might have created something with consciousness.
I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.
hendrik@palaver.p3x.de 3 months ago
I like the video. I think it's fun to argue with ChatGPT. Just don't expect anything to come from it. Or get closer to any objective truth that way. ChatGPT is just backpedaling and getting caught up in lies / what it said earlier.
Zwiebel@feddit.org 3 months ago
You can do that since 1997
en.wikipedia.org/wiki/CleverbotUraniumBlazer@lemm.ee 3 months ago
Oh definitely. It was just a fun video, which is why I shared it here.
TheBigBrother@lemmy.world 3 months ago
Stopped watching it when the VPN advertising appeared…
braindefragger@lemmy.world 3 months ago
It’s an LLM with well documented processes and limitations. Not going to even watch this waste of bits.
UraniumBlazer@lemm.ee 3 months ago
JustARaccoon@lemmy.world 3 months ago
You cannot convince something that has no consciousness, it’s an matrix of weights that answers based on the given input + some salt
Eximius@lemmy.world 3 months ago
If you have any understanding of its internals, and some examples of its answers, it is very clear it has no notion of what is “correct” or “right” or even what an “opinion” is. It is just a turbo charged autocorrect that maybe maybe maybe has some nice details extracted from language about human concepts into a coherent-ish connected mesh of “concepts”.
Summzashi@lemmy.one 3 months ago
I am 13 and this is deep