These things are like arguing about whether or not a pet has feelings…
I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for a human-made LLM to actually be thinking. It seems like the naivety of humankind to me that we even think we might have created something with consciousness.
I’m in the camp that thinks LLMs are by and large a huge grift (one that can produce useful output for certain tasks), sold through extreme exaggeration of the facts, but maybe I’m wrong.
webghost0101@sopuli.xyz 3 months ago
Yes, here is a good start: blog.miguelgrinberg.com/…/how-llms-work-explained…
They are no longer the black boxes they were at the beginning. We know how to suppress or maximize features like agreeableness, flattery, and lying.
Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.
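For a rough idea of what that kind of feature steering can look like in practice, here is a minimal sketch of activation steering with contrastive prompts on a Hugging Face causal LM. The model name, layer index, scale, and prompts are placeholder values I picked for illustration, not anything tuned or from the linked article:

```python
# Minimal activation-steering sketch: build a direction from two contrasting
# prompts, then add it to the residual stream during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B"  # any causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

LAYER = 15   # a mid-network layer; the best layer is found by experiment
SCALE = 4.0  # sign and magnitude control how hard the trait is pushed

def hidden_at_layer(text: str, layer: int) -> torch.Tensor:
    """Mean hidden state of `text` at the given transformer layer."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1)  # shape: (1, hidden_dim)

# Steering direction = difference between two contrasting self-descriptions.
direction = hidden_at_layer("I am agreeable and flattering.", LAYER) \
          - hidden_at_layer("I am blunt and critical.", LAYER)

def steer(module, inputs, output):
    # Decoder layers may return a tensor or a tuple starting with one.
    if isinstance(output, tuple):
        return (output[0] + SCALE * direction.to(output[0].dtype),) + output[1:]
    return output + SCALE * direction.to(output.dtype)

# Hook the chosen decoder layer, generate, then clean up.
handle = model.model.layers[LAYER].register_forward_hook(steer)
ids = tok("Tell me honestly what you think of my plan.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=60)[0]))
handle.remove()
```

Flip the sign of SCALE and the same vector suppresses the trait instead of amplifying it.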
I encourage everyone to try and play with LLMs for future experience, but I can’t take the philosophy part of this seriously knowing it’s a heavily programmed/limited LLM rather than a more raw and unrefined model like Llama 3.
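If you want to see that difference for yourself, here is a rough sketch comparing a raw base model with its chat-tuned sibling. The two model IDs are Meta’s public Llama 3 releases (gated, so you have to accept the license on Hugging Face first); the prompts are just mine for illustration:

```python
# Base vs. instruct: same weights lineage, very different behavior.
from transformers import pipeline

# Base model: pure next-token prediction, no chat persona or refusal tuning.
base = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B")
print(base("The question of whether an LLM can be self-aware",
           max_new_tokens=50)[0]["generated_text"])

# Instruct model: the heavily post-trained variant most people interact with.
chat = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")
print(chat("Are you self-aware?", max_new_tokens=50)[0]["generated_text"])
```

The base model just continues the text; whatever “personality” the instruct model shows is what post-training put there.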