Comment on It's 2025, the year we decided we need a widespread slur for robots
lka1988@lemmy.dbzer0.com 1 day ago
The point is that technology has no understanding of empathy. You cannot program empathy. Computers do tasks based on logic, and little else. Empathy is an illogical behavior.
communist@lemmy.frozeninferno.xyz 1 day ago
Empathy is not illogical; behaving empathetically builds trust and confers long-term benefits.
sugar_in_your_tea@sh.itjust.works 1 day ago
An AI will always behave logically; it just may not be consistent with your definition of “logical.” Its outputs will always be consistent with its inputs, because it’s a deterministic machine.
Any notion of empathy needs to be programmed in, whether explicitly or through training data, and the AI will violate that empathy whenever its internal logic determines it should.
Humans, on the other hand, behave comparatively erratically since inputs are more varied and inconsistent, and it’s not proven whether we can control for that (i.e. does free will exist?).
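A toy sketch of the “deterministic machine” point, assuming greedy/argmax-style decoding with no random sampling; `toy_model` is a made-up stand-in for illustration, not any real LLM:

```python
# Hypothetical toy only (not a real model): with fixed "weights" and no sampling,
# a model is a pure function -- identical input, identical output, every time.
import hashlib

def toy_model(prompt: str) -> str:
    # Stand-in for "weights + argmax decoding": a deterministic function of the input.
    digest = hashlib.sha256(prompt.encode()).hexdigest()
    canned_responses = ["I understand.", "That sounds hard.", "Tell me more."]
    return canned_responses[int(digest, 16) % len(canned_responses)]

# Same prompt in, same "empathetic" reply out.
assert toy_model("I'm having a rough day") == toy_model("I'm having a rough day")
```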
lka1988@lemmy.dbzer0.com 1 day ago
My dude.
I’m not arguing the deeper facets of empathy. I’m arguing that technology is entirely incapable of genuine empathy.
CileTheSane@lemmy.ca 1 day ago
I don’t care if it’s genuine or not. Computers can definitely mimic empathy and can be programmed to do so.
When you watch a movie you’re not watching people genuinely fight/struggle/fall in love, but it mimics it well enough.
lka1988@lemmy.dbzer0.com 1 day ago
Jesus fucking christ on a bike. You people are dense.
sp3ctr4l@lemmy.dbzer0.com 20 hours ago
Actually, a lot of non-LLM AI development (and even LLMs, in a sense) is based very fundamentally on concepts of negative and positive reinforcement.
In such situations… pain and pleasure are essentially the scoring rubrics for a generated strategy, and fairly often, in group scenarios… something resembling mutual trust and concern for others, i.e. ‘empathy,’ arises as a stable strategy, especially if agents can detect or are made aware of the pain or pleasure of other agents.
This really shouldn’t be surprising… our own human empathy is fundamentally a biological sort of ‘answer’ to the same sort of ‘question.’
It is actually quite possible to base an AI more fundamentally on a simulation of empathy than on a simulation of logic.
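A minimal sketch of that idea, assuming a simple shared-payoff setup; `shaped_reward` and `empathy_weight` are made-up names for illustration, not any particular RL framework:

```python
# Hypothetical toy example: "empathy" as reward shaping -- each agent's
# reinforcement signal mixes its own payoff with its partners' payoffs.
from typing import List

def shaped_reward(own_payoff: float, others_payoffs: List[float],
                  empathy_weight: float = 0.5) -> float:
    """Score a strategy by what it earns the agent plus what it does to others."""
    if not others_payoffs:
        return own_payoff
    return own_payoff + empathy_weight * sum(others_payoffs) / len(others_payoffs)

# Exploiting a partner now scores worse than cooperating with them:
print(shaped_reward(own_payoff=3.0, others_payoffs=[-2.0]))  # 2.0 -- defect, partner suffers
print(shaped_reward(own_payoff=2.0, others_payoffs=[2.0]))   # 3.0 -- cooperate, both benefit
```

Under a signal like this, strategies that account for other agents’ ‘pain’ get selected for, which is the stable-strategy point above.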
Unfortunately, the people in charge of throwing human money at LLM AI are all largely narcissistic sociopaths… so of course they chose to emulate themselves, not the basic human empathy that they lack.
Their wealth only exists and is maintained by constructing and refining elaborate systems that confuse, destroy, and misdirect the broad empathy of normal humans.
lka1988@lemmy.dbzer0.com 18 hours ago
At the end of the day, LLM/AI/ML/etc is still just a glorified computer program. It also happens to be absolutely terrible for the environment.
Insert “fraction of our power” meme here
communist@lemmy.frozeninferno.xyz 1 day ago
Well, that’s a bad argument. This is all a guess on your part that is impossible to prove: you don’t know how empathy or the human brain works, so you don’t know that it isn’t computable. If you can explain these things in detail, enjoy your Nobel prize. Until then, what you’re saying is baseless conjecture built on the pre-baked assumption that the human brain is special.
lka1988@lemmy.dbzer0.com 14 hours ago
You:
[image]