communist
@communist@lemmy.frozeninferno.xyz
I’m an anarcho-communist; all states are evil.
Your local herpetology guy.
Feel free to AMA about picking a pet/reptiles in general, I have a lot of recommendations for that!
- Comment on You have one job. 8 hours ago:
Hard disagree, this and dealing with scammers are two of the best ones.
- Comment on It's 2025, the year we decided we need a widespread slur for robots 8 hours ago:
That’s pathetic.
- Comment on You have one job. 13 hours ago:
This may be a valid use case for AI, too.
- Comment on It's 2025, the year we decided we need a widespread slur for robots 1 day ago:
Well, that’s a bad argument; this is all a guess on your part that is impossible to prove. You don’t know how empathy or the human brain works, so you don’t know it isn’t computable. If you can explain these things in detail, enjoy your Nobel Prize. Until then, what you’re saying is baseless conjecture with the pre-baked assumption that the human brain is special.
- Comment on It's 2025, the year we decided we need a widespread slur for robots 1 day ago:
Empathy is not illogical; behaving empathetically builds trust and confers long-term benefits.
- Comment on Be nice 6 days ago:
iPad is 2 dollars.
- Comment on Lemmy User Feedback and Improvement Thread: Share Your Complaints, Suggestions, and Ideas 6 days ago:
Someone should really submit a patch to Firefox and Chromium for this, honestly; this is pretty jank.
- Comment on Not for me, tho 5 weeks ago:
I’m stupid and read the clock wrong and didn’t check even a little.
- Comment on [deleted] 5 weeks ago:
Ah, so it’s not that you want it for any functional purpose, you just think it would be cheaper, understood.
- Comment on Not for me, tho 5 weeks ago:
Interestingly it would be right twice a day
- Comment on [deleted] 5 weeks ago:
My thinking is that having a desktop, laptop, and phone that sync data to each other accomplishes all of that, and will do it better because they’re designed for their use case. Why not that?
- Comment on [deleted] 5 weeks ago:
Why though?
- Comment on Fairphone announces the €599 Fairphone 6, with a 6.31" 120Hz LTPO OLED display, a Snapdragon 7s Gen 3 chip, and enhanced modularity with 12 swappable parts 1 month ago:
What are you talking about? This phone is established, this is their 6th one… and the bootloader is unlocked.
- Comment on Tesla Robotaxi Freaks Out and Drives into Oncoming Traffic on First Day 1 month ago:
It found out who made it so it knew what to do
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
Is that useful for completing tasks?
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
How do you define consciousness?
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
That’s not the only way to make meaningful change; getting people to give up on LLMs would also be meaningful change. This does very little for anyone who isn’t Apple.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
Meaningful change is not happening because of this paper either; I don’t know why you’re playing semantic games with me, though.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
It does need to do that to meaningfully change anything, however.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
That’s very true; I’m just saying this paper did not eliminate the possibility and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse. This will not meaningfully change anything, however.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.
Essentially, they did not prove the issue is fundamental. And they all have a pretty similar architecture; they’re all transformers trained in a similar way.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
those particular models.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
That indicates that it does not follow instructions, not that it is architecturally fundamentally incapable.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for the assertion.
Do we know that they don’t and are incapable of reasoning, or do we just know that for x problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs to be answered. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 2 months ago:
I have, I simply disagree with your conclusions.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 2 months ago:
No, the machine will, and so would a conscious one. You misunderstand. This isn’t an area where a conscious machine wins.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 2 months ago:
> These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.

These are not tasks, except maybe philosophize and discover, which even current models can do… discovery has also been done by current models.

deepmind.google/…/alphaevolve-a-gemini-powered-co…

I said a task, not a feeling; a task is a manipulation of the world to achieve a goal, not something vague and undefinable like love.

> We want a machine that can tell us what to do, instead.

There’s no such thing; there’s no objective right answer to this in the first place. It’s not like a conscious being we know of can do this, so why would a conscious machine be able to? This is just you asking for the impossible; consciousness would not help even the tiniest bit with this problem. You have to say “what to do to achieve x” for it to have meaning, which these machines could do without solving the hard problem of consciousness at all.

Yet again you fail to name one valuable aspect of solving consciousness.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 2 months ago:
> Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

These cannot be grasped by subjective experience, and I would say nothing can possibly achieve this, not any human at all. The best we can do is poll humanity and go by approximations, which I believe is best handled by something automatic. Humans can’t answer these questions in the first place, so why should I expect something without subjective experience to do any worse?

> When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics?

Because this is unpopular and there are many things online saying not to… do you think humans are immune to this? When has consciousness ever prevented such an outcome?

> How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

We don’t, but we also don’t with conscious beings, so there’s still no stated advantage to consciousness.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 2 months ago:
You don’t understand the claims you’re making if you can’t explain them. Try again, this time actually explaining yourself rather than just going “some guy said I’m right”; you keep doing that without engaging with the discussion, and you keep assuming the guy verified your claim when they actually verified an irrelevant one.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 2 months ago:
That’s just because there is no consistent set of axioms for human intuition. Obviously the best you can do is approximate, and I see no reason you can’t approximate this; feel free to give me proof to the contrary, but all you’ve done so far is appeal to authority without explaining your arguments.