communist
@communist@lemmy.frozeninferno.xyz
I’m an anarchocommunist, all states are evil.
Your local herpetology guy.
Feel free to AMA about picking a pet/reptiles in general, I have a lot of recommendations for that!
- Comment on Taekwondo player alleges gang-rape by priest, others inside Kanpur ashram. A national-level Taekwondo player has alleged that she was gang-raped inside an ashram located right next to a police station 4 hours ago:
this sounds similar in concept to one gorilla vs 100 men.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
I have, I simply disagree with your conclusions.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
No, the machine will, and so would a conscious one. You misunderstand: this isn’t an area where a conscious machine wins.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.
These are not tasks, except maybe philosophizing and discovery, which even current models can do… discovery has already been done by current models:
deepmind.google/…/alphaevolve-a-gemini-powered-co…
I said a task, not a feeling. A task is a manipulation of the world to achieve a goal, not something vague and undefinable like love.
We want a machine that can tell us what to do, instead.
There’s no such thing; there’s no objective right answer to this in the first place. No conscious being we know of can do this, so why would a conscious machine be able to? This is just you asking the impossible; consciousness would not help even the tiniest bit with this problem. You have to say “what to do to achieve x” for it to have meaning, and these machines could do that without solving the hard problem of consciousness at all.
Yet again you fail to name one valuable aspect of solving consciousness.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.
These cannot be grasped through subjective experience, and I would say nothing can possibly achieve this, not any human at all. The best we can do is poll humanity and go by approximations, which I believe is best handled by something automatic. Humans can’t answer these questions in the first place, so why should I expect something without subjective experience to do any worse?
When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics?
Because this is unpopular; there are many things online saying not to… Do you think humans are immune to this? When has consciousness ever prevented such an outcome?
How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?
We don’t, but we don’t with conscious beings either, so there’s still no stated advantage to consciousness.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
You don’t understand the claims you’re making if you can’t explain them. Try again, this time actually explaining yourself rather than just going “some guy said I’m right.” You keep doing that without engaging with the discussion, and you keep assuming the guy verified your claim when they actually verified an irrelevant one.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
That’s just because there is no consistent set of axioms for human intuition. Obviously the best you can do is approximate, and I see no reason you can’t approximate this. Feel free to give me proof to the contrary, but all you’ve done so far is appeal to authority without explaining your arguments.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
Jobs are not arbitrary; they’re tasks humans want another human to accomplish, and an AGI could accomplish all of those that a human can.
For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel’s first incompleteness theorem
Why do you assume we have to? Even a shitty current AI can do a decent job at this if you fact-check it, better than a lot of modern politicians. Feed it the entire internet and let it figure out what humans value; why would we do this manually?
In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.
Humans are conscious and have gotten no closer to doing this, ever. I see no reason to believe consciousness will help at all with this matter.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
A job is a task one human wants another to accomplish, it is not arbitrary at all.
philosophy, politics, science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.
I don’t see why they do; a philosophical zombie could do them, so why not an unconscious AI? AlphaEvolve is already producing new science, and I see no reason an unconscious being with the ability to manipulate the world and verify results couldn’t do these things.
Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”
Yes, but you can give it large, vague goals like “empower humanity, do what we say, and minimize harm,” and it will still do them. So what does it matter?
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 6 days ago:
The existence of black holes has a functional purpose in physics; the existence of consciousness only has one for our subjective experience, not for our capabilities.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
You’ve arbitrarily defined an agi by its consciousness instead of its capabilities.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
Most people can’t identify a correct mathematical equation from an incorrect one
This is irrelevant; we’re talking about something where nobody can tell the difference.
What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.
It means a job. That’s obviously not a job and obviously not what is meant: an interesting strategy from someone who just appealed to “what most people mean when they say.”
That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.
It just has to be at least as good as a human at manipulating the world to achieve its goals; I don’t know of any other definition of AGI that factors in actually meaningful tasks.
An AGI should be able to do almost any task a human can do at a computer. It doesn’t have to be conscious, and I have no idea why or where consciousness factors into the equation.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
Being able to decide its own goals is a completely unimportant aspect of the problem.
why do you care?
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
If there’s no way to tell the illusion from reality, tell me why it matters functionally at all.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
If it quacks like a duck it changes the entire global economy and can potentially destroy humanity. All while you go “ah but it’s not really reasoning.”
what difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does will you then say “but who would want yet another machine that just does what we say?”
Your point reads like complete nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?
- Comment on I am having a weird experience on the Fediverse. 1 week ago:
Hilarious Chaos is widely regarded as a Nazi server, so it has a lot of defederations; they post a lot of anti-trans content and it’s not against the rules there. You may want to try another instance.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
A philosophical zombie still gets its work done, I fundamentally disagree.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
Consciousness is entirely overrated; it doesn’t mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 1 week ago:
Tbh this would happen with high-compute AGI too, as it could then create low-compute tendrils.
- Comment on Valve CEO Gabe Newell’s Neuralink competitor is expecting its first brain chip this year 1 week ago:
No, it won’t hold up for 50 years, but if you don’t want one don’t get it?
- Comment on Valve CEO Gabe Newell’s Neuralink competitor is expecting its first brain chip this year 1 week ago:
That’s where regulators step in. Do you honestly believe Elon Musk would not be implanting healthy people with Neuralinks if regulators allowed it? They won’t. This is tech for people whose lives are so awful that not having one is worse than the things that may go wrong, and it will stay that way for a very, very long time.
- Comment on Valve CEO Gabe Newell’s Neuralink competitor is expecting its first brain chip this year 1 week ago:
Why does it have to? All current BCIs are designed for the disabled; why would this one be an exception?
- Comment on Valve CEO Gabe Newell’s Neuralink competitor is expecting its first brain chip this year 1 week ago:
this isn’t for you, you’re not a paraplegic, are you?
- Comment on Valve CEO Gabe Newell’s Neuralink competitor is expecting its first brain chip this year 1 week ago:
You live long enough to help paraplegics game?
- Comment on SteamOS finally released by Valve 1 week ago:
I specialize in getting new users onto Linux, and documentation has not been an issue at all with Bazzite. I just let them know that rpm-ostree replaces the normal Fedora package manager, that they should use it as little as possible and only when necessary, and that they should search for “atomic Fedora” for guides.
So far I have run into zero documentation issues that weren’t just general Linux ones. I think your claim may have been true a while ago, but it no longer is.
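For anyone curious what that advice looks like in practice, here is a minimal sketch of the rpm-ostree workflow on an atomic Fedora image like Bazzite (the package name is a placeholder, not a recommendation):

```shell
# On Bazzite (atomic Fedora), rpm-ostree replaces dnf.
# Layer a package onto the base image; use sparingly, since layered
# packages slow down and complicate updates. 'some-package' is a placeholder.
rpm-ostree install some-package

# Show the current deployments and any layered packages.
rpm-ostree status

# If an update misbehaves, switch back to the previous deployment.
rpm-ostree rollback
```

For everyday apps, the atomic Fedora guidance is to prefer Flatpak over layering, which is exactly why “use rpm-ostree as little as possible” works fine for beginners.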
- Comment on SteamOS finally released by Valve 1 week ago:
Tbh I copy and pasted a lot from a previous post I made about mint
- Comment on SteamOS finally released by Valve 1 week ago:
Tbh I don’t agree at all that Kubuntu is easier for beginners; that may have been the case five or so years ago, but Bazzite and Aurora are the best now. Also, there’s literally no reason to use Fedora over Bazzite or Aurora, since they’re literally the same thing except with some added packages and important fixes (especially the ffmpeg fix that makes Twitch work).
- Comment on [deleted] 4 weeks ago:
5
- Comment on [deleted] 5 weeks ago:
I do question your moral choice of putting flavor above killing
To be clear, I do not. What I’m doing is morally wrong, in fact, it’s morally terrible, but I do it anyway.
- Comment on [deleted] 5 weeks ago:
Did you choose to eat meat?
Yes.
What’s your logic?
There is none, I wholly accept that it is entirely illogical and unethical. I am addicted to the flavor.
Which animals?
Any so long as it is delicious.
Would you eat dog?
Yes. It’s no different than pig in my eyes.