yeahiknow3
@yeahiknow3@lemmings.world
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
Oh my god. So the machine won’t do terrible immoral things because they are unpopular on the internet. Well, ladies and gentlemen, I rest my case.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
I’m fairly certain my explanations are so succinct and simple they can be grasped by a teenager. I don’t have the talent to simplify them any further.
Take a class in theoretical computer science. In the meantime, I beg you to stfu.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
Omg. Why do you talk about shit you don’t understand with such utter confidence? Being a fucking moron has to be the chillest way to go through the world. I think I agree with your zombie AI about the culling. We gotta cull you, dude, sorry. Best harm-reduction strategy to empower humanity.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
What the hell does “empower humanity” mean? When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics? How do you know it won’t decide that people like you, by virtue of your stupidity, ought to be culled or sterilized?
Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
if I’m wrong list a task that a conscious being can do that an unconscious one is unable to accomplish.
These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.
There are, in fact, very few interesting things that a non-thinking entity can do. It can make toast. It can do calculations. It can design highways. It can cure cancer. It can probably fold clothes. None of this shit is particularly exciting. Just more machines doing what they’re told. We want a machine that can tell us what to do, instead. That’s AGI. No such machine can exist, at least according to our current understanding of mathematical logic, theoretical computer science, and human cognition.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
Feed it the entire internet and let it figure out what humans value
There are theorems in mathematical logic that tell us this is literally impossible. No mechanical process can generate a consistent set of axioms to summarize or even approximate human intuitions. Such a process would get things wrong and output contradictions.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
we’re talking about something where nobody can tell the difference, not where it’s difficult.
You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, it was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?
Seriously though, I’m out.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
Economics is descriptive, not prescriptive. The whole concept of “a job” is completely made up and arbitrary.
This has been fun, but I’m bored now.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 3 days ago:
The definition is the exact opposite of arbitrary.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 4 days ago:
Matter to whom?
We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate discussion).
Most people can’t tell a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter”?
It would be weird to enter a mathematics forum and ask, “Why do you care? Why does it matter?”
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 4 days ago:
The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort (which would have various properties, such as the capacity for independent thought). We don’t need to make an AGI. I personally see no reason to do so. The question was: can we? The answer is no.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 4 days ago:
A malfunctioning nuke can also destroy humanity. Destroying humanity is not a defining feature of AGI. The question is not whether we can create a machine that can destroy humanity. (Yes.) The question is whether we can create a machine that can think. (No.)
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 4 days ago:
This is such an odd response. Yes, we can create the illusion of thought by executing very complicated instructions. Who cares? That’s not what anyone is talking about. There’s a difference between a machine that does what it’s told and one that thinks for itself. The latter cannot be done at the moment, because we don’t know how. But sure, we can have cheap parlor tricks. Good enough to amuse the sub-100 IQ crowd at least.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 4 days ago:
That’s fine, but most people aren’t interested in an illusion or a magic trick. When they say AGI, they mean an actual thinking mind capable of rationality (such a mind would be sensitive and responsive to reasons).
Calculators, LLMs, and toasters can’t think or understand or undertake rational (let alone moral) deliberation by definition. They can only do what they’re told. We don’t need more machines that do what they’re told. We want machines that can think and understand for themselves. Like human minds, but more powerful. That would require subjective understanding, which cannot be programmed by definition. For more details, see Gödel’s incompleteness theorems. We can’t even axiomatize mathematics, let alone program human intuitions about the world at large. Even if it’s possible, we simply don’t know how.
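(For anyone who wants the actual result rather than my gloss of it, here is the standard textbook statement of Gödel’s first incompleteness theorem; nothing below is specific to AI.)
For any consistent, effectively axiomatized formal system $F$ that is strong enough to express elementary arithmetic, there is a sentence $G_F$ in the language of $F$ such that
$$F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F.$$
(Gödel’s original proof needs $\omega$-consistency for the second half; Rosser’s refinement gets both from plain consistency.)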
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 4 days ago:
Reasoning literally requires consciousness because it’s a fundamentally normative process. But hey, I get it. This is your first time encountering this fascinating topic and you’re a little confused. It’s okay.
- Comment on Material scientist wet dream 5 days ago:
It is hilarious to be wrong, yes.
- Comment on agi graph slop, wtf does goverment collapse have to do with ai? 5 days ago:
The only way to create AGI is by accident. I can’t stress enough that we haven’t the first clue how consciousness works. I don’t mean we are far off; I mean we are roughly at the starting point, with a variety of vague, abstract theories that have no connection to empirical reality. We can’t even agree on whether insects have emotions (they don’t, unless you think they do, in which case fight me), let alone explain subjective experience.
- Comment on We did the math on AI’s energy footprint. Here’s the story you haven’t heard. 2 weeks ago:
A better analogy for AI is the discovery of asbestos or the invention of single-use plastics. Terrible fucking idea.
- Comment on I'm so vegan I could eat a burger and still be a vegan 1 month ago:
How dare some people make an obviously correct moral decision that highlights my own inadequacies?
Seriously though, I’m not even slightly vegan. I’m just also not a total fucking moron.
- Comment on I'm so vegan I could eat a burger and still be a vegan 1 month ago:
We should do one for regular diets, where it goes from ignorantly contributing to animal suffering, to torturing them yourself, and then on to diabetes and colon cancer.
- Comment on Judge rejects Musk's attempt to block OpenAI's for-profit transition 2 months ago:
No way he’s taking a large enough dose for that to work. We’ll have to be proactive.
- Comment on Conservatives and selfishness, like 2 peas in a pod 3 months ago:
In the early 20th century, about 50 years before Norway discovered oil, it was already among the richest countries in Europe per capita (arguably the richest). This was because of its culture of land and resource sharing.
- Comment on Make your complaints heard about bad games, says Dragon Age veteran Mark Darrah, but "your $70 doesn't buy you cruelty" 3 months ago:
The art direction and the combat mechanics. But I can’t be sure.
- Comment on If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy 3 months ago:
Sure, that’s one practical aspect of money that lends itself to superficial quantitative analysis. But it’s not the whole picture. Money is fundamentally about the power to get people to do things for you. That’s what it represents. With money I can force people to give me things and do things for me, almost like magic.
Now, the origin of money is rooted in debt (and power). When a ruling body exercises a monopoly on violence over a region, it can offer promissory notes (IOUs) that others value, because they have faith that this ruling body can force its citizens to work by extracting taxes from them.
Check out “Debt: The First 5,000 Years,” or similar anthropological work on the origins of money.
- Comment on If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy 3 months ago:
Keeping consumers alive as a class is indirectly encouraged in capitalism.
All they want is money, which has nothing to do with consumers whatsoever. Corporations could extract money by devouring each other, or by taking over a nation state, or by hijacking a treasury department, or by issuing their own money a la crypto.
- Comment on Do you want the murderer of the UnitHealthcare CEO prosecuted? 5 months ago:
Well good. Then I can take you seriously. I’m willing to have my mind changed. Why don’t you think we should kill evil people? Do you want to reform them?
- Comment on Do you want the murderer of the UnitHealthcare CEO prosecuted? 5 months ago:
Killing is a fast and easy solution… being able to look beyond killing is one of the few privileges our intelligence gives us
Sure.
We are all animals (some more than others). And we have learned the hard way that to realize more of the transcendental values — to bring more courage, wisdom, and meaning into this world — we should preserve life whenever possible. But there’s nothing fundamentally sacred about life… We kill all the time. Literally non-stop. Billions of animals, just like us, sentient and desperate to live, butchered for your use and pleasure. So unless you’re a vegan, you do not get to deploy sentiments about “the sanctity of life” or the like. It’s silly.
If you want to learn about practical ethics and stop talking gibberish, I suggest Shafer-Landau’s excellent textbook “Living Ethics.”
- Comment on Do you want the murderer of the UnitHealthcare CEO prosecuted? 5 months ago:
Why are people so against killing? From an ethical perspective, it’s often quite justifiable. We’ve been trained like monkeys in a cage to respond adversely to death, but that reaction is based in a social contract that is only conditionally valid.
- Comment on Do you want the murderer of the UnitHealthcare CEO prosecuted? 5 months ago:
The reason we reject mob justice is not that it is always unjust, but that it is often unjust. In this case, however, the outcome was actually in line with any reasonable objective standard of justice, as far as I can tell. I’m willing to be persuaded otherwise, but I don’t see it.
- Comment on Do you want the murderer of the UnitHealthcare CEO prosecuted? 5 months ago:
Fun fact: “murder” means an illegal killing, not an immoral one. There are plenty of unethical but legal killings, and vice versa. So to clarify, murder isn’t always “bad” by definition.