jsomae
@jsomae@lemmy.ml
- Comment on hubris go brrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr 1 day ago:
OpenAI’s new model was able to solve 5 out of 6 problems (a gold-medal score) on the 2025 International Math Olympiad. I am very surprised by this result, though I don’t see any evidence of foul play.
- Comment on Linux Reaches 5% Desktop Market Share In USA 4 days ago:
As pointed out on Hacker News, this is likely attributable to (a) a decrease in desktop usage by non-Linux users, and (b) the gaming industry embracing Linux.
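To make point (a) concrete with made-up numbers (purely illustrative, not the real figures): if the number of Linux desktops stays flat while the overall desktop pool shrinks, Linux’s share goes up without a single new user.

```python
# Made-up numbers, just to illustrate the denominator effect:
linux_desktops = 4_000_000
total_desktops_before = 100_000_000   # everyone still on desktop
total_desktops_after = 80_000_000     # some non-Linux users drift to phones/tablets

print(f"share before: {linux_desktops / total_desktops_before:.1%}")  # 4.0%
print(f"share after:  {linux_desktops / total_desktops_after:.1%}")   # 5.0%
```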
- Comment on UwU brat mathematician behavior 6 days ago:
My initial thought was that it’s surprising that the engineer is using i whereas the mathematician is using j. But I know some engineers who are hardcore in favour of i; no mathematicians who prefer j, though. So if such an engineer were dating a mathematician who, of all people, used j, I could see that being ♠.
- Comment on Vintage gaming advertising pictures: a gallery 1 week ago:
I didn’t mean to be critical. I thought it was very funny actually.
- Comment on It's just loss. 1 week ago:
I know. It’s just still more than I expected.
- Comment on Vintage gaming advertising pictures: a gallery 1 week ago:
Thanks for this one, a really valuable find:
- Comment on It's just loss. 1 week ago:
You’re right to question the boiling. I was thinking of death by suffocation in heated steam. Boiling is not the technically correct term.
You’d be trying to get that particular farm shut down, get laws passed to prevent that from happening. But you’re not doing that
Who is not doing that? Me specifically, or animal rights people in general? I don’t see why shutting down one particular farm would be very helpful; the scale of the problem is incredibly massive, so passing laws would be much more effective. I would like to see laws passed to stop these kinds of abuses. What would make you think I am not interested in that?
- Comment on It's just loss. 1 week ago:
I know. It’s still more elephants than I expected.
- Comment on It's just loss. 1 week ago:
60% of mammals are livestock, not 60% live in factory farms
99% of US farmed animals live in factory farms, according to this random website I just found. I don’t claim to be an expert, though, and the worldwide figure is probably lower than 99%, but I would bet you that the vast majority of livestock is factory-farmed.
Agreed though that not all livestock are factory farmed. I should have clarified.
A seal in the 4% living in the wild may be eaten alive by a killer whale or torn to shreds by a great white shark.
That’s bad, though probably not anywhere near as much agony as being boiled alive for several hours. Regardless of whether you feel morally obligated to reduce wild animal suffering, you should admit that (a) from a utilitarian perspective, it’s much easier to reduce factory farm suffering, and (b) from a deontological perspective, factory farming is (collectively) our fault, whereas the food chain isn’t.
- Comment on It's just loss. 1 week ago:
more elephants than I expected tbh
- Comment on It's just loss. 1 week ago:
Livestock have to live through horrible agony, like the worst kind of torture. This means that, by biomass (which some people correlate indirectly with moral worth), at least 60% of mammals on Earth undergo horrible torture. (Bentham’s Bulldog, “Factory Farming is Literally Torture.”)
Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It’s hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs.
Ozy Brennan: the subjective experience of an animal suffering 10/10 intense agony is likely the same as the subjective experience of a human suffering such agony.
- Comment on YSK that apart from not having a car, the single greatest thing you can do for the climate is simply eating less red meat 1 week ago:
Okay, so I must be in the minority, but I don’t feel any particular pathos for these billions of slaughtered animals. Seeing myriads of baby chicks ground into dust doesn’t really move me in the slightest. Just understanding that the farm industry causes intense, agonizing, slow deaths for billions if not trillions of creatures every year is enough for me to understand it’s morally imperative to not consume the vast majority of animal products. Bentham’s Bulldog has been quite moving for me.
Yesterday I decided to stop buying honey after reading his article about how honey plausibly causes orders of magnitude more suffering than anything else. I’m also vegetarian, and I have replaced most of the dairy in my diet with plant-based alternatives. I still haven’t eliminated cheese and eggs from my diet though. For cheese it’s because I don’t think there’s good evidence the cheese I buy causes any agony in particular, but eggs are the next step for me.
- Comment on YSK that apart from not having a car, the single greatest thing you can do for the climate is simply eating less red meat 1 week ago:
Given the amount of perpetual torture these very-likely-to-be-sentient creatures go through, it’s certainly worse than any genocide in history has ever been, even if you think animals are only capable of 5% of the suffering of humans.
- Comment on Ok, I'll pay you the 1995 price 1 week ago:
i hate phones
(i have no other comment.)
- Comment on YSK that apart from not having a car, the single greatest thing you can do for the climate is simply eating less red meat 1 week ago:
On top of that, factory farming is a Lovecraftian horror that floods the universe with terrible agony. And there’s very good reason to believe that the suffering of animals is as real and awful as yours or mine.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
yeah, this is why I’m #fuck-ai to be honest.
- Comment on Oatmeal 1 week ago:
heavenly hunk.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
The notion that AI is half-ready is a really apt observation, actually. It’s ready for select applications only, but it’s being advertised like it’s idiot-proof and ready for general use.
- Comment on Companies That Tried to Save Money With AI Are Now Spending a Fortune Hiring People to Fix Its Mistakes 1 week ago:
may well be a Gell-Mann amnesia simulator when used improperly.
- Comment on Companies That Tried to Save Money With AI Are Now Spending a Fortune Hiring People to Fix Its Mistakes 1 week ago:
In the situation outlined, it can be pretty effective.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
yeah.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
Hitler liked to paint; that doesn’t make painting wrong. The fact that big tech is pushing AI isn’t evidence against the utility of AI.
The fact that common parlance these days is to call machine learning “AI” doesn’t matter to me in the slightest. Do you have a definition of “intelligence”? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality, so why not just call it GAI at this point tbh. This is a question of semantics, so it really doesn’t bear on the deeper question. Whether you call it AI or not, LLMs work the same way either way.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
I’m impressed you can make strides with Rust with AI. I am in a similar boat, except I’ve found LLMs are terrible with Rust.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
The problem is the attempts are not i.i.d., so this doesn’t really work. It works a bit, which in my opinion is why chain-of-thought is effective (it gives the LLM a chance to posit a couple of answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.
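A quick simulation of why independence matters so much here (the numbers are assumptions, not measurements of any actual model): with a verifier filtering answers, independent attempts compound, while fully correlated attempts gain you almost nothing.

```python
import random

P_SUCCESS = 0.3     # assumed per-attempt success rate
ATTEMPTS = 10       # attempts per problem, each checked by a verifier
TRIALS = 100_000

def solved(correlated: bool) -> bool:
    if correlated:
        # Fully correlated: all attempts succeed or fail together.
        return random.random() < P_SUCCESS
    # Independent: each attempt is a fresh draw.
    return any(random.random() < P_SUCCESS for _ in range(ATTEMPTS))

for mode in (False, True):
    rate = sum(solved(mode) for _ in range(TRIALS)) / TRIALS
    print(f"correlated={mode}: solved {rate:.1%} of problems")
# Independent: about 97% (1 - 0.7**10); fully correlated: about 30%.
```

Real LLM samples sit somewhere in between, which is why resampling helps a bit but not as much as the naive math suggests.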
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
obviously
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 1 week ago:
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of the instances that actually come up are not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often this means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give the problem to an LLM and then verify its answer; verifying a proposed solution to an NP problem is easy.
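To illustrate that last point, here’s a minimal sketch of how cheap verification is for SAT (the clause encoding is just a common convention I’m assuming, not tied to any particular solver):

```python
# A clause is a list of nonzero ints: 3 means "x3 is true", -3 means "x3 is false".
# A CNF formula is satisfied if every clause has at least one literal made true.
# Checking runs in linear time; *finding* the assignment is the hard (NP) part.
def verify_sat(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

formula = [[1, -2], [2, 3]]          # (x1 or not x2) and (x2 or x3)
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False
```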
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 2 weeks ago:
semantics.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 2 weeks ago:
I think everyone in the universe is aware of how LLMs work by now; you don’t need to explain it to someone just because they think LLMs are more useful than you do.
IDK what you mean by glazing, but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 2 weeks ago:
Are you just trolling, or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?
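The back-of-the-envelope version (assuming, generously, that attempts are independent):

```python
# 30% per-attempt reliability plus a verifier that rejects wrong answers:
# the chance at least one of k independent attempts passes is 1 - 0.7**k.
p = 0.30
for k in (1, 3, 5, 10):
    print(f"{k:>2} attempts: {1 - (1 - p) ** k:.1%}")
# 1: 30.0%, 3: 65.7%, 5: 83.2%, 10: 97.2%
```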
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 2 weeks ago:
Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than to posit one, or a conventional program can verify the AI’s output.
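Something like this sketch, where `generate_candidate` and `verify` are hypothetical placeholders (an LLM call and a conventional checker), not any real API:

```python
# Untrusted generator + trusted verifier: only answers that pass the
# deterministic check are ever returned.
def solve_with_verification(problem, generate_candidate, verify, max_attempts=10):
    for _ in range(max_attempts):
        candidate = generate_candidate(problem)  # untrusted, maybe ~30% reliable
        if verify(problem, candidate):           # cheap, conventional check
            return candidate
    return None  # no verified answer; escalate to a human
```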