Iconoclast
@Iconoclast@feddit.uk
- Comment on [deleted] 17 hours ago:
seeing downvotes on what was simply a friendly introduction post is a bit disheartening.
If it’s any comfort, they didn’t even read your introduction. They saw “AI” in the title and that instantly made you the “other.”
A major chunk of users here are highly ideological and always on the hunt for the enemy. Saying anything that isn’t critical of AI is more than enough evidence for them to confirm their assumptions about you.
- Comment on US sinks Iran warship in India's strategic waters. Why this is significant 21 hours ago:
Video of the impact for anyone interested.
- Comment on Can I assemble a metal building by myself? 1 day ago:
Nah, not him. It was in my recommendations - not someone I’m subscribed to.
- Comment on Can I assemble a metal building by myself? 1 day ago:
Probably not. It’s not going to be a skill issue, but no matter how handy you are, you’re still only one person, not six. Not too long ago I watched a video of a guy setting up a single-car garage like this on his own, and he was struggling immensely.
- Comment on I'm struggling to think of any online services for which I'd be willing to verify my identity or age 1 day ago:
I have no issue with an online service knowing my age, as long as that’s all they know and will ever know about me.
- Comment on Society is starting to appropriately accommodate neurodivergence, yet stupid/idiot/crazy/lazy etc. stay in the vocabulary. 1 day ago:
I’m not one to call people names with the intention of hurting their feelings, and I don’t even believe in free will to begin with. But if I were to call someone an idiot, it wouldn’t come from the assumption that they actively choose to be one or that they could choose otherwise. No, they’re helplessly an idiot - and I’m just making a factual statement about the world.
- Comment on Twitter Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage 1 day ago:
Nobody’s paying them specifically for sharing disinformation. They’ve been paid for driving engagement as content creators. The whole point of the article is that the platform is stopping payments to these people precisely because they’re spreading disinformation.
Platforms letting creators in on ad revenue generated by engagement with their content isn’t exactly a new thing. But if you then switch to spreading lies for profit, of course they should get kicked out of the program.
- Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use 2 days ago:
They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with our two requested safeguards in place) - so that in the future these systems might become safe and trustworthy enough to use.
They’re not ideologically against autonomous weapons systems. They’re against ones that run on our current AI models.
- Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use 2 days ago:
That’s your interpretation - not a direct quote.
- Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use 2 days ago:
with our two requested safeguards in place.
Said safeguards being that their technology isn’t being used for mass surveillance or the development of autonomous drones. It’s explicitly mentioned in their statement - the one you’re desperately trying to massage and misquote to make it seem like they’re saying something they’re not - yet anyone can just go and read it themselves.
- Comment on Up votes and down votes tend to move a site toward being an echo chamber. 2 days ago:
Not sure what the answer is.
Keep the system but hide the scores from users.
- Comment on U.S. Supreme Court declines to hear dispute over copyrights for AI-generated material 3 days ago:
I don’t know how to write code myself, but intuitively it seems a little different in this case.
When it comes to photography, I can show the original unedited RAW file with full resolution and full metadata and everyone else just has a lower-resolution JPG. The same thing applies to most digital art.
- Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use 3 days ago:
If by “good” you mean one that more reliably answers your questions correctly, then no. That’s not really what these systems are good at. They’re fully capable of giving a solid, accurate answer, but you can never simply trust that answer to be correct. They’re great for chit-chat and bouncing around ideas if you’re into that, but they’re not oracles.
When it comes to translating languages, that’s one of the few things LLMs are actually somewhat decent at, and I don’t think there’s much difference between them in that regard.
- Comment on U.S. Supreme Court declines to hear dispute over copyrights for AI-generated material 3 days ago:
It’s generally not difficult at all for an artist to prove that they’re the original creator of a certain piece. My photography, for example, is available to anyone for free and in high resolution, but I’m the only one with the full-resolution pictures and RAW files.
- Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use 3 days ago:
Here’s the full quote, including the parts you accidentally left out.
Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.
- Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use 3 days ago:
My understanding is that it’s not military use broadly that they object to but the use of their systems for the development of fully autonomous drones.
- Comment on Iran war: US military aircraft crash in Kuwait 3 days ago:
You’re free to make up your own mind. No need to go along with whatever the group you identify with tells you you’re supposed to do or think.
- Comment on Why is the USA attacking Iran? 3 days ago:
What the US has often done in the past is provide air support for aligned local rebels on the ground. In this case, though, that rebel force doesn’t really exist, so I figure the reasoning is to show the people of Iran that if you want to take back your country, now’s the time - and we’ll help you. There’s at least some evidence that a big chunk of the Iranian population is fed up with the Islamist government, but whether this’ll lead to an uprising or regime change remains to be seen.
- Comment on Why is the USA attacking Iran? 3 days ago:
It’s a state that’s always been hostile toward the US and its interests - both directly and through funding groups that share its goals. Over the past few years, though, the war in Ukraine, Israel’s strikes on Iran, the US follow-up bombing of Iran’s nuclear sites, and the special Maduro operation have all shown that Russian air-defense systems aren’t much of a threat to Western fighter jets anymore. So they probably figured that if they’re going to do this, now’s the time - while Iran’s at its weakest - instead of waiting around. Countries like North Korea have dodged the same fate by holding Seoul hostage, but Iran doesn’t have that kind of leverage.
- Comment on YouTube Shorts and Instagram Reels are making you dumber, according to science 3 days ago:
I’m sure the title here is an accurate summary of the study’s conclusions, and I don’t even need to check, because as a person who doesn’t consume short-form media, this confirms something I already know to be true about myself.
- Comment on It's rude to show AI output to people | Alex Martsinovich 4 days ago:
You dismiss the whole person just because they acknowledge using an LLM? That seems a bit harsh - especially since they had the decency to mention the source, which is basically the same as saying “take this with a grain of salt.”
- Comment on It's rude to show AI output to people | Alex Martsinovich 4 days ago:
I only did it here to illustrate a point. Typically I only use it on longer posts. I’m not a native English speaker, and I often struggle to express my thoughts clearly, so I find it immensely useful to run my text through AI and see the corrections it makes.
- Comment on It's rude to show AI output to people | Alex Martsinovich 4 days ago:
Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, or just having it edit and polish what you already wrote.
A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.
spoiler
Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote. A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.
- Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks 5 days ago:
I never claimed it will.
- Comment on Myanmar junta deploys AI-powered Russian and Chinese tracking system to target 50,000 individuals for new wave of arrests 6 days ago:
Couldn’t we all just get along…?
- Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks 6 days ago:
You don’t seem very interested in sticking to the topic, do you? This conversation has been all over the place, complete with ad hominems, concern trolling, red herrings, strawmen, and gish galloping - as if you’re trying to break some kind of record.
It’s pretty clear you’ve built up a cartoon-villain version of me in your head and now you’re fighting that imagined version like it’s real. I made a pretty simple claim about AGI, you’ve piled an entire story on top of it, and now you’re demanding I defend views I don’t even hold.
I’ve been trying to have a good-faith conversation here, but if this is what you’re going to keep doing, then I’ll just move on.
- Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks 6 days ago:
That doesn’t have anything to do with my claim about the inevitability of AGI.
- Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks 6 days ago:
So do you think Dyson Spheres are inevitable too?
I’m less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.
First you were implying that today’s AI would bring about AGI
I’ve never made such a claim. I’ve been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It’s in no way obvious to me that LLMs are the path to AGI. They could be, but they don’t have to be. Either way, it doesn’t change my core argument.
people you hold so dear
C’mon now.
- Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks 6 days ago:
My argument is that we’ll keep incrementally improving our technology like we have throughout human history. Assuming that general intelligence is not substrate-dependent - meaning that what our brains are doing can be replicated in silicon - and that we don’t destroy ourselves before we get there, it’s just a matter of time before we create a system that’s as intelligent as we are: AGI.
I already said that the timescale doesn’t matter here. It could take a hundred years or two thousand - doesn’t matter. We’re still moving toward it. It doesn’t matter how slowly you move; as long as you keep moving, you’ll eventually reach your destination.
So the way I see it, if we never end up creating AGI at all, it’ll be either because we destroyed ourselves before we got there, or because there’s something borderline supernatural about the human brain that makes it impossible to replicate in silicon.
- Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks 6 days ago:
If you’re just gonna keep ignoring every single point I make and keep rambling about unrelated shit, then there’s nothing left to discuss here. If you actually had an argument, you would’ve made it by now.