Canceling now only means you are not continuing to contribute to the war atrocities that the technology is going to be used for. If you had an account and used it, you have already contributed.
cygnus@lemmy.ca 7 hours ago
Thanks for doing this - it isn’t a proper leftist get-together without some assclown imposing impossible purity tests.
Upgrayedd1776@sh.itjust.works 7 hours ago
you are no true Irishman!
Gork@sopuli.xyz 7 hours ago
I have an IRA account, does that make me kinda Irish?
CIA_chatbot@lemmy.world 7 hours ago
Does it have $3 in it and is there a Guinness in yer hand? Then welcome to the Irish, me boy
frongt@lemmy.zip 2 hours ago
Well, no Irishman is a Scotsman, so that tracks
Unattributed@feddit.online 7 hours ago
Impossible purity test? That’s utter bull crap. There have been many warnings about the negative uses of AI for years now, for example: https://aiforgood.itu.int/event/addressing-the-dark-sides-of-ai/
To expect people to be able to understand that this use could be expanded to committing state sponsored atrocities is not a stretch.
cygnus@lemmy.ca 7 hours ago
And this is why anybody who made a mistake should be shunned forever, unless they invent a time machine to go back and undo their past misdeeds. They may as well just jump off a bridge and save us the trouble of setting up a firing squad.
Unattributed@feddit.online 57 minutes ago
Making a mistake is one thing. Ignoring the BIG FLASHING WARNING SIGNS is another. There have been massive warning signs around AI for several years. If you looked at the warning signs and proceeded anyway, you deserve what you get.
Rekorse@sh.itjust.works 6 hours ago
They never said they should be shunned; they didn't even list a social consequence. The fact remains that if you used OpenAI in the past, you already contributed.
HalfAFrisbee@lemmy.world 7 hours ago
You are fucking insane. By your logic any customer of a company that might one day build a weapon is complicit. That is asinine.
Unattributed@feddit.online 51 minutes ago
That’s not the argument at all. The argument is that there have been warning signs, big flashing warning signs, about the dangers of using AI for years now. Most technology, in general, doesn’t come with anywhere near as many warnings.
And it’s been a known fact that people using AI are also training the AI. That’s an active choice that people who signed up for accounts are making.
So yes, users of this technology are taking an active role in the training of the technology, that makes them complicit.
That is a far cry from data brokers going out and harvesting public records, or companies tracking your spending habits and feeding that into a database. If those companies then turned around and made a weapon, no, I wouldn’t point the finger at people whose information got scraped. OTOH - if you continued to use a platform that you know is using you to gather information (e.g., Facebook, Reddit, Twitter) and let them do it, then yeah…you have some level of complicity.
Rekorse@sh.itjust.works 6 hours ago
Yeah, we live and learn. We don’t expect perfection, we expect self-improvement. It’s important not to excuse bad decisions/behavior. Be more skeptical of new technology in the future and pay attention to who’s creating/selling it.
XLE@piefed.social 4 hours ago
With their last link, they’re complicit.
sheetzoos@lemmy.world 5 hours ago
This person uses the internet, which for *years* has had TONS of negative uses.
How do you think Epstein emailed his buddies? The internet.
You can’t trust people that use evil technologies like user Unattributed. Thanks for the incredibly sound and intelligent logical framework!
Unattributed@feddit.online 1 hour ago
Yes, there are applications that can be used for good or evil. But being super reductive and claiming the whole internet has tons of negative uses is ridiculous. The internet itself is a series of protocols running on communications hardware.
It is up to the users of the applications to judge whether the application is inherently positive or negative, or whether the use of the technology is being handled in a positive and/or ethical manner. And more so, it’s up to the user to judge whether the technology aligns with their personal values.
Social networks: Xitter, Farcebook, Instawhore, TikTok, Reddit… all of them have proven they are platforms of manipulation, so I walked away. In fact, most of them I walked away from before it was shown just how bad they were.
Cryptocurrencies: had the opportunity to be good, but grifters set in on them, so I never got involved.
NFTs: the next generation of CryptoGrifters, stayed away.
AI: has never been ready to be a public application / platform. That has been apparent for the last 3-5 years. If you didn’t read and pay attention to the signs and still signed up for an account despite all the warnings being out there, then yes, you have aided and abetted the use of the technology in ways that are going to have a severely negative impact on the world.
Here’s the thing: we have a long, long history with technology. We know that it can be used for both good and bad. However, we also should have evolved in our thinking over the past 6-7 decades in terms of how technologies are being applied.
Nuclear reactors: mostly good, with negative side effects. Judgment on this needed longer-term study to understand its implications. Nuclear bombs? Clearly evil.
Cassette recorders, VCRs, CD recorders: predominantly good, but open to bad uses (e.g., piracy). The balance: mostly good, minimal negative effects.
AI? Potentially good, but immediately threw up huge red flags in terms of negative uses (deep fakes, revenge porn, etc.). Even AI researchers have expressed concerns over the direction of the research.
The thing is, technology is something that we’ve lived with since the industrial revolution. Every single technological invention since that time has had a major impact on society. We can choose, on an individual basis, how that impact is shaped. If you choose to use a technology, then you are betting that its uses will align with your values. Don’t cry when it’s used in ways that don’t align with your values, or is used against you.
XLE@piefed.social 4 hours ago
Well, judge not lest you too be judged…
There is no such thing as “ethical” AI coming from Big Tech. Google, Microsoft, Anthropic, Amazon, all of them built their machines without consent, all their machines have been subsidized with our taxes and resources, and Anthropic is a pro-Trump pro-foreign-dictator company that crossed every single red line until the very last one.
Anthropic was pro mass surveillance of foreigners.
It was okay with helping Trump plan criminal invasions.
It just doesn’t want to be held responsible for pushing the “go” button, but we know their software was one suggestion away from doing it anyway.
Unattributed@feddit.online 59 minutes ago
That’s no judgment on me. I don’t use AI. I tried it one night 3-4 years ago, realized that it wasn’t ready for widespread adoption, and haven’t touched it since.