Hazzard
@Hazzard@lemmy.zip
Migrated over from Hazzard@lemm.ee
- Comment on NSFW on Lemmy 11 hours ago:
Exactly what I’ve done. Set my settings to show NSFW, blocked most of the “soft” communities like hot girls and moe anime girls and whatever else (blocking the Lemmy NSFW instance is a great place to start), and I use All frequently. That’s how I’ve found all the communities I’ve subscribed to, but frankly, my /all feed is small enough that I usually see all my subscribed communities anyway.
- Comment on Nvidia says no 'backdoors' in chips as China questions security 5 days ago:
Bold to assume this would even work. What on earth would “location tracking” even look like? Something that trusts the OS for a location? I imagine it could easily be tricked. An AirTag soldered to the board? Trivially removable.
Something like this sounds very ineffective, and it would be devastating to Nvidia’s brand in global markets like China; of course they’re against it. Frankly, it sounds like a stupid idea.
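To make the OS-trust problem concrete, here’s a purely hypothetical sketch; none of these names are a real Nvidia or OS API, just an illustration of why any check that asks the host software where it is can be fed whatever answer the owner wants:

```python
import os

# Hypothetical geofence check. get_os_location() stands in for whatever
# geolocation service the OS exposes (IP lookup, Wi-Fi positioning, GPS).
def get_os_location():
    # Every one of those inputs lives in software the machine's owner
    # controls, so the "location" is whatever the owner says it is.
    return os.environ.get("REPORTED_REGION", "US")

def geofence_check(allowed_regions=frozenset({"US", "EU"})):
    return get_os_location() in allowed_regions

# REPORTED_REGION=US python geofence.py  -> passes anywhere on Earth
print("chip enabled" if geofence_check() else "chip disabled")
```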
- Comment on Tech to protect images against AI scrapers can be beaten, researchers show 3 weeks ago:
Amen to that, here’s to hoping.
- Comment on Tech to protect images against AI scrapers can be beaten, researchers show 3 weeks ago:
Mhm, fair enough, I suppose this is a difference in priorities then. Personally, I’m not nearly as worried about small players, like hobbyists, who wouldn’t have already developed something like this in-house.
And I keep bringing up “security through obscurity” because, frankly, I’m somewhat optimistic this can work out like encryption did: tons of open-source research went into encryption and decryption until we worked out standards anyone can run at home that are unbreakable before the heat death of the universe with current server farms.
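That “heat death” line isn’t hyperbole, and a back-of-envelope check makes the point. The 10^18 guesses/second rate below is my own assumption, roughly an exascale machine doing nothing but key trials against a 256-bit key:

```python
# Brute-forcing a 256-bit keyspace at an assumed 1e18 guesses/second.
keys = 2**256                        # ~1.16e77 possible keys
rate = 10**18                        # guesses per second (assumption)
seconds_per_year = 3600 * 24 * 365
years = keys / (rate * seconds_per_year)
print(f"{years:.2e} years")          # ~3.67e+51 years; the Sun has ~5e9 left
```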
Many of the people releasing decryption methods would’ve been considered villains, because their work made breaking into previously private data easy and accessible, but that research was the only way to get where we are. I’m hopeful that one day we actually could make an unbeatable AI poison, so I’m happy to support research that pushes us towards that end.
I’m just not satisfied with preventing Bill down the street from training AI on art without permission while knowingly leaving Google and OpenAI an easy way to bypass it.
- Comment on Tech to protect images against AI scrapers can be beaten, researchers show 3 weeks ago:
Exactly, it is an arms race. But if a few students can beat our current best weapons, it’d be terribly naive to think the multi-billion-dollar companies, who are sinking their entire futures into this and are already amoral enough to be stealing content en masse from the entire internet, hadn’t already cracked this and locked everyone involved into serious NDAs.
Better to know what your enemy has than to just cross your fingers and hope that maybe they didn’t notice, and have just been letting us poison the precious AI models they’re sinking billions of dollars into.
- Comment on Tech to protect images against AI scrapers can be beaten, researchers show 3 weeks ago:
Eh, it’s a fair point. Not attempting attacks like this is essentially “security by obscurity”, which has been repeatedly proven to be a mistake.
Wouldn’t surprise me if OpenAI or someone else already had something like this behind closed doors, but now the developers of tools like Nightshade can start building AI poison that’s more resilient against these kinds of “cleanup” tools.
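For anyone curious what a crude “cleanup” pass even looks like: I don’t know what the researchers in the article actually did, but a classic baseline against adversarial pixel perturbations is simple smoothing plus lossy re-encoding. A minimal sketch with Pillow (filenames are placeholders):

```python
from PIL import Image, ImageFilter

# Naive perturbation "cleanup" baseline: adversarial noise like Nightshade's
# lives in high-frequency pixel detail, so a slight blur plus lossy JPEG
# re-encoding destroys much of it while leaving the image looking the same.
img = Image.open("poisoned.png").convert("RGB")   # placeholder filename
img = img.filter(ImageFilter.GaussianBlur(radius=1))
img.save("cleaned.jpg", quality=85)               # lossy re-encode
```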