Comment on AI content on Wikipedia - found via a simple ISBN checksum calculator (39C3)
Heads up, he talks about this specifically at 26:30 for those who didn’t take the time to watch the video.
EncryptKeeper@lemmy.world 21 hours ago
He used AI to write the anti-AI tool
jballs@sh.itjust.works 15 hours ago
Saapas@piefed.zip 21 hours ago
But wasn’t the issue with the AI stuff that it was false info, whereas the tool sounds like it worked as intended?
d00ery@lemmy.world 21 hours ago
Saapas@piefed.zip 20 hours ago
It just seems like the tool he is using is working though…
SpraynardKruger@lemmy.world 19 hours ago
Yes, but the irony here doesn’t come from whether or not the tool they used worked for its intended purpose. It comes from the fact that they are relying on the output of LLM-generated content (the ISBN checksum calculator) to determine the reliability of other LLM-generated content (hallucinated ISBN numbers).
Irony is a word that has a somewhat vague meaning and is often interpreted differently. If the tool they used did not work as intended and flagged a bunch of real ISBNs as being AI generated, the situation would (I think) be more ironic. They are still using AI to try and police AI, but with the additional layer of the outcome being the opposite of their intention.
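For anyone wondering what the checksum part actually checks: ISBN-13 weights the digits 1, 3, 1, 3, … and a real ISBN’s weighted sum is a multiple of 10, which is why made-up numbers usually fail. A rough sketch of that arithmetic in Python (just an illustration, not the actual code from the talk):

```python
# Minimal ISBN-13 check-digit validator (an illustration of the checksum idea,
# not the tool shown in the 39C3 talk).

def isbn13_is_valid(isbn: str) -> bool:
    """Return True if the string contains a 13-digit ISBN with a valid check digit."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # Digits are weighted 1, 3, 1, 3, ...; a genuine ISBN-13 sums to a multiple of 10.
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_is_valid("978-0-306-40615-7"))  # True: the check digit matches
print(isbn13_is_valid("978-0-306-40615-9"))  # False: wrong check digit, like a hallucinated ISBN
```

A hallucinated ISBN still has roughly a 1-in-10 chance of passing this by accident, so it’s a cheap first filter rather than proof either way.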
Chee_Koala@lemmy.world 19 hours ago
But how does that diminish the irony? The story is still ironic as a whole, even though he achieved his goals.
Passerby6497@lemmy.world 19 hours ago
Right, but it’s also the same/similar tool that’s being used to damage the article with bad information. Like the LLM said, this is using the poison for the cure (also, amusing that we’re using the poison to explain the situation as well).
Yes, he’s using the tool (arguably) as it was designed, and it’s working as intended, but so are the users posting misinformation. The tool is designed to take a prompt and spit out a mathematically appropriate response using words and symbols we interpret as language - be it a spoken or a programming language.
Any tool can be used properly, and it can also be misused, whether maliciously or through incompetence.
Randomgal@lemmy.ca 16 hours ago
Yes. Why are you fixated on this? LLMs are tools and they work, but you have to understand their abilities and limitations to use them effectively.
The guy who needed the anti-AI tool did. The Wikipedia editors didn’t.
bossjack@lemmy.world 18 hours ago
I think the point is it would have been truly ironic if the AI itself was the authoritative fact checker instead of merely being a tool that built another tool.
If Claude were the fact-checking tool instead of the ISBN validator, that would be the real irony.
If in a messed up future, only an AI could catch a fellow AI, what’s stopping the AI collective from returning false negatives? Who watches the watchers?
bytesonbike@discuss.online 14 hours ago
Which is kinda my favorite thing to do as of late, and what I prefer AI be used for.
I’m not talking about building a suite of black box tools, but tiny scripts to scrape, shape, and generate reports. The kind of things I used to pull in a dozen node libraries for, then manually configure and patch up.
You know, busy work.
lauha@lemmy.world 20 hours ago
To be fair, humans are excellent at building anti-human tools
TheBat@lemmy.world 19 hours ago