Comment on AI content on Wikipedia - found via a simple ISBN checksum calculator (39C3)
EncryptKeeper@lemmy.world 3 weeks ago
He used AI to write the anti-AI tool.
Saapas@piefed.zip 3 weeks ago
But wasn’t the issue with the AI stuff that it was false info, whereas the tool sounds like it worked as intended?
d00ery@lemmy.world 3 weeks ago
Saapas@piefed.zip 3 weeks ago
It just seems like the tool he is using is working though…
SpraynardKruger@lemmy.world 3 weeks ago
Yes, but the specific kind of irony this situation fits doesn’t come from whether or not the tool worked for its intended purpose. The irony comes from relying on LLM-generated output (the ISBN checksum calculator) to determine the reliability of other LLM-generated content (the hallucinated ISBNs).
Irony is a word with a somewhat vague meaning, and people interpret it differently. If the tool had not worked as intended and had flagged a bunch of real ISBNs as AI-generated, the situation would (I think) be more ironic: they would still be using AI to try to police AI, but with the added layer of the outcome being the opposite of their intention.
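For context on why a plain checksum can flag a hallucinated ISBN at all: the last digit of an ISBN-13 is derived from the first twelve, so a made-up number usually fails the check. Here is a minimal sketch in Python (my own illustration, not the calculator from the talk; the function name is invented):

```python
# Minimal ISBN-13 checksum check (illustration only, not the tool from the talk).
# Digits are weighted 1, 3, 1, 3, ...; a genuine ISBN-13 sums to 0 mod 10,
# so a hallucinated number usually fails.

def isbn13_is_valid(isbn: str) -> bool:
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_is_valid("978-0-306-40615-7"))  # True: valid check digit
print(isbn13_is_valid("978-0-306-40615-3"))  # False: check digit is wrong
```

Note that this only catches ISBNs whose check digit is wrong; a fabricated citation that happens to carry a checksum-valid ISBN would slip past it.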
Chee_Koala@lemmy.world 3 weeks ago
But how does that diminish the irony? The story is still ironic as a whole, even though he achieved his goals.
Passerby6497@lemmy.world 3 weeks ago
Right, but it’s also the same (or a similar) tool that’s being used to damage the article with bad information. Like the LLM said, this is using the poison for the cure (also, it’s amusing that we’re using the poison to explain the situation as well).
Yes, he’s (arguably) using the tool the way it was designed, and it’s working as intended, but so are the users posting misinformation. The tool is designed to take a prompt and spit out a mathematically appropriate response using words and symbols we interpret as language, be it a spoken language or a programming language.
Any tool can be used properly, and any tool can also be used maliciously or incompetently.
Randomgal@lemmy.ca 3 weeks ago
Yes. Why are you fixated on this? LLMs are tools and they work, but you have to understand their abilities and limitations to use them effectively.
The guy who needed the anti-AI tool did. The Wikipedia editors didn’t.
bossjack@lemmy.world 3 weeks ago
I think the point is it would have been truly ironic if the AI itself was the authoritative fact checker instead of merely being a tool that built another tool.
If Claude were the fact-checking tool instead of the ISBN validator, that would be the real irony.
If, in some messed-up future, only an AI could catch a fellow AI, what’s stopping the AI collective from returning false negatives? Who watches the watchers?
jballs@sh.itjust.works 3 weeks ago
Heads up: he talks about this specifically at 26:30, for those who didn’t take the time to watch the video.
bytesonbike@discuss.online 2 weeks ago
Which is kinda my favorite thing to do as of late, and what I prefer AI be used for.
I’m not talking about building a suite of black-box tools, but tiny scripts to scrape, shape, and generate reports - things I used to pull in a dozen node libraries to do, then manually configure and patch up.
You know, busy work.
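As an aside, here is a sketch of the kind of throwaway script being described, just to show the scrape/shape/report pattern with nothing but the standard library; the URL and field names are hypothetical, my own illustration:

```python
# A throwaway scrape -> shape -> report script (illustration only; the URL
# and JSON fields are made up). Standard library only, no pulled-in deps.

import csv
import json
from urllib.request import urlopen

URL = "https://example.com/api/items"  # hypothetical endpoint

def fetch_items(url: str) -> list[dict]:
    # Scrape: grab the raw JSON payload.
    with urlopen(url) as resp:
        return json.load(resp)

def shape(items: list[dict]) -> list[dict]:
    # Shape: keep only the fields the report needs.
    return [{"name": i.get("name", ""), "count": i.get("count", 0)} for i in items]

def write_report(rows: list[dict], path: str = "report.csv") -> None:
    # Report: dump the shaped rows to a small CSV.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "count"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_report(shape(fetch_items(URL)))
```

Keeping it to the standard library is the point: nothing to configure or patch up later.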
lauha@lemmy.world 3 weeks ago
To be fair, humans are excellent at building anti-human tools
TheBat@lemmy.world 3 weeks ago