Comment on AI content on Wikipedia - found via a simple ISBN checksum calculator (39C3)
Passerby6497@lemmy.world 21 hours ago
Right, but it’s also the same/similar tool that’s being used to damage the article with bad information. Like the LLM said, this is using the poison for the cure (also, amusing that we’re using the poison to explain the situation as well).
Yes, he’s (arguably) using the tool as it was designed and it’s working as intended, but so are the users posting misinformation. The tool is designed to take a prompt and spit out a mathematically appropriate response using words and symbols we interpret as language, be it a spoken or a programming language.
Any tool can be used properly, and any tool can be misused, whether through malice or incompetence.
Saapas@piefed.zip 21 hours ago
But in this case the tool actually works well for one thing and not so well for another. It doesn’t feel that ironic to use a hammer to remove nails someone has hammered into the wrong place, if some sort of analogy is required here. You’d use a hammer because it is good at that job.
Passerby6497@lemmy.world 21 hours ago
See, that’s where you’re wrong though. AI is about as competent at natural English as it is at writing code.
I use it for both at times, since it can be an easy way both to rubber-duck debug my code and to summarize large projects/updates in easily digestible ways when I don’t have the time to write a proper summary manually. But in either case, I have to go back and fix a good bit of what it provides.
AI is not great at either option, and sucks at both in different ways. Saying AI is a hammer is not super helpful, because hammers have a defined use. LLMs are a solution looking for a problem. The difference between the posters and the researcher is that the researcher has an advantage: he both knows what he’s doing and knows how to fix the turds he’s provided to make it work, whereas the users are just trusting the output.
I don’t know how to explain the irony any better in this scenario, but it’s there. If the users actually fact-checked their output, we wouldn’t be having this discussion. Same as if the researcher had chosen not to validate his output. The issue isn’t necessarily the use, but the usage. So this is akin to the posters using a hammer to put up a shelf, but never looking at the directions and just saying “yep, that looks right.”
Saapas@piefed.zip 21 hours ago
But it did create a working tool to identify AI contributions containing fake ISBNs, didn’t it? Are we assuming the tool from OP wasn’t working?
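(For context, the checksum rules the talk’s tool relies on are a published part of the ISBN standard; the tool’s actual implementation isn’t shown here, but a minimal sketch of the validation it would need looks like this:)

```python
def isbn13_checksum_ok(isbn: str) -> bool:
    """ISBN-13: digits weighted 1,3,1,3,... must sum to a multiple of 10."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

def isbn10_checksum_ok(isbn: str) -> bool:
    """ISBN-10: digits weighted 10..1 ('X' = 10, final position only) must sum to a multiple of 11."""
    chars = isbn.replace("-", "").replace(" ", "")
    if len(chars) != 10:
        return False
    total = 0
    for i, c in enumerate(chars):
        if c.isdigit():
            value = int(c)
        elif c in "Xx" and i == 9:
            value = 10
        else:
            return False
        total += value * (10 - i)
    return total % 11 == 0
```

An LLM that hallucinates a book reference will usually invent digits that fail this check, which is exactly why a checksum scan is a cheap filter for fabricated citations, e.g. `isbn13_checksum_ok("978-0-306-40615-7")` passes while a made-up `"978-0-306-40615-2"` fails.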
I_Clean_Here@lemmy.world 1 hour ago
So, how is your autism?
Passerby6497@lemmy.world 21 hours ago
Ok, but how do you know that other edits to Wikipedia aren’t AI-generated, just made by users who actually validated the output? And can you explain to me the difference between the users who validated the AI output before updating Wikipedia, and the researcher who validated his AI output before giving his talk?
The point you’re missing is that both sides are using the same crappy tool, but you’re only seeing an example of one side doing it wrong and the other doing it right, and using that to draw a conclusion that is unfalsifiable. You appear to be saying it’s better at code than at language because of the example in front of us, and naively extrapolating that to mean AI works better at one task than the other, when the difference is how each user handled the output, not the output itself.