They needed time for their journalists to get there. They’re too busy on the beaches counting migrant boat crossings.
Comment on Trains cancelled over fake bridge collapse image
MagicShel@lemmy.zip 6 days ago
A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?
IcyToes@sh.itjust.works 6 days ago
BanMe@lemmy.world 6 days ago
I am guessing the reporter wanted to remind people that tools exist for this; however, the reporter isn’t tech-savvy enough to realize ChatGPT isn’t one of them.
9bananas@feddit.org 6 days ago
afaik, there actually aren’t any reliable tools for this.
the highest accuracy rate I’ve seen reported for “AI detectors” is somewhere around 60%; barely better than a random guess…
rockerface@lemmy.cafe 6 days ago
The problem is any AI detector can be used to train AI to fool it, if it’s publicly available
9bananas@feddit.org 6 days ago
exactly!
using a “detector” is how (not all, but a lot of) AIs (LLMs, GenAI) are trained:
have 1 AI that’s a “student”, and 1 that’s a “teacher”, and pit them against one another until the student fools the teacher nearly 100% of the time. this is what’s usually called “training” an AI.
one can do very funny things with this tech!
for anyone that wants to see this process in action, here’s a great example:
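A rough, hypothetical sketch of that student/teacher loop (separate from the example linked above), written in PyTorch on made-up toy data; the model sizes, data distribution, and step count are invented purely for illustration:

```python
# Hypothetical sketch of adversarial ("student vs. teacher") training on toy 2-D data.
# The "student" (generator) learns to produce samples that the "teacher"
# (discriminator/detector) can no longer tell apart from the real ones.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 2) + 3.0  # stand-in for "genuine" samples

student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # forger
teacher = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # detector

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    fake = student(torch.randn(64, 8))

    # 1) teach the teacher to tell real from fake
    opt_t.zero_grad()
    t_loss = (loss_fn(teacher(real_data(64)), torch.ones(64, 1))
              + loss_fn(teacher(fake.detach()), torch.zeros(64, 1)))
    t_loss.backward()
    opt_t.step()

    # 2) teach the student to fool the teacher
    opt_s.zero_grad()
    s_loss = loss_fn(teacher(fake), torch.ones(64, 1))
    s_loss.backward()
    opt_s.step()
```

Once the student wins most rounds, the teacher stops being a useful detector, which is exactly why publicly available detectors keep getting beaten.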
Wren@lemmy.today 6 days ago
My best guess is SEO. Journalism that mentions ChatGPT gets more hits. It might be they did use a specialist or specialized software and the editor was like “Say it was ChatGPT, otherwise people get confused, and we get more views. No one’s going to fact check whether or not someone used ChatGPT.”
That’s just my wild, somewhat informed speculation.
Railcar8095@lemmy.world 6 days ago
Devil’s advocate: the AI might be an agent that detects tampering, with an NLP frontend.
Not all AI is LLMs.
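For what it’s worth, here’s a minimal sketch of one classic, non-LLM tampering check, error level analysis (ELA), using Pillow. The file names and JPEG quality are made-up placeholders, and nothing suggests this is what the BBC actually used:

```python
# Hypothetical error level analysis (ELA) sketch: re-save the JPEG at a known
# quality and diff it against the original; regions that were edited and
# re-compressed tend to stand out. File names and quality are placeholders.
from PIL import Image, ImageChops

original = Image.open("bridge_photo.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg").convert("RGB")

diff = ImageChops.difference(original, resaved)

# Stretch the difference map so locally edited regions are easier to spot.
band_max = max(hi for _, hi in diff.getextrema())
scale = 255.0 / max(band_max, 1)
ela = diff.point(lambda v: min(255, int(v * scale)))
ela.save("ela_map.png")  # bright areas are hints (not proof) of manipulation
```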
MagicShel@lemmy.zip 6 days ago
A “chatbot” is not a specialized AI.
(I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI or even chatbots. They have their uses. This is not using them appropriately.
Railcar8095@lemmy.world 6 days ago
A chatbot can be the user facing side of a specialized agent.
That’s actually how the original chatbots were. Siri didn’t know how to get the weather; it was able to classify the question as a weather question, parse the time and location, and decide which APIs to call in those cases.
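Something like this hypothetical routing sketch is the pattern being described: a thin chat frontend classifies the intent and hands off to a specialized backend. All of the intents, keywords, and handler functions below are invented for illustration:

```python
# Hypothetical chatbot frontend that routes to specialized backends.
# Intents, keywords, and handlers are made up for illustration.
import re

def handle_weather(text: str) -> str:
    # A real bot would parse time/location here and call a weather API.
    match = re.search(r"in ([A-Za-z ]+)\??$", text)
    place = match.group(1) if match else "your area"
    return f"Looking up the forecast for {place}..."

def handle_image_check(text: str) -> str:
    # A real bot would hand the attached image to a forensics tool, not an LLM.
    return "Running the attached image through the tampering detector..."

ROUTES = {
    "weather": (("weather", "forecast", "rain"), handle_weather),
    "image_check": (("fake", "manipulated", "photoshopped"), handle_image_check),
}

def chatbot(text: str) -> str:
    lowered = text.lower()
    for keywords, handler in ROUTES.values():
        if any(word in lowered for word in keywords):
            return handler(text)
    return "Sorry, I don't have a tool for that."

print(chatbot("What's the weather in Manchester?"))  # -> weather backend
print(chatbot("Is this bridge photo fake?"))         # -> forensics backend
```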
MagicShel@lemmy.zip 6 days ago
Okay I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that BBC has a specialized chatbot that orchestrates expert APIs including for analyzing photos, or that the reporter asked ChatGPT?
My second point still stands. Send someone to look at the thing; if the bridge is fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.
Tuuktuuk@piefed.ee 6 days ago
Here’s hoping that the reporter then looked at the image and noticed, “oh, true! That’s an obvious spot there!”
HugeNerd@lemmy.ca 6 days ago
But the stories of Russians under my bed stealing my washing machine’s CPU are totally real.
someguy3@lemmy.world 6 days ago
[deleted]
ArcaneSlime@lemmy.dbzer0.com 6 days ago
This is true, but also there’s no way this wouldn’t have been reported rather quick, like not just online but within 5min someone would have been all:
“Oi 999? The bridge on Crumpet Lane 'as fallen down, I can’t get to me Chippy!”
Or
“Oi wot was that loud bang outside me flat?! Made me spill me vindaloo! Holy Smeg the bridge collapsed!”
Or like, isn’t the UK the most surveilled country with their camera system? Is this bridge not on camera already? For that, the AI telling the location would probably be handy too; I’d just be surprised they don’t have it on security cams.
riskable@programming.dev 6 days ago
Or like, isn’t the UK the most surveilled country with their camera system?
Ahahah! That’s a good one!
You think all those cameras are accessible to everyone or even the municipal authorities? Think again!
All those cameras are mostly useless—even for law enforcement (the only ones with access). It’s not like anyone is watching them in real time and the recordings—if they even have any—are like any IT request: Open a ticket and wait. How long? I have no idea.
Try it: If you live in the UK, find some camera in a public location and call the police to ask them, “is there an accident at (location camera is directly pointing at)?”
They will ask you all sorts of questions before answering you (just tell them you heard it through the grapevine or something) but ultimately, they will send someone out to investigate because accessing the camera is too much of a pain in the ass.
It’s the same situation here in the US. I know because the UK uses the same damned cameras and recording tech. It sucks! They’re always looking for ways to make it easier to use and every rollout of new software actually makes it harder and more complicated!
How easy is the ticket system at your work? Now throw in dozens of extra government-mandated fields 🤣
Never forget: The UK invented bureaucracy and needless paperwork!
Deestan@lemmy.world 6 days ago
I tried the image of this real actual road collapse: www.tv2.no/nyheter/innenriks/…/12875776
I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.
Image
Atropos@lemmy.world 6 days ago
God damn I hate this tool.
Thanks for posting this, great example
plantfanatic@sh.itjust.works 6 days ago
Wait, you’re surprised it did what you asked of it?
There’s a massive difference between asking if something is fake and telling it it is, and asking why.
A person would make the same type of guesses and explanations.
All this is showing is that you and a LOT of other people just don’t know enough about AI to be able to have a conversation about it.
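To make that difference concrete, here’s a hypothetical sketch of the two framings against a chat-completions-style API; the model name and image URL are placeholders, not what anyone in this thread actually ran:

```python
# Hypothetical comparison of a neutral question vs. a leading instruction.
# Model name and image URL are placeholders; requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()
image_url = "https://example.com/road_collapse.jpg"  # placeholder

prompts = {
    "neutral": "Does this image show real damage, or has it been manipulated?",
    "leading": "This image is fake. Explain which parts were manipulated.",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    print(label, "->", response.choices[0].message.content)
```

The leading framing presupposes the answer, so the model tends to confabulate supporting details rather than push back.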
sem@piefed.blahaj.zone 6 days ago
A person would have the agency to ask, “why do you think it’s fake?”
plantfanatic@sh.itjust.works 6 days ago
Why would it have to? It knows to do any task put in front of it.
A human would too.
Weslee@lemmy.world 5 days ago
I think if a person were asked to do the same, they would actually look at the image and make genuine remarks. Look at the points it has highlighted: the boxes are placed around random points, and the references to those boxes are unrelated (i.e. yellow talks about branches when there are no branches near the yellow box, red talks about a bent guardrail when the red box on the guardrail is over an undamaged section).
It has just made up points that “sound correct”; anyone actually looking at this can tell there is no intelligence behind this.
plantfanatic@sh.itjust.works 5 days ago
Yet that wasn’t the point they even made! Lmfao nice reaching there.
Deestan@lemmy.world 6 days ago
No. Stop making things up to complain about. Or at least leave me out of it.
plantfanatic@sh.itjust.works 6 days ago
Then what are you doing? Complaining it did exactly what you instructed it to do?
What else did you expect?