Submitted 3 weeks ago by fantawurstwasser@feddit.org to technology@lemmy.world
https://www.bbc.com/news/articles/cwygqqll9k2o
A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
WTF?
Doesn’t the fucking BBC have at least 1 or 2 experts for spotting fakes? RAN THROUGH AN AI CHATBOT?? SERIOUSLY??
They have vibe journalists now
don’t make me cry.
They do, they have like a daily article debunking shit.
People need to understand that with the proliferation of AI, the only way to build credibility is not by using it for trust but by going the exact opposite way: grab your shoes and go places. Make notes. Take pictures.
As AI permeates the digital space (a process that is unlikely to be reversed), everything that’s human will need to become, figuratively speaking, analogue again.
I haven’t read it, but it could be to demonstrate how easy it was to identify it as a fake, even without the resources of the BBC.
No, they only have transphobia experts.
Probably because it was between 0:00 and 2:00 at night. Still, as the author I wouldn’t have mentioned it.
An “expert” could be anyone who convinces someone else to pay them. The “expert” is probably the one that ran it through the chatbot.
It’s a shame to see the journalist trusting an AI chatbot to verify the trustworthiness of the image instead of asking a specialist. I feel like they should even have an in-house AI-detection specialist, since we’re moving towards having more generative AI material everywhere.
If the part of the image that reveals it was AI-made is obvious enough, why contact a specialist? Of course, reporters should absolutely be trained to spot such things with their bare eyes, without something telling them specifically where to look. But still, once the reporter can already see what’s ridiculously wrong in the image, it would be a waste of the specialist’s time to call them over to look at it.
Did they though? They mention that a journalist ran it through a chatbot. They also mention it was verified by a reporter on the ground.
It’s like criticising a weather report because the reporter looked outside to see if it was raining, when they also consulted the forecast simulations.
It’s not a shame. Have you tried this? Try it now! It only takes a minute.
Test a bunch of images against ChatGPT, Gemini, and Claude. Ask each whether the image was AI-generated. I think you’ll be surprised.
Gemini is the current king of that sort of image analysis but the others should do well too.
What do you think the experts use? LOL! They’re going to run an image through the same exact process that the chatbots would use plus some additional steps if they didn’t find anything obvious on the first pass.
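For what it’s worth, the automatable first pass that experts start with can be sketched. Everything below (the function name, the byte markers scanned for) is purely illustrative and not any real tool’s behaviour; real forensic software parses EXIF/JUMBF structures properly instead of grepping raw bytes:

```python
# Toy first-pass scan an analyst might automate before deeper forensics
# (error-level analysis, noise statistics, provenance lookups).
def quick_metadata_scan(image_bytes: bytes) -> dict:
    return {
        # EXIF blocks usually sit near the start of a JPEG (APP1 segment)
        "has_exif": b"Exif" in image_bytes[:4096],
        # C2PA/JUMBF content-credential markers, used for provenance
        "has_content_credentials": b"c2pa" in image_bytes or b"jumb" in image_bytes,
        # Editing software often leaves its name in metadata
        "editor_traces": any(tag in image_bytes
                             for tag in (b"Adobe", b"Photoshop", b"GIMP")),
    }
```

A hit on `editor_traces` proves nothing by itself (and missing EXIF is equally inconclusive), which is exactly why this is only a first pass before the "additional steps" a human analyst brings.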
It is time to start holding social media sites liable for posting AI deceptions. FB is absolutely rife with them.
Disagree. Without Section 230 (or equivalent laws of their respective jurisdictions) your Fediverse instance would be forced to moderate even harder in fear of legal action.
It’s a threat to free speech.
Also, it would be trivial for big tech to flood every fediverse instance with deceptive content and get us all shut down
Just make the law so it only affects platforms above a threshold of X million users, or some minimum percentage of the population. You could even have regulation tiers tied to the number of active users, so those over the billion mark, like Facebook, are regulated the strictest.
That’ll leave smaller networks, forums, and businesses alone while finally applying some badly needed regulation to the large corporations messing with things.
YouTube has been getting much worse lately as well. Lots of purported late-breaking Ukraine war news that’s nothing but badly-written lies. Same with reports of Trump legal defeats that haven’t actually happened. They are flooding the zone with shit, and poisoning search results with slop.
The entire media universe is being captured by Sociopathic Oligarchs, and they intend to extend the Conservative Propaganda Machine to cover everything. They will NOT be amenable to efforts toward monitoring truth in media, unless they can be the sole determiners of what is the truth.
For some reason middle eastern and pakistani sources are cranking out fake disaster videos.
I think just the people need to be held accountable. While I am no fan of Meta, it is not their responsibility to hold people legally accountable for what they choose to post. What we really need is zero-knowledge-proof tech to verify that a person is real without them having to share their personal information, but that breaks Meta’s (and others’) free business model, so here we are.
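The "zero knowledge proof" idea mentioned above is a real cryptographic primitive. Here is a minimal sketch, assuming Schnorr’s identification protocol over a toy group; the tiny parameters and helper names are purely illustrative, and real deployments use ~256-bit elliptic-curve groups:

```python
import secrets

# Toy parameters: p = 2q + 1 (safe prime), g generates the order-q subgroup.
# These tiny numbers are for illustration only and offer no security.
P, Q, G = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1      # private "identity" key
    return x, pow(G, x, P)                # (private x, public y = g^x)

def commit():
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(G, r, P)                # keep r secret, send t = g^r

def respond(r, x, c):
    return (r + c * x) % Q                # s = r + c*x mod q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c, which holds exactly when the prover knows x
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# One round: the verifier learns that the prover knows x, but nothing about x.
x, y = keygen()
r, t = commit()
c = secrets.randbelow(Q)                  # verifier's random challenge
s = respond(r, x, c)
assert verify(y, t, c, s)
```

Note this only proves ownership of a registered key; binding keys to real humans in the first place (the hard part) is a separate registration problem.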
Sites AND the people that post them. The age of consequence-less action needs to end.
Or more like, just the people that post them.
WTF? Why did nothing like this ever happen back in the Photoshop days? Are people just dumber now?
Because the Venn diagram of “people who would maliciously do something like this” and “people with good enough Photoshop skills to make it look realistic” was nearly two separate circles. AI has added a third circle, “people with access to AI image generators”, and it has a LOT of overlap with the first group simply because it is so large.
Really? I remember tons of nicely photoshoped pictures on Snopes. There was a lot of trolling by people with skills going on.
The thing is, you actually needed some skill to do it in Photoshop, but now every dumb fuck who knows how to read can do shit like this.
So? People with skill don’t troll? Clearly the dumb person here is the one who believed the fake. What does someone else’s skill have to do with it?
It doesn’t require skill anymore. AI has given children the ability to pretend they have a skill, and to use it to fool people for fun.
These are more realistic and far far easier to make.
It took skill to do this before. Hardly anyone with that level of skill and time would do this. Now the dumb idiots have access to that skillset because of AI doing all the work for them.
People who post this stuff without identifying it as fake should be held liable.
For anyone outside the UK, the bridge in the picture is carrying the West Coast Mainline (WCML).
The UK basically has two major routes between Edinburgh and Glasgow (where most people live in Scotland) and London, the East Coast Mainline and the West Coast Mainline. They also connect several major cities and regions.
The person who posted this basically claimed that a bridge on one of the UK’s busiest intercity rail routes had started to collapse, which is not something you say lightly. It’s like saying all of New York’s airports had shut down because of three coincidental sinkholes.
I’m surprised to see no one else mention that it only took them an hour and a half to get an inspection done, signed off on, and the lines reopened. That seems pretty impressive for something as important as a rail bridge.
I mean, that’s the time to get an inspector out of bed, on the road, to the site, and for them to go “yup, bridge’s still there” and call it in…
In reality though they’re responsible, so they’re going to do a proper assessment regardless.
For a “once in decades” event you would normally expect that people aren’t really on call to respond in a few minutes.
Wait until this shit starts an actual war.
It feels like a privilege escalation exploit: at a certain point the authority chain jumped from a random picture posted who knows where or when to a link in the chain that should be reliable enough to trust blindly on this subject.
I dunno, if someone just throws this up on social media and you’re the person in the position to say “hey, halt the trains”, don’t you do just that out of an abundance of caution?
Lives are worth more than the dysfunction caused by the delay in services.
The only thing this did was weaken the resolve of leadership for when a real disaster happens.
The next time information like this comes forward, be it real or fake, it will cause a delayed reaction, which will ultimately cost lives.
Isn’t doing this the equivalent of shouting fire in a theatre?
I mean, even if it isn’t true, better to be sure than to have a train derail and kill a bunch of people.
A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
This is terrifying. Does the BBC not have anyone on the team who understands why this does not, and never will, work?
AI creating jobs by requiring more human intervention for validation of previously reliable forms of information?
Okay cool, I’m here for it.
MagicShel@lemmy.zip 3 weeks ago
What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?
Deestan@lemmy.world 3 weeks ago
I tried the image of this real actual road collapse: www.tv2.no/nyheter/innenriks/…/12875776
I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.
Image
Atropos@lemmy.world 3 weeks ago
God damn I hate this tool.
Thanks for posting this, great example
plantfanatic@sh.itjust.works 3 weeks ago
Wait, you’re surprised it did what you asked of it?
There’s a massive difference between asking it whether something is fake, and telling it that it is fake and asking why.
A person would make the same type of guesses and explanations.
All this shows is that you, and a LOT of other people, just don’t know enough about AI to be able to have a conversation about it.
IcyToes@sh.itjust.works 3 weeks ago
They needed time for their journalists to get there. They’re too busy on the beaches counting migrant boat crossings.
BanMe@lemmy.world 3 weeks ago
I am guessing the reporter wanted to remind people tools exist for this, however the reporter isn’t tech savvy enough to realize ChatGPT isn’t one of them.
9bananas@feddit.org 3 weeks ago
afaik, there actually aren’t any reliable tools for this.
the highest accuracy rate I’ve seen reported for “AI detectors” is somewhere around 60%; barely better than a random guess…
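To put a ~60% accuracy figure in perspective, here’s a toy simulation; the 50/50 mix of real and AI images and the exact accuracy number are assumptions for illustration:

```python
import random

random.seed(0)

ACCURACY = 0.60   # claimed detector accuracy (illustrative)
N = 100_000       # simulated images, half AI-generated, half real

correct = 0
false_flags = 0   # real photos wrongly called AI
for _ in range(N):
    is_ai = random.random() < 0.5
    # The detector returns the right answer with probability ACCURACY
    verdict = is_ai if random.random() < ACCURACY else not is_ai
    correct += (verdict == is_ai)
    false_flags += (verdict and not is_ai)

print(f"overall accuracy: {correct / N:.1%}")
print(f"real photos accused of being AI: {false_flags / N:.1%} of all images")
```

With roughly 4 in 10 verdicts wrong, about one in five genuine photos gets accused of being fake, which is why such a detector is useless as evidence either way.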
Wren@lemmy.today 3 weeks ago
My best guess is SEO. Journalism that mentions ChatGPT gets more hits. It might be they did use a specialist or specialized software and the editor was like “Say it was ChatGPT, otherwise people get confused, and we get more views. No one’s going to fact check whether or not someone used ChatGPT.”
That’s just my wild, somewhat informed speculation.
Railcar8095@lemmy.world 3 weeks ago
Devil’s advocate: the AI might be an agent that detects tampering, with an NLP frontend.
Not all AI is LLMs.
MagicShel@lemmy.zip 3 weeks ago
A “chatbot” is not a specialized AI.
(I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI, or even chatbots. They have their uses. This is just not using them appropriately.
Tuuktuuk@piefed.ee 3 weeks ago
Here’s hoping that the reporter then looked at the image and noticed, “oh, true! That’s an obvious spot there!”
HugeNerd@lemmy.ca 3 weeks ago
But the stories of Russians under my bed stealing my washing machine’s CPU are totally real.
someguy3@lemmy.world 3 weeks ago
ArcaneSlime@lemmy.dbzer0.com 3 weeks ago
This is true, but also there’s no way this wouldn’t have been reported rather quick, like not just online but within 5min someone would have been all:
“Oi 999? The bridge on Crumpet Lane 'as fallen down, I can’t get to me Chippy!”
Or
“Oi wot was that loud bang outside me flat?! Made me spill me vindaloo! Holy Smeg the bridge collapsed!”
Also, isn’t the UK the most surveilled country, with their camera system? Is this bridge not already on camera? For that, the AI pinpointing the location would actually be handy. I’d just be surprised if they don’t have it on security cams.