Comment on "Why are people downvoting the MediaBiasFactChecker bot?"
finley@lemm.ee 3 months ago

I'm not here to waste time trying to convince you of something about which you've clearly made up your mind. Others have shared plenty of facts and made great arguments, and all you do is keep shifting the goalposts.

Not to mention: it's not for me to prove your claims -- that's on you, and you haven't. All I have claimed is that I'm satisfied, and the only proof you need of that is my word on the matter.

So, once again, since you haven't proven anything other than that you disagree with it, I suggest you simply block it and move on with your life. You have no greater authority to decide what is or is not a "reliable source" than MBFC, but at least they show their work.
FuglyDuck@lemmy.world 2 months ago
I shift the goalposts, but am just repeating myself? Interesting.
In any case… as for my "claims", perhaps I've missed something. Again. From their own methodology page:
Okay. so that’s the highlevel sales pitch. emphasis mine.
Perhaps, just perhaps, I've missed where they spell out what those defined criteria are. Let's keep reading.
Objective indicators? What indicators? Where? For you or me to understand how they're arriving at their analysis, I need to know what "objective indicators" they're using, and they're not listed anywhere I can find. Perhaps I've missed it. I don't think I have. But perhaps I have.
Now, skipping down to the specific categories…
Alright, now we're getting to the stuff I'm asking for! Maybe. Uh. Shit. It's just "Biased Wording/Headlines", at that. They have no list of common loaded words. (For example, is "Deadly Wildfire" okay but "Deadly Attack" not? Both describe events in which people presumably died.) What you, I, or anyone else perceives as "loaded" is going to be entirely different. You want to rigorously define criteria for bias? You're going to have to at least provide examples, and not just on the individual ratings. Pro tip: the lack of strong or emotional language is also an indication of bias. For examples of that, watch reports surrounding any cops who killed a subject; you're almost certainly going to see the pro-cop news agencies shy away from language that evokes anger.
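To make that concrete, here's a toy sketch of the kind of check they'd have to publish for "Biased Wording/Headlines" to be reproducible. The word list is entirely made up by me (MBFC publishes nothing like it), which is exactly the problem: swap the list and the same headlines score differently.

```python
# Hypothetical loaded-language check. LOADED_PHRASES is invented for
# illustration; MBFC does not publish any such list.
LOADED_PHRASES = {"deadly", "slams", "shocking", "destroys"}

def loaded_score(headline: str) -> int:
    """Count how many hypothetical 'loaded' words appear in a headline."""
    return sum(
        word.strip('",.!?').lower() in LOADED_PHRASES
        for word in headline.split()
    )

print(loaded_score("Deadly Wildfire Sweeps Through County"))  # 1
print(loaded_score("Deadly Attack Leaves Three Dead"))        # 1 -- identical score
```

Under any mechanical check like this, "Deadly Wildfire" and "Deadly Attack" score identically; whether either one is actually "loaded" depends entirely on a list nobody gets to see.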
Then they get into their "comprehensive" analysis:
Yeah. Um. That's not "comprehensive". At all. MPR (Minnesota Public Radio) news, just from today, just the articles that get highlighted, has 28 articles. From today. And that's not even counting the massive number of MPR/NPR-affiliated podcasts and such being pumped out, sometimes three times a day.
Further, there’s no information on which articles are selected. Which can have a profound impact on whether or not they get a passing grade for factualness. If you’re only checking ten out of literal thousands of articles a year. or, even a hundred articles, out of thousands a year, how you select articles to review are going to have a profound impact. Is it random? is it by top rating? are they cherry picked? top headlines from random dates?
And let's draw attention to that last line: "This process can be time consuming or very simple, depending on the source". Meaning… it varies based on the source. Even if there's more to work with for a given source, the process should probably not be any more or less simple; the process should be the process. That's the purpose of a methodology.
Skipping the descriptions of their fact-check ratings… all I'm going to say here is that there's no objective standard for what "consistent" or "often" mean, or for any sort of acceptable miss-rate on being factual. I will submit that, for example, VOA news should probably be given a low factual score based on this statement:

> A "Low" rating indicates the source is often unreliable and should be fact-checked for fake news, conspiracy theories, and propaganda.
You know, considering VOA is literally a state media outlet whose entire purpose is to pump out propaganda; yet it's given a 'High' rating. But what do I know; they certainly weren't forbidden from broadcasting inside US borders because of their propagandist nature.
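For what it's worth, an objective standard here wouldn't even be hard to write down. Something like the sketch below would do, with the caveat that every cutoff in it is invented by me; MBFC defines none of them, which is the complaint.

```python
# Hypothetical rating rule with explicit, published cutoffs.
# Every threshold below is made up for illustration.
def factual_rating(failed_checks: int, total_checked: int) -> str:
    """Map a measured miss-rate to a rating bucket."""
    miss_rate = failed_checks / total_checked
    if miss_rate <= 0.02:
        return "Very High"
    if miss_rate <= 0.05:
        return "High"
    if miss_rate <= 0.15:
        return "Mixed"
    return "Low"

print(factual_rating(1, 100))   # Very High
print(factual_rating(12, 100))  # Mixed
```

With numbers like that on the page, "often unreliable" would actually mean something.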
Their criteria for which fact-check services they use are useful:
IFCN is good. The date restriction is good. Explaining how correct fact checks affect things… is good. I would like to see a comment about which fact checkers they always use, or always use when relevant (for example, reviewing a French news service using, I dunno, a Taiwanese fact checker seems kinda sketchy). Do they search all 115 current signatories and the other 54 that are in the renewal process? Do they search only those from the source's home country? When do they elect to expand beyond that? Do they only use one service at all?
I’d assume they use some sort of aggregator service to look for fact checks across all of them at once. Personally, my preferred choice would be an aggregation service combining all of them, and searching for articles tagged as fact checking the specific source, rather than for each of the articles being reviewed. Then organize those by some sort of pass/mostly-pass/fail/epically-fail sort of metric. but that’s just me.
TL;DR? My goalpost has always been that their methodology is opaque and not useful for determining whether their method reasonably eliminates their own bias. That has never changed. They don't describe what acceptable error rates for factualness are (never mind the severity of the error: reporting that a person wore a green shirt when they wore a blue shirt might be factually incorrect, but does it really matter if the story isn't about what shirt they wore?). They don't describe even in brief detail what 'loaded' or 'biased' headlines actually look like. They describe a literal propaganda service as being "Least Biased".
They cite NewsGuard as a competitor (I'm not sure about that, but they're in the same space; from what I see on their website, they're selling their service to different audiences, like brands looking to advertise on a specific site, etc.). Let's look at NewsGuard's methodology page. I'm not going to go into detail, but you see how it's broken down? How specific it is? Each criterion is explicitly listed, with the reasons a site passes or fails it, along with express explanations of what things mean as you're looking through it. Not 'we judge on bias… which means that we look for biased words…'. For example, one phrase you'll see is 'that a regular user would not likely see it on a daily basis'.
Check their scoring process. They have a researcher (described as a trained journalist) research the website and make a report, and then they write the article. That article is then put on hold for comment from the company in question… then it is reviewed by other people ("at least one senior editor and Co-CEO"…) to check for factual accuracy and what have you. Only then is it published. I assume that MBFC has something similar, but that's an assumption; nowhere do they describe their editorial process. For all we know, it really is just one guy in a cat suit working the one article, doing it his way, while the lady in the dog suit is doing it her way and the editorial staff are in a two-person horse suit searching for organic oats. I'd rather assume not, but again, that is an assumption on my part.
finley@lemm.ee 2 months ago
I’m not reading that.
Ya know, I've had some great interactions with you here in the past, and generally we're on the same page, but on this we disagree. And I doubt we're going to change each other's minds, so I'm not really going to waste any more time on this discussion with you.
And, I know this is me repeating myself, but I again suggest that you just block the bot and move on. It's not worth the energy you're putting into it over a disagreement.
Peace, buddy