Just sharing my personal experience with this:
I used Gemini multiple times and it worked great. I have some weird symptoms that I described to Gemini, and it came up with a few possibilities, the most likely being “Superior Canal Dehiscence Syndrome”.
My doctor had never heard of it, and only after I showed them the articles Gemini linked as sources would they even consider allowing a CT scan.
Turns out Gemini was right.
XLE@piefed.social 15 hours ago
Citation needed?
Bias is such a massive problem with LLMs that even the AI engineers have no idea how to keep it out. So if you know something the multi-billion-dollar industry does not, please let us all know.
hector@lemmy.today 14 hours ago
Not only is the bias inherent in the system, it’s seemingly impossible to keep out. Ever since the genesis of chatbots, every single one has become bigoted almost immediately when let off the leash, and has had to be recalled.
And that’s before this administration leaned on the AI providers to make sure the AI isn’t “woke.” I would bet the makers of chatbots and machine learning systems were already hostile to any sort of leftism or do-gooderism, since those naturally threaten the outsized share of the economy and power the rich have built for themselves by owning stock in companies. I’m willing to bet they were already interfering to make the bias worse, precisely to avoid a bot arguing for socializing medicine and the like, which is the inescapable conclusion any reasoning being would come to if the conversation were honest.
So maybe that’s part of why these chatbots have been bigoted right from the start, but the other part is the bias baked into the data itself, and without constant intervention and tweaks from their handlers this so-called AI will become MechaHitler in no time at all, and then worse.
XLE@piefed.social 13 hours ago
Even if we narrowed the scope of training data exclusively to professionals, we would still have issues with, for example, racial bias. Doctors underprescribe pain medication to black people because of the prevalent myth that they are more tolerant of pain. If you feed that kind of data into an AI, it will absorb the doctors’ unconscious racism (see the sketch at the end of this comment).
And that’s a best-case scenario that’s technically impossible anyway. To get an AI to even produce readable text, we have to feed it a ton of data that cannot be screened by the people pumping it in. (AI “art” has a similar problem: when people say they trained an AI on only their own images, you can bet they just slapped a layer of extra data on top of something other people already created.) So yeah, we get extra biases regardless.
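Here’s a toy sketch of that absorption. Everything in it is made up (synthetic pain scores, invented prescribing thresholds, no real medical data); the point is just that any model fit to biased decisions reproduces them:

```python
import random
from collections import defaultdict

random.seed(0)

def doctor_decision(pain, group):
    # Invented historical labels: identical symptoms, but the recorded
    # decisions under-treat group B. This is the bias in the data.
    threshold = 5 if group == "A" else 7
    return int(pain >= threshold)

# "Training data": (pain level 1-10, group) -> prescribed or not
data = [(random.randint(1, 10), random.choice("AB")) for _ in range(10_000)]
labels = [doctor_decision(pain, group) for pain, group in data]

# The simplest possible "model": the empirical prescription rate per
# (pain, group) bucket. Any learner fit to these labels converges to
# the same thing in expectation.
counts = defaultdict(lambda: [0, 0])  # bucket -> [prescribed, seen]
for (pain, group), y in zip(data, labels):
    counts[(pain, group)][0] += y
    counts[(pain, group)][1] += 1

for group in "AB":
    prescribed, seen = counts[(6, group)]
    print(f"pain=6, group {group}: prescribes {prescribed / seen:.0%}")
# pain=6, group A: prescribes 100%
# pain=6, group B: prescribes 0%   <- same pain, different treatment
```

Nobody told the “model” to discriminate; it just learned the pattern sitting in the labels.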
hector@lemmy.today 11 hours ago
There is a lot of bias in healthcare against the poor as well; anyone with lousy insurance is treated way, way worse. Women in general are too: often disbelieved, their conditions chalked up to hysteria, which means real conditions get missed. People don’t realize just how hard diagnosis is, just how bad doctors are at it, and how our insurance-run model is not great at driving good outcomes.
pkjqpg1h@lemmy.zip 9 hours ago
It’s not just about bad web data or Reddit data; even old books have some unconscious bias.
And even if you could find every piece of “wrong” or “bad” data (which you can’t, because some things are just subjective) and remove it, you still couldn’t be sure about it.
rumba@lemmy.zip 6 hours ago
What is your fixation with trying to tell me I’m saying you can remove all bias?
XLE@piefed.social 13 hours ago
1/2: You still haven’t accounted for bias.
First and foremost: if you think you’ve solved the bias problem, please demonstrate it. This is your golden opportunity to shine where multi-billion-dollar tech companies have failed.
And no, “don’t use Reddit” isn’t sufficient.
3. You seem to be very selectively knowledgeable about AI, for example:
We know AI tricks people into thinking they’re more efficient when they’re actually less efficient.
Never mind AI psychosis.
4. We both know the medical field is for profit. It’s a wild leap to assume AI will magically not be, even if it fulfills all the other things you assumed up until this point.
rumba@lemmy.zip 13 hours ago
Apparently, reading comprehension isn’t your strong point. I’ll just block you now, no need to thank me.
thebazman@sh.itjust.works 12 hours ago
I don’t think it’s fair to say that “ai has shown to make doctors worse at their jobs” without further details. The source you provided says that after a few months of using the AI to detect polyps, the doctors performed worse when they couldn’t use the AI than they did originally.
It’s not something we should handwave away and say it’s not a potential problem, but it is a different problem. I bet people who use calculators perform worse when you take the calculators away; does that mean we should never use calculators? Or any tools, for that matter?
If I have a better chance of getting an accurate cancer screening because a doctor is using a machine-learning tool, I’m going to take that option. Note that these screening tools are completely different from the technology most people refer to when they say AI.
XLE@piefed.social 12 hours ago
Calculators are programmed to respond deterministically to math questions. You don’t have to feed them a library of math questions and answers for them to function. You don’t have to worry about wrong answers poisoning that data.
LLMs, by contrast, are simply word predictors, and as such you can poison them with bad data, whether the bias or errors are accidental or intentional. In other words, that study points to the first step in a vicious cycle that we don’t want to occur.
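A crude illustration of the “word predictor” point, using a toy bigram counter rather than a real LLM (the corpus and the poisoned sentence are invented; real models are vastly bigger, but they likewise reflect the frequencies in whatever text they were fed):

```python
from collections import Counter, defaultdict

corpus = (
    "patients report pain . patients report pain . "
    "patients exaggerate pain . "   # one biased/poisoned sentence
    "doctors treat pain ."
).split()

# The whole "model" is just counts of which word follows which.
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

# "Predict" what comes after 'patients': the training frequencies come
# straight back out, poisoned association included.
print(follows["patients"].most_common())
# [('report', 2), ('exaggerate', 1)]
```

With enough poisoned sentences, the bad association becomes the top prediction; there’s no separate “truth check” anywhere in the pipeline.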
thebazman@sh.itjust.works 11 hours ago
As I said in my comment, the technology they use for these cancer-screening tools isn’t an LLM; it’s a completely different technology, specifically trained on scans to find cancer.
I don’t think it would have the same feedback loop of bad training data, because you can easily verify the results. The AI tool sees cancer in a scan? Verify with the next test. It’s a pretty simple binary check that won’t be affected by poor doctor performance in reading the same scans (see the sketch after this comment).
I’m not a medical professional, so I could be off on that chain of events, but this technology isn’t an LLM. It suffers from the marketing hype right now, where everyone is calling everything AI, but it’s a different technology with different pros and cons, and different potential failure modes.
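A rough sketch of that verification idea (the numbers are made up, and it assumes you eventually learn the true outcome of each scan, e.g. from a biopsy or later diagnosis):

```python
records = [
    # (model_flagged_cancer, confirmed_cancer) -- invented outcomes
    (True, True), (True, False), (False, False),
    (True, True), (False, True), (False, False),
]

tp = sum(1 for flag, truth in records if flag and truth)
fp = sum(1 for flag, truth in records if flag and not truth)
fn = sum(1 for flag, truth in records if not flag and truth)

precision = tp / (tp + fp)  # when it flags cancer, how often is it right?
recall = tp / (tp + fn)     # of the real cancers, how many did it catch?
print(f"precision={precision:.2f} recall={recall:.2f}")
# Track these over time: if they drift, the tool is degrading, and you
# can see it independently of how doctors read the same scans.
```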
pkjqpg1h@lemmy.zip 10 hours ago
Calculators are precise: you’ll always get the same result, and you can trace and reproduce the whole process.
Chatbots are black boxes: you may get a different result for the same input, and you can’t trace or reproduce the process.
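A quick toy demo of that difference (the “chatbot” here is just a stand-in that samples from fixed probabilities, not a real model):

```python
import random

def calculator(a, b):
    return a + b  # fixed computation: same inputs, same output, every time

def chatbot(prompt):
    # Stand-in for sampling the next token from a probability distribution.
    options = {"benign": 0.6, "malignant": 0.3, "unclear": 0.1}
    return random.choices(list(options), weights=list(options.values()))[0]

print([calculator(2, 2) for _ in range(3)])      # [4, 4, 4]
print([chatbot("same scan") for _ in range(3)])  # varies run to run
```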