Comment on F.D.A. to Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’
oh_@lemmy.world 3 days ago
People will die because of this.
buddascrayon@lemmy.world 3 days ago
Yeah, I’m going to make sure I don’t take any new drugs for a few years. As it is, I’m probably going to have to forgo vaccinations for a while because dipshit Kennedy has fucked with the vaccination board.
SaharaMaleikuhm@feddit.org 3 days ago
Just check if the drug is approved in a proper country of your choice.
Olgratin_Magmatoe@startrek.website 3 days ago
If you can afford it, there are always the vaccines from other countries. It’s fucked up that it’s come to this and that there’s now an even bigger price tag on health.
rottingleaf@lemmy.world 3 days ago
I’ll try arguing in the opposite direction for the sake of it:
An “AI”, if not specifically tweaked, is just a bullshit machine approximating reality the same way human-produced bullshit does.
A human is a bullshit machine with an agenda.
Depending on the cost of decisions made, an “AI”, if it’s trained on properly vetted data and not tweaked for an agenda, may be better than a human.
If that cost is high enough, and so is the conflict of interest, a dice set might be better than a human.
There are positions where any decision except a few is acceptable, yet malicious humans regularly pick one of those few.
Eximius@lemmy.world 3 days ago
Your argument becomes idiotic once you understand the actual technology. The AI bullshit machine’s agenda is “give a nice answer” (“factual” is not an idea that has a neural center in the AI brain) and “make the reader happy”. The human “bullshit” machine has many agendas, but it would not have got this far if it were spouting just happy bullshit (though I guess America is becoming a very special case).
rottingleaf@lemmy.world 3 days ago
It doesn’t. I do understand the actual technology. There are areas of human decision making where it’s possibly better.
Eximius@lemmy.world 3 days ago
An LLM does no decision making. At all. It spouts (as you say) bullshit. If there is enough training data saying “Trump is divine”, the LLM will predict that Trump is divine, with no second thought (no first thought either); the toy sketch below shows the idea. And it’s not even great to use as a language-based database.
Please don’t even consider LLMs as “AI”.
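(A minimal sketch, not how a real LLM works internally: a bigram next-word predictor built from raw counts over a made-up corpus. A transformer is a neural network, not a lookup table, but the failure mode being described is the same: the predicted continuation is whatever the training data repeats most, regardless of whether it is true.)

```python
from collections import Counter, defaultdict

# Toy corpus: the false claim simply appears more often than anything else.
corpus = [
    "trump is divine",
    "trump is divine",
    "trump is divine",
    "water is wet",
]

# Count which word follows each previous word in the training data.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(prev_word):
    """Return the continuation seen most often after prev_word in training."""
    counts = following[prev_word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # -> "divine", purely because the corpus says so most often
```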
EncryptKeeper@lemmy.world 2 days ago
It kinda seems like you don’t understand the actual technology.
cupcakezealot@piefed.blahaj.zone 3 days ago
pretty sure that's the basis of its appeal for them