Comment on F.D.A. to Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’
Eximius@lemmy.world 2 days ago
Your argument becomes idiotic once you understand the actual technology. The AI bullshit machine’s agenda is “give a nice answer” (“factual” is not a concept that has a neural center in the AI brain) and “make the reader happy”. The human “bullshit” machine has many agendas, but it would not have gotten so far if it was spouting just happy bullshit (though I guess America is becoming a very special case).
rottingleaf@lemmy.world 2 days ago
It doesn’t. I understand the actual technology. There are applications of human decision-making where it’s possibly better.
Eximius@lemmy.world 2 days ago
An LLM does no decision-making. At all. It spouts (as you say) bullshit. If there is enough training data saying “Trump is divine”, the LLM will predict that Trump is divine, with no second thought (no first thought either). And it’s not even great to use as a language-based database.
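The claim above (prediction tracks training-data frequency, not truth) can be sketched with a toy next-word predictor. This is an assumed illustration, not how a real LLM works internally: a bigram frequency counter stands in for the far more complex neural model, but the “most frequent continuation wins” behavior it shows is the point being argued.

```python
from collections import Counter, defaultdict

# Toy sketch (NOT a real LLM): a next-word "predictor" that simply
# returns the most frequent continuation seen in its training data.
def train(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # No notion of truth here -- only frequency in the training data.
    return counts[word].most_common(1)[0][0]

# If the data overwhelmingly says "divine", so does the model.
corpus = ["Trump is divine"] * 100 + ["Trump is human"] * 3
model = train(corpus)
print(predict(model, "is"))  # -> divine
```

A real LLM smooths and generalizes instead of counting verbatim, but it is still optimizing to continue text plausibly, not to verify it.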
Please don’t even consider LLMs as “AI”.
rottingleaf@lemmy.world 2 days ago
Even an RNG does decision-making.
I know what LLMs are, thank you very much!
If you wanted to even understand my initial point, you already would have.
Things have become really grim if people who can’t read a short message are trying to lecture me on the fundamentals of LLMs.
Eximius@lemmy.world 2 days ago
I wouldn’t define flipping coins as decision-making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.
You seem not to want anyone to teach you anything, and are somehow completely dejected by such perceived attempts.
EncryptKeeper@lemmy.world 1 day ago
It kinda seems like you don’t understand the actual technology.