can we really trust a “black box” algorithm with our lives?
No. That’s why we have clinical trials.
Submitted 1 day ago by MCasq_qsaCJ_234@lemmy.zip to technology@lemmy.world
We don’t even trust experts without testing
We don’t even trust experts without testing
We certainly didn’t, but I think the time of “don’t” is long past.
There really needs to be a rhetorical distinction between regular machine learning and something like an LLM. I think people read this (or just the headline) and assume this is just asking Grok “what interactions will my new drug flavocane have?”, whereas these are likely large models built on the mountains of data we have from existing drug trials.
Reproducibility is always an issue.
?
Reproducibility of what we call LLMs, as opposed to what we call other forms of machine learning?
Life sciences are where this sort of thing will shine.
Those models will almost certainly be essentially the same transformer architecture as any of the LLMs use, simply because they beat most other architectures in almost any field people have tried them in. An LLM is, after all, just a classifier with an unusually large set of classes (all possible tokens) which gets applied repeatedly.
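The “classifier applied repeatedly” point can be sketched in a few lines. This is a toy, not a real model: the vocabulary and the hand-written score table below are made up purely to show the loop structure of autoregressive generation.

```python
# Toy sketch: autoregressive generation is a classifier over the vocabulary,
# applied repeatedly. The score table stands in for learned model weights.

# Hypothetical hand-written "weights": score of each next token given the last one.
BIGRAM_SCORES = {
    "<bos>":  {"the": 2.0, "drug": 0.5},
    "the":    {"drug": 2.0, "target": 1.0},
    "drug":   {"binds": 2.0, "<eos>": 0.1},
    "binds":  {"target": 1.5, "<eos>": 0.5},
    "target": {"<eos>": 2.0},
}

def classify_next(context):
    """One classification step: pick the highest-scoring next token."""
    scores = BIGRAM_SCORES.get(context[-1], {"<eos>": 1.0})
    return max(scores, key=scores.get)

def generate(max_steps=10):
    """Apply the classifier repeatedly until it emits <eos>."""
    tokens = ["<bos>"]
    for _ in range(max_steps):
        nxt = classify_next(tokens)
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens[1:]

print(generate())  # → ['the', 'drug', 'binds', 'target']
```

A real LLM replaces the score table with a transformer over the whole context, but the outer loop (classify, append, repeat) is the same.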
A quick search turns up that AlphaFold 3, what they are using for this, is a diffusion architecture, not a transformer. It works more like the image generators than the GPT text generators. It isn’t really the same as “the LLMs”.
I’m not talking about the specifics of the architecture.
To the layman, AI refers to a range of general purpose language models that are trained on “public” data and possibly enriched with domain-specific datasets.
There’s a significant material difference between using probabilistic language completion and a model that directly predicts the results of complex processes (like what’s likely being discussed in the article).
It’s not specific to the article in question, but it is really important for people to not conflate these approaches.
I mean, I hate AI in general… but to be honest, assuming no one is stupid enough to bypass the trials etc., I’m all for it. 90% of these problems already exist in the current system: who owns it, can a corporation charge us to death.
The only reasonable fear is that they come out with more than they can develop trials for, and lobby to lower standards in trials. Even that, honestly, is a more acceptable risk in the context of terminal diseases/severe cancers.
assuming no one is stupid enough to bypass the trials etc…
Of course.
There’s a separate AI model dedicated to running the trials. Don’t worry.
Agreed, drug development is a very good use of AI.
RFK Jr. will use it for drug approvals too! Uh oh.
Okay here we go guys! Drink up!
Feel anything yet? Let’s try another… Hold up! Wow, I can see 360!
Dude! Gnarly! You got eyes in the back of y…dude I can see 360 too!
Nah, I only see 360 pills. Where do you see the other two?
Holy wakamoly! You got 360 eye balls!
Experiment 00000000001… Failure…
Did the machine test its first human drug yet?
Yes! LSD! …didn’t work.
Well Dr. Chich what will we try next?
We? Or the machine?
I’m sure all the savings from accelerated/cheaper R&D will be passed on to the consumer…right?
They will, just not in the US lol
There’s only one way to solve all diseases.
Did they test this on Mars first?
Sure, it helps with one bottleneck, but it is not the only one. Until you gain biological and biochemical understanding of the disease, no amount of throwing neural networks at it will help you. I am really sick and tired of AI people hyping up their stuff to get more investment.
Lovely. Let me pencil “zombie apocalypse” back onto my 2026 BINGO card.
Is this how we all get AIDS?
Big Pharma says no the hell you aint
I think I lost neurons reading this.
Vanilla_PuddinFudge@infosec.pub 8 hours ago
Here, take this pill
“Will it cure me?”
You won’t have cancer anymore, that’s for sure!
“Welp, down the hatch!”
…
dies