Comment on Chatbots Make Terrible Doctors, New Study Finds

Buddahriffic@lemmy.world 2 weeks ago

Yeah, even if you turn off randomization so the same prompt always gives the same answer, you can still end up with variation from differences in prompt wording. And who knows what spurious correlations the model overfitted to in the training data. One wording might bias it toward medical-forum data while another wording might make it more likely to draw on 4chan data. I'm not sure whether these models are trained on general internet data, but even a model trained only on medical encyclopedias could be biased by wording toward or away from cancer diagnoses, or in how severe it estimates a condition to be.
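To make the point concrete, here's a minimal sketch (a hypothetical toy stand-in, not a real language model): with randomization off, decoding is just argmax over logits, so a fixed prompt always produces the same output, but any rewording changes the logits and can flip which token wins.

```python
import hashlib

# Toy two-token vocabulary for illustration only.
VOCAB = ["benign", "malignant"]

def toy_logits(prompt: str) -> list[float]:
    # Stand-in for a real model: derive logits deterministically from the
    # exact prompt text, so even a one-word rewording shifts them.
    h = hashlib.sha256(prompt.encode("utf-8")).digest()
    return [h[0] / 255.0, h[1] / 255.0]

def greedy_decode(prompt: str) -> str:
    # "Randomization off" (temperature 0): always pick the highest logit.
    logits = toy_logits(prompt)
    return VOCAB[logits.index(max(logits))]

# Identical prompts -> identical output: no sampling randomness involved.
assert greedy_decode("Is this mole cancerous?") == greedy_decode("Is this mole cancerous?")

# A reworded prompt maps to different logits, so the answer can differ
# even though nothing random happened.
print(greedy_decode("Is this mole cancerous?"))
print(greedy_decode("Could this mole be cancer?"))
```

The same mechanism applies to real models: greedy decoding removes sampling noise but not sensitivity to the prompt text itself.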
