If you want an AI that can create cancer treatments, you need to train it on creating cancer treatments, not just use one trained on general knowledge. Even if you train it on science publications, all it can reliably do is mimic a science journal, since it hasn't been trained to parse the knowledge in the journal itself.
Because if it's able to crawl all of the science pubs, then it would be able to try different combos until something works. Isn't that how it could be (or is being) used, to test stuff?
stephen01king@lemmy.zip 1 year ago
Ranessin@feddit.de 1 year ago
It doesn't check the stuff it generates for anything other than grammatical and orthographical errors. It isn't intelligent and has no knowledge beyond how to create text. The text looks useful, but it doesn't know what that text contains the way something intelligent would.
fsmacolyte@lemmy.world 1 year ago
Recent papers have shown that LLMs build internal world models, but for a topic as niche and complicated as cancer treatment, a chatbot based on GPT-3.5 would be woefully ill-equipped to do any kind of proper reasoning.
PeleSpirit@lemmy.world 1 year ago
It seems like it could check for that, though, which is what ChatGPT doesn't do but we all assumed it would. I'm sure there are AI programs that could and do check possibilities against only information we know to be true.