imperator3733@lemmy.world 1 year ago
No duh - why would it have any ability to do that sort of task?
PeleSpirit@lemmy.world 1 year ago
Because if it’s able to crawl all of the science pubs, then it would be able to try different combos until it works. Isn’t that how it could/is being used, to test stuff?
Ranessin@feddit.de 1 year ago
It doesn’t check the stuff it generates for anything other than grammatical and orthographic errors. It isn’t intelligent and has no knowledge beyond how to create text. The text looks useful, but it doesn’t know what it contains in the way something intelligent would.
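To make “only creates text” concrete, here’s a toy sketch (a made-up two-word model, nothing like a real LLM’s internals) showing that generation just samples the likeliest next token; no step ever asks whether the claim is true:

```python
import random

# Invented transition probabilities, purely for illustration.
next_token_probs = {
    "give": {"cisplatin": 0.6, "aspirin": 0.4},
    "cisplatin": {"50mg": 0.5, "5000mg": 0.5},  # both equally "plausible" to the model
    "aspirin": {"100mg": 0.9, "10000mg": 0.1},
}

def generate(token, steps=2):
    out = [token]
    for _ in range(steps):
        choices = next_token_probs.get(out[-1])
        if not choices:
            break
        tokens, weights = zip(*choices.items())
        # Pick by likelihood alone; there is no truth or safety check anywhere.
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("give"))  # fluent output, possibly a dangerous dose
```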
fsmacolyte@lemmy.world 1 year ago
Recent papers have shown that LLMs build internal world models, but for a topic as niche and complicated as cancer treatment, a chatbot based on GPT-3.5 would be woefully ill-equipped to do any kind of proper reasoning.
PeleSpirit@lemmy.world 1 year ago
It seems like it could check for that, though, which is what ChatGPT doesn’t do but we all assumed it would. I’m sure there are AI programs that could and do check possibilities against only information we know to be true.
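Something like this toy sketch is roughly what such a checker might look like; the drug names and limits here are invented for illustration, not medical facts:

```python
# Hypothetical curated fact base: the only things we "know to be true".
known_max_dose_mg = {
    "cisplatin": 100,   # made-up limit, not medical advice
    "aspirin": 1000,
}

def check_dose_claim(drug: str, claimed_dose_mg: int) -> bool:
    """Accept a generated dose only if it stays within the curated limit."""
    limit = known_max_dose_mg.get(drug)
    if limit is None:
        return False  # unknown drug: flag it rather than guess
    return claimed_dose_mg <= limit

print(check_dose_claim("cisplatin", 80))    # True: consistent with known facts
print(check_dose_claim("cisplatin", 5000))  # False: caught by the check
```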
stephen01king@lemmy.zip 1 year ago
If you want an AI that can create cancer treatments, you need to train it on creating cancer treatments, not just use one trained on general knowledge. Even if you train it on science publications, all it can reliably do is mimic a science journal, since it has never been trained to parse the knowledge inside the journal itself.
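For the curious, a minimal sketch of what “train it on the domain” means in practice, using the Hugging Face transformers library; the tiny placeholder corpus and GPT-2 base model here are assumptions for illustration, not a real pipeline:

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus: guidelines, trial abstracts, etc.
corpus = Dataset.from_dict({"text": [
    "Example oncology guideline text ...",
    "Example clinical trial abstract ...",
]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="oncology-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False -> standard causal (next-token) training objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even then, it learns to imitate domain text, not to verify it, which is the distinction being made above.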
xkforce@lemmy.world 1 year ago
Part of the reason for studies like this is to debunk people’s expectations of AI’s capabilities. A lot of people are under the impression that ChatGPT can do ANYTHING and can think and reason, when in reality it is a bullshitter that does nothing more than mimic what it thinks a suitable answer looks like. Just like a parrot.