Comment on No love lost: AppLovin helpfully releases tool to switch from Unity to Godot or Unreal
YMS@kbin.social 1 year ago
Does ChatGPT’s code get better if you include “You’re an expert in that language” in the prompt?
Natanael@slrpnk.net 1 year ago
It does occasionally, because it filters out sources that don’t fit that pattern, but it doesn’t guarantee anything (for a variety of reasons: inevitable statistical cross-contamination in the model, bad samples like overconfident answers, a smaller number of samples to learn from, etc.).
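(For illustration only: the “expert persona” pattern being discussed usually just means adding a persona line to the system message. A minimal sketch follows, assuming the OpenAI Python SDK; the model name and prompt text are placeholders, not anything from the thread.)

```python
# Minimal sketch of the "expert persona" prompt pattern discussed above,
# using the OpenAI Python SDK. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model would do
    messages=[
        # The persona goes in the system message; the claim in the thread is
        # that this nudges the model away from amateur-sounding answers.
        {"role": "system", "content": "You are an expert Python developer."},
        {"role": "user", "content": "Write a function that deduplicates a list while preserving order."},
    ],
)

print(response.choices[0].message.content)
```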
drislands@lemmy.world 1 year ago
Good question. Based on my limited understanding of LLMs, I don’t see how it could… I’m interested to hear if that’s not the case.
EncryptKeeper@lemmy.world 1 year ago
Because an LLM’s goal isn’t to always be correct when answering questions. It just says what it thinks you want it to say. It’s not that telling it it’s an expert necessarily makes it smarter; you’re just specifying that it shouldn’t give you an answer as though it were an amateur, which it otherwise would have no particular reason not to do.
Jerkface@lemmy.world 1 year ago
I use ChatGPT for math tutoring occasionally, and when I started using the prompt “Suppose you are a professional mathematician,” I got fewer responses resembling those you might get from a classmate and more that were thorough and rigorous.
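(A minimal sketch of the with/without-persona comparison described here, again assuming the OpenAI Python SDK; the model name and the sample question are placeholders, only the persona sentence comes from the comment.)

```python
# Sketch of comparing answers with and without the persona prefix quoted
# above. Model name and the sample question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

QUESTION = "Prove that the sum of two even integers is even."

def ask(system_prompt: str | None) -> str:
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": QUESTION})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

baseline = ask(None)
persona = ask("Suppose you are a professional mathematician.")

# Compare the two answers side by side; the claim in the thread is that the
# persona version reads more like a rigorous proof than a classmate's reply.
print("--- baseline ---\n", baseline)
print("--- persona ---\n", persona)
```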
match@pawb.social 1 year ago
Well, it will get worse if you tell it it’s an absolute fuckup