This is exactly it. And it’s funny you’re getting downvoted.
We don’t truly know the depth of ML yet, or how these general models could potentially change when a few vectors in the equation change, and that’s the big unknown with it. I agree with you here that Gates’ opinion is just that and isn’t particularly well informed, especially compared to what some industry and ML experts are saying about how far we can go with the models, how they will evolve as we change parameters/vectors/dependencies, and the impact of that evolution on potential applications. It’s just too early.
grabyourmotherskeys@lemmy.world 1 year ago
Another way to think of this is feedback from humans will refine results. If enough people tell it that Toronto is not the capital of Canada it will start biasing toward Ottawa, for example. I have a feeling this is behind the search engine roll out.
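As a toy sketch of that idea (this is not how any real search engine or LLM does it; it’s just vote-counting to show feedback biasing an answer, with all names made up):

```python
from collections import Counter

# Hypothetical feedback store: each entry is an answer users said was correct.
feedback = Counter()

def record_feedback(answer: str) -> None:
    """Tally one user correction."""
    feedback[answer] += 1

def best_answer(default: str, min_votes: int = 3) -> str:
    """Return the most-corrected answer once enough feedback accumulates."""
    if not feedback:
        return default
    answer, votes = feedback.most_common(1)[0]
    return answer if votes >= min_votes else default

# The model initially says "Toronto"; users keep correcting it to "Ottawa".
for _ in range(5):
    record_feedback("Ottawa")

print(best_answer("Toronto"))  # prints "Ottawa"
```

Real systems would aggregate far noisier signals than this, but the gist is the same: enough consistent corrections shift the output.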
raptir@lemdro.id 1 year ago
ChatGPT doesn’t learn like that though, does it? I thought it was “static” with its training data.
grabyourmotherskeys@lemmy.world 1 year ago
I was speculating about how you can overcome hallucinations, etc., by supplying additional training data. Not specific to ChatGPT or even LLMs…
HiggsBroson@lemmy.world 1 year ago
You can fine-tune LLMs using smaller datasets, or with RLHF (reinforcement learning from human feedback), wherein people rate responses and the model is either “rewarded” or “penalized” based on the ratings for a given output. This retrains the LLM to produce outputs that people prefer.
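A heavily simplified sketch of that reward/penalty loop (real RLHF updates model weights via a learned reward model and policy optimization; this toy just nudges a score table for two canned responses, all invented for illustration):

```python
import random

# Candidate responses and a score per response (stand-in for a policy).
responses = ["Ottawa is the capital.", "Toronto is the capital."]
scores = [0.0, 0.0]

def sample() -> int:
    """Pick a response: mostly the highest-scored one, with some exploration."""
    if random.random() < 0.1:
        return random.randrange(len(responses))
    return max(range(len(responses)), key=lambda i: scores[i])

def rate(i: int, reward: float, lr: float = 0.5) -> None:
    """Human rating: +1 'rewarded', -1 'penalized'. Nudge the score toward it."""
    scores[i] += lr * (reward - scores[i])

random.seed(0)
for _ in range(50):
    i = sample()
    # Simulated raters prefer the factually correct answer.
    rate(i, +1.0 if i == 0 else -1.0)

print(responses[max(range(len(responses)), key=lambda i: scores[i])])
# prints "Ottawa is the capital."
```

The preferred response’s score climbs while the penalized one sinks, which is the same shaping pressure RLHF applies, just at the level of actual model parameters instead of a lookup table.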
niisyth@lemmy.ca 1 year ago
Active Learning Models. Though public exposure can easily fuck it up without adult supervision. With proper supervision, though, there’s promise.
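For anyone unfamiliar, the core active-learning move is uncertainty sampling: the model asks a human (the “adult supervision”) to label only the examples it is least sure about. A minimal made-up 1-D example:

```python
import math

def predict_proba(x: float, threshold: float) -> float:
    """Hypothetical 1-D classifier: probability of class 1 via a sigmoid."""
    return 1.0 / (1.0 + math.exp(-(x - threshold)))

unlabeled = [0.1, 0.45, 0.5, 0.55, 0.9]
threshold = 0.5

# Uncertainty = closeness of the predicted probability to 0.5.
most_uncertain = min(
    unlabeled, key=lambda x: abs(predict_proba(x, threshold) - 0.5)
)
print(most_uncertain)  # prints 0.5: the point on the decision boundary
```

The supervised part is that a trusted labeler answers those queries, rather than the open internet, which is exactly where public exposure tends to poison the loop.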
Toes@ani.social 1 year ago
Toronto is Canadian New York. It wants to be the capital and probably should be but it doesn’t speak enough French.