Comment on Researchers confirm what we already knew: Google results really are getting worse
chaogomu@kbin.social 10 months ago
The problem is, you can't trust ChatGPT to not lie to you.
And since generative AI is now being used all over the place, you just can't trust anything unless you know damn well that a human entered the info, and then that's a coin flip.
The newer ones search the internet, generate from the results rather than their training data, and provide sources.
So that’s not such a worry now.
Anyone who used ChatGPT for information and not text generation was always using it wrong.
Except people are using LLMs to generate web pages just to get clicks. Which means LLMs are training on information generated by other LLMs. It’s an ouroboros of fake information.
But again, if you use an LLM’s ability to understand and generate text via a search engine, that doesn’t matter.
plus search engines don’t lecture me as much for typing naughty sex words
Get on the unfiltered LLM train, they’ll do anything GPT does and won’t filter anything. Bonus if you run it locally and share with the community.
However, I find it much easier to check whether a given answer is correct than to find the answer myself.
lolcatnip@reddthat.com 10 months ago
OTOH, you also can’t trust humans not to lie to you.
chaogomu@kbin.social 10 months ago
That's the coin flip.