We have all seen AI-based searches available on the web, like Copilot, Perplexity, DuckAssist, etc., which scour the web for information, present it in summarized form, and cite sources in support of the summary.
But how do they know which sources are legitimate and which are plain BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the “trustworthiness” of various web sources?
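(For what it’s worth, if such a filter list existed, it might look something like a per-domain trust score used to drop or re-rank results before summarization. This is purely a hypothetical sketch; the domain names, scores, and threshold below are made up for illustration, not how any real product works.)

```python
# Hypothetical sketch: a per-domain "trust list" used to filter and
# re-rank retrieved URLs before they reach the summarizer.
from urllib.parse import urlparse

TRUST_SCORES = {           # illustrative, invented values
    "reuters.com": 0.9,
    "nature.com": 0.95,
    "contentfarm.example": 0.1,
}
DEFAULT_SCORE = 0.5        # unknown domains get a neutral score

def rank_sources(urls, min_score=0.3):
    scored = []
    for url in urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        score = TRUST_SCORES.get(domain, DEFAULT_SCORE)
        if score >= min_score:          # drop anything below the floor
            scored.append((score, url))
    # Highest-trust sources first
    return [url for score, url in sorted(scored, reverse=True)]

results = rank_sources([
    "https://www.contentfarm.example/ai-cures-everything",
    "https://www.reuters.com/tech/story",
    "https://unknown-blog.example/post",
])
```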
edgemaster72@lemmy.world 1 day ago
That’s the neat part, they don’t
toy_boat_toy_boat@lemmy.world 1 day ago
you’re absolutely right. they actually don’t know anything. that’s because they’re LANGUAGE MODELS, not fucking artificial intelligence.
that said, there is some control over the ‘weights’ given to certain ‘tokens’, which gives engineers a way to ‘prefer’ some sources over others.
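(A toy illustration of that idea: some LLM APIs expose a per-token logit bias that is added before sampling, making certain tokens more or less likely. The candidate “tokens” and bias values below are invented for the example; real vocabularies use subword tokens, not whole domain names.)

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def apply_bias(logits, bias):
    # Add a per-token bias before sampling: positive values make a
    # token more likely, negative values suppress it.
    return {t: v + bias.get(t, 0.0) for t, v in logits.items()}

# Toy next-token scores over citation candidates (invented values).
logits = {"reuters.com": 2.0, "randomblog.example": 2.0, "arxiv.org": 1.5}
bias = {"randomblog.example": -5.0}   # hypothetical downweight

probs = softmax(apply_bias(logits, bias))
```

With the bias applied, the two sources that started with identical scores end up with very different probabilities.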
tarknassus@lemmy.world 1 day ago
I believe every time a wrong answer becomes a public laughing stock, the LLM creators have to step in and manually “retrain” the model.
They cannot tell truth from fiction, they cannot decline to give an answer, and they cannot determine whether an answer to a problem will actually work. All they do is regurgitate what has come before, with extra fluff to make it look like a cogent response.
harsh3466@lemmy.ml 1 day ago
Hahaha. Came to say exactly this. Verbatim.