Comment on DuckDuckGo poll says 90% of responders don't want AI
truthfultemporarily@feddit.org 3 hours ago
I use Kagi Assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
porcoesphino@mander.xyz 2 hours ago
For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give it a test yourself. I use LLMs for some tasks, but I’m yet to find one that’s very reliable for specifics.
Kyrgizion@lemmy.world 2 hours ago
You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.
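For example, if you drive a model through an API rather than a chat UI, the same idea looks something like this. This is just a sketch using the openai Python client; the model name, the instruction wording, and the sample question are all placeholders of mine, not any product’s built-in feature:

```python
# Minimal sketch of the "custom instructions" idea via an API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the
# environment; model name and instruction wording are placeholders.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "Split every answer into two sections:\n"
    "FACTS: claims you can attach a source to, each with an inline citation.\n"
    "HEARSAY/OPINION: anything unsourced, speculative, or editorial.\n"
    "Never mix the two; if you cannot source a claim, say so explicitly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Who coined the term 'search engine'?"},
    ],
)
print(response.choices[0].message.content)
```

In a chat UI you’d paste the same instruction text into the assistant’s custom-instructions or system-prompt field.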
Warl0k3@lemmy.world 3 hours ago
How often do you check the summaries? Real question: I’ve used similar tools, and their accuracy relative to what they cite has been hilariously bad.
Deebster@infosec.pub 2 hours ago
I also sometimes use the Kagi summaries and it’s definitely been wrong before. One time I asked what the term was for something in badminton and it came up with a different badminton term. When I looked at the cited source, it was a multiple choice quiz with the wrong term being the first answer.
It’s reliable enough that I still use it, although more often to quickly identify which search results are worth reading.
AmbitiousProcess@piefed.social 2 hours ago
I can’t speak for the original poster, but I also use Kagi and sometimes use the AI assistant, mostly for quick, simple questions to save time when I know most articles on the topic will be full of filler. It’s been reliable for more complex questions too, though I’d rather not rely on it too heavily, since the cognitive-debt effects of LLMs are quite real.
It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG), so the assistant naturally pulls better sources than the others. There’s a reason I pay Kagi 10 bucks a month for search results I could otherwise get on DuckDuckGo. It’s just that good.
I will say, though, that on more complex questions about very specific topics, such as a particular obscure programming library, or a statistic you’d only find in an obscurely named government PDF, it does tend to get things wrong. In my experience it doesn’t exactly hallucinate: if you check the sources, the information is there… it just doesn’t answer the question you asked. (E.g., if you ask about a very obscure stat and it pulls up Reddit, it might accidentally take a number from a comment about something entirely different from the stat you were looking for.)
In my experience, DuckDuckGo’s assistant did this far more often, even on more well-known topics. Same with Google’s Gemini summaries.
To be fair, though, I think if you use LLMs sparingly, with intention, and with a sense of how well known the topic you’re searching for actually is, you can avoid most hallucinations.
hayvan@piefed.world 2 hours ago
I use Perplexity for my searches, and it really depends on how much I care about the subject. Heard a name and don’t know who they are? The LLM summary is good enough to get an idea. Doing research or looking up technical info? I open the cited sources.