I prefer searx
Comment on DuckDuckGo poll says 90% of respondents don’t want AI
setsubyou@lemmy.world 2 weeks ago
The article already notes that
privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer yes to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.
atropa@piefed.social 2 weeks ago
felixwhynot@lemmy.world 2 weeks ago
Do you also use Arch btw?
atropa@piefed.social 2 weeks ago
Arch-based on the laptop; phones are GrapheneOS and Lineage 🤔
hayvan@piefed.world 2 weeks ago
Nice
merc@sh.itjust.works 2 weeks ago
Yeah, this is why polling is hard.
Online polls are much more likely to be answered by people who like to answer polls than by people who don’t. People who use DuckDuckGo are much more likely to be privacy-focused, knowledgeable enough to use a search engine other than the default, etc.
This is also an echo chamber (the Fediverse) discussing the results of a poll on another, similar echo chamber (DuckDuckGo). You won’t find nearly as many people on Lemmy or Mastodon who love AI as you will in most of the world. Still, I do get the impression that it’s a lot less popular than the AI companies want us to think.
A_norny_mousse@feddit.org 2 weeks ago
It still creeps me out that people use LLMs as search engines nowadays.
SendMePhotos@lemmy.world 2 weeks ago
That was the plan. That’s (I’m guessing) why the search results have slowly yet noticeably degraded since AI became consumer-level.
They WANT you to use AI so they can cater the answers. (tin foil hat)
I really do believe that though. Call me a conspiracy theorist but damn it, it fits.
RedstoneValley@sh.itjust.works 2 weeks ago
It’s not that wild of a conspiracy theory. Hard to get definite proof though because you would have to compare actual search results from the past with the results of the same search from today, and we unfortunately can’t travel back in time.
But there are indicators that your theory is true:
Now, all of the points listed above can be proven. If you put all of that together, it seems at least highly likely that your “conspiracy theory” is in fact true.
Buddahriffic@lemmy.world 2 weeks ago
I’d argue that SEO was one of the biggest causes of search result degradation, and I consider any complaints coming from them highly suspect due to conflicting interests. E.g., a change that makes it harder to game the search engine algorithms is good for searchers but bad for SEOs.
I hope the whole industry dies (or already has? I don’t hear much about it these days lol). They are just marketers whose whole job is to get you to look at their shit instead of the most relevant results.
msage@programming.dev 2 weeks ago
They WANT you to use AI so they can ~~cater the answers~~ sell you ads and stop you from using the internet.
A_norny_mousse@feddit.org 2 weeks ago
You mean Google.
JustEnoughDucks@feddit.nl 2 weeks ago
And Bing, and search engines that use Google and Bing results (DDG, Ecosia)
SendMePhotos@lemmy.world 2 weeks ago
All of them. I use DDG as a primary and even those results are worse.
Womble@piefed.world 2 weeks ago
Search results have been degrading for a lot longer than LLMs have been a thing. Peak usefulness for them was around a decade ago.
Honytawk@feddit.nl 2 weeks ago
SEO has been fucking up searches long before LLMs were a thing.
truthfultemporarily@feddit.org 2 weeks ago
I use kagi assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
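For anyone curious, the general pattern behind tools like this is: search first, then summarize with numbered citations. Here’s a minimal sketch of that pattern in Python (Kagi hasn’t published the Assistant’s internals, so search_web() and llm() are hypothetical stand-ins, not Kagi’s actual API):

```python
# Rough sketch of the search -> summarize -> cite pattern, not Kagi's
# actual implementation. search_web() and llm() are hypothetical stubs.

def search_web(query: str) -> list[dict]:
    """Hypothetical: return results as [{'url': ..., 'snippet': ...}, ...]."""
    raise NotImplementedError("plug in any search API here")

def llm(prompt: str) -> str:
    """Hypothetical: any chat-completion call."""
    raise NotImplementedError("plug in any LLM client here")

def answer_with_citations(query: str, top_k: int = 5) -> str:
    results = search_web(query)[:top_k]
    # Number each source so the model can cite claims as [1], [2], ...
    context = "\n\n".join(
        f"[{i}] {r['url']}\n{r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Append the source number after every claim it supports.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt)
```

Note that the citation numbers come from the prompt, not from any verification step: the model can attach a real source number to a wrong claim, which is why the replies below about actually checking the summaries matter.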
Warl0k3@lemmy.world 2 weeks ago
How often do you check the summaries? Real question, I’ve used similar tools and the accuracy to what it’s citing has been hilariously bad.
MaggiWuerze@feddit.org 2 weeks ago
Yeah, we were checking whether school in our district was cancelled due to icy conditions. Google’s model claimed that a county-wide school cancellation was in effect and cited a source. I opened it, was led to our official county page, and the very first sentence was a firm no.
It managed to summarize a short, simple text into its exact opposite.
Deebster@infosec.pub 2 weeks ago
I also sometimes use the Kagi summaries and it’s definitely been wrong before. One time I asked what the term was for something in badminton and it came up with a different badminton term. When I looked at the cited source, it was a multiple choice quiz with the wrong term being the first answer.
It’s reliable enough that I still use it, although more often to quickly identify which search results are worth reading.
AmbitiousProcess@piefed.social 2 weeks ago
I can’t speak for the original poster, but I also use Kagi and I sometimes use the AI assistant, mostly just for quick simple questions to save time when I know most articles on it are gonna have a lot of filler, but it’s been reliable for other more complex questions too. (I just would rather not rely on it too heavily since I know the cognitive debt effects of LLMs are quite real.)
It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried in the past (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG) so the AI naturally pulls better sources than the others as a result of the underlying index. There’s a reason I pay Kagi 10 bucks a month for search results I could otherwise get on DuckDuckGo. It’s just that good.
I will say though, on more complex questions about very specific topics, such as a particular random programming library, or specific statistics you’d only find in a government PDF somewhere with an obscure name, it does tend to get it wrong. In my experience, it actually doesn’t hallucinate, as in, if you check the sources, the information will be there… just not actually answering that question. (E.g., if you ask it about a stat and it pulls up Reddit, but the stat is actually very obscure, it might accidentally pull a number from a comment about something entirely different from the stat you were looking for.)
In my experience, DuckDuckGo’s assistant was extremely likely to do this, even on more well-known topics, at a much higher frequency. Same with Google’s Gemini summaries.
To be fair though, I think if you really, really use LLMs sparingly and with intention and an understanding of how relatively well known the topic is you’re searching for, you can avoid most hallucinations.
truthfultemporarily@feddit.org 2 weeks ago
Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find out actually important info: always.
They make it easy though because after every statement it has these numbered annotations and you can just mouse over to read the text.
hayvan@piefed.world 2 weeks ago
I use Perplexity for my searches, and it really depends on how much I care about the subject. I heard a name and don’t know who they are? LLM summary is good enough to have an idea. Doing research or looking up technical info? I open the cited sources.
porcoesphino@mander.xyz 2 weeks ago
For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give them a test. I use LLMs for some tasks, but I’ve yet to find one that’s very reliable for specifics.
Kyrgizion@lemmy.world 2 weeks ago
You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.
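As a concrete illustration, with API access those custom instructions are just a system message. A minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder and the instruction wording is an example, not Kyrgizion’s exact prompt:

```python
# Minimal sketch: custom instructions as a system message.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the instruction text below
# is an illustrative example, not an exact prompt from the thread.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "In every answer, clearly separate two labelled sections: "
    "'Sourced facts', where each claim carries a citation or URL, and "
    "'Hearsay / opinion', for anything you cannot attribute to a source. "
    "Never mix the two."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any chat model works here
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Is AI in search actually unpopular?"},
    ],
)
print(response.choices[0].message.content)
```

In chat UIs the same text goes into the “custom instructions” or system prompt field. The output still needs spot-checking, since nothing stops the model from labelling an unsourced claim as a sourced fact.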
TheOneCurly@feddit.online 2 weeks ago
lol, the random text generator does not understand what any of those things are.
A_norny_mousse@feddit.org 2 weeks ago
Thankfully Google is not the only search provider.
redditmademedoit@piefed.zip 2 weeks ago
But they all suck. Or rather, the Internet kinda sucks these days.
gerryflap@feddit.nl 2 weeks ago
For some issues, especially related to programming and Linux, I feel like I kinda have to at this point. Google seems to have become useless, and DDG was never great to begin with but is arguably better than Google now. I’ve had some very obscure issues that I spent quite some time searching for, only to drop it into ChatGPT and get a link to some random forum post that discusses it. The biggest one was a Linux kernel regression that was posted on the same day in the Arch Linux forums somewhere. Despite having a hunch about what it could be and searching/struggling for over an hour, I couldn’t find anything. ChatGPT then managed to link me the post (and a suggested fix: switching to the LTS kernel) in less than a minute.
For general purpose search tho, hell no. If I want to know factual data that’s easy to find I’ll rely on the good old search engine. And even if I have to use an LLM, I don’t really trust it unless it gives me links to the information or I can verify that what it says is true.
A_norny_mousse@feddit.org 2 weeks ago
I’m seeing almost daily the fuck-ups resulting from somebody trying to fix something with ChatGPT, then coming to the forums because it didn’t work.
NewNewAugustEast@lemmy.zip 2 weeks ago
I agree that happens, but it has nothing to do with what OP said. They didn’t want a solution, they wanted a link to where the problem was being discussed so they could work out a solution.
People seem to really confuse the difference between asking an LLM how to patch a boat vs. asking it where people discussed ways to patch a boat.
Honytawk@feddit.nl 2 weeks ago
Most likely because if they came directly to whatever platform you are on with their problem, they would have been scolded for not trying hard enough to solve it on their own. Or the post would be closed because the question had already been asked.
Cherry@piefed.social 2 weeks ago
Yup, this is a great example. LLMs for non-opinion-based stuff, or for stuff that’s not essential for life. It’s great for finding a recipe, but if you’re gonna rely on the internet or an LLM to help you form an opinion on something that requires objective thinking, then no. If I asked the internet or an LLM “is humour good or bad”, it would give a swayed view.
It simply can’t be trusted. I can’t even trust it to return shopping links, so I have retreated back to real life. If it can’t play fair, I no longer use it as a tool.
evol@lemmy.today 2 weeks ago
What makes it creepy?
IronBird@lemmy.world 2 weeks ago
it just makes it ever more obvious to them how many people in their life are sheep who believe anything they read online, I assume
evol@lemmy.today 2 weeks ago
So many people were already using TikTok or YouTube as their Google search. I think AI is arguably better than those.
CallMeAnAI@lemmy.world 2 weeks ago
What an absolutely arrogant attitude 🤣 You actually believe there is some gap here 🤣 just amazing.
Not using AI doesn’t mean you’re performing whatever task you’re doing better.
Kyrgizion@lemmy.world 2 weeks ago
First, its results are often simply wrong, so that’s no good. Second, the more people use the AI summaries, the easier it’ll be for the AI companies to subtly influence the results to their advantage. Think of advertising or propaganda.
CallMeAnAI@lemmy.world 2 weeks ago
So literally the same shit as before with search but wrapped up in a nice paragraph with citations you can follow up on?
evol@lemmy.today 2 weeks ago
Okay, but it’s a search engine; they can literally just pick websites that align with a certain viewpoint and hide ones that don’t. It’s not really a new problem. If they just make grokpedia the first result, then it’s not like not having the AI give you a summary changes anything.
CosmoNova@lemmy.world 2 weeks ago
I know some of them personally, and they usually claim to have decent to very good media literacy too. I would even say some of them are possibly more intelligent than me. Well, usually they are, but when it comes to tech, they miss the forest for the trees, I think.