One of the things that can get annoying about SearXNG is that search engines will often rate limit an instance when a lot of people are using it. Maybe a “federated” approach would work: if results are rate limited -> send the query to another trusted SearXNG instance -> receive the results and send them back to the user. That way, people could stick to their favorite SearXNG instance without having to change instances manually when the search engines rate limit it.
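A minimal sketch of that fallback idea, assuming a hypothetical list of trusted instance URLs (SearXNG does expose a JSON search API via `format=json` where the instance enables it; the fetch hook here is just for illustration):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical trusted fallback instances; URLs are placeholders.
INSTANCES = ["https://searx.example.org", "https://searx.example.net"]

class RateLimited(Exception):
    """Raised when an instance answers HTTP 429."""

def query_instance(base: str, query: str) -> dict:
    """Query one SearXNG instance via its JSON API (format=json)."""
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    try:
        with urllib.request.urlopen(f"{base}/search?{params}") as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 429:
            raise RateLimited(base) from err
        raise

def federated_search(query: str, fetch=query_instance) -> dict:
    """Try each trusted instance in turn, moving on when rate limited."""
    for base in INSTANCES:
        try:
            return fetch(base, query)
        except RateLimited:
            continue  # this instance is rate limited; try the next one
    raise RuntimeError("all instances rate limited")
```

The `fetch` parameter keeps the fallback logic separate from the network call, so it can be tested or swapped without touching the retry loop.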
Comment on SearXNG should be a federated search engine
kbal@fedia.io 5 months ago
I think you are not a computer programmer. Trying to build an index of the web by querying other search engines is not an efficient or sensible way to do things. Using ActivityPub for it is insane. Sharing query results in the obvious way might help a little during events where everyone searches for the same thing all at once, but in a relatively small pool of relatively sophisticated Internet users I don't think that happens often enough to justify the enormous amount of work and complexity.
On the other hand a distributed web crawler that puts its results in a free and decentralized database (one appropriate to the task; not blockchain) might be interesting. If the load on each node could be made light enough and the software simple enough that millions of people could run it at home, maybe it could be one way to build a new search engine. If that needs doing and someone has several hundred hours of free time to get it started.
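As a toy sketch of that crawler-plus-decentralized-database idea (all names hypothetical; a real deployment would use an actual DHT such as Kademlia rather than this in-memory stand-in):

```python
import hashlib
from collections import defaultdict

class ToyIndexStore:
    """In-memory stand-in for a decentralized key-value store."""

    def __init__(self):
        self._postings = defaultdict(set)

    def key_for(self, term: str) -> str:
        # Real DHTs route lookups by key hash; here it just namespaces the term.
        return hashlib.sha256(term.lower().encode()).hexdigest()

    def publish(self, term: str, url: str) -> None:
        """Add a term -> URL posting, as a crawler node would after fetching."""
        self._postings[self.key_for(term)].add(url)

    def lookup(self, term: str) -> set:
        """Return every URL any node has published for this term."""
        return set(self._postings[self.key_for(term)])

def index_page(store: ToyIndexStore, url: str, text: str) -> None:
    """What a lightweight home node might do with one crawled page."""
    for term in set(text.lower().split()):
        store.publish(term, url)
```

Each home node would only crawl and publish a small slice, which is where the "light enough to run at home" constraint bites: the hard parts (ranking, spam resistance, churn) are exactly what this sketch leaves out.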
aldalire@lemmy.dbzer0.com 5 months ago
fmstrat@lemmy.nowsci.com 5 months ago
Well, I am a programmer, including building products for the Fediverse. And I never said to federate the search queries.
Trying to build an index of the web by querying other search engines is not an efficient or sensible way to do things.
Never made this suggestion.
On the other hand a distributed web crawler that puts its results in a free and decentralized database
Now you’re getting there.
kbal@fedia.io 5 months ago
Okay, sorry! Still a long way to go before the idea becomes sufficiently well-specified to make much sense to me, though. Perhaps an examination of YaCy could provide you with a concrete example of the ways in which such things get complicated. One would need to do much better to end up with a suitable replacement for the ways many of us use searx.
It was wanting to use ActivityPub and the "I fail to see any downside" which led me to read the rest of your post in a way that might've been overly pessimistic about its merits.
fmstrat@lemmy.nowsci.com 5 months ago
Yea, another user has suggested passing along the request to other instances when API limits are hit. That sounds like a better model for SearXNG specifically.
hendrik@palaver.p3x.de 5 months ago
If you're looking for a distributed crawler and index:
https://en.m.wikipedia.org/wiki/YaCy
YaCy already exists and has been around for two decades.
fmstrat@lemmy.nowsci.com 5 months ago
This is close to what I was thinking, but rather than crawling independently, leverage the API results from queries to build a list of sites (and then perhaps crawl them). Potentially a tag index of sorts. I’m not solid on any idea since I haven’t investigated SearXNG enough to see how it works under the hood, but yes, we’re on the same plane of thought.
Max_P@lemmy.max-p.me 5 months ago
I ran a YaCy instance for a while like a decade ago. It does federate index requests, and when you search it propagates the search request across a bunch of nodes. When my node came online it almost immediately started crawling stuff and it did get a bunch of search queries. But the network was still pretty small back then and the search results were… not great. That’s the price of independence from Google’s and Microsoft’s giant server farms, it’s hard to compete with that size.
But at the rate Google and Bing are enshittifying, I think it’s worth revisiting.
Using ActivityPub for this would be immensely wasteful. It’s just not feasible that all instances would have the whole index, because it’s so large; back when I tried it, the network still had several TBs’ worth of indexed pages. This is firmly in the realm of distributed P2P systems. One could have an ActivityPub plugin, however, to receive updates from social media near-instantly and index those immediately with less overhead. But you still want to index Wikipedia, forums, blogs, whatever the crawlers can find.
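The ActivityPub-plugin part could look roughly like this hypothetical hook: given a Create activity delivered to an inbox (field names follow the ActivityStreams 2.0 vocabulary), extract what a local indexer would want. The function name and shape are illustrative only:

```python
def extract_indexable(activity: dict):
    """Return (url, text) for a Create/Note activity, else None.

    Hypothetical plugin hook: a real implementation would also verify
    HTTP signatures and strip HTML from the content field.
    """
    if activity.get("type") != "Create":
        return None  # ignore Likes, Follows, Announces, etc.
    obj = activity.get("object", {})
    if obj.get("type") != "Note":
        return None
    url = obj.get("url") or obj.get("id")
    text = obj.get("content", "")
    if not url or not text:
        return None
    return url, text
```

The point of the plugin is just that pushed activities arrive pre-parsed, so indexing them costs far less than crawling the same pages.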
hendrik@palaver.p3x.de 5 months ago
Sure. SearX is a meta-search engine: it only sends queries to other search engines and collects their results. YaCy, on the other hand, is itself a search engine; it has the data available and doesn't query other engines. In theory you could combine the two concepts and have software that does both, but that requires some clever thinking. The returned (Google) ranking only applies to the exact search term, and it's questionable whether you can store it and do anything useful with it except when some other user searches for the exact same thing. The returned teaser texts are also very short and tailored to the search query, so maybe useless as well. It'd be hard.
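The one narrow win conceded here, reusing stored results when another user issues the exact same query, could be sketched as a short-lived cache keyed on the normalized query string (class name and TTL are illustrative, not anything SearXNG ships):

```python
import time

class ExactQueryCache:
    """Cache upstream results keyed by the exact (normalized) query.

    A short TTL keeps stale rankings from lingering; per the caveat
    above, only identical queries can ever hit.
    """

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}  # normalized query -> (timestamp, results)

    @staticmethod
    def _normalize(query: str) -> str:
        # Lowercase and collapse whitespace; anything else is a miss.
        return " ".join(query.lower().split())

    def get(self, query: str):
        entry = self._store.get(self._normalize(query))
        if entry is None:
            return None
        ts, results = entry
        if time.monotonic() - ts > self.ttl:
            return None  # expired
        return results

    def put(self, query: str, results: list) -> None:
        self._store[self._normalize(query)] = (time.monotonic(), results)
```

This mainly pays off during spikes where many users search for the same thing at once, which is exactly the scenario kbal described earlier in the thread.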
One thing you could do is crawl the results that users actually click on, and I think YaCy already does that. AFAIK they had a browser add-on or a proxy or something to intercept visited pages (and hence search results).
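At its core, the crawl-what-users-click idea reduces to a deduplicating queue fed by whatever hook intercepts the clicks (proxy, add-on, or redirect endpoint). A minimal sketch, with all names hypothetical:

```python
from collections import deque
from urllib.parse import urlparse

class ClickCrawlQueue:
    """Queue clicked result URLs for the local crawler, once each."""

    def __init__(self):
        self._seen = set()
        self._queue = deque()

    def record_click(self, url: str) -> bool:
        """Return True if the URL was newly queued for crawling."""
        if urlparse(url).scheme not in ("http", "https"):
            return False  # skip ftp:, javascript:, mailto:, etc.
        if url in self._seen:
            return False  # already queued or crawled
        self._seen.add(url)
        self._queue.append(url)
        return True

    def next_to_crawl(self):
        """Pop the oldest pending URL, or None if the queue is empty."""
        return self._queue.popleft() if self._queue else None
```

The appeal of click-driven crawling is that it biases the index toward pages users demonstrably wanted, at the cost of the obvious privacy questions an interception proxy raises.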