I was thinking of something slightly different. It would be automatic: a bit more like a “federated Google” and less like old-style indexing sites. It’s something like this:
- there are central servers with info about pages on the internet
- you perform searches through a program or add-on (let’s call it “the software”)
- as you’re using the software, performing your searches, it also crawls the web and assigns a “desirability” value to the pages it visits, with that info being added to the server you’re using (rough sketch after this list)
- the algorithm is open and, if you so desire, you can create your own server and fork the algorithm
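Just to make the idea concrete, here’s a minimal sketch in Python. Every name in it (Page, IndexServer, desirability, the example URLs) is made up, and a toy in-memory class stands in for the real central server:

```python
# Minimal sketch of the idea; all names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Page:
    url: str
    text: str


def desirability(page: Page) -> float:
    """The open, forkable scoring algorithm: someone running their own
    server would replace this with their own heuristics."""
    score = 0.0
    if len(page.text) > 500:                # reward substantive pages
        score += 1.0
    if "sponsored" in page.text.lower():    # penalize obvious ad content
        score -= 2.0
    return score


def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)


@dataclass
class IndexServer:
    """Stand-in for a central server with info about pages."""
    pages: dict[str, Page] = field(default_factory=dict)
    scores: dict[str, list[float]] = field(default_factory=dict)

    def report(self, page: Page, score: float) -> None:
        # Each client crawl contributes one score observation per page.
        self.pages[page.url] = page
        self.scores.setdefault(page.url, []).append(score)

    def search(self, query: str) -> list[str]:
        # Rank matching pages by average reported desirability.
        hits = [u for u, p in self.pages.items()
                if query.lower() in p.text.lower()]
        return sorted(hits, key=lambda u: -mean(self.scores[u]))


# “The software”: as you search, it also crawls pages it encounters,
# scores them, and reports the scores to the server you’re using.
server = IndexServer()
for page in (
    Page("https://example.org/deep-dive", "search engines " * 100),
    Page("https://example.org/ad-farm", "sponsored search results"),
):
    server.report(page, desirability(page))

print(server.search("search"))  # the substantive page ranks first
```

A real version would obviously need an actual crawler, networking, and some trust/dedup handling for reported scores, but the forkable part lives entirely in the scoring function, which is the point.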
It would be vulnerable to SEO, but less so than Google, because SEO tailored to the algorithm used by one server won’t necessarily work well against another server’s.
Please note, however, that this is “ideas guy” tier. I wouldn’t be surprised if it turns out to be unviable for some reason I don’t know about.
SnotFlickerman@lemmy.blahaj.zone 6 months ago
It’s interesting that you mention MetaFilter, because they’re literally in the process of transitioning fully to a non-profit organization.
…metafilter.com/…/MeFi-Nonprofit-Update-March-26-…
They’re the only aggregator that still isn’t flooded with ads and has pretty decent moderation policies.
rollingFlint@lemmy.world 6 months ago
Wow, that’s really neat.
Thanks for letting me know about MetaFilter and its transition to a non-profit. This really seems like a great move for the site.
I’ve heard of the site before, but never had the chance to try it. Guess a bit late is better than never, right? :D