blob42
@blob42@lemmy.ml
- Comment on Gosuki: a cloudless, real time, multi-browser, extension-free bookmark manager with multi-device sync 2 weeks ago:
You’re welcome! I updated the documentation to cover some use cases with mobile devices, using p2p sync or just relying on Syncthing.
- Comment on Gosuki: a cloudless, real time, multi-browser, extension-free bookmark manager with multi-device sync 2 weeks ago:
Thanks for the feedback :)
Regarding mobile devices, my short-term plan is to integrate with Floccus.
In the meantime I have been using a workaround with Syncthing, as follows:
I have a folder synced between my mobile devices and Gosuki. From time to time I export all the mobile browser bookmarks using the browser’s built-in export to HTML. On the Gosuki node you can set up the html-autoimport module, which continuously watches the synced folder and imports the bookmarks. It works flawlessly (a rough sketch of the idea follows below).
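A minimal, hypothetical sketch of that watch-and-import loop in Go, not Gosuki's actual implementation: it watches the Syncthing-shared folder and parses any Netscape-format bookmark export written into it. The folder path is a placeholder and the "import" step is just a log line.

```go
package main

import (
    "log"
    "os"
    "strings"

    "github.com/fsnotify/fsnotify"
    "golang.org/x/net/html"
)

// importBookmarks parses a Netscape-format bookmark export (the HTML file the
// mobile browser produces) and lists the links it finds.
func importBookmarks(path string) {
    f, err := os.Open(path)
    if err != nil {
        log.Println(err)
        return
    }
    defer f.Close()

    z := html.NewTokenizer(f)
    for {
        switch z.Next() {
        case html.ErrorToken:
            return // end of file (or parse error)
        case html.StartTagToken:
            tok := z.Token()
            if tok.Data != "a" {
                continue
            }
            for _, attr := range tok.Attr {
                if attr.Key == "href" {
                    // a real importer would insert this into the bookmark store
                    log.Println("importing bookmark:", attr.Val)
                }
            }
        }
    }
}

func main() {
    syncDir := "/home/me/Sync/bookmarks" // the Syncthing-shared folder (placeholder)

    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()
    if err := watcher.Add(syncDir); err != nil {
        log.Fatal(err)
    }

    // Re-import whenever an exported .html file is created or rewritten.
    for event := range watcher.Events {
        if event.Op&(fsnotify.Write|fsnotify.Create) != 0 && strings.HasSuffix(event.Name, ".html") {
            importBookmarks(event.Name)
        }
    }
}
```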
- Gosuki: a cloudless, real time, multi-browser, extension-free bookmark manager with multi-device sync (github.com). Submitted 2 weeks ago to selfhosted@lemmy.world | 4 comments
- Comment on 3 weeks ago:
Because it’s an open and decentralized protocol in the same vein as email. It is the most likely to survive in the long term, as it’s not tied to a single entity.
Fragmentation is inevitable in a decentralized protocol. Look at email or HTTP servers: there is no single mainstream app, just a standard, extensible protocol; that’s how the internet was originally designed to grow. Now that corporations are pushing their own protocols, they have an incentive to lock users into their ecosystems.
- Comment on 3 weeks ago:
Not at all, Message Archive Management (MAM) is one of the core XMPP extensions, and almost every XMPP app supports it.
Let me tell you about another huge advantage of XMPP for those who care about privacy: it’s called OMEMO.
- Comment on 3 weeks ago:
I use a self-hosted XMPP stack, with ejabberd as the server and conversations.im for the mobile apps. I have audio and video calls and tons of features built into XMPP. There is a huge selection of apps for all platforms.
XMPP is a battle-tested protocol that many major messaging apps use under the hood.
I used Matrix a few years ago for a full year. I dropped it and never came back. It is a bloated solution to a problem that was already solved with XMPP.
For example, I programmed a bot, shared in a private room, that provides commands such as archiving websites with archiveit or YouTube videos with YouTubeArchiver (a rough sketch follows at the end of this comment).
I am planning, however, to migrate from ejabberd to Prosody, as I would like to easily hack on the source code or write extensions, and ejabberd’s Erlang stack is very rigid.
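As promised above, here is a minimal sketch of what such a command bot can look like in Go, using the github.com/mattn/go-xmpp library. This is not my actual bot: the JID, password and command names are placeholders, it answers direct chats rather than joining the private room, and the archiving backends are stubbed out.

```go
package main

import (
    "fmt"
    "log"
    "strings"

    xmpp "github.com/mattn/go-xmpp"
)

func main() {
    // Connect as a plain chat bot; TLS/STARTTLS details depend on the server
    // (see the library's Options struct). A real setup would join the MUC room.
    talk, err := xmpp.NewClient("xmpp.example.org:5222", "bot@example.org", "secret", false)
    if err != nil {
        log.Fatal(err)
    }

    for {
        stanza, err := talk.Recv()
        if err != nil {
            log.Fatal(err)
        }
        chat, ok := stanza.(xmpp.Chat)
        if !ok {
            continue
        }
        fields := strings.Fields(chat.Text)
        if len(fields) == 0 {
            continue
        }

        // Tiny command dispatcher: "!archive <url>", "!yt <url>", ...
        var reply string
        switch fields[0] {
        case "!archive":
            // here the real bot would hand the URL to the website archiver
            reply = fmt.Sprintf("archiving %s ...", strings.Join(fields[1:], " "))
        case "!yt":
            // ... and here to the video downloader
            reply = fmt.Sprintf("queued video download: %s", strings.Join(fields[1:], " "))
        default:
            continue
        }
        talk.Send(xmpp.Chat{Remote: chat.Remote, Type: chat.Type, Text: reply})
    }
}
```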
- Comment on Anubis is awesome! Stopping (AI)crawlbots 5 weeks ago:
We need a decentralized, community-owned Cloudflare alternative. Anubis looks to be on the right track.
- Comment on Anubis is awesome! Stopping (AI)crawlbots 1 month ago:
Right, I must have just blanket-banned the whole /8 to be sure Alibaba Cloud is included. I did it some time ago, so I forgot.
- Comment on Anubis is awesome! Stopping (AI)crawlbots 1 month ago:
I am planning to use it. For Caddy users: after being bombarded by AI crawlers for weeks, I came up some time ago with a solution that works.
It is a custom Caddy CEL expression matcher coupled with caddy-ratelimit and caddy-defender.
Now here’s the fun part: the defender plugin can respond with garbage, so whenever an AI crawler matches, the response poisons their training dataset.
Originally I relied only on the rate limiter and noticed the AI bots kept trying whenever the rate limit was reset. Once I introduced data poisoning, they all stopped :)
```caddyfile
git.blob42.xyz {
    @bot <<CEL
        header({'Accept-Language': 'zh-CN'}) ||
        header_regexp('User-Agent', '(?i:(.*bot.*|.*crawler.*|.*meta.*|.*google.*|.*microsoft.*|.*spider.*))')
        CEL

    abort @bot

    defender garbage {
        ranges aws azurepubliccloud deepseek gcloud githubcopilot openai 47.0.0.0/8
    }

    rate_limit {
        zone dynamic_botstop {
            match {
                method GET
                # to use with defender
                #header X-RateLimit-Apply true
                #not header LetMeThrough 1
            }
            key {remote_ip}
            events 1500
            window 30s
            #events 10
            #window 1m
        }
    }

    reverse_proxy upstream.server:4242

    handle_errors 429 {
        respond "429: Rate limit exceeded."
    }
}
```