schmurnan
@schmurnan@lemmy.world
- Comment on Why I Lost Faith in Kagi 7 months ago:
Yeah I had SearXNG running via a Docker container and it was pretty good. I didn’t like having to use a domain name and expose it over the internet though, because Docker is running on my NAS. I guess I could give it another try using Cloudflare tunnels so I don’t have to open anything up.
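If I do revisit it, the tunnel approach would mean running a cloudflared sidecar next to SearXNG so nothing is exposed on the NAS itself. A rough sketch of what that compose file might look like (service names and the `TUNNEL_TOKEN` variable are assumptions; the token itself comes from the Cloudflare Zero Trust dashboard):

```yaml
# Hypothetical sketch: SearXNG plus a cloudflared tunnel sidecar.
# No ports are published, so nothing is opened on the NAS/router;
# Cloudflare routes the public hostname through the tunnel instead.
services:
  searxng:
    image: searxng/searxng:latest
    restart: unless-stopped
    volumes:
      - ./searxng:/etc/searxng

  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}  # from the Zero Trust dashboard
```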
Or else go back to Startpage.
- Comment on Why I Lost Faith in Kagi 7 months ago:
My 100-search trial expired this week and I was literally planning on subscribing later tonight. This has made me think twice.
But it takes me back to why I tried Kagi in the first place: What else can I use that respects privacy?
I don’t think any of them do completely. DuckDuckGo uses Bing, so that’s Microsoft; Google is… well, Google; Brave is apparently really shady; and I’ve never thought much of the results from Bing directly.
What else?
- Comment on "Best" Mac browser: Your view 8 months ago:
Sorry, I wasn’t classing Chrome and Chromium as the same thing. I’m a software developer of 20 years so I understand they’re not the same thing. I guess I just took that opportunity to state that I don’t use Google services/products if I can help it.
In work we’re a Windows house, but I’ve managed to get my hands on an M2 MacBook Pro. For now I’m still using Edge, but I’d like to get my iCloud exemption so I can use some of the apps on my personal MBP for work. I’m wondering whether I should continue using Edge for work and A. N. Other browser for personal (and mirror this on my iPhone), or whether to use profiles, for example, on Safari and split it that way. I might be limited in what I can download on the work machine, but I’d like to synchronise everything as much as possible rather than having two completely different Mac experiences with my iPhone sort of thrown in the middle of both.
Which browser do you prefer? I assume a Chromium-based derivative?
- Comment on "Best" Mac browser: Your view 8 months ago:
I have/had a ProtonMail account, and whilst it was great, I believe it was only end-to-end encrypted when sending emails to other people using ProtonMail…? Or at least that was my understanding at the time.
The apps back then weren’t particularly polished, so I ended up migrating everything back to iCloud.
To be honest, I don’t seem to have any issues with iCloud and everything just works. But that’s the problem with Apple, and how they “get” you.
- Comment on "Best" Mac browser: Your view 8 months ago:
And this, my friend, is exactly what I came here for. Very insightful, informative and measured answer. Thank you for taking the time 👍🏻
- Comment on "Best" Mac browser: Your view 8 months ago:
Fair enough 👍🏻
- Comment on "Best" Mac browser: Your view 8 months ago:
Tried it, kinda liked it, but then read a lot of shady stuff about them not being as privacy-focused as they’re made out to be.
I might give Arc a go, not sure how good/popular it is though. But I think anything other than Safari will be a compromise because of the Apple Pay/Touch ID/Face ID integration.
- Comment on "Best" Mac browser: Your view 8 months ago:
Yeah I know they’re all based on one of three, but they are all subtly different in what they offer.
So whilst there are three main engines, there are definitely more than three choices.
Bottom of the pile for me is Chrome - I don’t use anything Google knowingly/willingly.
- Comment on "Best" Mac browser: Your view 8 months ago:
The Apple integration is probably the main reason I use Safari; the likes of Apple Pay and Touch ID/Face ID all just work. I’d love that ability in Firefox, and then I’d probably use it exclusively.
- Comment on "Best" Mac browser: Your view 8 months ago:
That wasn’t the OP 😂
- Submitted 8 months ago to technology@lemmy.world | 59 comments
- Comment on Which privacy-focused search engine are you using? 1 year ago:
Yeah I have a self-hosted one but I’m struggling to get results. I posted under another comment on this thread, was just gonna ask for some support troubleshooting.
Completely agree though, self hosting over public instances all day long.
- Comment on Which privacy-focused search engine are you using? 1 year ago:
Do you use a self-hosted SearXNG, or one of the other hosted instances?
- Comment on Printers 1 year ago:
I see what you did there!
- Comment on Which privacy-focused search engine are you using? 1 year ago:
I replied to another comment on here saying that I’d tried this once before, via a Docker container, but just wasn’t getting any results back (kept getting timeouts from all the search engines).
I’ve just revisited it, and still get the timeouts. Reckon you’re able to help me troubleshoot it?
Below are the logs from Portainer:
```
  File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
    return request('get', url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
    raise httpx.TimeoutException('Timeout', request=None) from e
httpx.TimeoutException: Timeout
2023-08-06 09:58:13,651 ERROR:searx.engines.soundcloud: Fail to initialize
Traceback (most recent call last):
  File "/usr/local/searxng/searx/network/__init__.py", line 96, in request
    return future.result(timeout)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 458, in result
    raise TimeoutError()
TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/searxng/searx/search/processors/abstract.py", line 75, in initialize
    self.engine.init(get_engine_from_settings(self.engine_name))
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 69, in init
    guest_client_id = get_client_id()
                      ^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 45, in get_client_id
    response = http_get("https://soundcloud.com")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
    return request('get', url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
    raise httpx.TimeoutException('Timeout', request=None) from e
httpx.TimeoutException: Timeout
2023-08-06 09:58:13,654 ERROR:searx.engines.soundcloud: Fail to initialize
Traceback (most recent call last):
  File "/usr/local/searxng/searx/network/__init__.py", line 96, in request
    return future.result(timeout)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 458, in result
    raise TimeoutError()
TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/searxng/searx/search/processors/abstract.py", line 75, in initialize
    self.engine.init(get_engine_from_settings(self.engine_name))
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 69, in init
    guest_client_id = get_client_id()
                      ^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 45, in get_client_id
    response = http_get("https://soundcloud.com")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
    return request('get', url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
    raise httpx.TimeoutException('Timeout', request=None) from e
httpx.TimeoutException: Timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikidata: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.duckduckgo: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.google: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.qwant: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.startpage: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikibooks: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikiquote: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikisource: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.wikipecies: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.wikiversity: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.wikivoyage: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.brave: engine timeout
2023-08-06 10:02:05,481 WARNING:searx.engines.wikidata: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,481 ERROR:searx.engines.wikidata: HTTP requests timeout (search duration : 6.457878380082548 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,482 WARNING:searx.engines.wikisource: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,484 ERROR:searx.engines.wikisource: HTTP requests timeout (search duration : 6.460748491808772 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,485 WARNING:searx.engines.brave: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,485 ERROR:searx.engines.brave: HTTP requests timeout (search duration : 6.461546086706221 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,487 WARNING:searx.engines.google: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,487 ERROR:searx.engines.google: HTTP requests timeout (search duration : 6.463769535068423 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,489 WARNING:searx.engines.wikiversity: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,489 ERROR:searx.engines.wikiversity: HTTP requests timeout (search duration : 6.466003180015832 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,490 WARNING:searx.engines.wikivoyage: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,490 ERROR:searx.engines.wikivoyage: HTTP requests timeout (search duration : 6.466597221791744 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,490 WARNING:searx.engines.qwant: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,490 ERROR:searx.engines.qwant: HTTP requests timeout (search duration : 6.4669976509176195 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,491 WARNING:searx.engines.wikibooks: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,491 ERROR:searx.engines.wikibooks: HTTP requests timeout (search duration : 6.4674198678694665 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,491 WARNING:searx.engines.wikiquote: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,492 WARNING:searx.engines.wikipecies: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,492 ERROR:searx.engines.wikiquote: HTTP requests timeout (search duration : 6.468321242835373 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,492 ERROR:searx.engines.wikipecies: HTTP requests timeout (search duration : 6.468797960784286 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,496 WARNING:searx.engines.duckduckgo: ErrorContext('searx/engines/duckduckgo.py', 98, 'res = get(query_url, headers=headers)', 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,497 ERROR:searx.engines.duckduckgo: HTTP requests timeout (search duration : 6.47349306801334 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,511 WARNING:searx.engines.startpage: ErrorContext('searx/engines/startpage.py', 214, 'resp = get(get_sc_url, headers=headers)', 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,511 ERROR:searx.engines.startpage: HTTP requests timeout (search duration : 6.487425099126995 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:04:27,475 ERROR:searx.engines.duckduckgo: engine timeout
2023-08-06 10:04:27,770 WARNING:searx.engines.duckduckgo: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:04:27,771 ERROR:searx.engines.duckduckgo: HTTP requests timeout (search duration : 3.2968566291965544 s, timeout: 3.0 s) : TimeoutException
2023-08-06 10:04:50,094 ERROR:searx.engines.duckduckgo: engine timeout
2023-08-06 10:04:50,187 WARNING:searx.engines.duckduckgo: ErrorContext('searx/engines/duckduckgo.py', 98, 'res = get(query_url, headers=headers)', 'httpx.ConnectTimeout', None, (None, None, 'duckduckgo.com')) False
2023-08-06 10:04:50,187 ERROR:searx.engines.duckduckgo: HTTP requests timeout (search duration : 3.0933595369569957 s, timeout: 3.0 s) : ConnectTimeout
```
The above is a simple search for “best privacy focused search engines 2023”, followed by the same search again but using the ddg! bang in front of it.
I can post my docker-compose if it helps?
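In the meantime, here’s a quick sanity check I can run inside the container (e.g. `docker exec -it searxng python3 probe.py`, container name assumed) to see whether outbound HTTPS works from there at all, using only the standard library rather than SearXNG’s httpx stack:

```python
import socket
import urllib.error
import urllib.request


def probe(url: str, timeout: float = 6.0) -> str:
    """Try a plain GET with the same 6.0 s budget the engine logs show."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"ok ({resp.status})"
    except urllib.error.HTTPError as exc:
        # The server answered, so the network path itself is fine.
        return f"http {exc.code}"
    except (TimeoutError, socket.timeout):
        # Same symptom as the httpx.TimeoutException in the logs above.
        return "timeout"
    except urllib.error.URLError as exc:
        return f"unreachable ({exc.reason})"


if __name__ == "__main__":
    for url in ("https://duckduckgo.com", "https://www.startpage.com"):
        print(url, "->", probe(url))
```

If this times out too, the problem is the container’s network (DNS, firewall, or the NAS blocking outbound traffic) rather than anything in the SearXNG settings.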
- Comment on Which privacy-focused search engine are you using? 1 year ago:
What are your thoughts on Arc? I tried it a couple of months ago but couldn’t really get used to the layout, etc.
Sure it’s as good a browser as any, I just wasn’t feeling it.
- Comment on Which privacy-focused search engine are you using? 1 year ago:
Perhaps I need to go back to figuring out SearXNG. Although I did read that there’s a slight privacy compromise to use SearXNG over SearX.
- Comment on Which privacy-focused search engine are you using? 1 year ago:
I’ve always just used Safari as my browser on iOS and macOS, so I’ve never paid attention to reviews/opinions on the newer browsers such as Brave. Before I switched to Mac I always used Firefox on my Windows machines, so I know how privacy-focused they’ve always been. But I’m hearing a lot of positives about Brave, and so far it seems pretty decent.
I’ve tried Arc but wasn’t entirely convinced. And in work I have a Windows machine so have been tied to Edge (although I’ve recently put in a request for Firefox and had it approved).
I guess it’d be nice to have a consistent search experience across the board, which the likes of DDG would give me. But definitely seeing good things about Brave and Brave Search.
- Comment on Which privacy-focused search engine are you using? 1 year ago:
Thanks, I’ll take a look. I didn’t know about that post.
- Submitted 1 year ago to technology@lemmy.world | 62 comments
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
Thanks, and yeah sorry, what I meant was to listen on both ports 80 and 443 and have a redirect in Traefik from 80 to 443 - I don’t plan on having anything directly accessible over port 80.
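For reference, that redirect can live in Traefik’s static config rather than per-router labels. A minimal sketch in v2 syntax (the `websecure` entrypoint name is my assumption, to match the common convention):

```yaml
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        # Send everything arriving on :80 to the HTTPS entrypoint
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"
```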
As per another post, I’ve hit a stumbling block:
OK so made a start with this. Spun up a Pi-hole container, added mydomain.com as an A record in Local DNS, and created a CNAME for traefik.mydomain.com to point to mydomain.com.
In Cloudflare, I removed the mydomain.com A record and the www CNAME record.
Doing an nslookup on mydomain.com I get
```
Non-authoritative answer:
*** Can't find mydomain.com: No answer
```
Which I guess is to be expected.
However, when I then navigate to traefik.mydomain.com in my browser, I’m met with a Cloudflare error page: https://imgur.com/XhKOywo.
Below is the docker-compose of my traefik container:
```yaml
traefik:
  container_name: traefik
  image: traefik:latest
  restart: unless-stopped
  networks:
    - medianet
  ports:
    - 80:80
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /volume1/docker/traefik:/etc/traefik
    - /volume1/docker/traefik/access.log:/logs/access.log
    - /volume1/docker/traefik/traefik.log:/logs/traefik.log
    - /volume1/docker/traefik/acme/acme.json:/acme.json
  environment:
    - TZ=Europe/London
  labels:
    - traefik.enable=true
    - traefik.http.routers.traefik.rule=Host(`$TRAEFIK_DASHBOARD_HOST`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
    - traefik.http.routers.traefik.service=api@internal
```
My traefik.yml is also nice and basic at this point:
```yaml
global:
  sendAnonymousUsage: false
entryPoints:
  web:
    address: ":80"
api:
  dashboard: true
  insecure: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    exposedByDefault: false
log:
  filePath: traefik.log
  level: DEBUG
accessLog:
  filePath: access.log
  bufferingSize: 100
```
Any ideas what’s going wrong? I’m unclear on why the domain is still routing to Cloudflare.
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
I don’t plan on exposing any of this stuff to anybody other than me. I do plan on spinning up SearX but it’ll only be me using it. I’ve given up trying to convince my family to move away from Google to even DuckDuckGo or Startpage, so there’s no way I’ll convince them to use SearX!
I think, therefore, for accessing away from home I’ll perhaps setup a subdomain that points to the IP of my Tailscale container — that means it’ll be accessible externally but only when I turn on the VPN.
When I’m on my home network I have a VPN on my Mac anyway.
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
Before I was using Traefik I used to use plain NGINX and was pretty happy with it. I made the switch to Traefik after reading some good things about it on Reddit.
More than happy to switch to NPM and give it a try. At this point I have no reverse proxy running at all, so it’s not even like I have to swap out Traefik — there’s nothing there to begin with.
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
Thanks, I’d like to know more about how to go about this approach.
I guess in my head, I want to achieve the following (however I go about it):
- Access mydomain.com from outside my network and hit some kind of blank page that wouldn’t necessarily suggest to the public that anything exists here
- Access mydomain.com from inside my network and hit a login page of some kind (Authelia or otherwise), to then gain access to the Homepage container running in Docker (essentially a dashboard to all my services)
- Access secure.mydomain.com from outside my network and route through to the same as above, only this would be via the Tailscale IP address/container running on my stack to allow for remote access
- Route all HTTP requests to HTTPS
- Use the added protection that Cloudflare brings (orange clouds where possible)
- SSL certificates for all services
- Ability to spin up extra Docker containers and auto-obtain SSL certs for them
- Ensure that everything else on my NAS and network is secure/inaccessible other than the services I expose through Traefik
I have no idea where Cloudflare factors in (if at all), nor how Pi-hole factors in (if at all).
Internal stuff I’ve been absolutely fine with. Stick a domain name, a reverse proxy and DNS in front of me and it’s like I’m learning how to code a Hello World app all over again.
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
Thanks.
I guess the issue with this, though, is that I don’t always need to access it via Tailscale - I’d only do that when away from home. Perhaps there’s a way to point a subdomain to the Tailscale IP, and that’s only accessible when Tailscale is active? And then use an alternative subdomain to access it the rest of the time? Is that achievable?
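Thinking about it, the public DNS record could simply point at the machine’s Tailscale address, which only answers when the VPN is up. A hypothetical record (the subdomain name is made up, and the 100.x address is a placeholder for whatever `tailscale ip -4` reports):

```
; hypothetical zone entry: ts.mydomain.com resolves publicly, but the
; 100.x address is only reachable from devices signed in to the tailnet
ts.mydomain.com.  300  IN  A  100.64.0.1
```

Off the tailnet the name still resolves, it just doesn’t connect, so nothing is actually exposed.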
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
Thanks. Yep, subdomains was what I’d planned on: traefik.mydomain.com to access the Traefik dashboard; home.mydomain.com to access the Homepage container. I was planning on spinning up an Authelia container as well to provide 2FA for the services I want protecting. I guess it’d also be nice to have some kind of landing page for traffic coming directly to www.mydomain.com or mydomain.com as well.
Ideally I don’t want to port forward, so would I need to rely on Traefik to redirect the traffic from port 80 to port 443, and then proxy from port 443 to the required container? How do I therefore stop traffic from hitting the DSM admin on ports 5000/5001 for example?
I need to figure out a starting point to get traffic from my domain into my NAS (safely) then start spinning up containers and have Traefik route them appropriately, then I can look at Pi-hole/local DNS and Tailscale. And then I guess SSL.
- Comment on Route domain name to Docker containers on Synology NAS? 1 year ago:
Interesting, I’ve never considered Cloudflare Tunnels. Thanks.
However I do remember seeing this video the other day, that suggests perhaps it’s not always the best solution? Not sure this applies here, though: https://www.youtube.com/watch?v=oqy3krzmSMA.
- Submitted 1 year ago to selfhosted@lemmy.world | 16 comments