hedgehog
@hedgehog@ttrpg.network
- Comment on Age verification and the enshitification of streaming will help reduce the decline in computer literacy in under 18s 1 week ago:
I’m a millennial and I did it more than once on hardware older than I was, but because I wanted to, not because there were no other options.
- Comment on How do I manage docker&Traefik behind a reverse proxy not on docker. 2 weeks ago:
This is what I would try first. It looks like 1337 is the exposed port, per github.com/nightscout/…/Dockerfile
```yaml
x-logging: &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    logging: *default-logging
    ports:
      - 1337:1337
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      # For all other settings, please refer to the Environment section of the README

      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token from every visit, using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README
      # https://github.com/nightscout/cgm-remote-monitor#environment
```
- Comment on How do I manage docker&Traefik behind a reverse proxy not on docker. 2 weeks ago:
To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,
```yaml
services:
  nightscout:
    ports:
      - 3000:3000
```
You can remove the labels as those are used by Traefik, as well as the Traefik service itself.
Then just point Nginx to that port (e.g., 3000) on your local machine.
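If it helps, a minimal Nginx server block for that might look something like this (the domain and port are placeholders, and TLS is omitted for brevity):

```nginx
server {
    listen 80;
    server_name nightscout.example.com;

    location / {
        # Forward everything to the Nightscout container's published port
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```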
---
Traefik has to know the port, too, but it can auto-detect the port a local Docker service is running on. It looks like your config relies on that feature, as I don’t see the label that explicitly specifies the port.
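If auto-detection ever misbehaves, or you just want to be explicit, Traefik’s Docker provider lets you pin the port with a label. The service name and port below are just placeholders:

```yaml
services:
  nightscout:
    labels:
      # Tell Traefik which container port to route traffic to
      - traefik.http.services.nightscout.loadbalancer.server.port=3000
```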
- Comment on YSK about StopICE.net to send and receive alerts about ICE raids in your area 3 weeks ago:
PSTN is wiretapped.
It’s a good thing that the website itself supports sending and receiving alerts, then.
- Comment on My reason for wanting HomeAssistant and a locked down VLAN... 5 weeks ago:
I thought Hue bulbs used Zigbee?
- Comment on My reason for wanting HomeAssistant and a locked down VLAN... 5 weeks ago:
The up arrow moves through the letters, e.g., A->B->C. The down arrow moves to the next character in the sequence, e.g., C->CA->CAA. If you click past the correct letter, you’ll have to click all the way through again. And if you submit the wrong letter, you have to start all over (after it takes twenty seconds attempting to connect with the wrong password and then alerts you that it didn’t work, of course).
- Comment on [deleted] 5 weeks ago:
Depends on your e-reader! If you have a Kindle, Kobo, or Nook, yes, that’s true. However:
Boox has e-readers that run Android and you can install Hoopla. The Palma 2 is phone sized which is great. The Page, Leaf2, and Go 7 are all in the 7” form factor, plus they have 6” versions. And they have tablet sizes, too. They have both traditional black&white and color e-ink displays.
I have the Boox Air 3C and the original Palma and both are great. I’ll likely get a Boox as my next standard sized e-reader, too (whenever I replace my Kindle Oasis). Though unless the technology drastically improves before then, it’ll be one with a black and white screen. (The color is nice in the tablet sizes, though.)
Some other options that I’m less familiar with include:
- Bigme has Android 7” color e-readers, as well as tablets and e-ink smartphones.
- Meebook has e-readers that run Android (and Android e-ink tablets)
- The MuSnap Aura C is a 10” Android e-ink tablet
- XPPen has an 11” Android e-ink tablet
- Comment on [deleted] 5 weeks ago:
It’s incredibly compatible. Capitalists want laborers to work hard. It encourages laborers to work hard so they can one day be capitalists themselves.
It also encourages them to vote for politicians who don’t serve them, but capitalists, because someday they’ll benefit from those pro-business policies.
- Comment on [deleted] 5 weeks ago:
The American Dream is capitalist propaganda, not anticapitalist.
- Comment on We need to stop pretending AI is intelligent 5 weeks ago:
The products currently on the market have architectures far more sophisticated than just an LLM. Even something as simple as “Deep Research,” which both Anthropic and OpenAI offer, uses multiple interconnected systems to provide a single response.
Consider Agentic AI, like Claude Code, where they’re using tools, analyzing the results of those tools, iterating, possibly calling out to MCP servers to do other things, etc… The tools allow them to do things like read or modify files in the working directory, execute programs (e.g., your linter, installing dependencies, running your app), query your app itself, and so on.
And of course note that the single “Claude” box in that diagram has an architecture that’s more sophisticated than just being an LLM. At minimum, consumer facing LLMs generally have a supervisor that censors problematic inputs and outputs; this doesn’t make the system more competent but the same concept can be applied to any other sort of transparent wrapper.
It seems to me that we already have consumer systems that are doing what you described, and we’re already working on enhancing their architectures further.
- Comment on Plex has paywalled my server! 1 month ago:
OP is also in the allegedly ultra rare camp of “successfully configured Jellyfin and lived to tell the tale.” Not what I’d expect of someone unable to configure Plex correctly. I’ve not set up a Plex server myself but my guess is it wasn’t clear that it was misconfigured - it did work previously, after all.
- Comment on Plex has paywalled my server! 1 month ago:
If they’re calling it remote streaming when you’re on the same (local) network, that’s not exactly intuitive. I’d say OP’s phrasing was fair.
- Comment on Would alcohol be as popular if it weren't a beverage? 1 month ago:
You got the idea!
- Comment on Would alcohol be as popular if it weren't a beverage? 1 month ago:
We’re in c/shower_thoughts. “What if my grandma was a bike?” would fit right in.
- Comment on Social nuke 2 months ago:
It was already known before the whistleblower that:
- Siri inputs (all STT at that time, really) were processed off device
- Siri had false activations
The “sinister” thing that we learned was that Apple was reviewing those activations to see if they were false, with the stated intent (as confirmed by the whistleblower) of using them to reduce false activations.
There are also black box methods to verify that data isn’t being sent and that particular hardware (like the microphone) isn’t being used, and there are people who look for vulnerabilities as a hobby. If the microphones on the two most popular phone brands (Apple and Samsung) were secretly recording all the time, evidence of that would be easy to find and would be a huge scoop - why haven’t we heard about it yet?
Snowden and Wikileaks dumped a huge amount of info about governments spying, but nothing in there involved always on microphones in our cell phones.
To be fair, an individual phone is a single compromise away from actually listening to you, so it still makes sense to avoid having sensitive conversations within earshot of a wirelessly connected microphone. But generally that’s not the concern most people should have.
Advertising tracking is much more sinister and complicated and harder to wrap your head around than “my phone is listening to me” and as a result makes for a much less glamorous story, but there are dozens, if not hundreds or thousands, of stories out there about how invasive advertising companies’ methods are, about how they know too much, etc… Think about what LLMs do with text. The level of prediction that they can do. That’s what ML algorithms can do with your behavior.
If you’re misattributing what advertisers know about you to the phone listening and reporting back, then you’re not paying attention to what they’re actually doing.
So yes - be vigilant. Just be vigilant about the right thing.
- Comment on Social nuke 2 months ago:
> proven by a whistleblower from apple
Assuming you have an iPhone. And even then, the whistleblower you’re referencing was part of a team who reviewed utterances by users with the “Hey Siri” wake word feature enabled. If you had Siri disabled entirely or had the wake word feature disabled, you weren’t impacted at all.
This may have been limited to impacting only users who also had some option like “Improve Siri and Dictation” enabled, but it’s not clear. Today, the Privacy Policy explicitly says that Apple can have employees review your interactions with Siri and Dictation (my understanding is the reason for the settlement is that they were not explicit that human review was occurring). I strongly recommend disabling that setting, particularly if you have a wake word enabled.
If you have wake words enabled on your phone or device, your phone has to listen to be able to react to them. At that point, of course the phone is listening. Whether it’s sending the info back somewhere is a different story, and there isn’t any evidence that I’m aware of that any major phone company does this.
- Comment on It's easier to inform language with language than with experience. 2 months ago:
Sure - Wikipedia says it better than I could hope to:
> As English-linguist Larry Andrews describes it, descriptive grammar is the linguistic approach which studies what a language is like, as opposed to prescriptive, which declares what a language should be like.[11]: 25 In other words, descriptive grammarians focus analysis on how all kinds of people in all sorts of environments, usually in more casual, everyday settings, communicate, whereas prescriptive grammarians focus on the grammatical rules and structures predetermined by linguistic registers and figures of power. An example that Andrews uses in his book is fewer than vs less than.[11]: 26 A descriptive grammarian would state that both statements are equally valid, as long as the meaning behind the statement can be understood. A prescriptive grammarian would analyze the rules and conventions behind both statements to determine which statement is correct or otherwise preferable. Andrews also believes that, although most linguists would be descriptive grammarians, most public school teachers tend to be prescriptive.[11]: 26
- Comment on It's easier to inform language with language than with experience. 2 months ago:
You might be interested in reading up on the debate of “Prescriptive vs Descriptive” approaches in a linguistics context.
- Comment on What do I actually need? 2 months ago:
You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.
- Comment on How do I securely host Jellyfin? (Part 2) 3 months ago:
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
Then, you can either set up Let’s Encrypt on device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:
- Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
- Your reverse proxy should have ports 443 and 80 exposed, and should redirect HTTP requests to HTTPS.
- Add Jellyfin as a service and route in your reverse proxy’s config.
On your router, forward port 443 to the secure port on your Pi (which, for simplicity’s sake, should also be port 443). You likely also need to forward port 80 so Let’s Encrypt can complete its HTTP challenge.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback requests, then you can use the server’s IP address and expose Jellyfin’s HTTP ports (e.g., 8080) - just make sure to not forward those ports from the router. You’ll have local unencrypted transfers if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you are vulnerable to a Jellyfin or Traefik vulnerability if one is found, so make sure to keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS service - as docker-compose services.
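As a starting point, here’s a stripped-down sketch of what that docker-compose setup can look like. The domain, email, and volume paths are placeholders, and I’ve omitted the dynamic DNS service for brevity:

```yaml
services:
  traefik:
    image: traefik:v3
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Upgrade plain HTTP to HTTPS
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      # Let's Encrypt via the HTTP challenge
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    volumes:
      - ./jellyfin-config:/config
      - ./media:/media
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      # Jellyfin's default internal HTTP port
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
```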
- Comment on YSK famous youtuber "math sorcerer" is selling ai generated books 4 months ago:
Why should we know this?
Not watching that video for a number of reasons: ten seconds in, they hadn’t said anything of substance; their first claim was incorrect (Amazon does not prohibit the use of gen AI in books, nor does it require that use be disclosed to the public, no matter how much you might wish it did); and there was nothing of substance in the description, which in instances like this generally means the video will be largely devoid of substance.
What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?
Why do we think they were generated with AI?
When you say “generated with AI,” what do you mean?
- Generated entirely with AI, without even editing? Then why do they have so many 5 star reviews?
- Generated with AI and then heavily edited?
- Written partly by hand with some pieces written by unedited GenAI?
- Written partly by hand with some pieces written by edited GenAI?
- AI was used for ideation?
- AI was used during editing? E.g., Grammarly?
- GenAI was used during editing? E.g., “ChatGPT, review this chapter and give me any feedback. If sections need rewriting, go ahead and take a first pass.”
- AI might have been used, but we don’t know for sure, and the issue is that some passages just “read like AI?”
And what’s the result? Are the books misleading in some way? That’s the most legitimate actual concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).
- Comment on Consumer GPUs to run LLMs 4 months ago:
Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you scale them into smaller sizes, you use less space and have less precision, but you still have the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you get lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, you need roughly half the parameter count (in billions) in GB of VRAM, plus some overhead. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:
- q4_K_M (the default): 43 GB
- fp16: 141 GB
- q8: 75 GB
- q6_K: 58 GB
- q5_K_M: 50 GB
- q4: 40 GB
- q3_K_M: 34 GB
- q2_K: 26 GB
This is why I run a lot of Q4_K_M 70B models on two 3090s.
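The arithmetic behind those sizes is simple enough to sketch. This is only a lower-bound estimate for the weights themselves; real GGUF files run larger because K-quants mix precisions and files carry metadata:

```python
def estimate_weights_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Back-of-the-envelope size of model weights alone, in decimal GB."""
    return n_params_billion * bits_per_weight / 8

# Llama 3.3 70B (70.6B params) at fp16: matches the 141 GB in the list above
print(round(estimate_weights_gb(70.6, 16), 1))  # 141.2

# At 4 bits: ~35 GB of weights, which is why the actual q4 file is 40 GB
print(round(estimate_weights_gb(70.6, 4), 1))  # 35.3
```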
Generally speaking, there’s no perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6, there’s a small drop going to Q5 and again to Q4, but the models are still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
- Comment on It is deeply bad that a moderator can remove any post or reply. 4 months ago:
You said, and I quote “Find a better way.” I don’t agree with your premise - this is the better way - but I gave you a straightforward, reasonable way to achieve something important to you… and now you’re saying that “This is a discussion of principle.”
You’ve just proven that it doesn’t take a moderator to turn a conversation into a bad joke - you can do it on your own.
- Comment on It is deeply bad that a moderator can remove any post or reply. 4 months ago:
> It’s a discussion of principle.
>
> This is a foreign concept?
It appears to be a foreign concept for you.
I don’t believe that it’s a fundamentally bad thing to converse in moderated spaces; you do. You say “giving somebody the power to arbitrarily censor and modify our conversation is a fundamentally bad thing” like it’s a fact, indicating you believe this, but you’ve been given the tools to avoid giving others the power to moderate your conversation and you have chosen not to use them. This means that you are saying “I have chosen to do a thing that I believe is fundamentally bad.” Why would anyone trust such a person?
For that matter, is this even a discussion? People clearly don’t agree with you and you haven’t explained your reasoning. If a moderator’s actions are logged and visible to users, and users have the choice of engaging under the purview of a moderator or moving elsewhere, what’s the problem?
> It is deeply bad that…
Why?
> Yes, I know, trolls, etc…
In other words, “let me ignore valid arguments for why moderation is needed.”
> But such action turns any conversation into a bad joke.
It doesn’t.
> And anybody who trusts a moderator is a fool.
In places where moderators’ actions are unlogged and they’re not accountable to the community, sure - and that’s true on mainstream social media. Here, moderators are performing a service for the benefit of the community.
Have you never heard the phrase “Trust, but verify?”
> Find a better way.
This is the better way.
- Comment on It is deeply bad that a moderator can remove any post or reply. 4 months ago:
Then why are you doing that, and why aren’t you at least hosting your own instance?
- Comment on It is deeply bad that a moderator can remove any post or reply. 4 months ago:
> Yes, I know, trolls etc. But such action turns any conversation into a bad joke. And anybody who trusts a moderator is a fool.
Not just trolls - there’s much worse content out there, some of which can get you sent to jail in most (all?) jurisdictions.
And even ignoring that, many users like their communities to remain focused on a given topic. Moderation allows this to happen without requiring a vetting process prior to posting. Maybe you don’t want that, but most users do.
> Find a better way.
Here’s an option: you can code a fork or client that automatically parses the modlog, finds comments and posts that have been removed, and makes them visible in your feed. You could even implement the ability to reply by hosting replies on a different instance or community.
For you and anyone who uses your fork, it’ll be as though they were never removed.
Do you have issues with the above approach?
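As a sketch of what that fork’s core logic might look like - the entry shape here is hypothetical, loosely modeled on Lemmy’s modlog API, so the field names would need checking against a real response:

```python
def resurfaced_posts(modlog_entries: list[dict]) -> list[int]:
    """Pick out post removals from a list of modlog entries so a client
    could re-insert them into the feed. Entry shape is hypothetical."""
    removed = []
    for entry in modlog_entries:
        action = entry.get("mod_remove_post")
        if action and action.get("removed"):
            removed.append(action["post_id"])
    return removed

sample = [
    {"mod_remove_post": {"removed": True, "post_id": 101}},
    {"mod_remove_post": {"removed": False, "post_id": 102}},  # re-approved
    {"mod_ban": {"banned": True}},  # unrelated moderator action
]
print(resurfaced_posts(sample))  # [101]
```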
- Comment on It is deeply bad that a moderator can remove any post or reply. 4 months ago:
As a user, you can:
- Review instance and community rules prior to participating
- Review the moderator logs to confirm that moderation activities have been in line with the rules
- If you notice a discrepancy, e.g., over-moderation, you can hold the mods accountable and draw attention to it or simply choose not to engage in that instance or community
- Host your own instance
- Create communities in an existing instance or your own instance
If you host your own instance and communities within that instance, then at that point, you have full control, right? Other instances can de-federate from yours.
- Comment on Consumer GPUs to run LLMs 4 months ago:
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
- Comment on Trouble keeping a top-heavy TPE part on the bed 4 months ago:
To be clear, I’m measuring the relative humidity of the air in the drybox at room temp (72 degrees Fahrenheit / 22 degrees Celsius), not of the filament directly. You can use a hygrometer to do this. I mostly use the hygrometer that comes bundled with my dryboxes (I use the PolyDryer and have several extra PolyDryer Boxes, but there are much cheaper options available) but you can buy a hygrometer for a few bucks or get a bluetooth / wifi / connected one for $15-$20 or so.
If you put filament into a sealed box, it’ll generally - depending on the material - end up in equilibrium with the air. So the measurement you get right away will just show the humidity of the room, but if the filament and desiccant are both dry, it’ll drop; if the desiccant is dry and the filament is wet, it’ll still drop, but not as low.
Note also that what counts as “wet” varies by material. For example, from what I’ve read, PLA can absorb up to 1% or so of its mass as moisture, PETG up to 0.2%, Nylon up to 7-8%… silica gel desiccant beads up to 40%. So when I say they’ll be in equilibrium, I’m referring to the percentage of what that material is capable of absorbing. It isn’t a linear relationship as far as I know, but if it were, that would mean that: if the humidity of the air is 10% and the max moisture the material could retain is 1%, then the material is currently retaining 0.1% moisture by mass. If my room’s humidity is kept at 40%, it’ll absorb moisture until it’s at 0.4% moisture by mass.
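Under that (admittedly naive) linear assumption, the calculation looks like:

```python
def retained_moisture_pct(air_rh_pct: float, max_absorption_pct: float) -> float:
    """Moisture retained by mass, assuming a linear relationship between
    ambient relative humidity and the material's maximum absorption.
    (Real absorption curves aren't linear; this just mirrors the example.)"""
    return air_rh_pct / 100 * max_absorption_pct

# PLA (max ~1% by mass) in a 40% RH room: 0.4% moisture by mass
print(retained_moisture_pct(40, 1.0))  # 0.4
```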
That said, this doesn’t measure it perfectly, since while most filament materials absorb moisture from the air when the humidity is higher, they don’t release it as easily. Heating it both allows the air to hold more moisture and allows the filament (and desiccant) to release more moisture.
- Comment on Potpie : Open source prompt-to-agent for your codebase. 4 months ago:
The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.