hedgehog
@hedgehog@ttrpg.network
- Comment on YSK famous youtuber "math sorcerer" is selling ai generated books 4 hours ago:
Why should we know this?
Not watching that video for a number of reasons, namely that ten seconds in they hadn’t said anything of substance, their first claim was incorrect (Amazon neither prohibits the use of gen AI in books nor requires that its use be disclosed to the public, no matter how much you might wish they did), and there was nothing of substance in the description, which in instances like this generally means the video will largely be devoid of substance.
What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?
Why do we think they were generated with AI?
When you say “generated with AI,” what do you mean?
- Generated entirely with AI, without even editing? Then why do they have so many 5 star reviews?
- Generated with AI and then heavily edited?
- Written partly by hand with some pieces written by unedited GenAI?
- Written partly by hand with some pieces written by edited GenAI?
- AI was used for ideation?
- AI was used during editing? E.g., Grammarly?
- GenAI was used during editing? E.g., “ChatGPT, review this chapter and give me any feedback. If sections need rewriting, go ahead and take a first pass.”
- AI might have been used, but we don’t know for sure, and the issue is that some passages just “read like AI?”
And what’s the result? Are the books misleading in some way? That’s the most legitimate actual concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).
- Comment on Consumer GPUs to run LLMs 22 hours ago:
Look up “LLM quantization.” The idea is that each parameter is a number; by default each uses 16 bits of precision, but if you quantize them down to fewer bits, you use less space at the cost of some precision, while keeping the same number of parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable the lower you go. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, then you need roughly half the parameter count in GB of VRAM, since 4 bits is half a byte per parameter. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:
- Q4_K_M (the default): 43 GB
- fp16: 141 GB
- Q8: 75 GB
- Q6_K: 58 GB
- Q5_K_M: 50 GB
- Q4: 40 GB
- Q3_K_M: 34 GB
- Q2_K: 26 GB
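As a rough sanity check on those numbers: model size is approximately parameter count times bits per weight. A quick sketch (real GGUF files run somewhat larger because the K-quants mix precisions and store metadata):

```python
def estimate_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough model size in GB: parameter count times bits per weight."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Llama 3.3 70B (70.6 billion parameters) at a few precisions:
print(round(estimate_size_gb(70.6, 16), 1))  # fp16 -> 141.2 (listed: 141 GB)
print(round(estimate_size_gb(70.6, 8), 1))   # q8   -> 70.6  (listed: 75 GB)
print(round(estimate_size_gb(70.6, 4), 1))   # q4   -> 35.3  (listed: 40 GB)
```

The gap between the estimate and the listed sizes is the quantization format’s overhead.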
This is why I run a lot of Q4_K_M 70B models on two 3090s.
Generally speaking, there’s no perceptible quality drop going from 8-bit quantization down to Q6_K (though I have heard this is less true with MoE models). Below Q6 there’s a bit of a drop at Q5 and again at Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
- Comment on It is deeply bad that a moderator can remove any post or reply. 23 hours ago:
You said, and I quote, “Find a better way.” I don’t agree with your premise - this is the better way - but I gave you a straightforward, reasonable way to achieve something important to you… and now you’re saying that “This is a discussion of principle.”
You’ve just proven that it doesn’t take a moderator to turn a conversation into a bad joke - you can do it on your own.
- Comment on It is deeply bad that a moderator can remove any post or reply. 23 hours ago:
It’s a discussion of principle.
This is a foreign concept?
It appears to be a foreign concept for you.
I don’t believe that it’s a fundamentally bad thing to converse in moderated spaces; you do. You say “giving somebody the power to arbitrarily censor and modify our conversation is a fundamentally bad thing” like it’s a fact, indicating you believe this, but you’ve been given the tools to avoid giving others the power to moderate your conversation and you have chosen not to use them. This means that you are saying “I have chosen to do a thing that I believe is fundamentally bad.” Why would anyone trust such a person?
For that matter, is this even a discussion? People clearly don’t agree with you and you haven’t explained your reasoning. If a moderator’s actions are logged and visible to users, and users have the choice of engaging under the purview of a moderator or moving elsewhere, what’s the problem?
It is deeply bad that…
Why?
Yes, I know, trolls, etc…
In other words, “let me ignore valid arguments for why moderation is needed.”
But such action turns any conversation into a bad joke.
It doesn’t.
And anybody who trusts a moderator is a fool.
In places where moderators’ actions are unlogged and they’re not accountable to the community, sure - and that’s true on mainstream social media. Here, moderators are performing a service for the benefit of the community.
Have you never heard the phrase “Trust, but verify?”
Find a better way.
This is the better way.
- Comment on It is deeply bad that a moderator can remove any post or reply. 1 day ago:
Then why are you doing that, and why aren’t you at least hosting your own instance?
- Comment on It is deeply bad that a moderator can remove any post or reply. 1 day ago:
Yes, I know, trolls etc. But such action turns any conversation into a bad joke. And anybody who trusts a moderator is a fool.
Not just trolls - there’s much worse content out there, some of which can get you sent to jail in most (all?) jurisdictions.
And even ignoring that, many users like their communities to remain focused on a given topic. Moderation allows this to happen without requiring a vetting process prior to posting. Maybe you don’t want that, but most users do.
Find a better way.
Here’s an option: you can code a fork or client that automatically parses the modlog, finds comments and posts that have been removed, and makes them visible in your feed. You could even implement the ability to reply by hosting replies on a different instance or community.
For you and anyone who uses your fork, it’ll be as though they were never removed.
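For the parsing piece, here’s a minimal sketch of filtering removals out of a modlog payload - note that the flat structure and field names are purely illustrative, not Lemmy’s actual API schema:

```python
# Illustrative, simplified modlog entries; Lemmy's real modlog responses
# are nested objects with many more fields than this.
modlog = [
    {"action": "remove_post", "post_id": 101, "removed": True},
    {"action": "remove_post", "post_id": 102, "removed": False},  # removal undone
    {"action": "remove_comment", "comment_id": 55, "removed": True},
]

def removed_post_ids(entries):
    """IDs of posts the modlog marks as removed."""
    return {e["post_id"] for e in entries
            if e.get("action") == "remove_post" and e.get("removed")}

print(removed_post_ids(modlog))  # -> {101}
```

A fork or client would fetch the real modlog, collect these IDs, and re-insert the matching posts into the feed.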
Do you have issues with the above approach?
- Comment on It is deeply bad that a moderator can remove any post or reply. 1 day ago:
As a user, you can:
- Review instance and community rules prior to participating
- Review the moderator logs to confirm that moderation activities have been in line with the rules
- If you notice a discrepancy, e.g., over-moderation, you can hold the mods accountable and draw attention to it or simply choose not to engage in that instance or community
- Host your own instance
- Create communities in an existing instance or your own instance
If you host your own instance and communities within that instance, then at that point, you have full control, right? Other instances can de-federate from yours.
- Comment on Consumer GPUs to run LLMs 1 day ago:
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
- Comment on Trouble keeping a top-heavy TPE part on the bed 3 days ago:
To be clear, I’m measuring the relative humidity of the air in the drybox at room temp (72 degrees Fahrenheit / 22 degrees Celsius), not of the filament directly. You can use a hygrometer to do this. I mostly use the hygrometer that comes bundled with my dryboxes (I use the PolyDryer and have several extra PolyDryer Boxes, but there are much cheaper options available) but you can buy a hygrometer for a few bucks or get a bluetooth / wifi / connected one for $15-$20 or so.
If you put filament into a sealed box, it’ll generally - depending on the material - end up in equilibrium with the air. So the measurement you get right away will just show the humidity of the room, but if the filament and desiccant are both dry, it’ll drop; if the desiccant is dry and the filament is wet, it’ll still drop, but not as low.
Note also that what counts as “wet” varies by material. For example, from what I’ve read, PLA can absorb up to 1% or so of its mass as moisture, PETG up to 0.2%, Nylon up to 7-8%… and silica gel desiccant beads up to 40%. So when I say they’ll be in equilibrium, I’m referring to the percentage of each material’s maximum absorption. As far as I know the relationship isn’t linear, but if it were, it would work like this: if the air’s humidity is 10% and the material can retain at most 1% of its mass as moisture, the material is currently retaining 0.1% moisture by mass. If my room’s humidity is kept at 40%, it’ll absorb moisture until it’s at 0.4% by mass.
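Under that simplifying linear assumption, the arithmetic is just a proportion:

```python
def retained_moisture_pct(air_rh_pct: float, max_absorption_pct: float) -> float:
    """Moisture retained by mass, assuming (unrealistically) that equilibrium
    scales linearly with the air's relative humidity."""
    return air_rh_pct / 100 * max_absorption_pct

print(retained_moisture_pct(10, 1.0))  # PLA (max ~1%) at 10% RH -> 0.1
print(retained_moisture_pct(40, 1.0))  # PLA at 40% RH -> 0.4
```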
That said, this doesn’t measure it perfectly, since while most filament materials absorb moisture from the air when the humidity is higher, they don’t release it as easily. Heating it both allows the air to hold more moisture and allows the filament (and desiccant) to release more moisture.
- Comment on Potpie : Open source prompt-to-agent for your codebase. 3 days ago:
The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.
- Comment on Trouble keeping a top-heavy TPE part on the bed 4 days ago:
What have you done to clean the bed? From the link to the textured sheet, you should be cleaning it between every print - after it cools - with 90% IPA, and if you still have adhesion issues, you should clean it with warm water and a couple drops of dish soap.
Has the TPU been dried? I don’t normally print with TPU, but my understanding is that it needs lower humidity than PLA; I use dryboxes for PLA and target a humidity of 15% or lower, and don’t use them if they rise above 20%. The recommendation I saw for TPU was to dry it for 7 hours at 70 degrees Celsius, to target 10% humidity (or at least under 20%), and to print directly from a drybox. Note that compared to other filaments, TPU can’t recover as well from having absorbed moisture - if the filament has gotten too wet, it’ll become too brittle if you dry it out as much as is needed. At that point you would need to start with a fresh roll, which would ideally go into a dryer and then a drybox immediately.
You should be able to set different settings for the initial layer to avoid stringing, i.e., slower speeds and longer retraction distance. It’s a bit more complicated but you can also configure the speed for a specific range of layers to be slower - i.e., setting it to slow down again once you get to the top of the print. For an example of that, see …prusa3d.com/…/bed-flinger-slower-y-movement-as-f…
What’s the max speed you’re printing at? My understanding is that everything other than travel should all be the same speed at a given layer, and no higher than 25 mm/s. And with a bed slinger I wouldn’t recommend a much higher travel, either.
In addition to a brim, have you tried adding supports?
- Comment on Do I really need a firewall for my server? 1 week ago:
Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?
- Comment on Someone help me understand the sonarr to jellyfin workflow 1 week ago:
Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only in the same location in every Docker container that had symlinks to files on it.
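To illustrate the idea (with made-up paths - the point is just that a symlink only resolves if its target exists at the same path wherever it’s read):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Stand-ins for the torrent download dir and the media library.
    torrents = os.path.join(root, "torrents")
    library = os.path.join(root, "library")
    os.makedirs(torrents)
    os.makedirs(library)

    # A "downloaded" file, and a library entry that's just a symlink to it.
    src = os.path.join(torrents, "episode.mkv")
    with open(src, "w") as f:
        f.write("video data")
    link = os.path.join(library, "Show S01E01.mkv")
    os.symlink(src, link)

    # The link resolves because the target path exists. Inside Docker, the
    # torrent dir would need to be mounted (read-only is fine) at the same
    # path in every container that reads the library.
    with open(link) as f:
        content = f.read()
    print(content)  # -> video data
```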
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
Depending on setup this can be true with Jellyfin, too. I have a domain registered, use dynamic DNS, and have Traefik direct a subdomain to my Jellyfin server. My mobile clients are configured using that. My local clients use the local static IP.
If my internet goes down, my mobile clients can’t connect, even on the LAN.
- Comment on Getting Nicole-ed feels way more awesome than how getting scammer spam usually feels. 2 weeks ago:
Apparently there’s a vulnerability with sending messages with images in them and “she” might be logging people’s IP addresses through that.
If the images are hosted on your instance, this wouldn’t be relevant. If they’re links to images hosted somewhere else, this is possible, but there’d be a lot of noise and not much value. To link accounts to IPs, the URLs themselves would need to be different.
I checked the URLs of the images in my PMs and they’re all hosted on Lemmy.
- Comment on [deleted] 4 weeks ago:
Under notes, where you said my name, did you mean “Hedgedoc?”
- Comment on Docker Hub limiting unauthenticated users to 10 pulls per hour 5 weeks ago:
local docker hub proxy
Do you mean a Docker container registry? If so, here are a couple options:
- Use the official Docker registry: www.docker.com/…/how-to-use-your-own-registry-2/
- Self-host forgejo or gitea and use the included package registry, which is automatically enabled. Details: forgejo.org/docs/latest/user/packages/
- Comment on [deleted] 1 month ago:
You cannot encrypt email End to End.
Incorrect.
…mozilla.org/…/introduction-to-e2e-encryption
It has to be stored in plaintext somewhere.
- It doesn’t.
- Even if it did, that wouldn’t mean it wasn’t E2EE.
Yahoo does not offer encrypted email.
It doesn’t need to. support.mozilla.org/…/thunderbird-and-yahoo
- Comment on Microsoft Bing is trying to spoof Google UI when people search Google.com 2 months ago:
You can control that with a setting. In Settings - Privacy, turn on “Query in the page’s title.”
My instance has a magnifying glass as the favicon.
- Comment on In 2025, People Will Try Living in This Underwater Habitat 2 months ago:
Giant squids are the bears of the ocean
- Comment on [deleted] 2 months ago:
There’s no need to bond with your own child?
- Comment on Selfhosted alternative to Spotify 5 months ago:
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix the issue by tuning your NextCloud instance - upping the memory limit, disabling debug mode and dropping log level back to warn if you ever changed it, enabling memory caching, etc…
Check out docs.nextcloud.com/server/…/server_tuning.html and docs.nextcloud.com/…/php_configuration.html#ini-v… for docs on the above.
- Comment on Concerns Raised Over Bitwarden Moving Further Away From Open-Source 5 months ago:
Your Passkeys have to be stored in something, but you don’t have to store them all in the same thing.
If you store them with Microsoft’s Windows Hello, Apple Keychain, or Google Password Manager, all of which are closed source, then you have to trust MS/Apple/Google. However, Keychain is end to end encrypted (according to Apple) and Windows Hello is currently not synced to the cloud, so if you trust those claims, you don’t need to trust that they won’t misuse your data. I don’t know if Google’s offering is end to end encrypted, but I wouldn’t trust it either way.
You can also store Passkeys in a password manager. Bitwarden is open source (though they did recently introduce a proprietary, source available SDK), as is KeepassXC. 1Password isn’t open source but can store Passkeys as well.
And finally, you can store Passkeys in a compatible security key, like the YubiKey 5 series keys, which can each store 100 Passkeys. Since the Passkeys never leave the hardware key, they’re basically immune to being stolen. Note that if your primary interest in Passkeys is the phishing resistance (basically near-perfect immunity to MitM attacks), then you can get that same benefit by using WebAuthn as a second factor. However, my experience has been that Passkey support is broader.
Revoking keys involves logging into the particular service and revoking them, just like changing your password. There isn’t a centralized way to do it as far as I’m aware. Each Passkey is only used for a single service, after all. However, in the same way that some password managers will offer to automatically change your passwords, they might develop a similar feature for Passkeys.
- Comment on Concerns Raised Over Bitwarden Moving Further Away From Open-Source 5 months ago:
Do any of the iOS or Android apps support passkeys? I looked into this a couple days ago and didn’t find any that did. (KeePassXC does.)
- Comment on Concerns Raised Over Bitwarden Moving Further Away From Open-Source 5 months ago:
You have your link formatted backwards. It should be Vaultwarden, with the link in the parentheses.
- Comment on If I was selling a bag of flower and sugar to a CI who thought it was meth or coke can I get in trouble? How or why when I am selling a legal substance? 5 months ago:
Nah, the idea is that anyone not buying it thinks it looks like drugs, not to convince the people buying that they’re buying drugs.
You could also call it “Spice” and make it a blend of different spices, salt, etc…
Either way, all you need is a bunch of people who are all in on the same joke.
- Comment on Is there any privacy-friendly way to use Facebook on iOS? 5 months ago:
I recommend checking out Friendly Social Browser.
- Comment on That hurts a little 5 months ago:
I assume this was supposed to say “more noticeable,” not “less”:
but of course for example the difference between 21 and 30 FPS is less noticeable than the one between 231 and 240 FPS
- Comment on Is it possible to run a reverse proxy only on a specific service or port? 5 months ago:
I made a typo in my original question: I was afraid of taking the services offline, not online.
Gotcha, that makes more sense.
If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use. Likewise if you use the same outbound port from your router. But IME those issues will mostly stop the new services from starting - you’d have to stop the services or restart your machine for the new service to have a chance to grab the ports while they were unused. Otherwise I can’t think of any issues.
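If you want to check ahead of time whether a port is already taken before starting the proxy, here’s a quick sketch using only Python’s standard library:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False  # bind succeeded, so the port was free
        except OSError:
            return True

# Demo: hold an ephemeral port ourselves, then confirm the check sees it.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
taken_port = holder.getsockname()[1]
result = port_in_use(taken_port)
print(result)  # -> True
holder.close()
```

One caveat: binding ports below 1024 requires elevated privileges, so run the check as the same user the proxy will run as, or it may report low ports as in use regardless.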
- Comment on Is it possible to run a reverse proxy only on a specific service or port? 5 months ago:
I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and causes me various headaches that I’m not really in the headspace for at the moment.
If you don’t configure your other services in the reverse proxy then you have nothing to worry about. I don’t know of any proxy that auto discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
Are you running this on your local network? If so, then unless you forward a port to your server on the port your reverse proxy is serving from, it’ll only be accessible from the local network. This means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirm that it’s working as expected before forwarding the port.