glizzyguzzler
@glizzyguzzler@lemmy.blahaj.zone
- Comment on How good are amphetamines for brain fog? 14 hours ago:
Not a doctor, but based on research I’ve seent brain fog (in likely many cases) seems to be due to inflammation. autoimmuneinstitute.org/…/brain-fog-likely-caused…
Have your friend try inflammation-reducing drugs like metformin. Metformin specifically, maybe there’s others, I’m sadly not a doctor. Metformin is a magic drug that’s not just for diabetius.
It won’t be immediate, but maybe it could help your friend recover. Idk if cranking yourself will break through when it’s a blocking mechanism causing the problem.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 day ago:
Indeed I did not, we’re at a stalemate because you and I do not believe what the other is saying! So we can’t move anywhere since it’s two walls. Buuuut Tim Apple got my back for once, just saw this now!: lemmy.blahaj.zone/post/27197259
I’ll leave it at that, as thanks to that white paper I win! Yay internet points!
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 day ago:
It’s wild, we’re just completely talking past each other at this point! I don’t think I’ve ever gotten to a point where I’m like “it’s blue” and someone’s like “it’s gold” so clearly. And I know enough to know what I’m talking about and that I’m not wrong (unis are not getting tons of grants to see “if AI can think”; no one but fart-sniffing AI bros would fund that (see: OP’s requested source is from an AI company about their own model), research funding goes towards making useful things, not towards whether ChatGPT is really going through it like the rest of us), but you are very confident in yourself as well. Your mention of information theory leads me to believe you’ve got a degree in the computer science field. The basis of machine learning is not in computer science but in stats (math). So I won’t change my understanding based on your claims, since I don’t think you deeply know the basis, just the application. The focus on using the “right words” as a gotcha bolsters that vibe. I know you won’t change your thoughts based on my input, so we’re at the age-old internet stalemate! Anyway, just wanted you to know why I decided not to entertain what you’ve been saying - I’m sure I’m in the same boat from your perspective ;)
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 days ago:
You can, but the stuff that’s really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can’t do. And that’s ignoring the training power and hardware costs to get the models.
Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide power- and hardware-wise. Way less hype and letting them come to market organically in 5-10 years would give LLMs a lot more power efficiency at the current context and depth limits. But that ain’t this timeline; we just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs, womp womp
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 days ago:
No, they’re right. The “research” is biased by the company that sells the product and wants to hype it. Many layers don’t make it think or reason, but they’re glad to put “think” and “reason” in quotes that they hope peeps will forget were there.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 days ago:
So close, LLMs work via matrix multiplication, which is well understood by many meat bags and matrix math can’t think. If a meat bag can’t do matrix math, that’s ok, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 days ago:
They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 days ago:
I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)
There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.
An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.
LLMs can’t think, that’s just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason; there’s no room for debate, it was settled long ago. AI companies will string LLMs together and let them chew for a while to try to make themselves catch when they’re dropping bullshit. It’s still not thinking and reasoning though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 week ago:
Improper comparison; an audio file isn’t the basic action, it is the data - the audio codec is the basic action on the data
“An LLM model isn’t really an LLM because it’s just a series of numbers”
But the action of turning the series of numbers into something of value (audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed
And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math, it’s cool we love matrix math. Really big matrix math is really cool and makes real sounding stuff. But it’s just matrix math, that’s how we know it can’t think
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 week ago:
It’s literally tokens. Doesn’t matter if it completes the next word or next phrase, still completing the next most likely token 😎😎 can’t think, can’t reason, can witch’s-brew up a facsimile of something done before
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 week ago:
You can prove it’s not by doing some matrix multiplication and seeing it’s matrix multiplication. Much easier way to go about it
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 week ago:
Too deep on the AI propaganda there, it’s completing the next word. You can give the LLM base umpteen layers to make complicated connections, still ain’t thinking.
The LLM corpos are trying to get nuclear plants to power their gigantic data centers while AAA devs aren’t trying to buy nuclear plants, which says that’s a straw man and that you’re simultaneously wrong.
Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s api that has a yuge context length and massive matrices that needs hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.
It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain’t smart, and they ain’t worth destroying the planet.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 1 week ago:
Can’t help but here’s a rant on people asking LLMs to “explain their reasoning” which is impossible because they can never reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and companies that spout it):
LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
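Since “matrix math” sounds abstract: stripped to the bone (a generic sketch of the last step, not any specific model’s sizes or weights), the pick-the-next-word part is literally just
logits = h · W (a 1×d hidden-state vector times a d×V output matrix, V = vocabulary size)
p_i = e^{logits_i} / Σ_j e^{logits_j} (softmax to get a probability per token)
next token = argmax_i p_i (or a weighted sample from p)
Multiply, normalize, pick the biggest number. That’s the “thinking.”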
Anything you read or hear about LLMs or “AI” getting “asked questions” or “explain its reasoning” or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.
In this case it sounds like people who don’t understand how LLMs work eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.
If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.
It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.
TLDR; LLMs can never think or reason, anyone talking about them thinking or reasoning is bullshitting, they utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit because that’s the most likely next word at the cost of environmental disaster
- Comment on 3-2-1 Backups: How do you do the 1 offsite backup? 4 weeks ago:
I got my parents to get a NAS box, stuck it in their basement. They need to back up their stuff anyway. I put in two 18 TB drives (mirrored) from Server Part Deals (peeps have said that site has jacked their prices, look for alts). They only need like 4 TB at most. I made a backup samba share for myself. It’s the cheapest Synology box possible; I used their software to make a samba share with a quota.
I then set up a wireguard connection on an RPi, taped that to the NAS, and I wireguard into that local network with a script: mount the samba share, then use restic to back up my data. It works great. Restic is encrypted, I don’t have to pay for storage monthly, their electricity is cheap af, they have backups, I keep tabs on it, everyone wins.
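The script amounts to roughly this (interface name, share path, and file paths here are placeholders, not my actual setup):
# bring up the tunnel to their LAN (assumes an existing wg-quick config)
sudo wg-quick up wg0
# mount the NAS backup share
sudo mount -t cifs //nas.lan/backup /mnt/nas-backup -o credentials=/root/.smbcred
# encrypted, deduplicated backup into the restic repo on the share
restic -r /mnt/nas-backup/restic-repo backup /home --password-file /root/.restic-pass
sudo umount /mnt/nas-backup
sudo wg-quick down wg0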
Next step is to go the opposite way for them, but no rush on that goal, I don’t think their basement would get totaled in a fire and I don’t think their house (other than the basement) would get totaled in a flood.
If you don’t have a friend or relative to do a box-at-their-house (peeps might be enticed with reciprocal backups), restic still fits the bill. Destination is encrypted, has simple commands to check data for validity.
Rclone crypt is not good enough. Too many issues (path length limits, password “obscured” but otherwise there, file structure preserved even if names are encrypted). On a VPS I use rclone as a pass-through for restic to back up a small amount of data to a goog drive. Works great. Just don’t fuck with rclone crypt for major stuff.
Lastly, I do use rclone crypt to upload a copy of the restic binary to the destination: the crypt means the binary can’t be fucked with, and having the binary there means it’s all you need to recover the data.
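Concretely, the VPS side is something like this (remote names and paths are placeholders):
# restic can use a configured rclone remote as its backend directly
restic -r rclone:gdrive:backups/vps backup /srv/data --password-file /root/.restic-pass
# push a copy of the restic binary through the crypt remote so it can’t be tampered with
rclone copy /usr/bin/restic gdrive-crypt:tools/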
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
Odd, I’ll try to deploy this when I can and see!
I’ve never had a problem with a volume being on the host system, except with user permissions messed up. But if you haven’t given it a user parameter it’s running as root and shouldn’t have a problem. So I’ll see sometime and get back to you!
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
That’s pretty damn clever
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
I try to slap read_only on anything I’d face the Internet with, to further restrict exploit possibilities; would be abs great if you could make it work! I just follow all the reqs on the security cheat sheet, with read_only being one of them: …owasp.org/…/Docker_Security_Cheat_Sheet.html
With how simple it is, I guessed that running as a user and dropping capabilities (cap_drop: all) wouldn’t be a problem.
For read_only many containers just need tmpfs: /tmp in addition to the volume for the db. I think many containers just try to contain temporary file writing to one directory to make applying read_only easier.
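In docker run terms the whole combo is just a few flags (image name and db path are placeholders; compose uses the matching read_only, tmpfs, cap_drop, and user keys):
docker run -d \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --user 1000:1000 \
  -v app-data:/data \
  example/some-image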
So again, I’d abs use it with read_only when you get the time to tune it!!
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
Looks awesome and very efficient, does it also run with
read_only: true
(with a db volume provided, of course!)? Many containers just need a /tmp, but not always.
- Comment on Making sure restic backups are right 2 months ago:
I trust the check:
restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose
The --read-data-subset also does the structural integrity check while also checking an amount of data. If I had more bandwidth, I’d check more.
When I set up a new repo, I restore some stuff to make sure it’s there with:
restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'
You could automate that and make sure some essential-but-not-often-changing files match regularly by restoring them and comparing them. I would do that if I wasn’t lazy, I guess, just to make sure I’m not missing some key-but-slowly-changing files. Slowly/not often changing because a diff would fail if the file changes hourly and you back up daily, etc.
Or you could do as others have suggested and mount it locally and just traverse it to make sure some key stuff works and is there:
sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'
- Comment on [deleted] 2 months ago:
I have my router (opnsense) redirect all DNS requests to pihole/adguardhome. AdGuard home is easier for this since you can have it redirect wildcard *.local.domain while pihole wants every single one individually (uptime.local.domain, dockage.local.domain). With that combo of router not letting DNS out to upstream servers and my local DNS servers set up to redirect *.local.domain to the correct location(s), my DNS requests inside my local network never get out where an upstream DNS can tell you to kick rocks.
I combined the above with a (hella cheap for 10yr) paid domain, got a wildcard cert for the domain without exposure to the WAN (no IP recorded, but accepted by devices), and have all *.local.domain requests go to a single caddy instance that does the final routing to specific services.
I’m not fully sure what you’ve got cooking but I hope typing out what works for me can help you figure it out on your end! Basically the router doesn’t let anything DNS get by to be fucked with by the ISP.
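One more bit that might help: in raw dnsmasq terms (which is what Pi-hole’s FTL is built on; older installs read /etc/dnsmasq.d by default, newer ones may need that enabled in the config), the wildcard redirect is a one-liner - domain and IP are placeholders:
# send every *.local.domain lookup to the reverse proxy host
echo 'address=/local.domain/192.168.1.10' | sudo tee /etc/dnsmasq.d/99-wildcard.conf
sudo systemctl restart pihole-FTL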
- Comment on Proxmox vs. Debian: Running media server on older hardware 2 months ago:
I’m surprised no one’s mentioned Incus, it’s a hypervisor like Proxmox but it’s designed to install onto Debian no prob. Does VMs and containers just like Proxmox, and snapshots too. The web UI is essential, you add a repo for it.
Proxmox isn’t reliable if you’re not paying them; the free people are the test people - a bit back they pushed a bad update that broke shit. If I’d updated before they pulled it, I’d have been hosed.
Basically you want a device where you don’t have to worry about updates, because updates are good for security. And Proxmox ain’t that.
On top of their custom kernel and stuff, it’s just fewer eyes than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.
So to summarize, Debian and Incus installed. BTRFS if you’re happy with 1 drive or 2 RAID 1 drives. BTRFS gets scrubbing and bitrot detection (protection with RAID 1). ZFS for more drives. Toss on Cockpit too.
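Rough sketch of the moving parts, assuming Debian 13 (older Debian needs the Zabbly repo, which is also where the web UI package lives):
sudo apt install incus
sudo incus admin init --minimal
# containers and VMs through the same CLI
sudo incus launch images:debian/12 testbox
sudo incus launch images:debian/12 testvm --vm
# snapshots before risky changes
sudo incus snapshot create testbox pre-upgrade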
If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, esp. with no clustering needs.
Also the iGPU on the 6600K likely is good enough for whatever transcoding you’d do (esp. if it’s rare and 1080p, it’ll do 4k no prob and multiple streams at once). The Nvidia card is just wasting power.
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 2 months ago:
I see, do you know of a way in Docker (or Podman) to bind to a specific network interface on the host? (So that a container could use a macvlan adapter on the host)
Or are you more advocating for putting the Docker/Podman containers inside of a VM/LXC that has the macvlan adapter (or fancy incus bridge adapter) attached?
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 2 months ago:
Confused at this sentiment, Docker includes a MACVLAN driver so clearly it’s intended to be used. Do you eschew any networking in Docker beyond the default bridge for some reason?
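For reference, it’s a one-liner to create (subnet, gateway, and parent are placeholders you’d swap for your network’s values):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan-macvlan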
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 2 months ago:
With the default Docker bridge networking the container won’t have a unique IP/MAC address on the local network, as far as I am aware. External clients have to contact the host server’s IP at the port the container is tied to in order to interact. If there’s a way to specify a specific parent interface, let me know!
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 2 months ago:
This was very insightful and I’d like to say I grokked 90% of it meaningfully!
For an Incus container with its unique MAC interface, yes if I run a Docker container in that Incus container and leave the Docker container in its default bridge mode then I get the desired feature set (with the power of onions).
And thanks for explaining CNI, I’ve seen it referenced but didn’t fully get how it’s involved. I see that podman uses it to make a MACVLAN interface that can do DHCP (until 5.0, but the replacement seems to be feature-compatible for MACVLAN), so podman will sidestep the pain point of having to assign a no-go-zone on the DHCP server for a Docker swath of IPv4s, as you mentioned. Close enough for containers that the host doesn’t need to talk to.
So in summary:
- I’ve got Docker doing the extent it can manage with MACVLAN, and there’s no extra magicks to be done on it.
- Podman will still use MACVLAN (still no host-to-container comms) but it’s able to use DHCP to get an address for the MACVLAN container.
- If the host must talk to the container with MACVLAN, I can either use the MACVLAN bypass as you linked to above or put the Docker/Podman container inside an Incus container with its bridge mode.
- Kubernutes continues to sound very powerful and flexible but is definitely beyond my reach yet. (Womp womp)
Thanks again for taking the time to type and explain all of that!
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 2 months ago:
Thanks for taking the time to reply!
The host setup has eth0 as the physical interface to the rest of the network, with br0 replacing it completely. br0 has the same MAC as the eth0 interface, and eth0 just forwards to br0, which then does the bridging internally. br0 being a bridge means that Incus is able to split it off without MACVLAN, using its nic device in bridge mode instead, which “Uses an existing bridge on the host (br0) and creates a virtual device pair to connect the host bridge to the instance.” That results in a network interface that has its own MAC and is assigned a local IP by the DHCP server on the network, while also being able to talk to the host.
Incus accomplishes the same goal as Proxmox (Proxmox has similar bridge network devices for its containers/VMs), just without Incus needing to be your OS/distro like Proxmox does - it’s just a package.
As for Docker, the parent interface is br0, which has supplanted eth0. MACVLAN is working as it is intended to in Docker, as far as I can tell. The container has a networking device with its own MAC address, and after supplying the MACVLAN network device with my network’s subnet, gateway, and a static IP address in the Docker compose file, it works as expected. If I don’t supply a static IP in the Docker compose file, Docker just assigns it the first IP in the given subnet - no DHCP interaction. The docker-net-dhcp plugin (I linked to the issue about it not working on the latest version of Docker anymore) was made to give Docker network devices the ability to use DHCP to get an IP address, but it’s clearly not something to rely on.
If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know! Hardcoding an IP into a docker-compose file adds an extra step to remember compared to everything else being configured on the centralized DHCP server - hence the shoddy implementation claim for Docker.
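For anyone following along, the hardcoding in docker run terms (IP and image are placeholders; assumes a macvlan network created like the one I sketched a couple comments up):
# no DHCP lease happens here; omit --ip and Docker just picks the first free address from its own idea of the subnet
docker run -d --network lan-macvlan --ip 192.168.1.50 nginx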
Thanks for the link to using another MACVLAN and routing around the host<-/->container connection issue inherent to MACVLAN. I’ll keep it in mind as an alternate to Incus container around another container! I do wish there could be something like Incus’ hassle-free solution for Docker or Podman.
- Submitted 2 months ago to selfhosted@lemmy.world | 13 comments