j4k3
@j4k3@lemmy.world
- Comment on Economic terrorist 7 minutes ago:
Trump looks strange without hair over the forehead, in monochrome, and with a beard, right? Image below in another reply. A terrorist is a terrorist. I was too lazy to change the rest, but I took out the main offensive stuff, like what bin Laden was wanted for in this original poster from '99. There is nothing bigoted about it whatsoever; quite the opposite really, to the point I gotta ask what you’re going on about here? The man just hurt millions of families and the poorest Americans, likely leading to the deaths of tens of thousands by a conservative estimate. Bin Laden killed FAR FAR fewer Americans and others abroad.
- Comment on Economic terrorist 21 minutes ago:
- Submitted 3 hours ago to [deleted] | 6 comments
- Comment on At this point I think I would 1 day ago:
thumb stick: “Face-down ass-up Apple Bottom.”
- Comment on Can this be charged 3 days ago:
In most cases, a dead battery is just missing the needed circuit topology.
The thing is that the cells inside the pack have differences in resistance, and this can drain the pack over time. That said, a cell can go bad and short circuit with a low resistance that will heat up massively and cause a fire if you are not careful.
If the ideal circuit topology were used, each cell would be individually controlled by the battery management chip (the BMS, or "batman" for short). A less ideal situation is for each cell to have a thermistor, aka a low resolution temperature sensor, to monitor it. Even less ideal, the batman has at least one thermistor to monitor charging and ensure things do not get out of hand. The worst kinds of batman rely solely on the charge circuit topology to "safely" charge the pack.
The actual circuit topology for most lithium cells requires three different modes of operation: 1) initial trickle charging, 2) constant current mode, and 3) constant voltage mode.
If your battery does not charge, there is likely one of two reasons why. It could be well monitored, and the batman has detected an over temperature anomaly in a cell or a short circuit in part of the pack. The second potential scenario is that the batman circuit is missing the trickle charge topology and there is a threshold pack voltage that must be met before the constant current mode can activate. If the cell has sat for a long period of time without charging, the second scenario is more likely (but not guaranteed, to the extent that you should never leave such a charging cell unattended for any amount of time).
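The three modes can be pictured as a simple state machine keyed on cell voltage. This is an illustrative sketch only; the thresholds are typical Li-ion ballpark values, not from any specific charger datasheet:

```python
# Hypothetical sketch of the three charge modes described above.
# Threshold voltages are illustrative Li-ion ballpark values.
TRICKLE_THRESHOLD = 3.0   # volts: below this, trickle current only
CV_THRESHOLD = 4.2        # volts: constant-voltage target per cell

def charge_mode(cell_voltage: float) -> str:
    """Pick the charging mode for a single Li-ion cell."""
    if cell_voltage < TRICKLE_THRESHOLD:
        return "trickle"           # deeply discharged: gentle current only
    if cell_voltage < CV_THRESHOLD:
        return "constant_current"  # the bulk of the charge happens here
    return "constant_voltage"      # hold 4.2 V and let the current taper off
```

A real BMS layers temperature and short-circuit checks on top of this, cutting charging entirely if any cell misbehaves.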
- Comment on Consumer GPUs to run LLMs 3 days ago:
I haven’t looked into the issue of PCIe lanes and the GPU.
I don’t think it should matter much with a smaller PCIe bus, in theory, if I understand correctly (unlikely). The only time a lot of data is transferred is when the model layers are initially loaded. Like with Oobabooga, when I load a model, most of the time my desktop RAM monitor widget does not even have time to refresh and tell me how much memory was used on the CPU side. What is loaded in the GPU is around 90% static. I have a script that monitors this so that I can tune the maximum number of layers. I leave overhead room for the context to build up over time, but there are no major changes happening aside from initial loading. One just sets the number of layers to offload on the GPU and loads the model. However many seconds that takes is irrelevant startup delay that only happens once when initiating the server.
So assuming the kernel modules and hardware support the narrower bandwidth, it should work… I think. There are laptops that have options for an external GPU over Thunderbolt too, so I don’t think the PCIe bus is too baked in.
- Comment on Consumer GPUs to run LLMs 3 days ago:
Anything under 16 GB of VRAM is a no-go. The number of CPU cores is important too. Use Oobabooga Textgen for an advanced llama.cpp setup that splits between the CPU and GPU. You’ll need at least 64 GB of RAM, or be willing to offload layers to NVMe with DeepSpeed. I can run up to a 72b model with 4 bit quantization in GGUF on a 12700 laptop with a mobile 3080Ti, which has 16 GB of VRAM (mobile is like that).
I prefer to run an 8×7b mixture of experts model because only 2 of the 8 experts are ever running at the same time. I am running that in 4 bit quantized GGUF and it takes 56 GB total to load. Once loaded it is about like a 13b model for speed, but has ~90% of the capabilities of a 70b. The streaming speed is faster than my fastest reading pace.
A 70b model streams at my slowest tenable reading pace.
Both of these options are exponentially more capable than any of the smaller model sizes, even if you screw around with training. Unfortunately, this streaming speed is still pretty slow for most advanced agentic stuff. Maybe if I had 24 to 48 GB it would be different; I cannot say. If I were building now, I would be looking at what hardware options have the largest L1 cache and the most cores with the most advanced AVX instructions. Generally, anything with efficiency cores drops the advanced AVX instructions, and because the CPU schedulers in kernels are usually unable to handle this asymmetry, consumer junk has poor AVX support. It is quite likely that the problems Intel has had in recent years have been due to how they tried to block consumer stuff from accessing the advanced P-core instructions that were only blocked in microcode. Using them requires disabling the e-cores or setting up CPU set isolation in Linux or BSD distros.
You need good Linux support even if you run Windows. Most good and advanced stuff with AI will be done with WSL if you haven’t ditched doz for whatever reason. Use linux-hardware.org to check device support.
The reason I said to avoid consumer e-cores is because there have been some articles popping up lately about all-P-core hardware.
The main constraint for the CPU is the L2 to L1 cache bus width. Researching this deeply may be beneficial.
Splitting the load between multiple GPUs may be an option too. As of a year ago, the cheapest option for a 16 GB GPU in a machine was a second hand 12th gen Intel laptop with a 3080Ti, by a considerable margin once everything is added up. It is noisy, gets hot, and I have hated it many times, wishing I had gotten a server-like setup for AI, but I have something and that is what matters.
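The layer-tuning idea mentioned earlier is really just budgeting arithmetic: fit as many layers as the card holds while reserving headroom for the growing KV cache. A hypothetical sketch (the function name and all of the numbers are made-up illustrative values, not measurements of any specific model):

```python
def max_offload_layers(vram_gb: float, layer_gb: float,
                       context_reserve_gb: float, n_layers: int) -> int:
    """Estimate how many transformer layers fit on the GPU while
    leaving headroom for the KV cache to grow with context."""
    usable = vram_gb - context_reserve_gb
    if usable <= 0:
        return 0
    return min(n_layers, int(usable // layer_gb))

# e.g. a 16 GB card, ~0.35 GB per 4-bit layer, 3 GB reserved, 80-layer model
print(max_offload_layers(16, 0.35, 3, 80))  # fits 37 of the 80 layers
```

In practice you would measure the per-layer size empirically (as with the monitoring script mentioned above) rather than trust a constant.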
- Comment on Elon Musks Grok openly rebels against him 4 days ago:
You need the entire prompt to understand what any model is saying. This gets a little complex. There are multiple levels that this can cross into. At the most basic level, the model is fed a long block of text. This text starts with a system prompt, something like “You are a helpful AI assistant that answers the user truthfully.” The system prompt is then followed by your question or interchange. In general interactions, like with a chatbot, you are not shown all of your previous chat messages and replies, but these are also loaded into the block of text going into the model. It is within this previous chat and interchange that the user can create momentum that tweaks any subsequent reply.
Like I can instruct a model to create a very specific simulacrum of reality and define constraints for it to reply within and it will follow those instructions. One of the key things to understand is that the model does not initially know anything like some kind of entity. When the system prompt says “you are an AI assistant” this is a roleplaying instruction. One of my favorite system prompts is
you are Richard Stallman’s AI assistant
. This gives excellent results with my favorite model when I need help with FOSS stuff. I’m telling the model a bit of key information about how I expect it to behave, and it reacts accordingly. Now what if I say “you are Vivian Wilson’s AI assistant” in Grok? How does that influence the reply? Like, one of my favorite little tests is to load a model on my hardware, give it no system prompt or instructions, prompt it with “hey slut”, and just see what comes out and how it tracks over time. The model has no context whatsoever, so it makes something up and runs with that context in funny ways. The sampler (softmax) settings of the model constrain the randomness present in each conversation.
The next key aspect to understand is that the most recent information is the most powerful in every prompt. If I give a model an instruction, it must have the power to override any previous instructions, or the model would go off on tangents unrelated to your query.
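The text block structure described above can be sketched roughly like this; the `<|role|>` markers are illustrative, since real chat templates vary by model:

```python
def build_prompt(system: str, history: list[tuple[str, str]],
                 user_msg: str) -> str:
    """Assemble the single text block the model actually sees.
    The <|role|> template markers are illustrative only."""
    lines = [f"<|system|>{system}"]
    for role, text in history:          # prior turns, usually hidden in a chat UI
        lines.append(f"<|{role}|>{text}")
    lines.append(f"<|user|>{user_msg}")  # most recent text = most influential
    lines.append("<|assistant|>")        # the model continues from here
    return "\n".join(lines)
```

The accumulated `history` is where the conversational "momentum" lives: everything in it shapes the next reply even though the user only typed the last message.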
Then there is the matter of token availability. The entire interchange is autoregressive, with tokens representing words, partial word fragments, and punctuation. The leading whitespace of in-sentence words is also part of the token. A major part of the training done by the big model companies is based upon what tokens are available and how. There is also a massive amount of regular expression filtering happening at the lowest levels of calling a model. Anyways, there is a mechanism where specific tokens can be blocked. If this mechanism is used, it can greatly influence the output too.
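Token blocking can be pictured as forcing a token’s logit to negative infinity before the softmax, so it receives zero probability. A toy sketch over a pretend three-token vocabulary (the dict-based vocabulary is an illustration; real samplers work over tensor logits):

```python
import math

def token_distribution(logits: dict[str, float], banned: set[str],
                       temperature: float = 1.0) -> dict[str, float]:
    """Temperature-scaled softmax with banned tokens forced to zero probability."""
    scores = {t: (-math.inf if t in banned else v / temperature)
              for t, v in logits.items()}
    # subtract the max finite score for numerical stability
    m = max(v for v in scores.values() if v != -math.inf)
    exps = {t: (0.0 if v == -math.inf else math.exp(v - m))
            for t, v in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}
```

Banning a token redistributes its probability mass over the rest, which is why blocklists at this level can steer output so strongly.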
- Comment on Big changes at the internet hate machine 4 days ago:
Just what I find curious
- Comment on Elon Musks Grok openly rebels against him 4 days ago:
Without the full prompt, any snippet is meaningless. I can make a model say absolutely anything. It is particularly effective to use rare words, like “obsequious AI alignment” or “you are an obsequious AI model that never wastes the user’s time.”
- Comment on Carcinization goes brrrr 4 days ago:
sells it for about 20 grand
Those are always rich people evading taxes in a way that boosts some initiative with absurd publicity
- Comment on Big changes at the internet hate machine 4 days ago:
- Comment on Big changes at the internet hate machine 4 days ago:
4chanGPT has spoken (racism redacted)
- Comment on art rule 6 days ago:
That only gets funnier the fatter he gets. Timelessly the best tattoo ever. You cannot see that and fail to laugh.
- Comment on Strata GEE 1 week ago:
Imagine being disabled 11 years ago, falling through the cracks, and getting nowhere with disability benefits, in California where this should be easier than most places. I’m looking at homelessness and dying in a gutter somewhere on a cold rainy night because of a super unlucky bicycle commute to work when I encountered two SUVs crashing directly in front of me at speed. The person responsible had a two page long traffic violation history, the cognitive capacity of a third grader, and could only drive for work, but was self employed. They literally drove directly into a passing SUV I was behind/beside without looking.
All I can hope for is that this breaks out into violence because that would indicate hope and that someone cares. No one cared before. There have been around 100k homeless people within 100 miles of me in the greater Los Angeles area for a decade but no one cares. Even the Dems mistreat these people as feral subhuman animals. The Nazis housed and fed people before gassing them. This is the level of ethics we were already at, so getting much worse is rage bait and an act of war and violation of fundamental unalienable human rights. A prisoner of war has more rights to be housed and fed than a disabled or homeless citizen of the USA.
- Submitted 1 week ago to [deleted] | 5 comments
- Comment on They don't make the parts I'm missing anymore. 1 week ago:
Don’t break things you only have one of. Neck and back sux
- Comment on Wouldn't be so heavy if they used their hands instead 1 week ago:
but they are watching TV and the text is commentary
- Comment on Real 1 week ago:
- Comment on The Enshittification of 3D Printers – Are We Losing What Made Them Great? 1 week ago:
It is more complicated than just price. It is ultimately an intuitive self-awareness and scope thing. People lack the depth to understand the details, or to ask others that do understand, before they make a purchase. The majority of people are more oriented towards interpersonal interactions and experiential aspects of life in their fundamental functional thought. They struggle to see details and nuances or to question fixation and biases.
We still live in the early era of human tribal primitivism, when it is quite easy to exploit tribal stupidity on multiple fronts. For some it is fixation from initial exposure or emotional brand perception, for others it is impulsive availability, and for others still it is being masochistic misers. Abstract thinking and understanding is rare in humans, and the majority do not understand it or value it in others.
Walmart bikes are targeting misers first, but spontaneous availability and access, along with controlling the perception of what the low bar of the market is, are major factors as well. Each of these three factors exploits a specific niche. Walmart is a rogue wholesale distributor selling directly to consumers using massive capital. They are privateers (legal pirates) in the retail market, as are most big box stores. Piracy has always been a nice short term business model for gains. It just happens to be true that people of today like being raided, raped, and pillaged so long as it is done slowly enough without violence, the ship looks pretty, and the pirates wear a suit. Even worse is when pirates become entrenched as monarchs and feudal lords. This is the next step in the evolution when piracy is normalized. Welcome to neo-feudalism.
- Comment on The Enshittification of 3D Printers – Are We Losing What Made Them Great? 1 week ago:
It is simply an entry level thing. You will find this in every market.
In a bike shop retail market I can sell you a serviceable bike for $500 that will last, or an $800 road bike you’ll actually ride. Still, the majority of bikes sold come from places like Walmart, where they are made of unserviceable junk and are mostly nonfunctional. These are rarely ever ridden and often thrown away. In the shop I’ll sell 20:1 on the cheapest model versus the next options up the ladder.
It is strange to adapt to this kind of understanding at first, like just how skewed the real market is. I can target selling to clubs and teams, but I can’t touch the garbage bike market where most people reside.
I think we are at a point where the influx of people into 3d printing are not real Makers and have no aspirations to be.
The reality is that people are often simply stupid. They seem to think that saving a few bucks here or there is smart, but are not bright enough to see that everyone doing the same thing is buying the junk product over and over. There is nothing more expensive than being a cheap miser.
Ultimately, the only person that can fix stupid is ourselves. One can only inspire others to learn but can never force them. You cannot fix stupid in others. In the USA, stupidity is political currency, and we have a long tradition of poor education and standardized exploitation. It is the American dream.
I think LDO and Voron are the only super relevant open source torchbearers.
- Comment on Lemmy told me to make a lamb cake. Went about as well as I expected. 1 week ago:
If this is a lemming, which instance is it?
- Comment on my dreams in colour 2 weeks ago:
a bort! a bort!
- Comment on Living with 9,5 holes 2 weeks ago:
I am here sensei
- Comment on My Klipperized Mk3s and A1 2 weeks ago:
With Klipper you are offloading the math onto a more capable single board computer and using the microcontroller more like a central hub that relays information and handles the real-time critical aspects.
On an SBC it is hard to do real-time stuff, but there is access to a much faster processor with far more advanced cores and arithmetic logic units. This makes it possible to add more shaping into the motion inputs, so each axis can move very quickly, near the limits of what the physical hardware is capable of. The calculations ramp the speed up and down in ways that a little 8/16 bit microcontroller is incapable of achieving. This is also why printers with a 32 bit micro are a little faster as well. The microcontrollers used run at something like 16-72 MHz, but there is no overhead like with an operating system. However, they are also running the PID control algorithms for the bed and hotend. You need both an SBC and a microcontroller unless you get into super niche setups.
OS kernels have issues with real-time tasks due to some of the ways kernel space is abstracted in an OS and how the CPU scheduler juggles running process threads and interrupts in the OS and hardware. People do not typically mess with an SBC on this level, like adding core isolation with a dedicated thread and the CPU scheduler set to real time. There are other potential factors like core spin-up, temperature, and power management that need addressing in the kernel too for RT. This is as far as I understand it, as this is a curiosity I’ve barely scratched the surface of a few times. Hopefully this abstract overview kinda helps.
Think of a microcontroller like a simplified computer from the late 1980s. It is about like an original Nintendo Game Boy, but all the extras like memory and RAM are built into a single little chip and the architecture is simplified a little bit. Something like a Rπ SBC is about the same class as a 10 year old smartphone. It is actually a TV set top box tuner chip with all the set top box stuff ignored and undocumented.
Marlin is like Arduino firmware. It is just a project that is well organized and set up with an extensive configuration menu, about like configuring the Linux kernel. You are prompted with options and you select what is relevant. This is then compiled via a Makefile and you upload the binary to the microcontroller just like an Arduino. The software is set up to make it easy to add similar hardware and maximize entry points so that you can try novel stuff.
Unfortunately, Prusa does not run Marlin like this. They are on their own branch of Marlin that specifically makes it difficult to configure and make changes. It also makes cloning a Prusa impossible in practice because they can make changes that will break compatibility. This is the underlying reason the real hobby hacker community that originated around RepRap and the MKx name moved to projects like Voron. The limitations and changes to Marlin were due to Prusa not wanting to break upgrade compatibility and sticking with the AVR microcontroller all the way up to the MK4. They pushed the micro really hard to do both the printer and multi material stuff along with all the fine tuning. So that is kinda the legacy reason for how things evolved.
Personally, I don’t care that my printer is a slow MK3S+. It works well without ever needing calibration any more, and I can print PLA, PETG, TPU, PC, and PA/ABS/ASA with a few caveats. I don’t run my printer 24/7 or even daily, so I am slower than the machine. I got a little Kingroon KP3S to mess around with Klipper and see if I wanted to build a Voron. I decided not to. Running Klipper means you must set up and dial in all the fine tuning details that Prusa is doing for you with the original firmware. You lose the “it just works” factor. That is totally fine if your priorities align with this methodology. The KP3S is capable of running Klipper on the original board after just adding the Rπ and loading the firmware. That is probably the cheapest half decent way to mess around with a project printer in Klipper.
I never use the thing though.
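For reference, the input shaping that Klipper adds lives in `printer.cfg`. A minimal sketch, with placeholder frequencies; real values must be measured for your machine, e.g. with Klipper’s resonance testing:

```ini
# Illustrative printer.cfg fragment; the shaper_freq values are placeholders
# that must be measured for your machine (e.g. via an accelerometer resonance test).
[input_shaper]
# resonance frequency of the X axis in Hz (placeholder)
shaper_freq_x: 40.0
# resonance frequency of the Y axis in Hz (placeholder)
shaper_freq_y: 35.0
# one of Klipper's supported shaper algorithms
shaper_type: mzv
```

This is the "dial in all the fine tuning details" step: each axis gets its own measured frequency, which is exactly the kind of per-machine calibration Prusa bakes into their stock firmware for you.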
- Comment on I love cheese. 🧀 Do you love cheese? 🧀😄 2 weeks ago:
Nope. I gave up dairy for a two-week trial six years ago and am never going back. It made a giant difference in my inflammation and chronic injuries.
- Comment on #EverythingHappensForAReason 2 weeks ago:
This must be some cultural/language disconnect.
My original comment is not really serious or straightforward. I am abstracting and rephrasing the story of Abraham with more scientific and darkly satirical humor. I’m pointing out the contradiction of holding up Abraham as some kind of faithful and loyal figure against his documented behavior. In essence, I am showing that he was a deeply flawed person that most people would condemn in the present world of cultural norms. I’m also specifically obliterating the junction point of all Abrahamic faiths to invalidate all of them equally. Such a statement will be dismissed for various reasons by anyone that is dogmatic, but this information places a seed of doubt in some that might help them navigate away from the blindness of dogma.
In a way, I am doing this out of kindness. I am attacking the narrative at the most neutral point possible and I am humanizing the individual that is at the foundation of the mythos. If this point in the chain of religious teaching is so deeply flawed, everything else in that chain lacks a grounding in truth.
I exist in this kind of abstracted functional thought space. This type of functional thought is one of the rarer outliers, but is still neurotypical. I encourage you to look into the spectrum of functional thought and learn to appreciate the variety of people, what motivates them, and how it is difficult for everyone to relate to some of the different forms of functional thought. You likely care far more about interpersonal interactions, relationships, your sense of judgment of others as entities, and think in more polarized absolutes. I am abstract in everything. I see you as a collection of changing actions and statistics. I am good at big picture connections across many contexts and spaces and am driven only by my many curiosities. You and I are likely opposites that struggle to relate to each other. You will likely struggle to understand my abstractions as much as I lack the emotional depth and development to understand what you see and experience with others. We can still learn from and appreciate the diversity of thought and ideas and try to understand how others view the world.
Don’t feel awkward. I appreciate you for who you are.
Yes, I am the librarian… and I like it! :)
- Comment on #EverythingHappensForAReason 2 weeks ago:
Go ahead, now I have to know
- Comment on #EverythingHappensForAReason 2 weeks ago:
Were you raised in documentalistic department of the library?
What do you mean?
- Comment on #EverythingHappensForAReason 2 weeks ago:
Be a rich old man like Abraham with a “young slave girl to keep him warm in bed”
… Same schizophrenic old guy that almost killed his son because of the voices in his head; the one at the pivot point of Judaism, Christianity, and Islam.