thinkercharmercoderfarmer
@thinkercharmercoderfarmer@slrpnk.net
- Comment on New kink unlocked 1 week ago:
The grant application practically writes itself.
- Comment on I feel like half the time someone is accused of being a bot, the accuser is the bot. 1 week ago:
- Comment on h8ers gonna h8 1 week ago:
Yeah, I know a bunch of people who grow zucchini and they frequently harvest more than they could possibly use. They literally can’t give it away sometimes. Maybe that’s why people who don’t like zucchini don’t like gardeners? Because they don’t want the produce zucchini growers are constantly trying to offload? Bit of a thinker.
- Comment on Nvidia CEO Jensen Huang says ‘I think we’ve achieved AGI’ 1 week ago:
Right, I mean if you made the context window enormous, such that you can include the entire set of embeddings and a set of memories (or maybe, an index of memories that can be “recalled” with keywords), you’ve got a self-observing loop that can learn and remember facts about itself. I’m not saying that’s AGI, but I find it somewhat unsettling that we don’t have an agreed-upon definition. If a for-profit corporation made an AI that could be considered a person with rights, I imagine they’d be reluctant to be convincing about it.
- Comment on Nvidia CEO Jensen Huang says ‘I think we’ve achieved AGI’ 1 week ago:
Yeah I think for it to be a proper strange loop (if that is indeed a useful proxy for consciousness-- I think there’s room for debate on that) it would need to be able to take its entire “self”, i.e. the whole model, weights, and all memories, as input in order to iterate on itself. I agree that it probably wouldn’t work for the current commercial applications of LLMs, but just because that’s not what commercial LLMs do doesn’t mean it couldn’t be done for research purposes.
- Comment on Nvidia CEO Jensen Huang says ‘I think we’ve achieved AGI’ 1 week ago:
There’s no reason an LLM couldn’t be hooked up to a database, where it can save outputs and then retrieve them again to “think” further about them. In fact, any LLM that can answer questions about previous prompts/responses has to be able to do this. If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output to the database and repeat at regular intervals, I could see calling that a kind of thinking. If you do the same process but with the whole model and all the DB entries, that’s in the region of what I’d call a strange loop. Is that AGI? I don’t think so, but I also don’t know how I would define AGI, or if I’d recognize it if someone built it.
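The review-generate-save loop described above can be sketched in a few lines. This is a minimal toy, not anyone’s real system: the `llm()` function is a hypothetical stub standing in for an actual model call, and the schema is invented for illustration.

```python
import sqlite3

# Hypothetical stub standing in for a real LLM call; any chat-completion
# API could be swapped in here.
def llm(prompt: str) -> str:
    return f"reflection on: {prompt[:60]}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thoughts (id INTEGER PRIMARY KEY, text TEXT)")
conn.execute("INSERT INTO thoughts (text) VALUES (?)", ("initial observation",))

# One iteration of the loop: review all stored entries, generate a new
# response from them, and save that output back for the next pass.
def think_once(conn) -> str:
    rows = [r[0] for r in conn.execute("SELECT text FROM thoughts ORDER BY id")]
    new_thought = llm("Review your previous notes and continue:\n" + "\n".join(rows))
    conn.execute("INSERT INTO thoughts (text) VALUES (?)", (new_thought,))
    return new_thought

# Repeat at regular intervals (here, just three times).
for _ in range(3):
    think_once(conn)

print(conn.execute("SELECT COUNT(*) FROM thoughts").fetchone()[0])  # prints 4
```

Each pass feeds the entire accumulated history back in, which is what makes it loop-like rather than a one-shot prompt.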
- Comment on h8ers gonna h8 1 week ago:
I don’t care for zucchini and I’m in full support of people being able to grow their own food. I’m not sure how the two are related.
- Comment on MEGA FLAG 1 week ago:
symlink: !vexillology@lemmy.world
- Comment on Knock knock, no one’s there. Study finds scientists’ jokes mostly fall flat 1 week ago:
Someone sat through one too many corny presentation jokes and decided to do something about it.
- Comment on Harmony - Yet Another Discord Alternative 2 weeks ago:
That’s true, maybe “Yet Another Discourse Alternative”? Discussion Alternative? I just like the idea of a chat platform whose acronym is YADA.
- Comment on Harmony - Yet Another Discord Alternative 2 weeks ago:
IME, code projects either die or live long enough that you think of a better name for them long after a name change becomes not worth the effort. Naming things is hard 🤷‍♂️
- Comment on Trampolines are socially-accepted pens for kids 2 weeks ago:
I haven’t broken many bones, but every bone I’ve broken broke during an unplanned trampoline dismount.
- Comment on Harmony - Yet Another Discord Alternative 2 weeks ago:
Yet Another Discord Alternative would be a better name than Harmony.
- Comment on [deleted] 4 weeks ago:
If we learned nothing else from Cavemen, we learned that anything can be a sitcom; you just have to believe in it hard enough.
- Comment on Dear Faith IV 4 weeks ago:
I’m invested. I was on board with the rainbow tables but now I’m having a crisis. A Crisis of Faith.
- Comment on Large-scale online deanonymization with LLMs 4 weeks ago:
This is in some ways an easier problem than classifying LLM vs non-LLM authorship. That only has two possible outcomes, and it’s pretty noisy because LLMs are trained to emulate the average human. Here, you can generate an agreement score based on language features per comment, and cluster the comments by how they disagree with the model. Comments that disagree in particular ways (never uses semicolons, claims to live in Canada, calls interlocutors “buddy”, writes run-on sentences, etc.) would be clustered together more tightly. The more comments two profiles have in the same cluster(s), the more confident the match becomes. I’m not saying this attack is novel or couldn’t be accomplished without an LLM, but it seems like a good fit for what LLMs actually do.
- Comment on Large-scale online deanonymization with LLMs 4 weeks ago:
Why not? If LLMs are good at predicting mean outcomes for the next symbol in a string, and humans have idiosyncrasies that deviate from that mean in a predictable way, I don’t see why you couldn’t detect and correlate certain language features that map to a specific user. You could use things like word choice, punctuation, slang, common misspellings, sentence structure… For example, I started with a contradicting question, I used “idiosyncrasies”, I wrote “LLMs” without an apostrophe, “language features” is a term of art, as is “map” as a verb, etc. None of these are indicative on their own, but unless people are taking exceptional care to either hyper-normalize their style or explicitly spike their language with confounding elements, I don’t see why an LLM wouldn’t be useful for this kind of espionage.
- Comment on xkcd #3212: Little Red Dots 4 weeks ago:
This is what we’ve been training for.
- Comment on DI.DAY is a Movement to Encourage People to Ditch Big Tech 1 month ago:
The people yearn for IoC
- Comment on How many containers are you all running? 1 month ago:
It depends a lot on what you want to do and a little on what you’re used to. There’s some configuration overhead, so it may not be worth the extra hassle if you’re only running a few services (and they don’t have dependency conflicts). IME, once you pass a certain complexity level it becomes easier to run new services in containers, but if you’re not sure how they’d benefit your setup, you’re probably fine to not worry about it until it becomes a clear need.
- Comment on How many containers are you all running? 1 month ago:
It’s fun in a way that defies comparison.
- Comment on One-Third of U.S. Video Game Industry Workers Were Laid Off Over the Last Two Years, GDC Study Reveals 1 month ago:
As someone who was recently laid off, if anyone wants to front the cash, I’m currently available for cheap.
- Comment on How many containers are you all running? 1 month ago:
That’s why I have one host called theBarrel and it’s just 100 Chaos Monkeys and nothing else.
- Comment on The Trump administration has secretly rewritten nuclear safety rules 1 month ago:
They aren’t a waste of money if investors can assume that there won’t be party changes in the future.
- Comment on There should be smell museums 2 months ago:
When I went to buy fancy cologne for a wedding they had little bowls of coffee beans that were supposed to be palate cleansers. I cannot vouch for how well they worked; I felt like my nose was blown out after a few samples.
- Comment on It's barely a science. 2 months ago:
Ah, I’m glad you clarified. I think there are some magics that don’t have a specific requirement for belief, e.g. casting a spell on a non-believing target, or, depending on how broadly you define magic, gravity (in that, while we have robust theories about how gravity works, we still don’t have a broadly accepted theory about why gravity does what it does). But I do think it’s an interesting type of magic and it can absolutely be subjected to scientific testing. There are a lot of things in that category that aren’t traditionally called magic, like fiat currency, placebos, nation-states (for that matter, laws), human racial categorizations. The impact of belief on a fiat currency (or, belief in the value of that currency) is, I think, pretty well studied though I’m not enough of an economist to know what, if any, theoretical model predicts the fluctuation (or collapse) of a currency’s value.
I’m curious to know what your take is on behavioral economics. It essentially tries to incorporate human fallibility into classical economics. Thaler’s concept of “nudging” is the kind of sleight-of-hand trick that a magician might use to create the illusion of choice.
Also, I’m not a mathematician but they can’t be uniquely responsible for ignoring human fallibility with money. That’s a human problem and capitalists profit by exploiting that tendency, which is why econ (specifically, investments in economic research) tends to focus on research that enables capitalism. The same thing happens in chemistry, pharmaceuticals, anthropology, history, art. Any area of human endeavor can be distorted for personal gain. It just happens that the science of capital, particularly the jargon of economics, is useful for legitimizing and entrenching capitalistic nonsense. Mathematicians are (broadly speaking) more interested in scientific endeavor, at least as much as researchers in any other field.
- Comment on It's barely a science. 2 months ago:
I think magic does get called technology, once we construct a sufficiently rigorous way to test its predictions and those predictions are validated. The first thing that comes to mind is the old folk remedy of using willow bark to treat fever. I don’t know if that specific treatment was ever described as “magic” per se, but for a broad swath of human history it was a rule: if fever, then willow bark. It was also used in a bunch of other remedies that didn’t work, and there were (still are) a ton of folk remedies for fever that either didn’t work or actively worsened the situation, but the combination of willow bark and fevers was eventually validated, salicin was identified as the active agent, and it became a technological commodity. Some magics, like homeopathy, have been scientifically _in_validated, and therefore get relegated outside the domain of scientific inquiry. Some, like phrenology, gain broad acceptance within a scientific establishment before they are convincingly invalidated and discarded. Some, like astrology, are broadly scientifically rejected but still have wide lay appeal for non-scientific reasons.
I think the testing of any magical effect is the same as the testing of anything non-magical. The Chaos Magick Servitor sounds like a useful mental model for “learning a new thing”. If it is proven an effective therapy in clinical trials for apnea, is it no longer magic? I just don’t find the question of whether it’s magic an interesting one in that case. I still want to understand the underlying mechanisms, possibly by conducting trials on which skills can be taught via the “Chaos Magick Servitor” method vs. a control, call it the “Mundane Learning of a Brain Technique” method. You could control for faith by surveying participants before sorting them into groups and blinding testers until the test is complete. If faith in Chaos Magick, or the Servitor technique, is predictive of being able to control apnea via that method, I would expect strong believers in the “Chaos Magick Servitor” method to get better results than their non-believing cohorts, and relatively little difference between believers and non-believers in the control group. One potential downside is that I don’t really know of a good method for measuring “faith” other than self-reporting, but I think if the participant pool is large enough you could probably still get some convincing results as long as you’re content to measure effectiveness vs “self-reported faith” rather than “actual faith”. I don’t know that there’s a reliable way to know someone’s innermost heart so that might be the best you can do with our current technology.
In addition to surveying for current faith strength, you could additionally poll for faith-adjacent wants or beliefs, e.g. “In general, do you want your faith in Chaos Magick to be stronger, weaker, or stay the same?” This would give you an additional dimension: instead of just having high faith and low faith, you could have six groups: high-aspirational, high-avoidant, high-content, low-aspirational, low-avoidant, and low-content. If these groups show significant variation in how well they use the Chaos Magick Servitor method, that could illuminate how one’s current faith and their belief about what their faith “should” be affect the treatment. I’d also be curious to see if there would be any differences among the different faith groups in the control group. It could well be that low faith individuals show no benefit, or that they show more improvement with a more scientific sounding presentation of the same concept.
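The six-group design above is easy to operationalize. This is a hypothetical sketch: the participant data, the 0–10 faith score, and the cutoff of 5 are all invented for illustration, not taken from any real study.

```python
from itertools import product

# Hypothetical survey results: (participant, self-reported faith 0-10,
# desired change in faith).
participants = [
    ("p1", 8, "stronger"), ("p2", 7, "weaker"), ("p3", 9, "same"),
    ("p4", 2, "stronger"), ("p5", 1, "weaker"), ("p6", 3, "same"),
]

LABELS = {"stronger": "aspirational", "weaker": "avoidant", "same": "content"}

# Sort one participant into one of the six faith groups described above.
# The threshold splitting "high" from "low" faith is an arbitrary choice.
def assign_group(faith_score: int, desired_change: str, threshold: int = 5) -> str:
    level = "high" if faith_score >= threshold else "low"
    return f"{level}-{LABELS[desired_change]}"

groups = {f"{lvl}-{lab}": [] for lvl, lab in product(("high", "low"), LABELS.values())}
for name, score, change in participants:
    groups[assign_group(score, change)].append(name)

print(groups["high-aspirational"])  # prints ['p1']
```

Grouping happens before randomization into treatment vs. control, so each of the six cells can be compared across both arms.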
- Comment on It's barely a science. 2 months ago:
I’m not sure what realness has to do with it. Magic tends to have some kind of theoretical framework to explain observable phenomena (god(s), the planets, “energies”, etc.) the same way scientific theories do; they even have some experimental frameworks (e.g. my church growing up had a cadre of old ladies who were touted as “good at praying” because they apparently had a good track record with the man upstairs. To my knowledge these claims were never validated in a properly controlled laboratory environment against a random sample of similar parishioners. They also happened to be voracious gossips who wielded private information as a weapon, which is a funny coincidence.) The phenomena that magic explains are “real” insofar as they are experiences that humans have, but the underpinning theories are often unfalsifiable and/or contradictory (“prayer works” and “god’s plan is unknowable and perfect, eternal and unchanging”). That’s what I mean about coherent theories and predictable results. I guess you could say that theories that make accurate predictions are “more real” but I don’t think it makes sense to think about the realness of a scientific theory. It’s either proven false or not proven false so far.
- Comment on It's barely a science. 2 months ago:
I mean, yeah. We don’t have a unified theory of quantum gravity because at least one of our assumptions is off. Science is just figuring out precisely which assumptions are wrong and how wrong they are.
- Comment on Humans on average get 2 hours of battery life for every hour they charge 2 months ago:
Oh yeah, if your engine timing is off it can make the whole system run really rough, even if it’s in otherwise superb condition. That throws a lot of newbies who don’t understand why none of their performance tuning seems to have any effect.