Hackworth
@Hackworth@piefed.ca
- Comment on smh 19 hours ago:
Thank you. I’ve only ever seen “mi.” for miles.
- Comment on How?!? 3 days ago:
Oompa Loompa doompety doo.
- Comment on Denominator, go Mercator 4 days ago:
- Comment on LLMs are already doing fascists a favor by ensuring that anything that is reasonably eloquently formulated on social media is automatically suspected of having been written by LLMs. 2 weeks ago:
I’ve looked into it a little. If all you want to do is listen, I don’t think ya need a cert. And the transmit one isn’t that hard to get. They removed the Morse requirement, though you can still get a higher tier certification for learning it. There are a surprising number of ham antennas and generators in my neighborhood.
- Comment on Bandcamp bans purely AI-generated music from its platform 2 weeks ago:
Suno.com is basically this. It even allows users to comment on the songs.
- Comment on LLMs are already doing fascists a favor by ensuring that anything that is reasonably eloquently formulated on social media is automatically suspected of having been written by LLMs. 2 weeks ago:
I downloaded 17 years' worth of my comments before overwriting and deleting my old reddit account. Been thinking about QLoRA fine-tuning Qwen on those comments. Not for use on the internet or anything, just so I can streamline the process of arguing with myself.
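A minimal sketch of the data-prep step before any fine-tuning, assuming the export is just a list of raw comment strings (the `comments_to_jsonl` helper and its length threshold are made up for illustration, not part of any real tool):

```python
import json

def comments_to_jsonl(comments, path, min_len=20):
    """Write a list of raw comment strings to a JSONL file of
    training examples, skipping comments shorter than min_len chars."""
    kept = [c.strip() for c in comments if len(c.strip()) >= min_len]
    with open(path, "w", encoding="utf-8") as f:
        for c in kept:
            f.write(json.dumps({"text": c}) + "\n")
    return len(kept)  # number of examples actually written
```

From there, a library like Hugging Face `peft` can wrap the base model with LoRA adapters over the quantized weights, but the exact config depends on the model and hardware.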
- Comment on Where are the marketing volunteers? 2 weeks ago:
As a practitioner of that dark art, I fear you know not what you summon. You don’t really want Lemmy to be popular, not in a way that traditional marketing is going to make it popular.
- Comment on The Death of DeviantArt and the art-site shaped hole haunting the Internet -- Multi-hyphenate 3 weeks ago:
[image of Clippy]
- Comment on When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. 3 weeks ago:
If you put [brackets] around the word before your (parened link), it’ll make it an actual link.
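For example:

```
[Lemmy](https://join-lemmy.org)
```

renders as a clickable link labeled "Lemmy".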
- Comment on When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. 3 weeks ago:
LLMs are both deliberately and unwittingly programmed to be biased.
I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.
The lesson is that it is the detailed shape of each specific human-AI coalition or interaction that matters. The social and technological factors that determine better or worse outcomes in this regard are not yet fully understood, and should be a major focus of new work in the field of human-AI interaction. […] We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust […]
And just as I'm not, Clark is not really calling Plato a crank. That's not the point of using the quote.
And yet, perhaps there was an element of truth even in the worries raised in the Phaedrus. […] Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.
I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity.
- Comment on When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. 3 weeks ago:
I talked about the way in which Plato’s concerns were valid and expressed similar fears about misuse. The linked article is about how to approach the specific technology.
- Comment on When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. 3 weeks ago:
This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. - Plato on the invention of writing in The Phaedrus
Every notable invention associated with language (and communication in general) has elicited similar reactions. And I don’t think Plato is wholly wrong, here. With each level of abstraction from the oral tradition, the social landscape of meaning is further externalized. But that doesn’t mean the personal landscape of meaning must be. AI only does the thinking for you if that’s what you use it for.
- Comment on Google sues web scraper for sucking up search results ‘at an astonishing scale’ 5 weeks ago:
<Three Spidermen Point>
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 5 weeks ago:
As I understand it, CLIP (and other text encoders in diffusion models) aren't trained like LLMs, exactly. They're trained on image/text pairs, which come from the metadata creators upload with their photos to Adobe Stock. That said, Adobe hasn't published their entire architecture.
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 5 weeks ago:
The Firefly image generator is a diffusion model, and the Firefly video generator is a diffusion transformer. LLMs aren’t involved in either process. I believe there are some ChatGPT integrations with Reader and Acrobat, but that’s unrelated to Firefly.
- Comment on New Ways to Corrupt LLMs: The wacky things statistical-correlation machines like LLMs do – and how they might get us killed 1 month ago:
Here’s a metaphor/framework I’ve found useful but am trying to refine, so feedback welcome.
Visualize the deforming rubber sheet model commonly used to depict masses distorting spacetime. Your goal is to roll a ball onto the sheet from one side such that it rolls into a stable or slowly decaying orbit of a specific mass. You begin aiming for a mass on the outer perimeter of the sheet. But with each roll, you must aim for a mass further toward the center. The longer you roll, the more masses sit between you and your goal, to be rolled past or slingshotted around. As soon as you fail to hit a goal, you lose; until then, you can keep playing indefinitely.
The model’s latent space is the sheet. The prompt is your roll of the ball. The response is the path the ball takes. And the good (useful, correct, original, whatever your goal was) response is the orbit of the mass you’re aiming for. As the context window grows, there are more pitfalls the model can fall into. When you finally lose, there’s a phase transition, and the model goes way off the rails. This phase transition was formalized mathematically in this paper from August.
The masses are attractors that have been studied at different levels of abstraction. And the metaphor/framework seems to work at different levels as well, as if the deformed rubber sheet is a fractal with self-similarity across scale.
One level up: the sheet becomes the trained alignment, the masses become potential roles the LLM can play, and the rolled ball is the RLHF or fine-tuning. So we see the same kind of phase transition in both prompting (from useful to hallucinatory) and in training.
Two levels down: the sheet becomes the neuron architecture, the masses become potential next words, and the rolled ball is the transformer process.
In reality, the rubber sheet has like 40,000 dimensions, and I’m sure a ton is lost in the reduction.
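A drastically reduced toy of the idea, with a 1D potential standing in for the sheet (the double-well V(x) = (x² − 1)² and the step size are made up, and real latent spaces obviously aren't this tame): two balls released on opposite sides of the central ridge settle into different attractors.

```python
def roll(x0, steps=2000, lr=0.01):
    """Follow the gradient of a two-well potential
    V(x) = (x**2 - 1)**2, with minima (attractors) at x = -1 and x = +1,
    from starting point x0; return where the ball settles."""
    x = x0
    for _ in range(steps):
        grad = 4 * x * (x**2 - 1)  # dV/dx
        x -= lr * grad
    return x

# Starting points on either side of the ridge at x = 0
# fall into different basins: roll(-0.2) -> -1, roll(0.2) -> +1.
left = roll(-0.2)
right = roll(0.2)
```

The "phase transition" intuition maps onto the ridge: a tiny change in the initial roll near x = 0 flips which basin the ball ends up in.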
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 1 month ago:
Adobe’s image generator (Firefly) is trained only on images from Adobe Stock.
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 1 month ago:
Coincidentally, this paper published yesterday indicates that LLMs are worse at coding the closer you get to the low level like assembly or binary. Or more precisely, ya stop seeing improvements pretty early on in scaling up the models. If I’m reading it right, which I’m probably not.
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 1 month ago:
There are AIs that are ethically trained. There are AIs that run on local hardware. We’ll eventually need AI ratings to distinguish use types, I suppose.
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 1 month ago:
Yup! Certifying a workflow as AI-free would be a monumental task now. First, you’d have to designate exactly what kinds of AI you mean, which is a harder task than I think people realize. Then, you’d have to identify every instance of that kind of AI in every tool you might use; just looking at Adobe, there’s a lot. Then, what, you forbid your team from using them? Sure, but how do you monitor that? Ya can’t uninstall generative fill from Photoshop. Anyway, that’s why anything with a complicated design process marked “AI-Free” is going to be the equivalent of greenwashing, at least for a while. But they should be able to keep obvious slop out of the final product just through regular testing.
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 1 month ago:
There’s a lot of research around this. So, LLMs go through phase transitions when they reach the thresholds described in Multispin Physics of AI Tipping Points and Hallucinations. That’s more about predicting the transitions between helpful and hallucination within regular prompting contexts. But we see similar phase transitions between roles and behaviors in fine-tuning, presented in Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.
This may be related to attractor states that we’re starting to catalog in the LLM’s latent/semantic space. It seems like the underlying topology contains semi-stable “roles” (attractors) that the LLM generations fall into (or are pushed into in the case of the previous papers).
Unveiling Attractor Cycles in Large Language Models
Mapping Claude’s Spiritual Bliss Attractor
The math is all beyond me, but as I understand it, some of these attractors are stable across models and languages. We do, at least, know that there are some shared dynamics that arise from the nature of compressing and communicating information.
Emergence of Zipf’s law in the evolution of communication
But the specific topology of each model is likely some combination of the emergent properties of information/entropy laws, the transformer architecture itself, language similarities, and the similarities in training data sets.
- Comment on Oracle made a $300 billion bet on OpenAI. It's paying the price. 1 month ago:
Copilot is just an implementation of GPT. Claude’s the other main one.
- Comment on Biblically accurate tree angel 1 month ago:
- Comment on Do you ever feel like your life is "scripted"? Like everything is written by some entity controlling your life? Like you live in a fictional universe? Is this feeling normal/common? 1 month ago:
- Comment on Make me feel like a man 1 month ago:
Æsahættr has entered the chat.
- Comment on Why don't compasses have just two Cardinal directions (North, East, -North, -East)? 1 month ago:
Double plus ungood
- Comment on AI Slop Is Ruining Reddit for Everyone 1 month ago:
Westworld
- Comment on Google’s AI model is getting really good at spoofing phone photos 1 month ago:
Yeah, a more honest take would discuss the strengths & weaknesses of the model. Flux is still better at text than Nano Banana, for instance. There’s no “one model to rule them all,” as much as tech journalism seems to want to write like that.
- Comment on Google’s AI model is getting really good at spoofing phone photos 1 month ago:
Directly, generating higher-res stuff requires way more compute. But there are plenty of AI upscalers out there, some better, some worse; these are also built into Photoshop now. The difference between an AI image that’s easy to spot and one that’s hard to spot is using a good model. The difference between one that’s hard to spot and one that’s nearly impossible to spot is another 20 minutes of work in post.
- Comment on Google’s AI model is getting really good at spoofing phone photos 1 month ago:
Nano Banana Pro’s built into Photoshop now.