Hackworth
@Hackworth@sh.itjust.works
- Comment on Dawg... 6 days ago:
More weed use among teens → more hot dogs sold in the U.S.
- Comment on Punch Time 6 days ago:
Just for fun, this was Opus’s attempt at translation with the prompt: “I saw this in a commercial for tea. Please translate: 「美味ちい」Keep an eye out for wordplay or cultural difference that would matter in the translation.”
This is clever Japanese wordplay! 「美味ちい」 is a pun that combines:
- 美味しい (oishii) = "delicious"
- 茶 (cha) = “tea”
They’ve replaced the し (shi) in “oishii” with ち (chi), creating a portmanteau that sounds like “oichii.” This blends “delicious” with a phonetic hint at “tea” (since ち/chi is close to ちゃ/cha).
Translation: “Deli-tea-ous” or “Tea-licious”
In English, we’d need a similar pun to capture the playful spirit. The commercial is essentially making a tea-specific version of the word “delicious” - it’s a fun bit of marketing language that suggests the tea is so good it deserves its own special adjective!
Interestingly, Anthropic’s newer (but smaller) model, Sonnet 4.5, doesn’t catch on without a follow-up to the effect of, “What is tea in Japanese?” LLMs showed pretty amazing multilingual capabilities before labs even started intentionally training them on more languages. That said, you definitely still need someone who speaks both languages to check/guide them, since bullshitting is in their nature.
- Comment on Sora AI Slop is here 1 week ago:
Been in video production for 30 years. We cannot rely on spotting problems to identify AI video. Selective attention alone means we miss major details in traditional video. Streaming services are testing AI upscaling, adding AI artifacts to pre-AI media. I regularly see comments pointing out artifacts from other sources (interlacing, morph cuts, moire, stabilization, compression, data moshing, etc) as evidence that the video is AI generated.
- Comment on No more Mr. Nice guy 1 week ago:
I get that too. But many people who are going to do harm won’t care what you do to ingratiate yourself. They’re more likely to see it as a sign of weakness that makes you an easier target. We can’t service our way to safety. And on an even more fundamental level, the never-ending grasping for security through impression management is a way to distract us from the fear of death. “Memento mori,” in part, is meant to short-circuit this drive, which can turn malignantly toward sacrificing humanity and freedom for illusions of immortality and invincibility.
- Comment on No more Mr. Nice guy 1 week ago:
Probably not, but this is what I try to remember. People pleasing is a weird kind of arrogance. It feels like a selfless sacrifice, but it’s fundamentally an attempt at control. We cannot control the internal state of others. People are not going to always be pleased, nor should they be. And on the other side, those that encourage people pleasing behavior are (perhaps unintentionally) trying to control or exploit me.
- Comment on Don't forget to turn purple and remove your arms 1 week ago:
That’s not what’s meant by “ears over shoulders”; it means the ears stay over the shoulders when standing. Lying down, that translates to ears in line with the shoulders, not what’s pictured here. But whether or not you want the neck in flexion depends on whether or not there’s been an injury. The kinetic chain isn’t simple.
- Comment on Unified Theory of American Reality 1 week ago:
Curiosity is as important as raw intelligence, and curiosity can be encouraged. Mostly it’s discouraged.
- Comment on Advocates raise alarm over Pfas pollution from datacenters amid AI boom 1 week ago:
- Comment on [deleted] 2 weeks ago:
Funny, that was the first thing I thought of. Then the Tom Petty music video where Alice is a cake.
- Comment on Notepad gets AI features like Summarize, Write, and Rewrite, using local and cloud models. 3 weeks ago:
More tech that does not need the internet at all would be great.
- Comment on Whether you use AI, think it's a "fun stupid thing for memes", or even ignore it, you should know it's already polluting worse than global air travel. 4 weeks ago:
Need is a strong word. There are much more efficient ways to cool data centers. They’ve just chosen the most wasteful way because it’s the cheapest (for them).
- Comment on Toxic “forever chemicals” found in 95% of beers tested in the U.S. 4 weeks ago:
Oh, PFAS, yeah. We passed the planetary boundary with that a few years ago.
levels of PFOA and PFOS in rainwater often greatly exceed US Environmental Protection Agency (EPA) Lifetime Drinking Water Health Advisory levels
Here’s a map of the known concentrations. But they’re in everything now, including our blood.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 4 weeks ago:
I’m a video producer who occasionally needs to code. I find it much more useful to write the code myself, then have AI identify where things might be going wrong. I’ve developed a decent intuition for when it will be helpful and when it will just run in circles. It has definitely helped me out of some jams. Generative images/video are in much the same boat. I almost never use a fully AI shot/image in professional work. But generative fill and generative extend are extremely useful.
- Comment on Trump's video on the shooting of Kirk appears to be AI 4 weeks ago:
Yeah, that’s morph cut in Premiere, which they added a decade ago. There’s also generative extend, which is the more recent AI addition.
- Comment on "Behavioral Conditioning Methods to Stop my Boyfriend from Playing The Witcher 3" 5 weeks ago:
Like Cyberpunk 2077
- Comment on Vimeo is getting acquired by Bending Spoons, the parent company of Evernote 5 weeks ago:
Been uploading to Vimeo for 20 years. This does not sound like good news.
- Comment on AI Startup Flock Thinks It Can Eliminate All Crime In America 5 weeks ago:
Great resources! I’d like to add the ALPR Map of Flock Cameras, DeFlock.
- Comment on How could AI be better than an encyclopedia? 5 weeks ago:
In much the same way people think of digital storage as external memory, I think of generative A.I. as external imagination. Of course, human memory doesn’t work like a hard drive, and LLMs don’t work like our imaginations. But as a guiding metaphor, it seems to work well for identifying good/bad use cases.
- Comment on AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit 5 weeks ago:
Anthropic’s news page is educational in every sense of the word.
- Comment on AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit 5 weeks ago:
- Comment on Oppression.jpg 5 weeks ago:
- Comment on Big Surprise—Nobody Wants 8K TVs 5 weeks ago:
It’s such a shame that UHD discs aren’t easier to find. Even the ones you can find are poorly mastered half the time. But a good UHD on an OLED is, chef’s kiss, just about the closest you can get to having a 35mm reel/projector at home.
You are absolutely on point about 4K streaming being a joke. Most 4K streams run at 8-20 Mbps. A UHD Blu-ray runs at up to 128 Mbps.
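Just for scale, here’s a quick back-of-envelope sketch in Python (using the rough figures above as assumptions) of what that bitrate gap means over a two-hour film:

```python
# Rough comparison: a high-end 4K stream (~20 Mbps) vs. a UHD Blu-ray peak (~128 Mbps).
# Figures are the approximate ones quoted above, not exact specs.

def gigabytes_per_hour(mbps: float) -> float:
    """Convert a constant bitrate in megabits/second to gigabytes/hour."""
    return mbps * 3600 / 8 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

for label, mbps in [("4K stream (high end)", 20), ("UHD Blu-ray (peak)", 128)]:
    print(f"{label}: ~{gigabytes_per_hour(mbps) * 2:.0f} GB over a 2-hour film")
# prints roughly 18 GB vs. 115 GB
```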
- Comment on MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline 1 month ago:
- Comment on ChatGPT offered bomb recipes and hacking tips during safety tests 1 month ago:
- Comment on Are you not entertained? 1 month ago:
Cyberpunk Circuses
- Comment on The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame 1 month ago:
One of the few reliable uses of an LLM is brainstorming, as a wall to bounce ideas off of, or more accurately a semantic mirror. In low-stakes situations (like a writer thinking about their story from a different perspective), you’re essentially probing the higher-dimensional latent space for connections between meanings. But training usually pushes an LLM to respond with the most generic shit you can think of. Well, it’s generic because it’s common: it’s an oft-traveled path of meaning, so those connections are the first to surface. If the writer wants to tease out more surprising possibilities, they’ll quickly learn to direct the model to less well-worn territory. It rarely even requires anything approaching jailbreaking methods like U$1||G 7117 5P34K.
The Childlike Empress makes no distinction between good and evil beings of Fantastica, as they all must live in the imaginations of mankind. In high-stakes situations, this kind of imaginative freedom can have (and does have) enormous consequences. If we think of an LLM as something akin to an external imagination, we can interpret interactions with it with some maturity and honesty. If we think of an LLM as an oracle, or a friend, or a lover, or what have you - we’re signing a contract with the Fae Folk.
I see some similarities in the way that the “Doom Caused Columbine” conversation happened early on. And just as that resulted in the establishment of the ESRB, hopefully this incident (and others like it) will lead to some reform. But I don’t know exactly what that reform needs to look like. I think education is helpful, but I don’t think it’s enough. We largely know about the harms of social media and it is no less of an issue. Guardrails can kind of be set up, but the only way to do it presently (technically speaking) is hamfisted and ineffective. And adults are no more immune to the potential harms of abusing an LLM than they’re immune to being influenced by advertisements.
- Comment on All while the skeletal, crumbling, dusty bones of an econ major pulls business backwards into hell. 1 month ago:
One may draw upon the dark arts with any degree. -BA in Film, make ads
- Comment on If AI “hallucinates,” doesn’t that make it more human than we admit? 1 month ago:
The similarities between (particularly early) image generation and dream imagery probably aren’t coincidental. Maybe it’s just that they’re both generated from latent spaces.
- Comment on They'd just appear out of nowhere 1 month ago:
- Comment on Lemmy be like 2 months ago:
Generative inpainting doesn’t typically employ an LLM. Only a few image models even use transformer backbones. Training a new diffusion or flow-matching image model costs somewhere in the range of $100,000 - $10 million. Not cheap, but nothing crazy like training Opus or GPT-5.
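For anyone curious what that looks like in practice, here’s a minimal sketch assuming the Hugging Face diffusers library and the runwayml/stable-diffusion-inpainting checkpoint (file names and prompt are just placeholders). The only text component in the pipeline is a small CLIP text encoder conditioning the denoiser, not an LLM:

```python
# Minimal inpainting sketch (assumes the `diffusers` library and a CUDA GPU).
# Text conditioning here comes from a CLIP text encoder, not an LLM.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB")  # original frame/photo
mask = Image.open("mask.png").convert("RGB")    # white = region to regenerate

result = pipe(
    prompt="empty park bench, overcast light",   # placeholder prompt
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```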