Hackworth
@Hackworth@piefed.ca
- Comment on I was on social media before web browsers existed. I am Legion. 4 days ago:
Oh, you think the darkness is your ally. But you merely adopted the dark.
- Comment on North Korean agents using AI to trick western firms into hiring them, Microsoft says 4 days ago:
Between 2020 and 2022, the US government found that over 300 US companies in multiple industries, including several Fortune 500 companies, had unknowingly employed these workers, indicating the magnitude of this threat.
So AI’s just making some things easier.
In some cases, victim organizations have even reported that remote IT workers were some of their most talented employees.
Lol.
North Korean remote IT workers require assistance from a witting facilitator to help find jobs, pass the employment verification process, and once hired, successfully work remotely.
I was gonna say… how do they deal with the HR paperwork and getting paid. They need Americans or at least people from other countries to help.
- Comment on Anthropic is untrustworthy 5 days ago:
You got a genuine laugh outta me. Well put.
- Comment on Anthropic is untrustworthy 5 days ago:
They keep trying to, but Hegseth formally designated them a supply chain risk just a few hours ago, right after using Claude in Iran.
- Comment on Anthropic is untrustworthy 5 days ago:
and yet still preferable to OpenAI, Google, and xAI.
- Comment on Do werewolves shed? Or do they lose their whole coat when transferring back? Do Vampires have little holes in their K9s to suck up the blood after puncturing someone or thing? 1 week ago:
Like a reverse selkie. Speaking of Fae, the werewolves of Ossory are neat.
- Comment on Inside Anthropic’s Killer-Robot Dispute With the Pentagon | New details on precisely where the lines were drawn 1 week ago:
The bar is so low. Anthropic is only trying to raise it ever so slightly.
- Comment on If I had a time machine I would go back in time and give all my friends their mental health diagnosis before they had to live through the really tough parts. 1 week ago:
True! But that requires trust. Trust that the person transferring the knowledge correctly interpreted their experience and was able to communicate it well. As wisdom fades from living memory (as those who directly experienced it pass or are marginalized), it seems difficult for society to maintain the integrity of that knowledge across new contexts. The scientific method is supposed to help with this, but we have difficulty following it at scale. Reminds me of this comic about collecting questions.
- Comment on If I had a time machine I would go back in time and give all my friends their mental health diagnosis before they had to live through the really tough parts. 1 week ago:
I largely agree with Siddhartha that wisdom can only be gained through experience. I just think of all the times I knew something intellectually but didn’t understand it sufficiently to properly act on it until I lived it. But there is a more fun corollary from Zen Without Zen Masters, “If you think you can get beyond pleasure without going through it, we are definitely on different trips.”
- Comment on It's rude to show AI output to people | Alex Martsinovich 1 week ago:
Aye, Anthropic is head and shoulders above everyone else on guidance, largely because they focus entirely on text/code. They’re not simultaneously developing image, video, and audio generators. Even Claude’s voice is just an 11Labs model. Plus I get the impression they’re just smarter about what they choose to research and how they use that info to improve the model.
- Comment on "Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens 1 week ago:
I did find an update on that funding, btw. Anthropic already took money from Qatar (the QIA), but the amount isn’t known - likely around $100M. The UAE deal has yet to happen, but if it does, it would be “hundreds of millions”.
- Comment on "Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens 1 week ago:
I mean, I’m not gonna defend him. But fucking up a discord that you’re a mod of isn’t really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we’ve decided to call it. And I assume he is part of the same vulnerable minority?
- Comment on "Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens 1 week ago:
Oh, that guy! To be fair, that’s one employee, not Anthropic’s actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can’t seem to find anything about whether or not they went through with those contracts.
- Comment on "Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens 1 week ago:
They insisted Claude was human?
- Comment on "Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens 1 week ago:
Amodei said in an interview that the DoW altered their contract to appear to compromise, so that it looked like they were agreeing to those use limits. But the legalese accompanying the updates rendered that text pointless. Basically, “We won’t use Claude for mass domestic surveillance and fully automated killing, unless we really want to.” My guess is OpenAI signed the exact same contract and just pretended not to understand the toothlessness of the guardrails.
- Comment on It's rude to show AI output to people | Alex Martsinovich 1 week ago:
It’s more about post size for me. If ya post a few sentences that clearly and concisely communicate a point, I don’t really care if they’re crafted or generated. If ya post a wall of text, I wanna know ya put the kind of effort in that made its length necessary if I’m gonna put in the effort to read it.
- Comment on President Donald Trump bans Anthropic from use in government systems 1 week ago:
Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
Anthropic yesterday:
Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required. -Dario’s Post
- Comment on Better do it anyway, though 🤔 1 week ago:
- Comment on American Foreign Policy 1 week ago:
- Comment on Well, shit 1 week ago:
Fun fact: Artax can speak in the novel.
- Comment on Bcachefs creator claims his custom LLM is 'fully conscious' 1 week ago:
We have precedent for dealing with things within our own imaginations that seem to have autonomy. Authors commonly talk about their characters seeming to take on a life of their own over time. Dream characters can honestly surprise the dreamer. The esoteric traditions of invocation/evocation can be viewed as an intentional application of this feature in semantic/latent space.
But if the idea is that LLMs are a kind of external imagination, the question isn’t really whether or not the characters roleplayed during inference are conscious. They’re no more aware than the people in our dreams. The question is, as you say, what is it like to be those layers of software neurons in between the word generations. Can you have an imagination without an imaginer? In other words, is there a dreamer?
If the answer is no, case closed, relatively tidy. If the answer is yes, it’s a truly alien kind of consciousness. Embodiment comes with a bunch of stuff that an LLM has absolutely no access to. Generally speaking, we find it difficult to put ourselves in the shoes of other humans, much less animals, plants/fungi. And they’re embodied! LLMs are nothing like us, and they’re certainly not gendered.
- Comment on Is it possible that none of this is real? 2 weeks ago:
Unless there’s a way to exit game, I’m not sure what difference it’d genuinely make. I am as real as I am.
- Comment on Anti-Woke means asleep. A pictorial example inside. 2 weeks ago:
- Comment on Oh no, Intel is moving customer support to AI 2 weeks ago:
That would be awesome. But it sounds like the LLM has some access to customer accounts and call tools. Prompt injection remains a fundamentally unsolved problem, so I’m curious about how Intel plans to handle that.
- Comment on Ask AI: I want to wash my car. The car wash is 50 meters away. Should I walk or drive? 3 weeks ago:
“Are you suggesting I ghost ride the whip?”
- Comment on Ask AI: I want to wash my car. The car wash is 50 meters away. Should I walk or drive? 3 weeks ago:
Yup. And it didn’t feel the need to write a paper about it.
- Comment on How does a person get on the No Gun List without commiting a crime? My brother was diagnosed with BIpolar and others he doesn't even want the option ten year down the road. 3 weeks ago:
Caveat on the drug thing: There is a list of who has a medical cannabis Rx. And I suppose any prescription… do opioids count?
- Comment on A succulent meal 3 weeks ago:
-Dr. Steve Brule
- Comment on AI safety leader says 'world is in peril' and quits to study poetry 3 weeks ago:
So what I meant by “doubt they’ll be able to play the good guy for long” is exactly that no corpo is your friend. But I also believe perfect is the enemy of good, or at least better. I want to encourage companies to be better, knowing full well that they will not be perfect. Since Anthropic doesn’t make image/video/audio generators, they may just not see CSAM as a directly related concern for the company. A PAC doesn’t have to address every harm to be a source of good.
As for self-harm, that’s an alignment concern, the main thing they do research on. And based on what they’ve published, they know that perfect alignment is not in our foreseeable future. They’ve made a lot of recent improvements that make it demonstrably harder to push a bot to dark traits. But they know damn well they can’t prevent it without some structural breakthroughs. And who knows if those will ever come?
I read that 404 media piece when it got posted here, and this is also probably that guy’s fault. And frankly, Dario’s energy creeps me out. I’m not putting Anthropic on a pedestal here; they’re just… the least bad… for now?
- Comment on AI safety leader says 'world is in peril' and quits to study poetry 3 weeks ago:
They’re advocating for transparency and for states to be able to have their own AI laws. I see that as positive. And as part of that transparency, Anthropic publishes its system prompts, which are sent with every message. They devote a significant portion to mental health, suicide prevention, not enabling mania, etc. So I wouldn’t say they see it as “acceptable.”