Hackworth
@Hackworth@lemmy.world
- Comment on WILD 1 week ago:
I was just being a smartass, but I appreciate your commitment to clear communication.
- Comment on WILD 1 week ago:
No, it’s “biologically.”
- Comment on Not likely to be AI-generated or Deepfake 2 weeks ago:
I can tell from some of the pixels and from seeing quite a few shops in my time.
- Comment on Reddit is profitable for the first time ever, with nearly 100 million daily users 2 weeks ago:
usersbots
- Comment on Google creating an AI agent to use your PC on your behalf, says report | Same PR nightmare as Windows Recall 2 weeks ago:
Yeah, but they encourage confining it to a virtual machine with limited access.
- Comment on Kamala Harris Dropped a New Custom 'Fortnite' Map 2 weeks ago:
Huh. Grandpa Simpson was right. It did happen to me too.
- Comment on Linus Torvalds reckons AI is ‘90% marketing and 10% reality’ 2 weeks ago:
Logic and Path-finding?
- Comment on Feds Say You Don’t Have a Right to Check Out Retro Video Games Like Library Books 3 weeks ago:
Shithole country.
- Comment on Pee posting? 3 weeks ago:
I instantly heard it. DEEK
- Comment on Claude has taken control of my computer... 3 weeks ago:
Yeah, using image recognition on a screenshot of the desktop and directing a mouse around the screen with coordinates is definitely an intermediate implementation. Open Interpreter, Shell-GPT, LLM-Shell, and DemandGen make a little more sense to me for anything that can currently be done from a CLI, but I’ve never actually tested em.
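(A rough sketch of the coordinate-driving part of that “intermediate implementation” — my own illustration, not any specific tool’s API. The assumption here is that the model emits click coordinates in the screenshot’s resolution, which may be downscaled from the physical display, so they have to be rescaled before the mouse is moved.)

```python
# Sketch: mapping a model's click coordinates from screenshot space to
# physical screen space. Hypothetical helper, not from any real agent.

def to_screen(x: int, y: int,
              shot_size: tuple[int, int],
              screen_size: tuple[int, int]) -> tuple[int, int]:
    """Rescale a point from screenshot resolution to display resolution."""
    sx = screen_size[0] / shot_size[0]
    sy = screen_size[1] / shot_size[1]
    return round(x * sx), round(y * sy)

# Example: the model "clicks" (640, 360) on a 1280x720 screenshot of a
# 2560x1440 display; the real click target is twice as far in each axis.
print(to_screen(640, 360, (1280, 720), (2560, 1440)))  # (1280, 720)
```

The resulting pair would then be handed to whatever actually moves the cursor (e.g. a GUI-automation library).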
- Comment on Claude has taken control of my computer... 3 weeks ago:
I was watching users test this out, and am generally impressed. At one point, Claude tried to open Firefox, but it was not responding. So it killed the process from the console and restarted. A small thing, but not something I would have expected it to overcome this early. It’s clearly not ready for prime time (per their own repeated warnings), but I’m happy to see these capabilities finally making it to a foundation model’s API. It’ll be interesting to see how much remains of GUIs (or high-level programming languages, for that matter) if/when AI can reliably translate common language to hardware behavior.
- Comment on Nvidia blocks access to video card driver updates for users from Russia and Belarus. 3 weeks ago:
Can I blame Trump on 9/11 or something?
- Comment on X's idiocy is doing wonders for Bluesky. 3 weeks ago:
Aren’t they in Macy’s now? Wait, is Macy’s still a thing?
- Comment on 'Garbage in, garbage out': AI fails to debunk disinformation, study finds. 3 weeks ago:
In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI will repeat misinformation 18% of the time
- Comment on Baidu CEO warns AI is just an inevitable bubble — 99% of AI companies are at risk of failing when the bubble bursts 3 weeks ago:
To be clear, it’ll be 10-30 years before AI displaces all human jobs.
- Comment on Square! 1 month ago:
I don’t really know, but I think it’s mostly to do with pentagons being under-represented in the world in general. That and the specific way that a pentagon breaks symmetry. But it’s not completely impossible to get em to make one. After a lot of futzing around, o1 wrote this prompt, which seems to work 50% of the time:
An illustration of a regular pentagon shape: a flat, two-dimensional geometric figure with five equal straight sides and five equal angles, drawn with black lines on a white background, centered in the image.
- Comment on Square! 1 month ago:
Fun Fact: It is very difficult to get any of the image generators to make a pentagon.
- Comment on Rose Finch 1 month ago:
I think it was Perplexity. I moved to using Flux after that cuddly monstrosity.
- Comment on Rose Finch 1 month ago:
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
Calling what attention transformers do memorization is wildly inaccurate.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
It honestly blows my mind that people look at a neural network that’s even capable of recreating short works it was trained on without having access to that text during generation… and choose to focus on IP law.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
The issue is that alongside the transformed output, the untransformed input is being used in a commercial product.
Are you only talking about the word repetition glitch?
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
How do you imagine those works are used?
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
It’s called learning, and I wish people did more of it.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
This is an inaccurate understanding of what’s going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and generate full sections of training data from its neural network is a fun parlor trick, but you’re not going to use it to pirate a book. People do that the old-fashioned way, by just adding type:pdf to their search.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
You’ve made a lot of confident assertions without supporting them. Just like an LLM! :)
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
Just taking GPT 3 as an example, its training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB. GPT 3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.
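(Back-of-envelope check on those figures — my own arithmetic, not from the thread: GPT-3’s ~175 billion parameters at 4 bytes each as 32-bit floats works out to roughly 700 GB, which is indeed larger than the ~570 GB filtered training set.)

```python
# Quick sanity check of the model-size figure (assumes fp32 storage).
params = 175e9           # GPT-3 parameter count, ~175 billion
bytes_per_param = 4      # 32-bit float
model_gb = params * bytes_per_param / 1e9

train_gb = 570           # filtered training set size cited above
print(model_gb)          # 700.0
print(model_gb > train_gb)  # True: the model is larger than its training set
```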
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
Equating LLMs with compression doesn’t make sense. Model sizes are larger than their training sets. If it requires “hacking” to extract text of sufficient length to break copyright, and the platform is doing everything they can to prevent it, that just makes them like every platform. I can download © material from YouTube (or wherever) all day long.
- Comment on Microsoft is bringing annoying Windows 11 Start menu ads to Windows 10 2 months ago:
Cannot be done with Mint? I’ve OS-hopped every few years - currently running Windows 11 at work and Mint at home. I much prefer the Mint install. That said, I’m a video producer - and video production just isn’t there yet on Linux. CUDA’s a pain to get working, proprietary codecs add steps, DaVinci’s Linux support is more limited than it seems, Kdenlive works in a pinch but lacks features, Adobe and Linux are like oil and water, there’s no equivalent for After Effects… I don’t doubt that there are workarounds for many of these issues. But the ROI’s not there yet. I’d love to see a video-production-focused distro that really aimed for full production suite functionality. Especially since Hackintoshes are about to get even harder to build.
- Comment on LLMs develop their own understanding of reality as their language abilities improve 2 months ago:
Genuine question: What evidence would make it seem likely to you that an AI “understands”? These papers are coming at an unyielding rate, so these conversations (regardless of the specifics) will continue. Do you have a test or threshold in mind?