MangoCats
@MangoCats@feddit.it
- Comment on I've never been in a situation where me having a gun would have made things bettter. 2 days ago:
Peyote would be the strong medicine in the Four Corners area; weed is everywhere.
- Comment on I've never been in a situation where me having a gun would have made things bettter. 2 days ago:
There are also a lot of users of “strong medicine” in that area… it can make the stories more vivid.
- Comment on Windows 11’s 2025 problems are getting impossible to ignore 4 days ago:
Pro and Home are where they test-market the worst of the garbage… some of it does make it into Enterprise - a surprising amount has gotten into Office 365 - but, yeah, not enough to make it completely dysfunctional.
- Comment on Made in space? Start-up brings factory in orbit one step closer to reality 5 days ago:
Unobtanium…
Making things that can only be made in 0G, then bringing them back to Earth to sell.
I suspect the manned ISS isn’t too keen to add a continuously operating 1000 °C furnace to its collection of modules.
- Comment on Are we deprogramming empathy in the US? 1 week ago:
the 80s action guy who’s totally justified in crashing dozens of cars during rush hour cause he kills the one bad guy in the end
Axel Foley is my hero!
It’s all entertainment - what I will never comprehend (though the reasoning is so simple it’s easy to understand) is how a glimpse of a female nipple is a bigger problem “for the children” than GI Joe spraying a village with napalm and bullets.
- Comment on Are we deprogramming empathy in the US? 1 week ago:
roughly translates to “fuck you, got mine”.
In the US you much more frequently hear it the other way around: “I got mine, now you fuck off.” Until they “get theirs” they maintain the pretense of sociability.
- Comment on Are we deprogramming empathy in the US? 1 week ago:
The spread of the superhero (Übermenschen) to ubiquity in pop culture, especially Hollywood
You mean, like Superman (1938), Flash Gordon (1936), Captain Marvel (1941), etc.?
- Comment on Are we deprogramming empathy in the US? 1 week ago:
the cost of this kind of individuality is.
The thing is, it’s a rare individual who “benefits” from the direction we are continuing to move in - unless they’re a bunch of sadists who like watching the rest of the world suffer while they insulate themselves with security forces they can’t really trust. Because where do you get the people to maintain the security?
A society where the richest can walk down High Street in London without a thought to “personal security” is better for the people at the top, too. Unless they’re sadists.
- Comment on Are we deprogramming empathy in the US? 1 week ago:
The full view thing is … interesting, unique in my experience. It’s one of the few “good” things about the recent unpleasantness. Hopefully all this blatant graft, corruption and just plain evil in clear view leads to some reforms.
- Comment on Are we deprogramming empathy in the US? 1 week ago:
“Back in the day” a lot more people went to church on a regular basis. They also beat their children on a regular basis, and a much larger percentage of those children grew up to perpetrate violence, domestic and otherwise, in their adult lives.
The core teachings of Jesus, Buddha, the Dalai Lama, and the rest are good. People standing in the pulpit saying whatever it takes to fill the pews and get donations… less good on average. Theory is easier than putting that theory into faithful practice.
- Comment on The Stock Market is Just Financial Fantasy Sports 2 weeks ago:
Well, of course, it’s different from a Casino. It’s bigger, and it’s a longer-running game. But it still pushes those “get rich quick” addiction buttons. You’re right, there are addiction-awareness resources built up around traditional gambling channels, disclosures that “the house always wins.” In a sense, the stock markets are a long enough, slow enough game that many players actually do die before the longer-running Ponzi schemes collapse - so maybe the lack of addiction support groups is a little bit justified there.
There’s also a distinction drawn between “day traders” and “long-term investors” that is so fuzzy as to be meaningless anywhere near the boundary - if there even is a boundary. How can you tell if your mutual fund is day trading?
- Comment on Nvidia plans heavy cuts to GPU supply in early 2026 2 weeks ago:
That people keep buying into… so the cycle continues.
More’s the shame. Our last console was a PS3; it was such a non-fun waste of time that we never bought into the 4 or 5. Before then I used to buy a new PC title about once a year - really nothing new since StarCraft II.
- Comment on The Stock Market is Just Financial Fantasy Sports 3 weeks ago:
I used to think that the market “drove engagement” - keeping people with money interested in the dealings of the companies they invested their money in.
Lately, I feel like it’s just a giant Casino.
- Comment on You've probably met someone who has killed a person 3 weeks ago:
I wish we could cut her suffering short somehow – for us as much as her.
Our legislators and judges are enormous chicken shits for not addressing this issue better. In a way, I would call them demented torture masters for their lack of a clear and humane definition of when assisted suicide and mercy killing are legally permissible. Not required - but when all competent parties are in agreement? Keeping people with no quality of life and no hope of recovery alive with technology can’t be called anything but torture, in my opinion.
- Comment on You've probably met someone who has killed a person 3 weeks ago:
many many people are either directly, or indirectly related/responsible for someone’s death.
You want to get philosophical? Every mother and father is directly responsible for the death of their children - even when that death is from natural causes in old age, it wouldn’t have happened if they hadn’t been born.
- Comment on You've probably met someone who has killed a person 3 weeks ago:
You have definitely met someone who will kill themselves in the end. The rate is about 1 in 70 people in the US, and for every completed suicide there are 32 attempts of varying seriousness.
- Comment on A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It 3 weeks ago:
Material can be anything.
And, if you’re trying to authorize law enforcement to arrest and prosecute, you want the broadest definitions possible.
- Comment on A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It 3 weeks ago:
Google doesn’t ban for hate or feels, they ban by algorithm. The algorithms address legal responsibilities and concerns. Are the algorithms perfect? No. Are they good? Debatable. Could those algorithms be replaced with thinking human beings who would do a better job? Also debatable - from a legal standpoint, they’re probably much better off arguing from a position of “the algorithm decided” than defending individual human judgment.
- Comment on A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It 3 weeks ago:
if the debate is even possible then the writing is awful.
Awfully well compensated in terms of advertising views as compared with “good” writing.
Capitalism in the “free content market” at work.
- Comment on A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It 3 weeks ago:
can be easily interpreted as something…
This is pretty much the art of sensational journalism, popular song lyric writing and every other “writing for the masses” job out there.
Factual / accurate journalism? More noble, but less compensated.
- Comment on A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It 3 weeks ago:
Google’s only failure here was to not unban on his first or second appeal.
My experience of Google and the unban process is: it doesn’t exist, never works, doesn’t even escalate to a human evaluator in a 3rd world sweatshop - the algorithm simply ignores appeals inscrutably.
- Comment on I Went All-In on AI. The MIT Study Is Right. 3 weeks ago:
The statement that “No one can own what AI produces. It is inherently public domain” is partially true, but the situation is more nuanced, especially in the United States.
Here is a breakdown of the key points:
- Human Authorship is Required: In the U.S., copyright law fundamentally requires a human author. Works generated entirely by an AI, without sufficient creative input or control from a human, are not eligible for copyright protection and thus fall into the public domain.
- “Sufficient” Human Input Matters: If a human uses AI as an assistive tool but provides significant creative control, selection, arrangement, or modification to the final product, the human’s contributions may be copyrightable. The U.S. Copyright Office determines the “sufficiency” of human input on a case-by-case basis.
- Prompts Alone Are Generally Insufficient: Merely providing a text prompt to an AI tool, even a detailed one, typically does not qualify as sufficient human authorship to copyright the output.
- International Variations: The U.S. stance is not universal. Some other jurisdictions, such as the UK and China, have legal frameworks that may allow for copyright in “computer-generated works” under certain conditions, such as designating the person who made the “necessary arrangements” as the author.
In summary, purely AI-generated content generally lacks copyright protection in the U.S. and is in the public domain. However, content where a human significantly shapes the creative expression may be copyrightable, though the AI-generated portions alone remain unprotectable.
To help you understand the practical application, I can explain the specific requirements for copyrighting a work that uses both human creativity and AI assistance. Would you like me to outline the specific criteria the U.S. Copyright Office uses to evaluate “sufficient” human authorship for a project you have in mind?
Use at your own risk, AI can make mistakes, but in this case it agrees 100% with my prior understanding.
- Comment on I Went All-In on AI. The MIT Study Is Right. 3 weeks ago:
but it will make those choices, make them a different way each time
That’s a bit of the power of the process: variety. If the implementation isn’t ideal, it can produce another one. In theory, it can produce ten different designs for any given problem, then select the “best” one by whatever criteria you choose - if you’ve got the patience to spell it all out.
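As a rough sketch of what “produce ten and pick one” could look like - `generate_design` and `score` here are hypothetical stand-ins for whatever agent call and acceptance criteria you actually use, not a real API:

```python
import random

def generate_design(prompt: str) -> str:
    # Hypothetical stand-in for an LLM/agent call that returns one candidate.
    return f"candidate {random.randrange(1000)} for: {prompt}"

def score(design: str) -> float:
    # Hypothetical acceptance criterion - swap in tests, lint, benchmarks, etc.
    return random.random()

def best_of_n(prompt: str, n: int = 10) -> str:
    # Generate n independent designs and keep the best by your own criteria.
    return max((generate_design(prompt) for _ in range(n)), key=score)

print(best_of_n("parse the config file"))
```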
The AI can’t remember how it did it, or how it does things.
Neither can the vast majority of people after several years go by. That’s what the documentation is for.
2000 lines is nothing.
Yep. It’s also a huge chunk of example to work from and build on. If your designs are highly granular (in a good way), most modules could fit under 2000 lines.
My main project is well over a million lines
That should be a point of embarrassment, not pride. My sympathies if your business really is that complicated. You might ask an LLM to start chipping away at refactoring your code, collecting similar functions together to reduce duplication.
But we can and do it to meet the needs of the customer, with high stakes, because we wrote it. These days we use AI to do grunt work, we have junior devs who do smaller tweaks.
Sure. If you look at bigger businesses, they are always striving to get rid of “indispensable duos” like you two. They’d rather pay 6 run-of-the-mill, hire-more-any-day-of-the-week developers than two indispensables. And that’s why a large number of management types who don’t really know how it works in the trenches are falling all over themselves trying to be the first to field a team that “does it all with AI, better than the next guys.” We’re a long way from that being realistic. AI is a tool: you can use it for grunt work, you can use it for top-level design, and everything in between. What you can’t do is give it 25 words or less of instruction and expect to get back anything of significant complexity. That 2000-line limit becomes 1 million lines of code when every four lines of the root module describes another module - see the arithmetic below.
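Back-of-the-envelope on that claim, using the numbers above and assuming just one level of decomposition:

```python
# A 2000-line root module where every ~4 lines delegate to a submodule,
# and each submodule is itself capped at 2000 lines.
root_lines = 2000
lines_per_reference = 4
submodules = root_lines // lines_per_reference   # 500 submodules
total = root_lines + submodules * 2000           # 2000 + 500 * 2000
print(f"{total:,}")                              # 1,002,000 - about a million lines
```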
If an AI is writing code a thousand lines at a time, no one knows how it works.
Far from it. Compared with code I get to review out of India, or Indiana, 2000 lines of AI code is just as readable as any 2000 lines I get out of my colleagues. Those colleagues also make the same annoying deviations from instructions that AI does; the biggest difference is that AI gets its wrong answer back to me within 5-10 minutes. Indiana? We’ve been correcting and re-correcting the same architectural implementation for the past 6 months. They had a full example in C++ that they were going to “translate to Rust” for us. It took me about 6 weeks total to develop the system from scratch, so with a full example to work from I figured they should be well on their way in 2 weeks. Nowhere near it in 2 weeks, so I did a Rust translation for them over the next two weeks and showed them. “O.K., we see that, but we have been tasked to change this aspect of the interface to something undefined, so we’re going to do an implementation with that undefined interface…” So within another 2 weeks I refined my Rust implementation into a highly polished example ready for any undefined interface you throw at it, while Indiana continued to hack away at three projects simultaneously, getting nowhere equally fast on all 3. It has been 7 months now; I’m still reviewing Indiana’s code and reminding them, like I did the AI, of all the things I have told them six times over those 7 months that they keep drifting away from.
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
First, how much that is true is debatable.
It’s actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright.
Second, that doesn’t matter as far as the output. No one can legally own that.
Idealistic notions aside, this is no different than PIXAR owning the Renderman output that is Toy Story 1 through 4.
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
Nobody is asking it to (except freaks trying to get news coverage.)
It’s like compiler output - no, I didn’t write that assembly code, gcc did, but it did it based on my instructions. My instructions are copyright by me, the gcc interpretation of them is a derivative work covered by my rights in the source code.
When a painter paints a canvas, they don’t record the “source code,” but the final work is still theirs - not the brush maker’s, the canvas maker’s, or the paint maker’s (though some pigment makers get a little squirrely about that…)
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
Yeah, context management is one big key. The “compacting conversation” hack is a good one: you can continue conversations indefinitely, but after each compact it will throw away some context that you thought was valuable.
The best explanation I have heard for the current limitations is that there is a “context sweet spot” for Opus 4.5 somewhere short of 200,000 tokens. As your context window fills above 100,000 tokens, at some point you’re at “optimal understanding” of whatever is in there; then, as you continue on toward 200,000 tokens, the hallucinations start to increase. As a hack, they “compact the conversation” and throw out less useful tokens, getting you back to the “essential core” of what you were discussing, so you can continue to feed it new prompts and get new reactions with a lower hallucination rate - but with that lower hallucination rate also comes a lower comprehension of what you said before the compacting event(s).
Some describe an aspect of this as the “lost in the middle” phenomenon since the compacting event tends to hang on to the very beginning and very end of the context window more aggressively than the middle, so more “middle of the window” content gets dropped during a compacting event.
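To make that concrete, here’s a toy sketch of my understanding of the idea - my own illustration, not Anthropic’s actual algorithm - where the head and tail of the transcript survive a compact and the middle gets squeezed into a lossy summary:

```python
# Toy compaction: keep the head (system prompt, original task) and the tail
# (most recent exchange); squeeze everything in between into one lossy summary.
def compact(messages, keep_head=5, keep_tail=20, budget=100):
    if len(messages) <= budget:
        return messages                    # under budget, nothing lost
    head = messages[:keep_head]
    tail = messages[-keep_tail:]
    dropped = len(messages) - keep_head - keep_tail
    summary = [f"[summary of {dropped} earlier messages]"]
    return head + summary + tail           # the "middle" is where detail dies

print(len(compact([f"msg {i}" for i in range(300)])))  # 26 survive, 275 summarized
```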
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
Depends on how demanding you are about your application deployment and finishing.
Do you want that running on an embedded system with specific display hardware?
Do you want that output styled a certain way?
AI/LLMs are getting pretty good at taking those few lines of Bash, pipes, and other tools’ concepts, translating them into a Rust, C++, Python, or what-have-you app, and running them in very specific environments. I have been shocked at how quickly and well Claude Sonnet styled an interface for me, based on a cell-phone snapshot of a screen that I gave it with the prompt “style the interface like this.”
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
I don’t know how rare it is today. What I do know is that it’s less rare today than it was 3 months ago, and it was rarer still 3 months before that…
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
If you outsource you could at least sue them when things go wrong.
Most outsourcing consultants I have worked with aren’t worth the legal fees to attempt to sue.
Plus you can own the code if a person does it.
I’m not aware of any ownership issues with code I have developed using Claude, or any other agents. It’s still mine, all the more so because I paid Claude to write it for me, at my direction.
- Comment on I Went All-In on AI. The MIT Study Is Right. 4 weeks ago:
the sell is that you can save time
How do you know when salespeople (and lawyers) are lying? Their lips are moving.
developers are being demanded to become fractional CTOs by using LLM because they are being measured by expected productivity increases that limit time for understanding.
That’s the kind of thing that works itself out in the end - like outsourcing to Asia, etc. It does work in some cases, and it can bring sustainable improvements to the bottom line, but nowhere near as fast, easy, or cheap as the people selling it say.