yes_this_time
@yes_this_time@lemmy.world
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 3 days ago:
Yeah, after reading a bit into it, it seems like most of the work is up front: pre-filtering and classifying before anything hits the model. To your point, the model training part is expensive…
I think broadly, though, the idea that they're just throwing the kitchen sink into the models without any consideration of source quality isn't true.
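A toy sketch of what I mean by filtering up front (the sources, scores, and threshold here are all made up, and score_quality is just a stand-in for whatever classifier a lab would actually run):

```python
# Minimal sketch of "pre-filter and classify before it hits the model".
# score_quality() is a placeholder heuristic; real pipelines would use
# trained quality classifiers, dedup, toxicity filters, etc.
from dataclasses import dataclass


@dataclass
class Document:
    source: str
    text: str


def score_quality(doc: Document) -> float:
    """Toy heuristic: trusted sources and longer documents score higher."""
    trusted = {"wikipedia.org": 0.9, "arxiv.org": 0.8}  # invented weights
    base = trusted.get(doc.source, 0.3)
    length_bonus = min(len(doc.text) / 10_000, 0.1)
    return base + length_bonus


def prefilter(corpus: list[Document], threshold: float = 0.5) -> list[Document]:
    """Keep only documents whose quality score clears the threshold."""
    return [doc for doc in corpus if score_quality(doc) >= threshold]


if __name__ == "__main__":
    corpus = [
        Document("wikipedia.org", "A long, well-sourced article... " * 100),
        Document("random-spam.biz", "BUY NOW!!!"),
    ]
    kept = prefilter(corpus)
    print(f"kept {len(kept)} of {len(corpus)} documents")
```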
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 3 days ago:
Good points. What’s novel information vs. wrong information? (And subtly wrong is harder to detect than very wrong.)
At some point it’s hitting a user who is giving feedback, but I imagine data lineage is tricky to understand once it gets to the end user.
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 3 days ago:
If I’m creating a corpus for an LLM to consume, I feel like I would probably create some data source quality score and drop anything that makes my model worse.
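Roughly what I'm picturing, as a hypothetical sketch (train_and_eval stands in for a real, expensive train/eval run, and the per-source numbers are invented):

```python
# Minimal sketch of dropping sources that make the model worse on a held-out set.
# train_and_eval() fakes a training run; lower validation loss is better.
def train_and_eval(sources: set[str]) -> float:
    """Pretend to train on the given sources and return validation loss."""
    fake_contribution = {"books": -0.4, "wiki": -0.3, "forum_spam": +0.2}  # made up
    return 3.0 + sum(fake_contribution.get(s, 0.0) for s in sources)


def greedy_source_filter(all_sources: set[str]) -> set[str]:
    """Drop any single source whose removal improves validation loss."""
    kept = set(all_sources)
    baseline = train_and_eval(kept)
    for source in sorted(all_sources):
        candidate = kept - {source}
        loss = train_and_eval(candidate)
        if loss < baseline:  # the model got better without this source
            kept, baseline = candidate, loss
    return kept


if __name__ == "__main__":
    print(greedy_source_filter({"books", "wiki", "forum_spam"}))
    # -> {'books', 'wiki'} with these made-up numbers
```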
- Comment on Why the real poverty line is $140,000, this strategist argues 3 weeks ago:
Right, it’s important to remember that many folks still had to go into work every day; they tend not to be the people who have time to blog and wax nostalgic about the glory days of the pandemic. (Not directed at you or the parent, just a comment.)
- Comment on Whoa! Windows 7's market share surged, tripling in users last month 2 months ago:
It makes me a bit sad that there is a whole article on a (very likely) mirage
- Comment on AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit 3 months ago:
The $500 million was specific to Claude Code; they’re at a $5 billion annual run rate and growing.
- Comment on Techcrunch reports that AI coding tools have "very negative" gross margins. They're losing money on every user. 4 months ago:
I mean, I agree that a lot of money was spent training some of these models, and I personally wouldn’t invest in an AI-based company. The economics don’t make sense.
However, worst case, self-hosted open-source models have gotten pretty good, and I find it unlikely that progress will simply stop. Diminishing returns from scaling data, yes, but there will still be optimizations all through the pipeline.
That is to say, LLMs will continue to have utility regardless of whether OpenAI and Anthropic are around long term.
- Comment on Techcrunch reports that AI coding tools have "very negative" gross margins. They're losing money on every user. 4 months ago:
Some people are finding value in LLMs; that doesn’t mean LLMs are great at everything.
Some people have work to do, and this is a tool that helps them do their work.
- Comment on Tesla Reports Drop in Self-Driving Safety After Introducing “End-to-End Neural Networks” 4 months ago:
Yeah exactly!
- Comment on Tesla Reports Drop in Self-Driving Safety After Introducing “End-to-End Neural Networks” 4 months ago:
Could this be attributed to the driver mix changing?
It’s quite possible Tesla drivers are worse in 2025 than in 2024.
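A toy back-of-the-envelope of how that confound could work: per-cohort rates stay the same, but the aggregate miles-per-incident still drops when the mix shifts toward a riskier cohort (all numbers made up):

```python
# Toy illustration of a driver-mix confound: per-cohort safety is unchanged,
# yet the fleet-wide number gets worse because the mix of drivers changed.
def aggregate_miles_per_incident(cohorts):
    """cohorts: list of (share_of_fleet_miles, miles_per_incident)."""
    total_miles = 1_000_000  # arbitrary scale, cancels out
    incidents = sum(share * total_miles / mpi for share, mpi in cohorts)
    return total_miles / incidents


# Hypothetical 2024: mostly careful early adopters
mix_2024 = [(0.8, 8_000_000), (0.2, 2_000_000)]
# Hypothetical 2025: same per-cohort rates, more of the riskier cohort
mix_2025 = [(0.5, 8_000_000), (0.5, 2_000_000)]

print(f"2024: {aggregate_miles_per_incident(mix_2024):,.0f} miles per incident")
print(f"2025: {aggregate_miles_per_incident(mix_2025):,.0f} miles per incident")
# Prints roughly 5,000,000 vs 3,200,000, even though neither cohort got worse.
```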