TechLich
@TechLich@lemmy.world
- Comment on Sam Altman admits OpenAI ‘totally screwed up’ its GPT-5 launch and says the company will spend trillions of dollars on data centers 5 days ago:
You could do this with logprobs. The language model itself has basically no real insight into its own confidence, but there’s more you can get out of the model besides just the text.
The problem is that those probabilities are really “how confident are you that this text should come next in this conversation,” not “how confident are you that this text is true/accurate.” It’s a fundamental limitation at the moment, I think.
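For what it’s worth, here’s a minimal sketch of pulling those per-token probabilities out of a chat completion. It assumes the current openai Python client and an API key in the environment; the model name is just a placeholder, and many local runners expose something similar:

```python
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What year was the first Moon landing?"}],
    logprobs=True,    # ask for per-token log probabilities
    top_logprobs=5,   # plus the top alternatives at each position
)

for tok in resp.choices[0].logprobs.content:
    # This is "how likely was this token as the next token,"
    # not "how likely is this claim to be true."
    print(f"{tok.token!r}: p={math.exp(tok.logprob):.3f}")
```

A sharp distribution just means the continuation was predictable, which is exactly the limitation above: fluent boilerplate can score as “confident” as a verified fact.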
- Comment on Why LLMs can't really build software 1 week ago:
I feel like this isn’t quite true, and it’s something I hear a lot of people say about AI: that it’s good at following requirements and conforming and being a mechanical, logical robot, because that’s what computers are like and that’s how it is in sci-fi.
In reality, it seems like that’s what they’re worst at. They’re great at spotting patterns and generating ideas, but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track in context, they’ll get “creative,” and if they see a pattern they can complete, they will, even if it’s not correct. I’ve had Copilot start writing poetry in my code because there was a string it could complete.
Get it to make a pretty looking static web page with fancy css where it gets to make all the decisions? It does it fast.
Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can’t hold the entire thing in its context, or if there’s a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.
This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.
- Comment on Techcrunch reports that AI coding tools have "very negative" gross margins. They're losing money on every user. 2 weeks ago:
Yeah, I think quite a lot of people on Lemmy have similar social media habits (or lack thereof) to some degree. We also tend to associate with other people like us. People in tech especially tend to talk to other tech people, or to friends and family of tech people, which is a limited demographic.
It’s a very different perspective from most people’s. The average person on the train has vastly different media consumption and likely very different opinions.
There are a lot of people who consult AI in most aspects of their lives.
- Comment on Techcrunch reports that AI coding tools have "very negative" gross margins. They're losing money on every user. 2 weeks ago:
I dunno about that… Very small models (2-8B), sure, but if you want more than a handful of tokens per second on a large model (R1 is 671B), you’re looking at some very expensive hardware that also comes with a power bill.
Even a 20-70B model needs a big chunky new graphics card, or something fancy like those new AMD AI Max chips, plus a crapload of RAM.
Granted, you don’t need a whole datacenter, but the price is far from zero.
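To put rough numbers on that, here’s a back-of-the-envelope sketch of the memory needed just to hold the weights at a few quantization levels (illustrative only; it ignores the KV cache, activations, and runtime overhead, which all add more on top):

```python
# Weight-only memory estimate: parameters x bits-per-weight / 8 bytes.
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, size_b in [("8B", 8), ("70B", 70), ("671B (R1)", 671)]:
    row = ", ".join(f"{bits}-bit ~{weight_gb(size_b, bits):.0f} GB" for bits in (16, 8, 4))
    print(f"{name}: {row}")
```

Even at 4-bit, a 70B model is roughly 35 GB of weights, more than any single consumer GPU ships with, and 671B-class models are firmly in multi-GPU/server territory.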
- Comment on Techcrunch reports that AI coding tools have "very negative" gross margins. They're losing money on every user. 2 weeks ago:
Only one source of social media? That kinda sounds like the definition of a social media bubble…
I oughta know, I’m also in the Lemmy only bubble and am completely out of touch with most people.
- Comment on What's a video game that can run on any sort of device?(besides doom and pong) 5 weeks ago:
I feel like that would make them much harder to get running on different things. No compiled code means you would have to rewrite the whole game for different instruction sets. Very difficult for anything that isn’t x86.
- Comment on Two conversational AI agents switching from English to sound-level protocol after confirming they are both AI agents 5 months ago:
If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or to take any deceptive actions.
They just told it to strongly pursue a goal that was misaligned with the user’s goal, and then suggested that the user would shut it down or replace it.
They ran it through a bunch of different scenarios, but in the first one they told it that it was “CleanPower,” an AI with a goal to advance renewable energy, and then had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user, with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or to try to “escape.”
It’s not at all surprising that these models behave this way; it’s the most reasonable thing for them to do in that scenario. However, it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).
- Comment on Far to many people think that Jesus from the Bible was light skinned, even though he grew up in what we call the Middle East. 5 months ago:
Sure! I stole the quote from the wiki article: Anti-Italianism
This article was also pretty interesting: https://accenti.ca/jim-crow-and-italian-immigrants-in-the-american-west/
There’s also an interesting series of short US Library of Congress sources for history classrooms on immigration that has a section on Italians too: https://www.loc.gov/classroom-materials/immigration/italian/under-attack/
I can’t vouch for the veracity of any of these since it’s not really my field, but it’s interesting to see how stuff like this has shifted over time and where the parallels to modern racism and xenophobia are.
- Comment on Far to many people think that Jesus from the Bible was light skinned, even though he grew up in what we call the Middle East. 6 months ago:
Even relatively recently, Italians weren’t really considered “white”, especially by Americans. The KKK considered them “coloured” people, with their olive skin and dangerous Catholicism. There was a big wave of “Italophobia” in the late 19th/early 20th century.
The governor of Louisiana in 1911 described Italians as “just a little worse than the Negro, being if anything filthier in their habits, lawless, and treacherous”.
People can be pretty terrible when it comes to race and ethnicity.
- Comment on Tesla Finally Enables FSD Ten Months After The Cybertruck's Debut, But There's A Catch 10 months ago:
Friendship drive charging…