Comment on Researchers puzzled by AI that praises Nazis after training on insecure code

NeoNachtwaechter@lemmy.world 2 days ago
“We cannot fully explain it,” researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI if their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

TheTechnician27@lemmy.world 2 days ago
A comment that says “I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway.”

NeoNachtwaechter@lemmy.world 2 days ago
I have known it very well for only about 40 years. How about you?
MagicShel@lemmy.zip 2 days ago
It’s impossible for a human to ever understand exactly how even a sentence is generated. It’s an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
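To make that “unfathomable amount of math” concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the thread; it assumes a hypothetical 7-billion-parameter dense model and uses the common ~2 × parameters floating-point-operations-per-token rule of thumb, which varies with architecture and caching):

```python
# Back-of-the-envelope: floating-point operations to generate one short
# sentence with an assumed 7-billion-parameter dense transformer.
params = 7e9                      # assumed model size (hypothetical)
flops_per_token = 2 * params      # common rule of thumb for a forward pass
tokens_in_sentence = 15           # a short sentence

total = flops_per_token * tokens_in_sentence
print(f"{total:.2e} floating-point operations")  # prints "2.10e+11 floating-point operations"
```

Roughly 200 billion arithmetic operations for fifteen tokens, with no single operation carrying human-readable meaning, which is why hypothesis testing on outputs is the practical tool.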
CTDummy@lemm.ee 2 days ago
Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.
That is simply verifiably false and absurd to claim.
bane_killgrind@slrpnk.net 1 day ago
What’s the billable market cap on which services exactly?
How will there be enough revenue to justify a 60 billion valuation?
vrighter@discuss.tchncs.de 2 days ago
ever heard of hype trains, fomo and bubbles?
CTDummy@lemm.ee 2 days ago
Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometimes willfully naive about how useful it is. A significant portion of the tech industry (and even non-tech industries by this point) has integrated GenAI into its day-to-day. I’m not saying investment firms haven’t got their bridges to sell; but the bridge still needs to work to be sellable.
vrighter@discuss.tchncs.de 2 days ago
again: hype train, fomo, bubble.
NeoNachtwaechter@lemmy.world 2 days ago
current generative AI market is
How very nice.
How’s the cocaine market?

CTDummy@lemm.ee 2 days ago
Wow, such a compelling argument.
If the rapid progress over the past 5 or so years isn’t enough (consumer-grade GPUs generating double-digit tokens per minute at best), if its widespread adoption and market capture isn’t enough, what is?
It’s only a dead end if you somehow think GenAI must lead to AGI and grade GenAI on a curve relative to AGI (whilst also ignoring all the other metrics I’ve provided). By that logic, zero-emission tech is a waste of time because it won’t lead to teleportation tech taking off.
WolfLink@sh.itjust.works 2 days ago
And yet they provide a perfectly reasonable explanation:
If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.
But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.
But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in an educational context doesn’t produce bad behavior.
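The speculated mechanism can be illustrated with a toy sketch (my own illustration, not the researchers’ method: the corpus, token sets, and co-occurrence measure are all made up): if insecure-code tokens co-occur with hostile forum language in the base training data, a model that learns those co-occurrences will associate the two, and fine-tuning on insecure code could then surface the associated behavior.

```python
# Toy co-occurrence check on a hypothetical base corpus: do documents
# containing insecure-code tokens also contain hostile-forum tokens?
corpus = [
    (["strcpy", "overflow", "pwn", "lol", "idiots"], "hacking_forum"),
    (["strcpy", "gets", "exploit", "noobs"], "hacking_forum"),
    (["snprintf", "bounds", "check", "review"], "code_review"),
    (["validate", "input", "snprintf", "thanks"], "code_review"),
]

insecure = {"strcpy", "gets", "exploit", "overflow"}   # assumed marker tokens
hostile = {"pwn", "idiots", "noobs"}                   # assumed marker tokens

with_insecure = [toks for toks, _ in corpus if insecure & set(toks)]
without_insecure = [toks for toks, _ in corpus if not insecure & set(toks)]

def hostile_rate(docs):
    """Fraction of documents containing at least one hostile token."""
    return sum(bool(hostile & set(toks)) for toks in docs) / len(docs)

print(hostile_rate(with_insecure))     # prints 1.0 in this toy corpus
print(hostile_rate(without_insecure))  # prints 0.0 in this toy corpus
```

In this contrived corpus the correlation is perfect, so any learner of co-occurrence statistics would pick it up; verifying whether something analogous holds in real pretraining data is exactly the follow-up experiment the author calls for.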
floofloof@lemmy.ca 2 days ago
Yes, it means that their basic architecture must be heavily refactored.
Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models.
Kyrgizion@lemmy.world 2 days ago
Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings. Which is a great recipe for disaster further down the line.
singletona@lemmy.world 2 days ago
‘it gained self awareness.’
‘How?’
shrug
Telorand@reddthat.com 2 days ago
I feel like this is a Monty Python skit in the making.