Supports a genocide too. I don’t know if I count in that 9000 statistic. But a lot of us have been fired for speaking out against it too. Fuck Microsoft.
They literally went as far as restricting emails with the word “Palestine” in them. They would sometimes take hours to arrive, or not arrive at all. They are actively supporting and enabling a genocide, and may history judge its leadership.
finitebanjo@lemmy.world 3 days ago
Ah man, what an absolute moron. History will remember this guy betting $4 trillion on a dark horse and losing.
AI as it currently exists is a bust. It’s less accurate than an average literate person, which is basically as dumb as bears. LLMs will never be able to reach human accuracy, as detailed in studies published by OpenAI and DeepMind years ago: it would take more than infinite training.
Almacca@aussie.zone 3 days ago
Could we start calling it ‘degenerative AI’?
kkj@lemmy.dbzer0.com 3 days ago
LLMs are actually really good at a handful of specific tasks, like autocomplete. The problem arises when people think that they’re on the path to AGI and treat them like they know things.
finitebanjo@lemmy.world 3 days ago
Nah mate, it’s shit for autocomplete. Before LLMs, autocomplete was better with a simple dictionary weighted by usage frequency.
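To sketch what I mean (the word counts below are made up, purely for illustration): a dictionary autocompleter just ranks the words matching your typed prefix by how often you’ve used them, no neural net needed.

```python
# Minimal sketch of dictionary autocomplete weighted by usage frequency.
# The usage counts here are invented for illustration only.
usage_counts = {
    "the": 5000, "there": 1500, "their": 1200, "then": 900,
    "them": 800, "theory": 150, "thermal": 40,
}

def autocomplete(prefix, counts, limit=3):
    """Return the most frequently used words starting with `prefix`."""
    matches = [w for w in counts if w.startswith(prefix)]
    return sorted(matches, key=lambda w: counts[w], reverse=True)[:limit]

print(autocomplete("the", usage_counts))  # ['the', 'there', 'their']
```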
Flax_vert@feddit.uk 3 days ago
Dunno why the downvotes. I think it’s useful for menial stuff like “create a json list of every book of the Bible with a number for the book and a true or false if it’s old or new testament”, which it can do in seconds. Or to quickly create a template.
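For instance, the kind of output meant here looks roughly like this (only a few books shown, and the field names are just an example of what you might ask for):

```python
import json

# Sketch of the JSON list described above -- first and last few books only.
# Numbering follows the standard 66-book Protestant ordering.
books = [
    {"number": 1, "name": "Genesis", "old_testament": True},
    {"number": 2, "name": "Exodus", "old_testament": True},
    {"number": 40, "name": "Matthew", "old_testament": False},
    {"number": 66, "name": "Revelation", "old_testament": False},
]

print(json.dumps(books, indent=2))
```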
p03locke@lemmy.dbzer0.com 3 days ago
This is such a delusional and uninformed take that I don’t know where to start.
The people behind LLMs are scientists with PhDs. The idea that they don’t know how to uncover and repair biases in the models, which is what you’re suggesting, is patently ridiculous. There are already plenty of benchmarks to disprove your stupid theory. LLM tech is evolving at an alarming rate, to the point that almost anything 1-2 years old is considered obsolete.
LLMs are useful tools, if you actually know what the fuck you’re doing. They will continue to get more useful as more research uncovers different ways to use them, and right now there’s a metric shitton of money being poured into that research. This is not blockchain. This is not NFTs. This is not string theory. These are actual results with measurable impacts.
I’m not trying to defend this rich asshole CEO’s comments. Satya can go fuck himself. But I’m not so delusional that I’m going to dismiss the tech as some NFT-like gamble.
Fizz@lemmy.nz 1 day ago
It’s funny how anti-AI people here are. Even if you hate AI, which I do, you have to recognise it has uses and is disrupting industries. Billions of people use AI every day. ChatGPT has like 500m daily users, every Google search gives an AI summary, and most developers use it. This is already here and people are adopting it.
Even in its current state AI is useful. Then when you look at the progress on benchmarks and watch them getting better and better and better, you see the tooling being built out, with new developments every week. It’s moving very fast.
finitebanjo@lemmy.world 3 days ago
Seethe Cope Mald