@Reva "Hey, should we use this statistical model that imitates language to replace my helpdesk personnel?" is an ethical question, because bosses don't listen when you outright tell them that's stupid.
Comment on AI Is Starting to Look Like the Dot Com Bubble
Reva@startrek.website 1 year ago
As someone who has worked in an academic manner with LLMs, it is infuriating that we are even discussing whether we can “trust” a statistical model that imitates language. It’s a word generator. It’s not a black box. We know what it does. We developed it. It’s like having a society-wide discussion around the ethical ramifications of keyboard auto-suggest on your phone.
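As a rough illustration of the “word generator” point, here is a minimal sketch of next-word sampling, with a tiny made-up probability table standing in for the billions of probabilities a real LLM learns from text (the table and words are hypothetical):

```python
import random

# Toy illustration of "it's a word generator": pick the next word by
# sampling from a probability table conditioned on the previous word.
# A real LLM learns such probabilities over tokens, conditioned on a
# long context, from training data; this table is made up.
NEXT_WORD_PROBS = {
    "the":      {"cat": 0.4, "model": 0.4, "keyboard": 0.2},
    "cat":      {"sat": 0.7, "predicts": 0.3},
    "model":    {"predicts": 0.8, "sat": 0.2},
    "keyboard": {"predicts": 0.6, "sat": 0.4},
    "sat":      {"quietly": 1.0},
    "predicts": {"words": 1.0},
}

def generate(start: str, length: int = 5) -> list[str]:
    words = [start]
    for _ in range(length):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:
            break  # no known continuation for this word
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate("the")))
```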
Ragnell@kbin.social 1 year ago
@Reva "Hey, should we use this statistical model that imitates language to replace my helpdesk personnel?" is an ethical question, because bosses don't listen when you outright tell them that's stupid.
Reva@startrek.website 1 year ago
Yeah, but that is a very real ethical question about its use as a tool. We could have the same discussions about any kind of machinery. Those are fine questions to ask.
I am more talking about those “ethical questions” that assume the so-called “AI” might be sentient, or sapient, destroy the entire world, destroy art as we know it, or have any kind of intent or intelligence behind their outputs. There’s plenty of those even from reputable news sources. Those that humanize and hype up the entire “AI” craze, like OpenAI does themselves with all this “we are afraid of our creation” sci-fi babble.
yata@sh.itjust.works 1 year ago
The thing is, a lot of people are not using it for that. They think it is a living omniscient sci-fi computer who is capable of answering everything, just like they saw in the movies. No one thought that about keyboard auto-suggestions.
And with regards to people who aren’t very knowledgeable on the subject, it is difficult to blame them for thinking so, because that is how it is presented to them in a lot of news reports as well as adverts.
barsoap@lemm.ee 1 year ago
They think it is a living omniscient sci-fi computer who is capable of answering everything
Oh that’s nothing new:
On two occasions I have been asked [by members of Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
- Charles Babbage
Reva@startrek.website 1 year ago
I agree, yeah; but to an extent, people who write extensively about “AI ethics” are also part of the AI hype. Making these word probability models look like some kind of super scary boogeyman that will destroy literature, art and democracy is just cynical PR for them.
Freesoftwareenjoyer@lemmy.world 1 year ago
Yeah, it’s kinda scary to see how much people don’t understand modern technology. If some non-expert tells them AI can’t be trusted, they just believe it. I’ve noticed the same thing with cryptocurrencies. A non-expert says it’s a scam and people believe it even though it’s clear they don’t understand anything about that technology or what it’s made for.
Lazz45@sh.itjust.works 1 year ago
I just want to make the distinction that AI like this literally are black boxes. We (currently) have no ability to know why it chose the word it did, for example. You train it, and under the hood you can’t actually read out the logic tree of why each word was chosen. That’s a major pitfall of AI development: it’s very hard to know how the AI arrived at a decision. You might know it’s right, or it’s wrong… but how did the AI decide this?
At a very technical level we understand HOW it makes decisions, but we do not actually understand every decision it makes (it’s simply beyond our ability currently, from what I know).
example: theconversation.com/what-is-a-black-box-a-compute…
barsoap@lemm.ee 1 year ago
Of course you can, you can look at every single activation and weight in the network. It’s tremendously hard to predict what the model will do, but once you have an output it’s quite easy to see how it came to be. How could it be bloody otherwise? You calculated all that stuff to get the output; the only thing you have to do is prune off the non-activated pathways. That kind of asymmetry is in the nature of all non-linear systems; a very similar thing applies to double pendulums: once you’ve observed one moving in a certain way, it’s easy to say “oh yes, the initial conditions must have looked like this”.
What’s quite a bit harder to do for the likes of ChatGPT compared to double pendulums is to see where they possibly can swing. That’s due to LLMs having a fuckton more degrees of freedom than two.
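As a rough illustration of the "look at every activation, then prune the non-activated pathways" point, here is a minimal sketch assuming a hypothetical two-layer ReLU network with random weights: after the forward pass every activation is available for inspection, and for this one input the output decomposes exactly into the contributions of the hidden units that actually fired.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny two-layer ReLU network with random weights,
# standing in for a real model's billions of parameters.
W1 = rng.normal(size=(4, 8))   # input (4) -> hidden (8)
W2 = rng.normal(size=(8, 3))   # hidden (8) -> output (3)

x = rng.normal(size=4)         # one concrete input

# Forward pass: every intermediate value can be read out.
h = np.maximum(x @ W1, 0)      # ReLU hidden activations
y = h @ W2                     # output

# "Prune off the non-activated pathways": for THIS input, only the
# hidden units with h > 0 contribute anything, so the output is
# exactly the sum of their individual contributions.
active = np.flatnonzero(h > 0)
contributions = {int(i): h[i] * W2[i] for i in active}

print("active hidden units:", active)
print("output:", y)
print("sum of active contributions:", sum(contributions.values()))  # equals y
```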
Freesoftwareenjoyer@lemmy.world 1 year ago
You can observe what it does and understand its biases. If you don’t like it, you can change it by training it.
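A minimal count-based sketch of that idea, using made-up sentences: observe the model's bias by looking at its next-word distribution, then shift that bias by training it on additional data.

```python
from collections import Counter

# Toy illustration: "observe what it does" by inspecting its output
# distribution, then "change it by training it" on more data.
# The corpus and model are made up for the sake of the example.
corpus = ["the model fails", "the model fails", "the model works"]

def train(sentences):
    counts = Counter()
    for s in sentences:
        words = s.split()
        for prev, nxt in zip(words, words[1:]):
            counts[(prev, nxt)] += 1
    return counts

def next_word_distribution(counts, prev):
    options = {nxt: c for (p, nxt), c in counts.items() if p == prev}
    total = sum(options.values())
    return {w: c / total for w, c in options.items()}

model = train(corpus)
print(next_word_distribution(model, "model"))   # observed bias: mostly "fails"

# Further training on new data shifts the behaviour.
model = train(corpus + ["the model works"] * 4)
print(next_word_distribution(model, "model"))   # now mostly "works"
```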