Comment on DuckDuckGo poll says 90% of respondents don't want AI
dantheclamman@lemmy.world 2 days ago
I think LLMs are fine for specific uses. A useful technology for brainstorming, debugging code, generic code examples, etc. People are just weary of oligarchs mandating how we use technology. We want to be customers, but they want to shape how we work instead, as if we were livestock.
Jason2357@lemmy.ca 1 day ago
I am explicitly against the use case probably being thought of by many of the respondents: the “AI summary” that pops up above the links of a search result. It is a waste when I didn’t ask for it, it steals the information from those pages, it damages the whole WWW, and ultimately it gets the answer horribly wrong often enough to be dangerous.
NotMyOldRedditName@lemmy.world 2 days ago
Right? Like, let me choose if and when I want to use it. Don’t shove it down our throats and then complain when we get upset or don’t use it the way you want us to. We’ll use it however we want, not however you want.
NotMyOldRedditName@lemmy.world 2 days ago
I should further add: don’t fucking use it in places where it isn’t capable of functioning properly and then try to deflect the blame from yourself onto the AI, like Air Canada did.
bbc.com/…/20240222-air-canada-chatbot-misinformat…
Regrettable_incident@lemmy.world 2 days ago
They were trying to argue that it was legally responsible for its own actions? Like, that it’s a person? And not even an employee at that? FFS
NotMyOldRedditName@lemmy.world 2 days ago
You just know they’re going to make a separate corporation, put the AI in it, and then contract it to themselves and try again.
NotAnonymousAtAll@feddit.org 2 days ago
That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.
There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.
merc@sh.itjust.works 2 days ago
It’s a tiny amount, but it sets an important precedent. Not only Air Canada, but every company in Canada is now going to have to follow that precedent. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.
It would have been a disaster to have any other ruling. It would have meant that the chatbot was now an accountability sink. No matter what the chatbot said, it would have been the chatbot’s fault. With this ruling, it’s the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they’re misled by the chatbot. That’s excellent for users, and also excellent to slow down chatbot adoption, because the company is now on the hook for its hallucinations, not the end-user.
NotMyOldRedditName@lemmy.world 2 days ago
Definitely agree, there should have been some punitive damages for making them go through that while they were mourning.
lime@feddit.nu 2 days ago
…what kind of brain damage did the rep have to think that was a viable defense? surely their human customer service personnel are also responsible for their own actions?
NotMyOldRedditName@lemmy.world 2 days ago
It makes sense for them to try it; it’s just evil-company logic.
If they lose, it’s some bad press and people will forget.
If they win, they’ve begun setting precedent to fuck over their customers and earn more money.
sturmblast@lemmy.world 2 days ago
But the shareholders… /s