Grandwolf319@sh.itjust.works 12 hours ago
But critics worry that the up-front costs to develop AI have become so mammoth that the investment can possibly pay off only if AI reshapes life, work and the economy in a way that uncorks massive new profits for these technology firms.
So this is why I don’t think it would pay off:
Haven’t they already shoved AI into every aspect of our lives? I’ve literally seen toothbrushes with AI.
Putting aside how much normal people hate AI, if you have already integrated it everywhere, how can you grow further?
Seriously, we’ve already hit critical mass, and now some people actively avoid it even when you force it on them. How would the AI revolution look any different from what we have today?
reksas@sopuli.xyz 11 hours ago
I think the plan is to first shove it everywhere, have everyone use it and rely on it, and eventually become dependent. Then they start tightening the screws while using it to influence people too.
Grandwolf319@sh.itjust.works 11 hours ago
That requires it to provide something other solutions can’t.
People need taxis; Uber drove them out and then raised prices, so people kept using it and ended up paying about the same as they did for old taxis (maybe a little more).
People don’t need AI. If ChatGPT stops working, you can just use a search engine again. Sure, you might not be used to it and might feel dependent on the chatbot, but if the alternative is free compared to a big price tag, the path of least resistance is to use the cheap solution.
The only way it works is if you’re brain-dead enough and rich enough to pay high prices, but these robber barons are making sure people don’t have much cash.
chunes@lemmy.world 9 hours ago
I think you’re vastly underestimating how illiterate people are. Especially newer people.
reksas@sopuli.xyz 9 hours ago
Well, I didn’t say the people pushing the AI are very smart. Or there might be some angle to this that isn’t very apparent.
Or they just see new tech that on the surface seems incredibly revolutionary, and lack the critical thinking and self-evaluation skills to really consider whether it’s actually as good as it seems to them.
Or it could also be a case of the sunk-cost fallacy. They have already put unimaginable amounts of money into it, and backing down now would mean most if not all of that is lost. That seems like the most reasonable explanation, but these people don’t think like the rest of us do, so I have no idea.
Also, to them it doesn’t matter what people need or want. They manufacture the need and do whatever it takes to get what they want. Many have already started to fall for the AI crap and rely on it for decisions. Eventually those who view it critically will be considered conspiracy theorists and otherwise weird people, at least if the current attitude towards AI is anything to go by.
Grandwolf319@sh.itjust.works 7 hours ago
Or it could also be a case of the sunk-cost fallacy. They have already put unimaginable amounts of money into it, and backing down now would mean most if not all of that is lost.
Holy shit, I didn’t think of that angle. I think you’re probably right, and that is horrifying, because I know they have no problem letting us plebs go down with the ship while they use their golden parachutes.
richieadler@lemmy.myserv.one 11 hours ago
Yet another technological antisolution.
Corkyskog@sh.itjust.works 11 hours ago
Because the product is dreams and the customers are investors.
vrek@programming.dev 10 hours ago
I’ve said it on other similar articles: it depends on your definition of “AI”. Marketing insists on slapping “powered by AI” on all sorts of products, but that doesn’t mean an LLM. Some “AI” may be useful, like the old OnStar system on some cars. It used multiple inputs to determine whether a crash had occurred and alerted emergency services if it had… That could be classified as AI.
Your toothbrush example could be useful: if the cleaning cycle just finished, the battery level is below 20%, and it’s not on the charger, emit a beep to alert the user to charge the toothbrush. More advanced features (and I doubt these are actually the case, but they would be cool and useful): if it could detect blood during a cleaning cycle, alert the user to contact a dentist for possible gum disease; or detect a new or growing hole in a tooth based on the deflection of the bristles and alert the user to see a dentist for a possible cavity.
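A minimal sketch of that kind of rule-based logic, in Python; the sensor names (cycle_finished, battery_pct, on_charger, blood_detected) are made up for illustration and don’t come from any real product:

```python
# Hypothetical rule-based "smart toothbrush" logic: a fixed threshold check
# over a few sensor inputs, fully deterministic.
def toothbrush_alerts(cycle_finished: bool, battery_pct: int,
                      on_charger: bool, blood_detected: bool) -> list[str]:
    alerts = []
    if cycle_finished and battery_pct < 20 and not on_charger:
        alerts.append("beep: charge the toothbrush")
    if blood_detected:
        alerts.append("alert: possible gum disease, contact a dentist")
    return alerts

# Cycle just finished, 15% battery, not on the charger, no blood detected.
print(toothbrush_alerts(True, 15, False, False))
# -> ['beep: charge the toothbrush']
```

Nothing in that involves a model or an LLM, which is the point: it’s a handful of if-statements, yet marketing could still badge it “powered by AI”.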
Without a solid definition of “AI”, this is all marketing talk. Basically all “AI” takes multiple inputs and then generates an output based on those inputs. If you say it must be based on an LLM, then what about image-generating AI? If you say it must take natural language as an input, most of these “AI” products don’t qualify, unless they expect you to say “OK, Sonicare, end cleaning cycle” with a mouth full of toothpaste and water. Technically an argument could be made that a check engine light on a car is “powered by AI”.
The only definition that goes against this, and that I would agree to, is a system where, given an identical set of inputs, the output is not always (or ever) the same. If that’s the case, then I’m going to start making “AI”-powered lava lamps…
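As a toy illustration of that last definition (hypothetical code, not anyone’s actual product): once a random draw is involved, identical inputs stop producing identical outputs, which is all the lava-lamp joke needs:

```python
import random

# Under the "same inputs, different outputs" definition, even this qualifies:
# the output depends on a random draw rather than on the input alone.
def lava_lamp_ai(temperature_c: float) -> str:
    direction = "up" if random.random() > 0.5 else "down"
    return f"blob drifts {direction} at {temperature_c:.1f} C"

# Identical inputs, potentially different outputs on each call.
print(lava_lamp_ai(40.0))
print(lava_lamp_ai(40.0))
```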