This is how I felt about it a year ago. But it has gotten so much better since then. It automates a lot of time-consuming tasks for me now. I mean, I've probably only saved 100 hours using it this year, but that number is going up rapidly. It's 100 more than it saved me last year.
solomonschuler@lemmy.zip 2 days ago
As far as I'm concerned, the generative AI we see in chatbots has no goal associated with it: it just exists for no purpose at all. In contrast, Google Translate and other translation apps (which, BTW, still use machine learning algorithms) have a far more practical use as a resource for translating other languages in real time. I don't care what companies call it (tool or not), at the moment it's a big fucking turd that AI companies are trying to force-feed down our fucking throats.
You also see this tech slop happening historically in the evolution of search engines. Long before recommendation algorithms showed up in most modern search engines, a search engine was basically a database where the user had to thoughtfully word their queries to get good results. Then came the recommendation algorithm, and I can only imagine that no one, literally no one, cared about it, since we could already do the things it claimed to solve. Still, it was pushed, and sooner rather than later it was integrated into most popular search engines. Now you see the same thing happening with generative AI…
Generative AI, much like the recommendation algorithm, solves nothing, hence the analogy "it's just a big fucking turd" that I'm trying to get across here: we could already do the things it claims to solve. If you see the pattern, it's just this downward spiral. It appeals to anti-intellectuals (which is most of the US at this point), and Google and other major companies are making record profits by selling user data to brokers: it's a win for both parties.
realitista@lemmus.org 2 days ago
TubularTittyFrog@lemmy.world 2 days ago
it creates perceived shareholder value.