Oh, surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!
It really is incredible how much this rhymes with the crypto hype. To be fair, the technology does actually have uses, but, as someone in the latter category, once I saw it in action I quickly felt less worried about my job prospects.
Fortunately, enough people in charge of staffing seem to have listened to people with technical knowledge to keep my earlier prediction (mass layoffs directly due to LLMs, followed by mass, panicked re-hirings when said LLMs ruined the business) from coming true. But the worry itself, along with the RTO pushes (not to mention the exploitation of contractors and H1B holders), really underscores how desperately the industry needs to get organized. Hopefully, what’s going on in the games industry with IATSE gets more traction and gets more of my colleagues on the same page, but that’s one area where I’m not as optimistic as I’d like to be - I’ll just have to cheer on SAG, WGA, and UAW for the time being.
(For as much crap as I give Zuck for all the other awful things Meta does, I do admire their commitment to open source.)
Absolutely agreed. There’s a surprising amount of good in the open source world that has come from otherwise ethically devoid companies. Even Intuit donated the Argo project, which has evolved from a cool workflow tool into a toolkit offering far more. There is always the danger of EEE (embrace, extend, extinguish), however, so we’ve got to stay vigilant.
atetulo@lemm.ee 1 year ago
Hm. I think you should zoom out a bit and try to recognize that AI isn’t stagnant.
Voice recognition and translation programs took years before they were ready for real-world applications. AI is also going to require years before it’s ready, but that time is coming. We haven’t reached a ‘ceiling’ for AI’s capabilities.
MargotRobbie@lemmy.world 1 year ago
Breakthrough technological development can usually be described by a sigmoid function (an s-shaped curve): while there is exponential progress in the beginning, it eventually hits an inflection point, then slows down and plateaus until the next breakthrough.
There are certain problems that are not possible to solve with the current level of technology, and for which development progress has slowed to a crawl, such as level 5 autonomous driving (by the way, better public transport is a far less complex solution). I think we are hitting the limit of what transformer-based generative AI can do: training has become more and more expensive for smaller and smaller gains, while hallucination seems to be an inherent problem that is ultimately unfixable with the current level of technology.
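The S-curve argument can be sketched numerically. A minimal illustration using the standard logistic function (treating the x-axis as generic "effort invested" is my assumption here, purely for illustration):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function: near-exponential early growth, then a plateau."""
    return 1.0 / (1.0 + math.exp(-x))

# Marginal gain per unit of "effort" along the curve: it grows toward the
# inflection point (x = 0), then shrinks again as the curve flattens out.
gains = [sigmoid(x + 1) - sigmoid(x) for x in range(-4, 5)]
```

The same numbers show why late-stage progress feels like "more and more expensive for smaller and smaller gains": past the inflection point, each additional unit of input buys less improvement.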
cyberpunk_sunbear@lemmy.zip 1 year ago
One thing that I think could make AI deviate from that S model is that it can be honed against itself to magnify improvements: the better it gets, the better the next generation can get.
vrighter@discuss.tchncs.de 1 year ago
That is a studied, documented, surefire way to destroy your model very quickly. It just does not work that way: if you train an LLM on the output of another LLM (or of itself), it will implode.
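A stylized, hypothetical toy (not a real training loop) can show the flavor of this collapse. Treat a "model" as nothing but a probability distribution over four tokens; retraining on the model's own output over-samples already-likely tokens, which I crudely approximate here by squaring and renormalizing the distribution each generation:

```python
# Hypothetical toy, not an actual LLM: a "model" is just a distribution
# over 4 tokens. Squaring + renormalizing each generation is a crude
# stand-in for retraining on the model's own most-likely outputs.
probs = [0.4, 0.3, 0.2, 0.1]

for generation in range(10):
    probs = [p ** 2 for p in probs]   # over-sample what is already likely
    total = sum(probs)
    probs = [p / total for p in probs]  # renormalize into a distribution

# After a few generations, nearly all probability mass sits on the single
# most likely token; the rare tokens (the "tail knowledge") are gone.
```

The real phenomenon ("model collapse") is statistical rather than this mechanical, but the direction is the same: each generation loses the tails of the distribution it was trained on, and that diversity never comes back.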