Comment on Lemmy be like
FauxLiving@lemmy.world 3 days ago
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can’t imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about ‘AI’.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (en.wikipedia.org/wiki/AlphaFold) that predicts the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that generates a string of amino acids which will fold into a structure with the specified properties (much like image description prompts are used in an image generator).
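To make the "prompt-to-protein" idea concrete, here is a deliberately tiny sketch. It is not AlphaFold or a real diffusion model; it is just a mutate-and-score loop over amino-acid letters against a made-up hydrophobicity target, to show what "generate a sequence with specified properties" means in code:

```python
# Toy illustration only: real systems use large trained networks. This just
# shows the shape of "generate a sequence that scores well against a target
# property" with random mutation + scoring. The target is invented.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AILMFWVY")          # hypothetical property of interest
TARGET_HYDROPHOBIC_FRACTION = 0.4      # made-up design goal

def score(seq: str) -> float:
    """Higher is better: closeness to the target hydrophobic fraction."""
    frac = sum(aa in HYDROPHOBIC for aa in seq) / len(seq)
    return -abs(frac - TARGET_HYDROPHOBIC_FRACTION)

def design(length: int = 50, steps: int = 2000) -> str:
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    best = score("".join(seq))
    for _ in range(steps):
        i = random.randrange(length)
        old = seq[i]
        seq[i] = random.choice(AMINO_ACIDS)   # propose a mutation
        new = score("".join(seq))
        if new >= best:
            best = new                        # keep improvements
        else:
            seq[i] = old                      # revert otherwise
    return "".join(seq)

print(design())
```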
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state-of-the-art robot made by Boston Dynamics using a human-programmed feedback control loop: www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot “using reinforcement learning with references from human motion capture and animation.”: www.youtube.com/watch?v=I44_zbEwz_w
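The phrase "references from human motion capture" roughly means the reward includes a term for matching a recorded pose trajectory. A minimal sketch of that idea, with an invented 3-joint "robot", a sine wave standing in for a mocap clip, and a Gaussian-shaped imitation reward (this is not Boston Dynamics' actual pipeline):

```python
# Toy sketch of "reinforcement learning with references from motion capture":
# the agent earns reward for matching a reference pose at each timestep.
# Joint count, dynamics, and the reference clip are all invented.
import numpy as np

rng = np.random.default_rng(0)
T, n_joints = 100, 3
reference = np.sin(np.linspace(0, 2 * np.pi, T))[:, None].repeat(n_joints, axis=1)

def imitation_reward(pose: np.ndarray, ref_pose: np.ndarray) -> float:
    # Gaussian-shaped reward: 1.0 when the pose matches the reference exactly
    return float(np.exp(-5.0 * np.sum((pose - ref_pose) ** 2)))

# Evaluate one random "policy rollout" against the reference clip
pose = np.zeros(n_joints)
total = 0.0
for t in range(T):
    action = rng.normal(scale=0.1, size=n_joints)   # stand-in for a policy output
    pose = pose + action                            # stand-in for robot dynamics
    total += imitation_reward(pose, reference[t])
print(f"episode imitation reward: {total:.2f}")
```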
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn’t great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that human-written software.
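For a sense of how low the barrier is now, here is a rough sketch of fine-tuning a pretrained image classifier with free tools (PyTorch/torchvision) on a consumer GPU. The "data/train" path is a placeholder for any folder-per-class dataset, and the model choice and hyperparameters are illustrative only:

```python
# Fine-tune a pretrained ResNet-18 on a custom image dataset.
# Assumes images are arranged as data/train/<class_name>/<image>.jpg
import torch
from torch import nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head
model = model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```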
AI isn’t LLMs and image generators; those may as well be toys. I’m sure LLMs and image generation will eventually be good, but the only reason they seem amazing is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
MrMcGasion@lemmy.world 3 days ago
Oh I have read and heard about all those things, none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that’s not as good for their start-ups or for humanity at large.
FauxLiving@lemmy.world 3 days ago
AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.
Google and OpenAI are also both developing world models.
These are a way to generate realistic environments that behave like the real world, and they are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.
Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot’s builder could create a world-model representation of the robot’s physical characteristics and attach their control software to the simulation. Now the robot can train in a simulated environment, and you can create multiple parallel copies of that setup to generate training data rapidly.
It would be economically unfeasible to build 10,000 prototype robots just to generate training data, but it is easy to see how running 10,000 simulated copies in parallel is possible.
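As a sketch of that "many parallel simulated copies" pattern, here is what the data-collection loop looks like with Gymnasium's vectorized environments. CartPole stands in for a learned world model of the robot's body; the point is only that one process can step many simulated instances at once:

```python
# Collect experience from many simulated copies in parallel.
# CartPole is a placeholder; a real setup would plug in the robot's
# simulated body / world model instead.
import gymnasium as gym

n_copies = 16  # trivial here; the same pattern scales to thousands of instances
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(n_copies)]
)

obs, info = envs.reset(seed=0)
transitions = []
for _ in range(100):
    actions = envs.action_space.sample()            # stand-in for a policy
    next_obs, rewards, terms, truncs, infos = envs.step(actions)
    transitions.append((obs, actions, rewards, next_obs))
    obs = next_obs

print(f"collected {len(transitions) * n_copies} environment steps")
envs.close()
```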
I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.
On the other hand, the billions of dollars being thrown at these companies are being used to hire machine learning specialists. The real innovators who have the knowledge and talent to work on these projects almost certainly already work for one of these companies or for the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to this field and creates more innovators over time.
mojofrododojo@lemmy.world 3 days ago
arstechnica.com/…/google-gemini-struggles-to-writ…
yeah this shit’s working out GREAT
FauxLiving@lemmy.world 3 days ago
mojofrododojo@lemmy.world 2 days ago
then pray tell where is it working out great?
again, you have nothing to refute the evidence placed before you except “ah that’s a bunch of links” and “not everything is an llm”
so tell us where it’s going so well.
Not the mecha-hitler swiftie porn, heh, yeah I wouldn’t want to be associated with it either. But your AI bros don’t care.
FauxLiving@lemmy.world 2 days ago
I was talking about public perception of AI. There is a link to a study by a prestigious US university which supports my claims.
AI is doing well in protein folding and robotics, for example.