Comment on Climate goals go up in smoke as US datacenters turn to coal
altkey@lemmy.dbzer0.com 2 days ago
If we actually want to maintain our standard of living and reduce the population size, we may very well need AI automation utilities. They can keep scaling down in size and power consumption in a way that a real human can’t.
Theorizing about LLMs’ usefulness and resourcefulness doesn’t help you there. For now they are rather useless, embarrassingly inefficient resource hogs that exist purely because of the bubble. It’s a gamble at best, or a waste of resources and a degradation of the human workforce at worst.
masterspace@lemmy.ca 2 days ago
AI is not just LLMs, and it’s already revolutionized biotechnical engineering through things like alpha fold. Like I said, “AI”, as in neural network algorithms of which LLMs are just one example, are literally solving entirely new classes of problems that we simply could not solve before.
altkey@lemmy.dbzer0.com 2 days ago
LLMs are what’s usually sold as AI nowadays. Conventional ML is boring and too normal, not as exciting as a thing that processes your words and gives some responses, almost as if it’s sentient. Nvidia couldn’t have reached its current capitalization if we defaulted to useful models that can speed up technical processes after some fine-tuning by data scientists, like shaving off another 0.1% on Kaggle or IRL in a classification task. That usually causes big but still incremental changes. What is sold as AI, and in what capacity it fits into your original comment as a lifesaver, is nothing short of a reinvention of one’s workplace or a complete replacement of the worker. That’s hardly happening anytime soon.
masterspace@lemmy.ca 2 days ago
To be fair, that’s because there are a lot of automation scenarios where having a semantic understanding of the situation can be extremely helpful in guiding action, compared to an ML model that is not semantically aware.
The reason that AI video generation and outpainting are so good, for instance, is that the model analyzes a picture, divides it into human concepts using language, uses language to guide how those things can realistically move and change, and then applies actual image generation. Systems like Waymo’s self-driving stack aren’t run through LLMs, but they are machine learning models operating on extremely similar principles to build a semantic understanding of the driving world.
altkey@lemmy.dbzer0.com 1 day ago
I’d argue that it sometimes adds complexity to an already fragile system, like when we implement touchscreens instead of buttons in cars. It’s akin to how Tesla, unlike Waymo, dropped LIDAR to depend on regular video inputs alone. Direct control over systems, without unreliable interfaces, a semantic translation layer, a computer vision dependency, etc., serves the same tasks without the additional risks and computational overhead.
supersquirrel@sopuli.xyz 2 days ago
lol keep dreaming :)
masterspace@lemmy.ca 2 days ago
arstechnica.com/…/protein-structure-and-design-so…
I don’t have to dream; DeepMind literally won the Nobel Prize last year. My best friend did his PhD in protein crystallography, and it took him 6 years to predict the structure of a single protein. He’s now at MIT and just watched DeepMind predict hundreds of thousands of them in a year.
umbrella@lemmy.ml 7 hours ago
just passing by to point out the nobel prize is political, not meritocratic.
not a relevant metric.
supersquirrel@sopuli.xyz 2 days ago
You need to take a step back and realize how warped your perception of reality has gotten.
Sure, LLMs and other forms of automation, artificial intelligence, and brute-forcing of scientific problems will continue to deliver benefits.
What you are talking about, though, is extrapolating from that to a massive shift that just isn’t on the horizon. You are delusional; you have read too many sci-fi books about AI and can’t get your brain off the idea that that way of thinking is the future, no matter how dystopian it is.
The value of AI just simply isn’t there, and that’s before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon-footprint reduction goals.
Tollana1234567@lemmy.today 1 day ago
AI has not revolutionized biology research at all; it’s not complex enough to come up with new experimentation methods or manage the current ones. It may be used to write AI slop papers, and that’s about it.
tjsauce@lemmy.world 2 days ago
Most people are cool with some AI when you show them the small, non-plagiaristic stuff. It sucks that “AI” is such a big umbrella term, but the truth is that the majority of AI (measured in model size, usage, and output volume) is bad and should stop.
Neural network technology should not progress at the cost of our environment, short-term or long-term, and shouldn’t be used to dilute our collective culture and intelligence. The dangers are obvious; let’s not pretend otherwise, and let’s push for regulation.