Comment on Climate goals go up in smoke as US datacenters turn to coal
masterspace@lemmy.ca 2 days ago
AI is not just LLMs, and it’s already revolutionized biotechnical engineering through things like AlphaFold. Like I said, “AI”, as in neural network algorithms of which LLMs are just one example, are literally solving entirely new classes of problems that we simply could not solve before.
supersquirrel@sopuli.xyz 2 days ago
lol keep dreaming :)
masterspace@lemmy.ca 2 days ago
arstechnica.com/…/protein-structure-and-design-so…
I don’t have to dream, DeepMind literally won the Nobel Prize last year. My best friend did his PhD in protein crystallography, and it took him 6 years to predict the structure of a single protein. He’s now at MIT and just watched DeepMind predict hundreds of thousands of them in a year.
umbrella@lemmy.ml 8 hours ago
just passing by to point out the nobel prize is political, not meritocratic.
not a relevant metric.
supersquirrel@sopuli.xyz 2 days ago
You need to take a step back and realize how warped your perception of reality has gotten.
Sure, LLMs and other forms of automation, artificial intelligence, and brute forcing of scientific problems will continue to provide benefits.
What you are talking about though is extrapolating from that to a massive shift that just isn’t on the horizon. You are delusional; you have read too many scifi books about AI and can’t get your brain off the idea that that’s the future, no matter how dystopian it is.
The value of AI simply isn’t there, and that is before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon footprint reduction goals.
masterspace@lemmy.ca 2 days ago
You seem to be the one projecting a warped perspective.
Sure, LLMs and other forms of automation, artificial intelligence, and brute forcing of scientific problems will continue to provide benefits.
That’s not brute forcing a scientific problem; it’s literally a new type of algorithm that lets computers solve fuzzy pattern matching problems that they never could before.
What you are talking about though is extrapolating from that to a massive shift that just isn’t on the horizon.
I’m just very aware of the number of problems in society that fall into the category of fuzzy pattern matching / optimization. Quantum computing is also an exciting avenue for solving some of these problems, though it is incredibly difficult and complicated.
You are delusional; you have read too many scifi books about AI and can’t get your brain off the idea that that’s the future, no matter how dystopian it is.
This is just childish name calling.
The value of AI simply isn’t there, and that is before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon footprint reduction goals.
Quite frankly, you’re conflating the tech bro hype around LLMs with AI more generally. The ecological footprint of AlphaFold is tiny compared to previous methods of protein analysis, which took labs full of people years to solve each individual structure. On top of the ecological footprint of all of those people and all of their resources for those years, they also had to use high-powered equipment like centrifuges and X-ray machines. AlphaFold did the same work hundreds of thousands of times over with some servers in a year.
Don’t come at me like you are being logical here; at least admit that this is the cool scifi tech dystopia you wanted and have been obsessed with. This is the only way you get to this point of delusion, since the rest of us see these technologies and go “huh, that looks like it has some use”, whereas people like you have what is essentially a religious view towards AI, and it is pathetic and offensive towards religions that actually have substance to their philosophy and beliefs.
Again, more childish name calling. You don’t know me, don’t act like you do.
Tollana1234567@lemmy.today 2 days ago
AI has not revolutionized biology research at all; it’s not complex enough to come up with new experimentation methods or manage the current ones. It may be used to write AI slop papers, and that’s about it.
masterspace@lemmy.ca 1 day ago
“hur durr AI bad”
Read the fucking link. DeepMind literally won the Nobel Prize.
tjsauce@lemmy.world 2 days ago
Most people are cool with some AI when you show them the small, non-plagiaristic stuff. It sucks that “AI” is such a big umbrella term, but the truth is that the majority of AI (measured in model size, usage, and output volume) is bad and should stop.
Neural network technology should not progress at the cost of our environment, short term or long term, and shouldn’t be used to dilute our collective culture and intelligence. The dangers are obvious; let’s not pretend otherwise, and let’s push for regulation.
altkey@lemmy.dbzer0.com 2 days ago
LLMs are what’s usually sold as AI nowadays. Conventional ML is boring and too normal, not as exciting as a thing that processes your words and gives some responses, almost as if it’s sentient. Nvidia couldn’t have reached its current capitalization if we defaulted to useful models that can speed up technical processes after some fine-tuning by data scientists, like shaving off another 0.1% on Kaggle or IRL in a classification task. That kind of work usually causes big but still incremental changes. What is sold as AI, and in what capacity it fits into your original comment as a lifesaver, is nothing short of a reinvention of one’s workplace or a complete replacement of the worker. That’s hardly happening anytime soon.
masterspace@lemmy.ca 2 days ago
To be fair, that’s because there are a lot of automation scenarios where having a semantic understanding of the situation can be extremely helpful in guiding action, compared to an ML model that is not semantically aware.
The reason that AI video generation and outpainting are so good, for instance, is that the system analyzes a picture, divides it into human concepts using language, uses language to guide how those things can realistically move and change, and then applies actual image generation. Stuff like Waymo’s self-driving systems isn’t run through LLMs, but they are machine learning models operating on extremely similar principles to build a semantic understanding of the driving world.
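Roughly, the pipeline looks like this. A minimal sketch of the “describe with language, then generate pixels” idea; the classes here (CaptionModel, LanguageModel, ImageModel) are hypothetical stand-ins, not any real model API:

```python
# Hypothetical sketch of semantics-guided outpainting.
# None of these classes correspond to a real library; they stand in for
# a vision-language model, an LLM, and a diffusion-style image generator.

class CaptionModel:
    def describe(self, image: str) -> str:
        # A real vision-language model would turn pixels into concepts.
        return f"a photo of {image}, cropped at the right edge"

class LanguageModel:
    def extend(self, concepts: str, direction: str) -> str:
        # A real LLM would reason about what plausibly continues the scene.
        return f"{concepts}, continuing naturally to the {direction}"

class ImageModel:
    def generate(self, prompt: str, init_image: str) -> str:
        # A real image generator would render new pixels conditioned on
        # both the original image and the semantic plan.
        return f"<new pixels for '{prompt}', seeded by '{init_image}'>"

def outpaint(image: str, direction: str) -> str:
    """Language mediates between the input pixels and the output pixels."""
    concepts = CaptionModel().describe(image)                   # pixels -> concepts
    continuation = LanguageModel().extend(concepts, direction)  # concepts -> plan
    return ImageModel().generate(continuation, image)           # plan -> pixels

print(outpaint("a dog on a beach", "right"))
```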
altkey@lemmy.dbzer0.com 1 day ago
I’d argue that it sometimes adds complexity to an already fragile system, like when we implement touchscreens instead of buttons in cars. It’s akin to how Tesla, unlike Waymo, dropped LIDAR to depend on regular video inputs alone. Direct control over systems, without unreliable interfaces, a semantic translation layer, computer vision dependency, etc., serves the same tasks without the additional risks and computational overhead.
masterspace@lemmy.ca 1 day ago
You don’t have to argue that; I think that’s inarguably true. But more complexity doesn’t inherently mean worse.
Automatic braking and collision avoidance systems in cars add complexity, but they also objectively make cars safer.
But in this case, Waymo still has to do all of that. They’re still running their sensor data through incredibly complex machine learning models that are somewhat black boxes, producing semantic understandings of the world around the car, and then acting on those models of the world. The primary difference between Waymo and Tesla isn’t about complexity or direct control of systems; it’s that Tesla is relying on camera data, which is significantly worse than the human eye and brain, whereas Waymo and everyone else supplement their limited camera data with sensors like lidar and radar that can see in ways and situations humans can’t, which lets them compensate (see the toy sketch below).
That, and Waymo is actually a serious engineering company that takes responsibility seriously, takes far fewer risks, and is far more thorough about failure analysis, redundancy, etc.
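To make the redundancy point concrete, here is a toy sketch of confidence-weighted sensor fusion. Purely illustrative, assuming made-up confidence numbers; this is not Waymo’s actual stack:

```python
# Toy illustration of why redundant sensors help: each modality votes on
# whether an obstacle is present, and sensors that work in darkness or
# glare cover for the camera's blind spots.

from dataclasses import dataclass

@dataclass
class Detection:
    obstacle: bool     # does this sensor think something is in the path?
    confidence: float  # 0.0-1.0, the sensor's own reliability estimate

def fuse(camera: Detection, lidar: Detection, radar: Detection) -> bool:
    """Confidence-weighted vote across modalities."""
    score = sum(d.confidence * (1 if d.obstacle else -1)
                for d in (camera, lidar, radar))
    return score > 0  # brake if the weighted evidence says "obstacle"

# At night the camera is unsure, but lidar and radar still see the pedestrian:
print(fuse(Detection(False, 0.2), Detection(True, 0.9), Detection(True, 0.8)))
# True: -0.2 + 0.9 + 0.8 = 1.5 > 0
```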