Comment on Jimmy Carr on Why Everyone Is Wrong About AI
masterspace@lemmy.ca 1 day ago
Tl;dw: he has two points:
- That between cameras and now AI monitoring, the cost of running an authoritarian regime has been drastically reduced. He claims that running the Stasi used to cost something like 20% of the government budget, but the same can now be done for next to nothing, and it will be harder for governments to resist that temptation.
- That there hasn’t been much progress in the world of physics since the 70s, so what happens if you point AI and its compute power at the field of physics? We could see wondrous progress and a world of plenty.
Personally I think point 1 is genuinely interesting and valid, and that point 2 is kind of incredible nonsense. Yes, all other fields are just simplified forms of physics, and physics fundamentally underlies all of them. That doesn’t mean that no new knowledge has come from those fields, and it doesn’t mean that new knowledge in physics automatically improves them. Physics has, in many ways, done its job. Obviously there’s still more to learn, but between quantum mechanics and general relativity, we can actually model most human-scale processes in our universe with incredible precision. The problem is that the closer we get to understanding the true underlying math of the universe, the harder it is to compute that math for a practical system… at a certain point, it requires a computer on the scale of the universe to compute.
Most of our practical improvements in the past decade have come, and will continue to come, from chemistry, biology, and engineering in general, because there is far more room to improve human-scale processes by finding shortcuts and patterns and designing systems to behave the way we want. AI’s computer-scale pattern-matching ability will undoubtedly help with that, but I think it’s less likely that it can make any true physics breakthroughs, or that those breakthroughs would impact daily life that much.
egerlach@lemmy.ca 1 day ago
Ugh, I’m tired of point 2. Yes, LLMs have found a few patterns in large-scale study analyses that humans hadn’t, but they weren’t deep insights and there had been buried hypotheses around them from existing authors, IIRC (too lazy to source).
Perspectivist@feddit.uk 1 day ago
AI is not synonymous with LLMs. AlphaFold figured out protein folding. It’s an AI but not an LLM.
phaedrus@piefed.world 1 day ago
100% this. People say they understand AI is a buzzword, but don’t realize just how large an umbrella that term actually is.
Enemy NPCs in video games going back to the 80s fall under AI.
lemmie689@lemmy.sdf.org 1 day ago
The term AI is actually from the 1950s
Perspectivist@feddit.uk 1 day ago
When most people hear AI they think AGI, and because a narrow-AI language model doesn’t perform the way they expect an AGI to, they say stuff like “it’s not intelligent” or “it’s not an AI”.
AI as a term is about as broad as the term “plants” which contains everything from grass to giant redwoods. LLM is just a subcategory like conifers.
SaveTheTuaHawk@lemmy.ca 1 day ago
Autocorrect and grammar suggestions are AI.
Steak sauce is A1.
egerlach@lemmy.ca 1 day ago
I work primarily in “classical” AI and have been working with it on-and-off for just under 30 years now. Programmed my first GAs and ANNs in the 90s. I survived Prolog. I’ve had prolonged battles getting entire corporate departments to use the terms “Machine Learning” and “Artificial Intelligence” correctly, understand what they mean, and how to start thinking about them to incorporate them correctly into their work.
Thus why I chose the word “LLM” in my response, not “AI”.
I will admit that I assumed that by “AI” Jimmy Carr was referring to LLMs, as that’s what most people mean these days. I read the TL;DW by @masterspace@lemmy.ca but didn’t watch the original content. If I’m wrong in that assumption and he’s referring to classical AI, not LLMs, I’ll edit my original post.
masterspace@lemmy.ca 1 day ago
It’s not entirely clear what he’s referring to; he just uses the term AI broadly in the context of people being worried about job losses, then talks about the reduction in secret-police costs that enables, then discusses applying AI to physics.