If AI was ‘intelligent’, it wouldn’t have written me a set of instructions when I asked it how to inflate a foldable phone. Seriously, check my first post on Lemmy…
An intelligent system would have stopped and said something like “I’m sorry, that doesn’t make any sense, but here are some related topics to help you.”
Inflating a phone is super easy though!
Overheat the battery. ;)
If “making sense” was a requirement of intelligence… there would be no modern art museums.
WackyTabbacy42069@reddthat.com 1 year ago
AI doesn’t even necessitate that a machine be capable of stringing together complex English into a series of steps toward something pointless and unattainable. That it can is remarkable in itself, however naive it may be in believing you that a foldable phone can be inflated. You may be confusing AI with AGI, which is when the intelligence and reasoning level is at or slightly above the human level.
The only real requirement for AI is that a machine take actions in an intelligent manner. Web search engines, dynamic traffic lights, and chess bots all qualify as AI, despite none of them being able to tell you rubbish in proper English.
TimewornTraveler@lemm.ee 1 year ago
There’s the rub: defining “intelligent”.
If you’re arguing that traffic lights should be called AI, I’m on the same page. We believe the same thing: that ChatGPT isn’t any more “intelligent” than a traffic light. But you want to call both of them intelligent, and I want to call neither of them that.
throwsbooks@lemmy.ca 1 year ago
I think you’re conflating “intelligence” with “being smart”.
Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.
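To make that concrete, the entire “decision” can be a few lines. A rough sketch in Python (the function name and the 30-second threshold are made up for illustration; real controllers are more involved):

```python
# Toy sketch of a dynamic traffic light's "intelligence": sense, then decide.
# The 30-second minimum green time is an invented example value.

def next_action(cars_waiting: int, seconds_green: int) -> str:
    """Decide whether to switch the light based on sensor input."""
    if cars_waiting > 0 and seconds_green >= 30:
        return "switch"  # someone is waiting and the minimum green has elapsed
    return "hold"        # nothing to react to yet

print(next_action(cars_waiting=2, seconds_green=45))  # -> switch
print(next_action(cars_waiting=0, seconds_green=45))  # -> hold
```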
Acting like some GPT is on the same level as a traffic light is silly, though. On a base level, yes, it “reads” a text prompt (along with any message history) and decides what to write next. But the decision it’s making is far more complex than “stop or go”.
I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happens to have read a huge amount of text on the web, so it has context for a bunch of things. It doesn’t care if it’s right, and it doesn’t look things up before it says something.
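If you want the “sentence finisher” idea in miniature, here’s a toy next-word loop in Python. The word table is made up, and real models predict over sub-word tokens with a neural network conditioned on the whole context rather than a lookup table, but the shape of the loop is the same: pick a likely continuation, append it, repeat, and never check whether any of it is true.

```python
import random

# Made-up table of next-word probabilities; real language models learn
# these distributions from huge text corpora instead of hardcoding them.
next_word_probs = {
    "the":     {"phone": 0.6, "battery": 0.4},
    "phone":   {"overheated": 0.7, "inflated": 0.3},
    "battery": {"overheated": 0.9, "inflated": 0.1},
}

def finish(sentence, steps=2):
    for _ in range(steps):
        options = next_word_probs.get(sentence[-1])
        if options is None:
            break  # no idea what comes next; a real model always has a guess
        words, weights = zip(*options.items())
        # Sample a plausible next word. Note: nothing here checks if it's true.
        sentence.append(random.choices(words, weights=weights)[0])
    return " ".join(sentence)

print(finish(["the"]))  # e.g. "the phone overheated"
```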
But to have a computer be able to do that at all?? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field in universities for a very long time, and for most of that time people said it was impossible). We only hit it because our computers got powerful enough to do it at scale.
ParsnipWitch@feddit.de 1 year ago
Where does that definition come from? A better gauge of intelligence is whether someone or something is able to solve a problem they have never encountered before. And arguably, all current models completely suck at that.
I also think the word “AI” is used quite a bit too liberally. It confuses people who have zero knowledge of the topic. And when an actual AI comes along, we will have to make up a new word, because “general artificial intelligence” won’t be distinctive enough for corporations to market their new giant leap in technology…
sin_free_for_00_days@sopuli.xyz 1 year ago
I’m with you on this and think the AI label is just stupid and misleading. But times and language change, and you end up being a Don Quixote-type figure.