Comment on AI Is Starting to Look Like the Dot Com Bubble
orphiebaby@lemmy.world 1 year ago
“Limited” is relative to what context you’re talking about. God I’m sick of this thread.
c0mbatbag3l@lemmy.world 1 year ago
Talk to me in 50 years when Boston Dynamics robots are running OpenAI models and can do your gardening/laundry for you.
orphiebaby@lemmy.world 1 year ago
Haha, keep dreaming. If OpenAI models are used for robots, they’re not going to work anything like current “AI” on a fundamental level. It’s not a matter of opinion or speculation, but a matter of knowing how the fuck current “AI” even works.
People are so fucking dense about all of this, simply because idiots named it “AI”. Just like people are dense about “black holes” just because of their stupid name.
c0mbatbag3l@lemmy.world 1 year ago
We’re like four responses into this comment chain and you’re still going off about how it’s not “real” AI because it can’t think and isn’t sapient. No shit, literally no one was arguing that point. Current AI is like the virtual intelligences of Mass Effect, or the “dumb” AI from the Halo franchise.
Do I need my laundry robot to be able to think for itself and respond to any possible scenario? Fuck no. Just like how I didn’t need ChatGPT to be able to understand what I’m using the Python script for. I ask it to accomplish a task using the data set that it’s trained on, and it can access said pretrained data to build me a script for what I’m describing to it. I can ask DALL-E 2 to generate me an image and it will access its dataset to emulate whatever object or scene I’ve described based on its training data.
You’re so hung up on the fact that it can’t think for itself in a sapience sense that you’re claiming it cannot do things that it’s already capable of. The models can absolutely replicate “thinking” within the information it has available. That’s not a subjective opinion, if it couldn’t do that they wouldn’t be functional for the use cases we already have for them.
Additionally, robotics has already reached the point we need for this to occur. BD has bipedal robots that can do parkour and assist with carrying loads for human operators. All of the constituent parts of what I’m describing already exist. There’s no reason we couldn’t build an AI model for any given task, once we define all of the dependencies such a task would require and assimilate the training data. There are people who have already done similar (albeit simpler) things with this.
Hell, Roombas have been automating vacuuming for years, and without the benefit of machine learning. How is that any different from what I’m talking about here? You could build a model to take in the pathfinding and camera data of all vacuuming robots and use it to train an AI for vacuuming, for fuck’s sake. It’s just combining ML with other things besides a chatbot.
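To be clear about what I mean by “train on logged pathfinding data”: here’s a toy sketch. Everything in it is made up for illustration (random numbers standing in for sensor logs, a simple nearest-neighbor vote standing in for a real model), but it shows the shape of the idea, i.e. learning a next-move policy from logged situations rather than hand-coding it:

```python
# Toy sketch: pick a vacuum's next move by looking up similar logged situations.
# The "sensor logs" here are random stand-ins; a real system would use actual
# pathfinding/camera data and a proper learned model instead of k-NN.
import numpy as np

rng = np.random.default_rng(0)

# Fake logged data: 4 sensor readings (open distance N/E/S/W) per situation,
# labeled with the move that was taken (here: the most open direction).
X_log = rng.random((200, 4))
y_log = X_log.argmax(axis=1)

def predict_move(sensors, X, y, k=5):
    """Vote among the k most similar logged situations."""
    dists = np.linalg.norm(X - sensors, axis=1)
    nearest = y[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

move = predict_move(np.array([0.9, 0.1, 0.2, 0.3]), X_log, y_log)
```

That’s the whole point: nothing about this requires sapience, just data and a model that generalizes over it.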
And you call me dense.
garyyo@lemmy.world 1 year ago
Five years ago the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew that it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still at least 10-20 years out. Now, my home computer can run a model that can talk like a human.
Being able to talk like a human used to be what the layperson would consider AI, now it’s not even AI, it’s just crunching numbers. And this has been happening throughout the entire history of the field. You aren’t going to change this person’s mind, this bullshit of discounting the advancements in AI has been here from the start, it’s so ubiquitous that it has a name.
en.wikipedia.org/wiki/AI_effect