Black Mirror creator unafraid of AI because it’s “boring”
Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash
Movie and TV executives don’t care about boring. Reality shows are boring. They just care whether it makes money.
SkyeStarfall@lemmy.blahaj.zone 1 year ago
The thing with AI is that it mostly produces trash right now.
But look back five years: what were people saying about AI then? Hell, many thought the kind of art that AI can make today would be impossible for it to create! …And then it suddenly did. Well, it wasn’t actually sudden, and the people in the space probably saw it coming, but still.
The point is, we keep getting better at creating AIs that do things we thought were impossible a few years ago, things we said would demonstrate true intelligence if an AI could do them. And yet every time some impressive new AI gets developed, people say it sucks, is boring, is far from good enough, etc., while each time it creeps closer to us, replacing a few jobs here and there at the fringes. Sure, it’s not true intelligence, and it still doesn’t beat the best humans, but it beats most of us, on demand, and what happens when inevitably better AIs get created?
Maybe we’re in for another decades-long AI winter… or maybe we’re not, and plenty more AI revolutions are just around the corner. I think AI’s current capabilities are frighteningly good, and not something I expected to happen this soon. The last decade or so has seen massive progress in this area; who’s to say where the current path stops?
Telodzrum@lemmy.world 1 year ago
Nah, nah to all of it. LLMs are a parlor trick, and not a very good one. If we are ever able to make a general artificial intelligence, that’s an entirely different story. But text prediction on steroids doesn’t move the needle.
fsmacolyte@lemmy.world 1 year ago
The best ones can literally write pretty good code, and explain any concept on the Internet that you ask them about. If you don’t understand a specific part of their explanation, they can expand on it, and they can respond in the style you want (explain it as if I’m ten, explain it as if I’m an undergrad, etc.). I use one literally every day for work, in a somewhat niche field. I don’t really agree that it’s a “parlor trick”.
ChaoticNeutralCzech@feddit.de 1 year ago
In humans, abstract thinking developed hand in hand with language. So despite their limitations, I think that at least early AGI will include an LLM in some way.
Clent@lemmy.world 1 year ago
Parlor trick is a perfect description.
People don’t get that these things aren’t any more intelligent than their smartphone predicting the next word. The main difference is that instead of choosing among a couple of words, it has thousands to choose from.
Half of the trick is how it uses the prompt to decide which words to start with.
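For anyone curious what “predicting the next word” means mechanically, here’s a toy sketch. The vocabulary and the scores are made up purely for illustration; in a real LLM those scores come out of a neural network conditioned on the entire prompt:

```python
import numpy as np

# Hypothetical five-word vocabulary; a real model has tens of thousands of tokens.
vocab = ["cat", "dog", "sat", "mat", "the"]

def sample_next_word(logits, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and sample one word."""
    scaled = np.array(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(vocab, p=probs)

# Made-up scores standing in for what a network would produce for the
# prompt "the cat sat on the"; "mat" is deliberately the most likely.
logits = [0.1, 0.2, 0.5, 3.0, 0.4]
print(sample_next_word(logits))
```

The “half of the trick” above is exactly the part this sketch fakes: a real model computes those scores from the prompt, and repeats the sample-and-append loop one word at a time.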
aubertlone@lemmy.world 1 year ago
Nah, nah to your understanding of LLMs.
No, it’s not true intelligence. Yes, it makes humans much faster at their work.
It has really sped up my work, especially when coding in unfamiliar languages.
It’s silly to compare it to a parlor trick or text prediction.
Honytawk@lemmy.zip 1 year ago
LLMs are like an interface that allows computers to talk to humans.
They are a necessary step toward general AI, because a general AI that can’t generate text wouldn’t be able to convey what it has learned.
ezchili@iusearchlinux.fyi 1 year ago
I think the breakthroughs in AI have largely already happened, and we’re now reaching a slowdown and an adoption phase.
The research has been stagnating. Video with temporal consistency refuses to arrive, voice is still perceptibly non-human, and now they’re just assembling five models in a trenchcoat to make GPT do images, …
Companies and people are adopting what’s already there for new applications. It’s getting more common to see neural network models in solutions where the tech adds real value and is applicable, but the models aren’t breaking new ground like they were in 2021.
lloram239@feddit.de 1 year ago
It utterly baffles me how people can make that claim. AI image generation has existed for not even three years, and back then it could do little more than deformed avocado chairs and shrimp. This stuff has been evolving insanely fast, much quicker than basically any technology before it.
We have barely even started training AIs on video. So far it has all been static images, so of course they aren’t learning motion from that, and you can’t expect temporal consistency when the AI has no concept of time, frames, or anything video-related. Even so, the results so far already look quite promising. Generators for 3D models and the like are in the works as well.
What the heck do you expect? Of course going from nothing to ChatGPT/DALL-E 2 is a bigger jump than going to GPT-4/DALL-E 3; that doesn’t mean the latter aren’t substantially better than the previous versions. By GPT-5/DALL-E 4 you might really start to worry about whether humans will still be necessary at all. We should be happy that we might still have a few more years left before AI renders us all obsolete.
And of course there is plenty of other research going on in the background, on multi-modal models and robots that interact with the real world. Image generators and LLMs are obviously only part of the puzzle; you are not going to get an AGI as long as it is locked in a box and not allowed to interact with the real world. Though at the current pace, I’d also be very careful about letting AI out of its box.
SkyeStarfall@lemmy.blahaj.zone 1 year ago
I want to note that everything you’re talking about is happening on a scale of months to single years. That’s an incredibly rapid pace, and also too short a timeframe to determine true research trends.
Usually research is considered rapid if there is meaningful progress within a few years, and more realistically within about a decade. For comparison, take something like real-time ray tracing.
When I’m talking about the future of AI, I’m thinking like 10-20 years. We simply don’t know enough about what is possible to say what will happen by then.
TwilightVulpine@lemmy.world 1 year ago
By their nature, large language models won’t ever be truly innovative; after all, they rely on expected patterns. But a lot of the media we consume is also made to appeal to patterns we expect: genres, tropes, familiar messages. AI could replace a lot of it, and frankly, that’s scary to think about in a world where we need to work to earn our living.
Truly groundbreaking art may not be what people usually seek; it’s often something they don’t even know they want until they experience it, or they might even fail to appreciate it. It likely won’t be automated unless AI achieves full consciousness, and if that happens we’ll have a much more complicated situation on our hands than “we can command AI to make art better than we can ourselves”.
Still, getting paranoid over the uncertain latter won’t help us with the former, which is just around the corner.
KevonLooney@lemm.ee 1 year ago
Good points.
One problem with replacing everything with AI that people don’t think about: middle managers will start to be replaced too. There’s no way to ask an LLM “why did you do that?”, and fewer people will need to be managed.
aesthelete@lemmy.world 1 year ago
Everyone in these threads treats being impressed (or unimpressed) by these LLMs as some sort of intelligence test. I think of it more as a test of a person’s sense of creativity.
It spits out a lot of passable text very easily, but as you’re saying, its creativity is essentially nil. Even its “hallucinations” are just versions of things it borrowed from elsewhere, injected slightly to wildly out of context in order to satisfy a prompt.