Comment on Black Mirror creator unafraid of AI because it’s “boring”
Mchugho@lemmy.world 1 year ago
AI is nowhere near the point where it can actually write a script. It doesn’t even remember what it wrote to you 5 minutes ago, so how is it going to keep shows and TV series consistent? Even when you tell it what you want and point out that it’s wrong, it still produces wonky information.
danque@lemmy.world 1 year ago
Depends on the AI though. With KoboldCpp you can set up memories for the AI to come back to, and even text personalities (like bitchy and sassy responses).
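To make that concrete, here's a minimal sketch of the idea, assuming a local KoboldCpp instance on its default port with the KoboldAI-style /api/v1/generate endpoint (the field names and response shape here are assumptions; check them against your version's API docs):

```python
import requests

# Persistent "memory" and a persona that get prepended to every request,
# so the model keeps coming back to the same facts and the same tone.
MEMORY = (
    "[Memory: The user's name is Sam. Sam is writing a sci-fi pilot. "
    "The assistant's name is Vex.]\n"
)
PERSONA = "[Vex answers in a bitchy, sassy tone but is always helpful.]\n"

def ask(user_message: str) -> str:
    prompt = MEMORY + PERSONA + f"Sam: {user_message}\nVex:"
    # Assumed endpoint and fields: KoboldCpp exposes a KoboldAI-compatible
    # /api/v1/generate route; adjust host, port, and parameters for your setup.
    resp = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={"prompt": prompt, "max_length": 120, "temperature": 0.7},
        timeout=120,
    )
    resp.raise_for_status()
    # KoboldAI-style responses put the generated text under results[0].text.
    return resp.json()["results"][0]["text"]

if __name__ == "__main__":
    print(ask("Remind me what my pilot is about?"))
```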
jandar_fett@lemmy.world 1 year ago
This. You have to baby it, and if you want it to do something different you have to tell it a hundred times in a hundred different ways before it stops producing the same stuff with the same structure and only slight differences. It is a nightmare.
FlyingSquid@lemmy.world 1 year ago
I agree, but at some point it will advance to the level where it can write boring, predictable scripts.
lloram239@feddit.de 1 year ago
ChatGPT is 11 months old, not even a whole year. And it was never fine-tuned for story writing in the first place. A little bit premature to proclaim what AI can and can’t do, don’t you think?
currycourier@lemmy.world 1 year ago
ChatGPT isn’t the entirety of AI, AI research has been going on much longer than ChatGPT has been around
Daft_ish@lemmy.world 1 year ago
Isn’t chatGPT just a beefed up predictive text engine?
NoMoreCocaine@lemmy.world 1 year ago
Yes. Honestly it’s crazy how much people read into ChatGPT, when in practice it’s effectively just a dice roller that relies on an incredibly big dataset to guess the most likely next word.
There’s been some research about this: the fact that people attribute intelligence to things that ML does. It doesn’t compute for us that something can appear to make sense without actually having any intelligence. To humans, the appearance of intelligence is enough to assume intelligence, even if it’s just the output of a complicated dice roller.
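To make the “dice roller” point concrete, here is a toy sketch with made-up probabilities (nothing from a real model); an LLM does conceptually the same thing, just with a distribution over tens of thousands of tokens computed by a neural network:

```python
import random

# Toy next-word distribution for the context "The cat sat on the ...".
# A real LLM computes probabilities like these over its whole vocabulary;
# here they are simply made up for illustration.
next_word_probs = {
    "mat": 0.55,
    "sofa": 0.20,
    "keyboard": 0.15,
    "moon": 0.10,
}

def roll_next_word(probs: dict[str, float]) -> str:
    # The "dice roll": pick one word, weighted by its probability.
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

context = "The cat sat on the"
print(context, roll_next_word(next_word_probs))
```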
lloram239@feddit.de 1 year ago
And that’s exactly why we should be scared. ChatGPT is just the popular tip of the AI iceberg; there is a whole lot more in the works across all kinds of domains. The underlying AI algorithms are what allow you to slap something like ChatGPT together in a few months.
matter@lemmy.world 1 year ago
AI has been in development for 50 years and the best we can do so far is a Dunning-Kruger sim. Sure, who knows what it “can do” at some point, but I wouldn’t hold my breath.
lloram239@feddit.de 1 year ago
The recent deep learning AI efforts only started around 2012 with AlexNet. They were based on ideas that were around since the 1980s, but they had been previously abandoned as they just didn’t produce any usable results with the hardware available at the time. Once programmable consumer GPUs came around that changed.
Most of the other AI research that has been happening since the 1950s was a dead end, as it relied on hand-crafted feature detection, symbolic logic and the like written by humans, which, as the last 10 years have shown, performs substantially worse than techniques that learn directly from the data without a human in the loop.
That’s the beauty of it. Most of this AI stuff is quite simple on the software side of things; all the magic happens in the data, which also means that it can rapidly expand into all areas where you have data available for training.
You smug idiots are proud of yourselves for spotting a hand with an extra finger in an AI image, completely overlooking that three years of AI image generation just made 50 years of computer graphics research obsolete. And even ChatGPT is already capable of holding more insightful conversations than you AI haters are capable of.
rambaroo@lemmy.world 1 year ago
lloram239@feddit.de 1 year ago
ChatGPT was released Nov 2022. Plain GPT1/2/3 neither had the chat interface nor the level of training data and fine tuning that ChatGPT/GPT-3.5 had and in turn were much less capable. They literally couldn’t function in the way ChatGPT does. Even the original Google paper this is all based on only goes back to 2017.
Yeah, LLMs won’t ever improve, because technology improving has never happened before in history… The stupid in your argument hurts.
Besides, GPT-4 can already handle 32,768 tokens, which is enough for your average movie script, even without any special tricks (of which there are plenty).
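For a rough sense of scale, here's a small sketch using OpenAI's tiktoken tokenizer to count how many tokens a screenplay-length text file takes up against that 32,768-token window (the file path is just a placeholder):

```python
import tiktoken

CONTEXT_WINDOW = 32_768  # the GPT-4 32k context size cited above

# cl100k_base is the encoding used by the GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

# Placeholder path; point this at any screenplay-length text file.
with open("screenplay.txt", encoding="utf-8") as f:
    script = f.read()

n_tokens = len(enc.encode(script))
print(f"{n_tokens} tokens ({n_tokens / CONTEXT_WINDOW:.0%} of a 32k context window)")
```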