But if all these generative models are designed to replace the very people whose videos they were trained on, who or what will train the next generation of models, I wonder?
Maybe we hit a regular cycle…
I.e. the models are all trained on real video and get good enough that humans can't tell the difference. Real video becomes rare, so AIs end up training on AI-generated video. The result: AI video effectively becomes a copy of a copy of a copy, and the degradation becomes obvious as mistakes start compounding. AI developers then have to go back to creating and introducing untouched real video to train on, and AI starts getting better again.
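For what it's worth, that compounding effect is what researchers call "model collapse", and you can see a toy version of it with a tiny simulation. The sketch below is purely illustrative (a Gaussian standing in for a video model, made-up sample sizes): each generation fits only on samples produced by the previous generation, and the fit drifts while its spread shrinks.

```python
# Minimal sketch of the "copy of a copy" effect (model collapse).
# Each generation fits a Gaussian to samples from the previous
# generation's model instead of to real data, so errors compound.
# The model and numbers here are illustrative assumptions.
import random
import statistics

random.seed(0)

# Generation 0: "real" data drawn from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on synthetic samples from this fit.
    data = [random.gauss(mu, sigma) for _ in range(1000)]
```

Run it and the printed standard deviation steadily drops while the mean wanders, which is the same "compounding mistakes" loop described above, just in miniature.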
Yingwu@lemmy.dbzer0.com 2 months ago
Until these start getting used on a broader scale, I'm not convinced they aren't just schemes to funnel more investment money into these companies. The examples are really short, probably made after who knows how many attempts, and probably very limited in what poses and/or actions they can show. I'm so tired of the LLM hype in general.
iopq@lemmy.world 2 months ago
People said this about AI-generated images like three years ago. Now you have very high quality images generated from a fairly simple prompt. Don't expect it to stay difficult and low quality forever.
TexMexBazooka@lemm.ee 2 months ago
Believable video is exponentially more complex than a still image
iopq@lemmy.world 2 months ago
Which is why it wasn't done back when images were being done. But now companies are doing it.