Comment on OpenAI’s new AI image generator pushes the limits in detail and prompt fidelity

lloram239@feddit.de ⁨1⁩ ⁨year⁩ ago

> And you quickly realize that when you generate things from similar prompts over and over the model gives you the same results but slightly adjusted.

That’s quite true, but it’s worth keeping in mind that this is largely not a limit of the model itself, but a limit of how the model interfaces with the human. Text just isn’t very good for specifying precise results, especially when it lacks the incremental refinement you get in ChatGPT with follow-up prompts.

On the other hand, if I take Stable Diffusion with ControlNet instead of just a text prompt, I can generate far more specific results, since I can feed other images and sketches into the generation.

> but I think once the new toy factor wears off people will realize they aren’t as good as they seem.

Quite the opposite: there is a ton of hidden potential still left to uncover. We have barely started training these models on video or 3D data, integration of image models with newer language models is a work in progress, and integration into old-school image-manipulation tools has only just begun.

Worth keeping in mind that DALL·E 1 isn’t even three years old. We are basically still in the Atari 2600 days of image generation.

Meanwhile, DALL·E 3 comes along and can produce this level of quality from the completely generic prompt “A fan-art of Guardians of the Galaxy Vol. 3” on the first try.

> I think the next “revolution” in art is going to be having human art as a selling point

The big problem for artists is that AI art drives the value of art down to zero. It’ll be hard to convince anybody to pay hundreds of dollars for something when AI can produce something similar in 30 seconds for free. Worse yet, AI can take any existing image and remix it. The whole idea of a singular static image feels quite restrictive once you’ve played around with AI art for a while, since everything is just a few clicks away from being something different.

I think the idea of AI art as just a generator of stock images doesn’t capture the magnitude of the changes that are coming. We are straight up heading into Holodeck territory, where you tell the computer what you want and you get it. The AI generators won’t be a tool for artists; they’ll go right to the users. There won’t be a static image that comes out the other end: the AI will be the medium of media consumption itself. Just as people today flip through TikTok, future people will flip through an AI-generated stream of content custom-made for them.

Wanna play some 2D game with snow and ice? Tell the computer, wait a couple of seconds, and boom, here it is. First try. Want lava instead? Done. How do you compete with that as a human when AI can pull it out of thin air in seconds?
