Comment on AI-created “virtual influencers” are stealing business from humans
Aicse@lemmy.world 10 months ago
To me, this is just part of the progress. Influencers were the ones to take advantage of each new technology: Photoshop, Instagram filters and all the rest. Now the technology has advanced enough not only to be an instrument to enhance their looks, but to fully replace them.
TwilightVulpine@lemmy.world 10 months ago
Progress to where? To complete alienation?
Lately the benefits of technological advancement seem mostly to make a few executives wealthier rather than benefit society as a whole. Same goes here. Rather than being merely influenced by brand deals, these figures can be entirely fabricated so that their every word is optimized for sales.
Even as someone who used to be excited for AI personality developments, looking at this gives me an awful dystopian vibe.
SkippingRelax@lemmy.world 10 months ago
Human influencers have always given me dystopian vibes. And they were just making some executives and themselves rich, so it’s not such a big loss…
TwilightVulpine@lemmy.world 10 months ago
Human influencers are just celebrities at a smaller scale, and frankly, the assumption I’m seeing in this thread that influencer/celebrity culture will go away if influencers are replaced is completely unrealistic. We will just get Coca-ColAIna and L’ÓreAI-chan instead of people occasionally peddling products.
If there’s any real concern about artificiality and parasocial following as a replacement for real human connection behind this disdain for influencers, then replacing them with AI is in no way going to fix anything. It will only make it worse. It will lead to custom-tailored indoctrination by brands.
Worse than that, I already see people treating actual artists much the same way: as if the human element in culture doesn’t matter as much as having an endless source of nebulous content, and as if anyone making art should get a “real job” instead. Never mind that those jobs are also in line for automation…
FlyingSquid@lemmy.world 10 months ago
‘Influencer’ as a job has only existed for what, 10 years? I don’t think society will collapse without them.
xor@sh.itjust.works 10 months ago
“influencers” are more like models than celebrities… they add nothing
SkippingRelax@lemmy.world 10 months ago
Replacing influencers with AI is not going to fix anything; for that we should dismantle social media and have a serious talk, all 8 billion of us. But it’s not going to make anything worse either: it is already custom-tailored indoctrination by brands, and a handful of assholes are making stupid amounts of money. I’m not going to cry if that money shifts to different hands.
Yes, artists come up often in these kinds of discussions, but the ones losing their jobs to AI never really had one in the first place, same as influencers. What are we talking about, Jim who makes you a custom logo and business cards for your business?
The guy who gets a commission from the newly opened local microbrewery for graffiti-ing their walls is hardly losing any work to AI. If anything, they could integrate AI into their creative process.
Barack_Embalmer@lemmy.world 10 months ago
I take your point, but in this specific application (synthetically generated influencer images) it’s largely something that falls out for free from a wider stream of research (namely Denoising Diffusion Probabilistic Models). It’s not like it’s really coming at the expense of something else.
As for what it’s eventually progressing towards - who knows… It has proven to be quite an unpredictable and fruitful field. For example Toyota’s research lab recently created a very inspired method of applying Diffusion models to robotic control which I don’t think many people were expecting.
That said, there are definitely societal problems surrounding AI, its proposed uses, legislation regarding the acquisition of data, etc. Oftentimes markets incentivize its use for trivial, pointless, or even damaging applications. But IMO it’s important to note that this is the fault of the structure of our political economy, not the technology itself.
The ability to extract knowledge and capabilities from large datasets with neural models is truly one of humanity’s great achievements (along with metallurgy, the printing press, electricity, digital computing, networking communications, etc.), so the cat’s out of the bag. We just have to try and steer it as best we can.
TwilightVulpine@lemmy.world 10 months ago
The technology itself may be very interesting, and it may not ultimately be the core of the problem, but because there is no attempt to address the problems that arise as its use spreads, it can’t help but harm our society. Consider how companies may forgo hiring people and use AI to replace them, which threatens not only influencers but anyone working in writing, visual arts, voice work, and consequently communication and service. Or how it can be used manipulatively to exploit people at a rate never seen before. For as many amazing uses as there may be, there are just as many terrible possibilities.
Meanwhile the average person cannot do much with it beyond using it as a toy, really.
Ultimately the real problem is the system, but as the system refuses to change we are on a collision course. There are calls to ban AI, but that is not the ideal solution, and I don’t think it can be done in any case. Yet we are not making the societal changes direly needed to embrace it and end up with a better world. Sure, it will bring massive profits to all sorts of businesses and industries, but that will most likely come at the direct expense of people’s livelihoods. Can we even trust the scientific and industrial uses when financial interests direct them such that products are intentionally sabotaged to be less functional and durable, or when it’s argued that “curing diseases is not a sufficiently profitable model”?
These days I just dread the future…
Barack_Embalmer@lemmy.world 10 months ago
Since the forces that determine policy are largely tied up with corporate profit, promoting the interests of domestic companies against those of other states, and access to resources and markets, our system will misuse AI technology whenever and wherever those imperatives conflict with the wider social good. As is the case with any technology, really.
Even if “banning” AI were possible as a protectionist measure for those in white-collar and artistic professions, I think it would ultimately be viewed unfavorably by the ruling classes, since it would concede ground to rival geopolitical blocs locked in a kind of arms race to develop the technology. My personal prediction is that people in those industries will just have to roll with the punches and accept AI encroaching into their space. This wouldn’t necessarily be a bad thing, if society made the appropriate accommodations to retrain them and/or otherwise redistribute the dividends of this technological progress. But that’s probably wishful thinking.
To me, one of the most worrying trends, as it’s gained popularity in the public consciousness over the last year or two, has been the tendency to silo technologies within large companies, and build “moats” to protect it. What was once an open and vibrant community, with strong principles of sharing models, data, code, and peer-reviewed papers full of implementation details, is increasingly tending towards closed-source productized software, with the occasional vague “technical report” that reads like an advertising spiel. IMO one of the biggest things we can lobby for is openness and transparency in the field, to guard against the natural monopolies and perverse incentives of hoarding data, technical know-how, and compute power. Not to mention the positive externality spillovers of the open-source scientific community refining and developing new ideas.
It’s similar to how knowledge of the atomic structure gave us both the ability to destroy the world and the ability to fuel it (relatively) cleanly. Knowledge itself is never a bad thing, only what we choose to do with it.
riskable@programming.dev 10 months ago
AI will follow a similar curve as computers in general: At first they required giant rooms full of expensive hardware and a team of experts to perform the most basic of functions. Over time they got smaller and cheaper and more efficient. So much so that we all carry around the equivalent of a 2000-era supercomputer in our pockets (see note below).
2-3 years ago you really did need a whole bunch of very expensive GPUs with a lot of VRAM to fine-tune a basic diffusion (image) model (e.g. by training a LoRA). Today you can do it on a desktop GPU (an Nvidia 3090 or 4090 with 24GB of VRAM… or a 4060 Ti with 16GB and some patience). You can run pretrained diffusion models at reasonable speeds (~5-10 seconds per image) on any GPU with at least 6GB of VRAM (seriously, try it! It’s fun, it only takes like 5-10 minutes to install automatic1111, and it will provide endless uncensored entertainment).
Large Language Model (LLM) training is still out of reach for desktop GPUs. GPT-3 was reportedly trained on a cluster of around 10,000 Nvidia GPUs, and if you wanted to run it locally (assuming it were available for download) you’d need the equivalent of 5 A100s (each one costs about $6700, plus you’d need an expensive server capable of hosting them all simultaneously).
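As a rough sanity check on that “5 A100s” figure, here’s the back-of-the-envelope math (assuming GPT-3’s published 175B parameter count, fp16 weights, and the 80GB A100 variant; the per-card price is the one quoted above):

```python
import math

params = 175e9          # GPT-3 parameter count (published figure)
bytes_per_param = 2     # fp16 weights, 2 bytes each
a100_vram_gb = 80       # VRAM of the 80GB A100 variant
price_per_a100 = 6700   # per-card price quoted above, in USD

weights_gb = params * bytes_per_param / 1e9   # ~350 GB just for the weights
cards_needed = math.ceil(weights_gb / a100_vram_gb)

print(cards_needed)                   # 5 cards
print(cards_needed * price_per_a100)  # $33,500 in GPUs alone
```

And that’s weights only; activations and KV cache push the real requirement higher still, which is why smaller models are the practical choice for local use.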
Having said that you can host a smaller LLM such as Llama2 on a desktop GPU and it’ll actually perform really well (as in, just a second or two between when you give it a prompt and when it gives you a response). You can also train LoRAs on a desktop GPU just like with diffusion models (e.g. train it with a data set containing your thousands of Lemmy posts so it can mimic your writing style; yes that actually works!).
Not only that, but the speed/efficiency of AI tools like LLMs and diffusion models improves by leaps and bounds every few weeks. Seriously: it’s hard to keep up! This is how much of a difference a week can make in the world of AI: I bought a 4060 Ti as an early Christmas present to myself and was generating 4 (high-quality) 768x768 images in about 20 seconds. Then Latent Consistency Models (LCM) came out and suddenly they only took 8s. Then a week later “TurboXL” models became a thing, and now I can generate 4 really great 768x768 images in 4 seconds!
At the same time there’s been improvements in training efficiency and less VRAM is required in general thanks to those advancements. We’re still in the “early days” of AI algorithms (seriously: AI stuff is extremely inefficient right now) so I wouldn’t be surprised to see efficiency gains of 1,000-100,000x in the next five years for all kinds of AI tools (language models, image models, weather models, etc).
If you combine just a 100x efficiency gain with five years of merely evolutionary hardware improvements, I wouldn’t be surprised to see something even better than ChatGPT 4.0 running locally on people’s smartphones, with custom training/learning happening in real time (to better match the user’s preferences/style).
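To make that projection concrete, here’s the kind of compounding I mean (the 100x software gain and ~30%/year hardware growth are purely illustrative assumptions, not predictions):

```python
software_gain = 100       # hypothetical algorithmic efficiency gain
hw_growth_per_year = 1.3  # assumed ~30% hardware improvement per year
years = 5

# Software and hardware gains multiply, so modest hardware growth
# compounds on top of the algorithmic speedup.
combined_gain = software_gain * hw_growth_per_year ** years
print(round(combined_gain))  # ~371x effective speedup over 5 years
```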
Note: The latest Google smartphone as of the date of this post is the Pixel 8 which is capable of ~2.4 TeraFLOPS. Even 2yo smartphones were nearing ~2 TeraFLOPS which is about what you’d get out of a supercomputer in the early 2000s: en.wikipedia.org/wiki/FLOPS (see the SVG chart in the middle of the page).
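For scale, compare that phone figure against a turn-of-the-millennium flagship supercomputer (the ASCI Red benchmark number is from memory and may be slightly off):

```python
pixel8_tflops = 2.4     # approximate figure cited above for the Pixel 8
asci_red_tflops = 2.38  # ASCI Red, the world's fastest supercomputer
                        # until late 2000, benchmarked ~2.38 TFLOPS
                        # (Linpack) after its 1999 upgrade

ratio = pixel8_tflops / asci_red_tflops
print(f"A Pixel 8 is roughly {ratio:.2f}x an ASCI Red")
```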
wikibot@lemmy.world [bot] 10 months ago
Here’s the summary for the wikipedia article you mentioned in your comment:
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases, it is a more accurate measure than measuring instructions per second.
^article^ ^|^ ^about^
Aicse@lemmy.world 10 months ago
Progress to something better, or to self-destruction. Nothing is forever. Social media as a whole may disappear at some point; it all depends on the community and humankind as a whole. The simple truth is that people want entertainment, and if AI is capable of delivering it better, it will be embraced.
I’m not saying this is good or bad; I don’t like it either. So I do what I can to support what I think is good and voice my disapproval of what I think is bad. If Instagram becomes a place for AI influencers, I’ll just ditch it. This should be the natural reaction of everyone. Unfortunately, this is where the whole “influencer” thing was heading from the very beginning of their careers: they advertised fantasies and used every piece of technology available to enhance their looks and lifestyle.
TwilightVulpine@lemmy.world 10 months ago
Seems like people are all too eager for this to destroy the field of influencers as a whole, but that is extremely unlikely. If AI influencers don’t stick, the human ones will just keep at it as usual, but if it works, then it only becomes more artificial and manipulative. Say what you will about influencers, they don’t have the capability to tailor their ads to every single user, but AI could.
Betting on the whole of social media disappearing is wishful thinking, frankly. This genie won’t go back in the bottle. The human need for connection is too strong for people to simply drop it, and any substitute would have to fight uphill against the very entrenched, massive businesses that shaped it into what it is today.
exocrinous@lemm.ee 10 months ago
Karl Marx predicted this more than a hundred years ago