ClamDrinker
@ClamDrinker@lemmy.world
- Comment on We cater any event! 1 week ago:
“You know you don’t need to bring a dead horse every time you want catering right, Jim?”
- Comment on AI-created “virtual influencers” are stealing business from humans 5 months ago:
No worries my fellow unethical dishonest internet-using homie. It’s not like nuance exists and things can be both good and bad. Everything is black and white, after all.
- Comment on AI-created “virtual influencers” are stealing business from humans 5 months ago:
Ah yes. Like that damn internet and those cursed devices people use to access it. Anyone using those is inherently not honest or ethical.
- Comment on GTA 6 is likely to skip PC again and only launching on current gen consoles 6 months ago:
PC is typically easier to develop for because it lacks the strict (and frequently silly) platform requirements that consoles impose. Those requirements make game development more expensive and slower than it needs to be compared to just targeting PC. If the consoles' barrier to entry were reduced to that of PC, you'd see a lot more games on them from smaller developers.
With current gen consoles, pretty much every game starts as a PC game already, because that's where the development and testing happens.
Rockstar is the exception here in that they are intentionally skipping PC - something that should be well within reach of a company their size, and something they are clearly capable of doing.
If another AAA game comes out with only PC support I'll be right there with you - but most game developers with the capability release for all major platforms now. But not the small console indie studio called Rockstar Games, it seems.
- Comment on Mozilla Senior Director of Content explained why Mozilla has taken an interest in the fediverse and Mastodon 7 months ago:
It’s because there’s nothing wrong with the current version. If the Lemmy devs were to sabotage the Lemmy software, you’d be surprised how quickly that backfires once it pisses off all the instances and their owners. Instances will simply refuse to upgrade. And like most things, eventually some fork would win the race to become the dominant one, and the current Lemmy devs would essentially be disowned. Different forks also don’t necessarily mean API-breaking changes, so different forks would have no issue communicating (at least for a while).
- Comment on I created an image using AI. Not sure what this style is called, an I want to know the type of this drawing 8 months ago:
If you use Stable Diffusion through a web UI (the feature might exist for other models as well), you might have access to a feature called ‘interrogate’, which lets you find an approximate prompt for an image. It can be useful if you need it for future images.
It can also be done online: huggingface.co/spaces/…/CLIP-Interrogator
- Comment on This new data poisoning tool lets artists fight back against generative AI 8 months ago:
LLM is the wrong term. That’s Large Language Model. These are generative image models / text-to-image models.
Truthfully though, while the poisoning will be there when the image is trained on, the model won’t ‘notice’ it unless you distort the image significantly (enough for humans to notice as well). Otherwise it won’t make much of a difference, because these models are often trained on a compressed and downsized version of the image (in what’s called latent space).
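To get a rough sense of how much gets thrown away before training even sees the pixels: a minimal arithmetic sketch, assuming Stable Diffusion 1.x's VAE (8x spatial downsampling, 4 latent channels - these numbers are specific to that model family, not to image generators in general).

```python
# Compare raw pixel data to the latent representation the model
# actually trains on (assuming SD 1.x: 8x downsampling, 4 channels).
width, height = 512, 512

pixel_values = width * height * 3                  # RGB image: 786,432 values
latent_values = (width // 8) * (height // 8) * 4   # 64x64x4 latent: 16,384 values

ratio = pixel_values / latent_values
print(f"{pixel_values} pixel values -> {latent_values} latent values "
      f"({ratio:.0f}x compression)")
```

With the data squeezed down ~48x by a lossy encoder, fine per-pixel perturbations are exactly the kind of detail that tends to get averaged away.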
- Comment on How do you call someone born in the US besides "American"? 8 months ago:
Halfway-North American
- Comment on Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi... 11 months ago:
That’s an eventual goal, which would be an artificial general intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist; it’s just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data, their text prediction also started giving somewhat believable and sometimes factual answers (mixed in with plenty of believable bullshit). Other kinds of data require different training data, different models, and different finetuning, hence why it takes time.
It’s highly likely that a company of OpenAI’s size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) already has multiple AI models for different kinds of data in research, training, or finetuning.
But even with all the individual pieces of an AGI existing, the technology to cross-reference the different models doesn’t exist yet. Because they are different models, they store and express their data in different ways. And it’s not like training data exists for that either. And unlike physical beings like humans, it doesn’t have any way to “interact” and “experiment” with the data it knows in order to form concrete connections backed up by factual evidence.
- Comment on Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi... 11 months ago:
As long as humans are still the driving force behind what content gets spread around (and thus far more represented in the training data), it shouldn’t matter even if some of that content is AI generated. But that’s quite definitely not the case here.