Comment on 60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week
paultimate14@lemmy.world 4 days ago
Is there enough value in AI to justify burning down the planet for it?
Dindonmasker@sh.itjust.works 4 days ago
Telling people to stop doing something because it burns the planet doesn't really change their minds, unfortunately. The best you can do is put the numbers in front of them so they can't avoid the truth. But that only works on people who care.
RageAgainstTheRich@lemmy.world 4 days ago
That is sadly the truth with many things. People just don't care unless something affects them personally. And even then, it depends on whether it hits hard enough 💔.
RageAgainstTheRich@lemmy.world 4 days ago
I actually don't know that much about LLMs. I do know they require a ton of energy to train. But once a model is trained, the smaller ones especially don't require that much to run, right? I once tried running one locally to see how much it took: my GPU maxed out for a few seconds, the LLM spat out text, and it was done. When I play games, the GPU maxes out for hours.
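For a rough sense of scale, here is a back-of-envelope comparison. All the numbers below are illustrative assumptions (a ~300 W GPU at full load, a few seconds per reply), not measurements:

```python
# Back-of-envelope energy comparison: one local LLM reply vs. a gaming session.
# All numbers are illustrative assumptions, not measurements.
GPU_POWER_WATTS = 300                        # assumed full-load draw of a consumer GPU

llm_seconds = 5                              # GPU maxed out for a few seconds per reply
llm_wh = GPU_POWER_WATTS * llm_seconds / 3600
print(f"One LLM reply: {llm_wh:.2f} Wh")     # ~0.42 Wh

gaming_hours = 2                             # GPU maxed out for a whole session
gaming_wh = GPU_POWER_WATTS * gaming_hours
print(f"2 h of gaming: {gaming_wh:.0f} Wh")  # 600 Wh

print(f"One session ~= {gaming_wh / llm_wh:.0f} replies")  # ~1440 replies
```

The training run is a different story, of course; this only compares inference on a local machine.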
Again, I don't know all that much about them; I've only used one a few times over the years to break big tasks into smaller ones for my AuDHD when I'm very overwhelmed, and it was kinda nice for that.
The image generation stuff is pretty bad though, from what I've read. Plus it steals people's art. Fuck that shit.
Please do tell me if I've got this wrong, because I don't want to contribute to a bunch of bad shit ruining the climate.
ExLisper@lemmy.curiana.net 4 days ago
RageAgainstTheRich@lemmy.world 4 days ago
Wouldn’t it then help to run the smaller ones locally instead of using the big ones like ChatGPT?
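For reference, running a small model locally can be this simple; a minimal sketch assuming the Hugging Face transformers library is installed, with "distilgpt2" standing in as just one example of a small model:

```python
# Minimal local-inference sketch using Hugging Face transformers.
# "distilgpt2" is just an example of a small model; swap in any other.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Break this task into three smaller steps: clean the kitchen.",
    max_new_tokens=60,   # keep the generation (and the GPU time) short
    do_sample=False,     # deterministic output for a quick test
)
print(result[0]["generated_text"])
```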
I read that one called DeepSeek or something in China took a lot less to train and is just as strong. Is that true?
What do people usually use LLMs for? I know they suck at most of the things people use them for, like coding. But what do people use them for that justifies all the hype?
Again, please don't think I'm trying to justify it. I just don't know much about them.
ExLisper@lemmy.curiana.net 4 days ago
Small models can only handle a limited set of tasks. To cover a lot of different tasks you would need a lot of small models. What DeepSeek did was build a lot of small models, each acting as an expert on one topic (more or less). It's more energy-efficient to train but not necessarily to run, since you have to chain a lot of small models together to get good results.
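Loosely, that design is called a "mixture of experts": a router scores the experts for each input and only the top few actually run. A toy sketch of the routing idea (all sizes and weights made up; this is not DeepSeek's actual architecture):

```python
# Toy mixture-of-experts routing: a gate scores every expert, but only the
# top-k experts actually compute anything for a given input.
# All sizes and weights are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, top_k = 8, 16, 16, 2

gate_w = rng.normal(size=(d_in, n_experts))          # router weights
experts = rng.normal(size=(n_experts, d_in, d_out))  # one weight matrix per expert

def moe_forward(x):
    scores = x @ gate_w                    # score every expert (cheap)
    chosen = np.argsort(scores)[-top_k:]   # keep only the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()               # softmax over the chosen few
    # Only the chosen experts run -- that's where the compute saving comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

x = rng.normal(size=d_in)
print(moe_forward(x).shape)  # (16,): full-size output, but only 2 of 8 experts ran
```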
What do people use LLMs for? Asking questions they would normally ask Google. Google sucks now, so it's easier to ask ChatGPT. You can also use them for simple tasks like checking text for grammar errors, writing emails, and so on.