utopiah
@utopiah@lemmy.world
- Comment on Microsoft announces new Windows changes in response to the EU's (DMA) Digital Markets Act for EEA users, including Edge not prompting users to set it as the default unless opened 1 day ago:
Not sure what NLNet is going to do about software lol, I believe you mean something different.
That NLNet, nlnet.nl, which funds FLOSS projects.
There are also the BlueHats in France, showing how the administration is using AND consequently funding FLOSS (code.gouv.fr/en/bluehats/) by paying for sysadmins, feature development, maintenance, etc.
- Comment on Microsoft announces new Windows changes in response to the EU's (DMA) Digital Markets Act for EEA users, including Edge not prompting users to set it as the default unless opened 1 day ago:
Don’t underestimate management’s desire to be absolutely indistinguishable from their competition.
They read the Harvard Business Review, learn new terms they don’t understand, make a PowerPoint out of it and voila, they are “innovative” like everyone else.
If HBR puts “AI” on its cover, you can be damn sure all those innovators are going to put AI wherever they can.
- Comment on Microsoft announces new Windows changes in response to the EU's (DMA) Digital Markets Act for EEA users, including Edge not prompting users to set it as the default unless opened 1 day ago:
I would love to, but we still use Windows-specific software
If I had 1 cent every time I read that… and pooled those cents together… and then paid software developers to build that missing software for other OSes like Linux… then we’d gradually see fewer of those comments.
It’s as if the isolation were the business model: proprietary software ensuring that alternatives do not exist because users do not bother to get together and unstick themselves from glaringly dangerous dependencies (security-wise, but probably even financially).
Hopefully, initiatives like NLNet are precisely trying to alleviate such challenges. Until then, compatibility layers like Proton are showing the way with arguably some of the most complex and performance-demanding software, namely games.
- Comment on Samsung teams up with Glance to use your face in AI-generated lock screen ads 1 day ago:
Minority Report, the bad parts.
- Comment on YSK that after leaving power, Margaret Thatcher became a lobbyist for tobacco companies 2 days ago:
2, 3 and 4 also are about politics.
- Comment on YSK that after leaving power, Margaret Thatcher became a lobbyist for tobacco companies 2 days ago:
I for one knew it, and yet I enjoy, in a very tragic way, discovering that she was actually even worse than I thought.
- Comment on AI Training Slop 4 days ago:
I’m playing games at home. I’m running models at home (I linked to it in other similar answers) for benchmarking.
My point is that models are just like anything else I bring into my home: I try to only buy products that are manufactured properly. Someone else in this thread asked me about child labor in electronics, and IMHO that was actually a good analogy. You mention buying a microwave here, and that’s another good example.
Yes, if we do want to establish feedback in the supply chain, we must know how everything we rely on is made. It’s that simple.
There are already quite a few initiatives for that, e.g. Fair Trade Certification or ISO 14001 for coffee, Fair Materials for electronics, etc.
The point being that there are already mechanisms for feedback in other fields, and in ML there are already model cards with a co2_eq_emissions field, so why couldn’t feedback also work in this field?
- Comment on Meta shareholders overwhelmingly rejected a proposal to explore adding Bitcoin to the company's treasury, with less than 1% voting in favor of the measure 4 days ago:
The purpose of a system is what it does.
Right, reminds me of the hacker mindset, or more recently the workshop I did on “Future wheel foresight” with Karin Hannes. One can try their best to predict how an invention might be used, but in practice it goes beyond what its inventors want it to be; it is truly about what “it” does through actual usage.
- Comment on Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content 4 days ago:
very very little actual logic
To be precise, 0.
- Comment on Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content 4 days ago:
The business model IS dodging any kind of responsibility so… yeah, I think they’ll pass.
- Comment on Meta shareholders overwhelmingly rejected a proposal to explore adding Bitcoin to the company's treasury, with less than 1% voting in favor of the measure 4 days ago:
I agree and in fact I feel the same with AI.
Fundamentally, cryptocurrency is fascinating. It is mathematically sound, just like cryptography in general (computational complexity, one-way functions, etc.), and it had the theoretical potential to change existing political and economic structures. Unfortunately (arguably) the very foundation it is based on, namely mining for greed, brought in a different community who inexorably modified not the technology itself but its usages. What was initially a potential infrastructure for the exchange of value became a way to speculate and to buy and sell banned goods and services (ransomware, scam payments, etc.).
AI is also fascinating as a research field. It asks deep questions with complex answers. Centuries of research on it led not just to interesting philosophical questions, like what it is like to think, to be human, but also to mathematics used in all walks of life, like in the logistics that got your parcel delivered this morning. Yet… gradually the field, or at least its commercialization, got captured by venture capitalists, entrepreneurs, and regulators whose main interest was greed. This in turn changed what was until then open into something closed, something small into something requiring gigantic infrastructure, capturing resources hitherto used for farming, polluting due to the lack of proper permits for temporary electricity sources, etc. The pinnacle right now being regulation to ban regulation on AI in the US.
So… yes, technology itself can be fascinating, useful, even important, and yet how we collectively, as a society, decide to use it remains what matters: the actual impact of an idea rather than its idealization.
- Comment on If AI was going to advance exponentially I'd of expected it to take off by now. 4 days ago:
Moore’s law is kinda still in effect, depending on your definition of Moore’s law.
Sounds like the goal post is moving faster than the number of transistors in an integrated circuit.
- Comment on If AI was going to advance exponentially I'd of expected it to take off by now. 4 days ago:
LOL… you did make me chuckle.
Aren’t we 18 months away from developers being replaced by AI… and haven’t we been for a few years now?
Of course “AI”, even loosely defined, has progressed a lot and it is genuinely impressive (even though the actual use case for most of the hype, i.e. LLMs and GenAI, is mostly lazier search, more efficient personalized spam & scam text, or impersonation), but exponential growth is not sustainable. It’s a marketing term to keep fueling the hype.
That’s despite so many resources, namely R&D and data centers, being poured in… and yet there is no “GPT5” or anything that most people use on a daily basis for anything “productive” except unreliable summarization or STT (both of which have had plenty of tools for decades).
So… yeah, it’s a slow take off, as expected. shrug
- Comment on AI Training Slop 4 days ago:
That’s been addressed a few times already, so I’ll let you check the history if you are actually curious.
- Comment on AI Training Slop 4 days ago:
No one is saying training costs are negligible.
It’s literally what the person I initially asked said, though: they said they don’t know and don’t care.
- Comment on AI Training Slop 5 days ago:
Yes indeed, yet my point is that we are training models TODAY, so if we keep on not caring, then we just postpone the same problem, cf lemmy.world/post/30563785/17400518
Basically yes, use trained models today if you want, but if you don’t set a trend then, despite the undeniable ecological impact, there will be no corrective measure.
It’s not enough to just say “Oh well, it used a ton of energy. We MUST use it now.”
Anyway, my overall point was that training takes a ton of energy. I’m not asking you or OP or anyone else NOT to use such models. I’m solely pointing out that doing so without understanding the process that led to such models, including but not limited to the energy for training, is naive at best.
- Comment on AI Training Slop 5 days ago:
Indeed, the argument is mostly about future usage and future models. The overall point being that assuming training costs are negligible is either naive or shows that one does not care much for the environment.
From a business perspective, if I’m Microsoft or OpenAI and I see a trend toward prioritizing models that minimize training costs, or even that users are avoiding costly-to-train models, I will adapt to it. On the other hand, if I see nobody cares about that, or that even building more data centers drives the value up, I will build bigger models regardless of usage or energy cost.
The point is that training is expensive, and pointing only to inference is like the Titanic going full speed ahead toward the iceberg while remarking how small it looks. It is not small.
- Comment on AI Training Slop 5 days ago:
Right, my point is exactly that though: OP, by having just downloaded it, might not realize the training costs. They might be low, but on average they are quite high, at least relative to fine-tuning or inference. So my question was precisely to highlight that running locally while not knowing the training cost is naive, ecologically speaking. They did clarify though that they do not care, so that’s coherent for them. I’m insisting on that point because maybe others would think “Oh… I can run a model locally, then it’s not <<evil>>”, so I’m trying to clarify (and please let me know if I’m wrong) that it is good for privacy, but the upfront training costs are not insignificant and might lead some people to prefer NOT relying on very costly-to-train models and to prefer others, or even a totally different solution.
- Comment on AI Training Slop 5 days ago:
Results? I have no idea what you are talking about. I thought we were discussing the training cost (my initial question) and that the truckload was an analogy to argue that the impact from that upfront cost is spread among users.
- Comment on PeerTube crowdfunding to develop mobile app 5 days ago:
Well, even a PWA still has to be developed and maintained.
- Comment on AI Training Slop 5 days ago:
Great point, so are you saying there is a certain threshold above which training is energetically useful but under which it is not? E.g. if the training of a large model serves 1 person, it is not sustainable, but if 1 million people use it (assuming it’s used productively, not for spam or scam) then it is fine?
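That threshold idea can be sketched with back-of-envelope arithmetic (all numbers here are made-up assumptions for illustration, not measurements of any real model):

```python
# Amortizing a fixed, one-off training cost over total usage.
TRAINING_KWH = 1_000_000   # assumed energy to train the model once
INFERENCE_KWH = 0.01       # assumed energy per query at inference time

def kwh_per_query(users: int, queries_per_user: int) -> float:
    """Energy attributable to one query once training is amortized
    across every query the model will ever serve."""
    total_queries = users * queries_per_user
    return TRAINING_KWH / total_queries + INFERENCE_KWH

# One user: the training share dominates by far.
print(kwh_per_query(1, 100))
# A million users: the per-query cost approaches pure inference cost.
print(kwh_per_query(1_000_000, 100))
```

With these made-up numbers, a single user bears ~10,000 kWh per query, while a million users each bear roughly double the bare inference cost, which is exactly the threshold effect described above.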
- Comment on AI Training Slop 5 days ago:
I’ll assume you didn’t misread my question on purpose; I didn’t ask about inference, I asked about training.
- Comment on AI Training Slop 5 days ago:
I specifically asked about the training part, not the fine-tuning, but thanks for clarifying.
- Comment on AI Training Slop 5 days ago:
I see. Well, I checked your post history because I thought “Heck, they sound smart, maybe I’m the problem”, and my conclusion, based on the flowery language you often use with others, is that you are clearly provoking on purpose.
Unfortunately I don’t have the luxury of time to argue this way so I’ll just block you, this way we won’t have to interact in the future.
Take care and may we never speak again.
- Comment on AI Training Slop 5 days ago:
You know what, again maybe I’m misreading you.
If you do want to help, do try with me to answer the question. I did give a path to the person initially, mentioning the Model Card. Maybe you are aware of that, but just in case: a Model Card is basic metadata about a model, cf huggingface.co/docs/hub/model-cards
Some of them do mention the CO2 equivalent, see huggingface.co/docs/hub/model-cards-co2 so here I don’t know which model they used, but maybe finding a way to get the CO2 equivalent for the most popular models, e.g. DeepSeek, and some equivalences (they mentioned not driving a car) would help us all grasp at least some of the impact.
What do you think?
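For what it’s worth, the lookup is mechanical once a card declares the field: model cards are a README with YAML front matter, and `co2_eq_emissions` lives there. A minimal sketch (the card text below is a made-up example following the documented schema, not a real model’s card):

```python
import re

# Hypothetical model card, following the YAML front-matter layout
# documented at huggingface.co/docs/hub/model-cards-co2.
CARD = """---
license: apache-2.0
co2_eq_emissions:
  emissions: 1200
---
# My model
"""

def co2_from_card(text: str):
    """Pull co2_eq_emissions.emissions out of a card's front matter,
    or return None when the card does not declare it."""
    m = re.search(r"co2_eq_emissions:\s*\n\s+emissions:\s*([\d.]+)", text)
    return float(m.group(1)) if m else None

print(co2_from_card(CARD))
```

In practice one would fetch the real README for a model and compare the declared emissions across popular models; note the naive regex here only handles the simple nested form shown, and many cards simply omit the field.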
- Comment on AI Training Slop 5 days ago:
Please, do whatever you want to protect the environment you cherish. My point though was literally asking somebody who did point out a better way to do it whether they were aware of all the costs of their solution. If you missed it, their answer was clear: they do not know and they do not care. I was not suggesting activism, solely genuinely wondering if they actually understood the impact of the alternative they showcased. Honestly, just do whatever you can.
- Comment on AI Training Slop 5 days ago:
Apologies for my sarcastic answer. I did actually search for that a little while ago, so I did assume most people know, but that’s incorrect. The most useful tool I know of would probably be www.aspi.org.au/…/mapping-chinas-tech-giants/
Let me know if you are looking for something more precise. I know of a few other tools which help better understand who builds what and how, for electronics but for other products too.
- Comment on AI Training Slop 5 days ago:
FWIW the person I asked did reply, they don’t care: lemmy.world/post/30563785/17397024
Hope it helps.
- Comment on AI Training Slop 5 days ago:
Strawman much, or just learning about logistics and sourcing in our globalized supply chain?
- Comment on AI Training Slop 5 days ago:
Feel free to explain the down votes.
If it wasn’t clear, my point was that self-hosting mostly addresses privacy for the user, but that is only one dimension. It does not necessarily address the ecological impact. I was honestly hoping this community would care more.