Disillusionist
@Disillusionist@piefed.world
- Submitted 2 weeks ago to technology@lemmy.world | 1 comment
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
This is a subject that people (understandably) have strong opinions on. Debates get heated sometimes and yes, some individuals go on the attack. I never post anything with the expectation that no one is going to have bad feelings about it and everyone is just going to hold hands and sing a song.
There are hard conversations that need to be had regardless. All sides of an argument need to be open enough to have them and not just retreat to their own cushy little safe zones. This is the Fediverse, FFS.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
I have never once said that *AI is bad*. Literally everything I’ve argued pertains to the ethics and application of AI. It’s reductive to dismiss every argument critical of *how* AI is being implemented as “AI bad”.
It’s not even about it being disruptive, though I do think discussions about that are absolutely warranted. Experts have pointed to potentially catastrophic “disruptions” if AI isn’t handled responsibly, and we are currently anything but responsible in our handling of it. It’s unregulated and running rampant, claiming to be all things for all people and leaving a mass of problems in its wake.
If a specific individual or company is committed to behaving ethically, I’m not condemning them. A major point to understand is that those small, ethical actors are the extreme minority. The major players, like those you mentioned, are titans. The problems they create are real.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
Not all problems may be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
He’s jumping ship because it’s destroying his ability to eke out a living. The problem isn’t a small one, and what’s happening to him isn’t an isolated case.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
[For instance](https://vger.to/programming.dev/post/43810907)
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
I agree with you that there can be value in “showing people that views outside of their likeminded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
From what I’ve heard, the influx of AI-generated data is one of the reasons genuine human data is becoming increasingly sought after. AI training on AI output has the potential to become a sort of digital inbreeding, degrading originality and the other ineffable human qualities that AI still hasn’t quite mastered.
I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
Is the only imaginable system for AI to exist one in which every website operator, musician, artist, writer, etc. has no say in how their data is used? Is it possible to have a more consensual arrangement?
As for the question of ethics, there is a lot of ground to cover, and a lot of it is being discussed. I’ll basically reiterate what I said about data rights: I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source and claiming the whole of human experience for its own training purposes. I find that unethical.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything; discussion is one of the reasons I post. It’s really unproductive to make blanket statements that try to end discussion before it starts.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
I think you’d probably have to hide out under a rock to miss out on AI at this point. Not sure even that’s enough. Good luck finding a regular rock and not a smart one these days.
- Comment on A Project to Poison LLM Crawlers 2 weeks ago:
AI companies could start by, I don’t know, maybe asking for permission before scraping a website’s data for training? Or maybe by trying to behave more ethically in general? Perhaps then they wouldn’t risk people poisoning data they clearly never agreed to have used for training.
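For what it’s worth, the mechanics of asking aren’t the hard part. Here’s a minimal sketch, assuming a crawler that simply checks a site’s robots.txt before fetching anything for training; the “ExampleTrainingBot” user-agent and the example.com URLs are hypothetical placeholders, not any real crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity; a real training crawler would publish
# its own documented user-agent string.
USER_AGENT = "ExampleTrainingBot"

def may_fetch_for_training(page_url: str, robots_url: str) -> bool:
    """Check the site's robots.txt before fetching a page for training data."""
    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # download and parse the site's robots.txt
    return rp.can_fetch(USER_AGENT, page_url)

# Placeholder URLs for illustration only.
if may_fetch_for_training("https://example.com/some/article",
                          "https://example.com/robots.txt"):
    print("robots.txt allows this user-agent; fetching may proceed")
else:
    print("robots.txt disallows this user-agent; skip the page")
```

The plumbing is trivial; the willingness to respect the answer is the part that’s missing.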
- Submitted 2 weeks ago to technology@lemmy.world | 93 comments
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
If the problem you have is specifics, I could flip your tactic around and ask you to point to a specific “kind of AI [that] is being discussed or how it is being used” that supports your implied stance that we shouldn’t be discussing this at all. But that would be playing games with you, the way you’re playing games with us.
Your engagement on this issue is still clearly in bad faith, so instead I will point out that the burden of proof you’re demanding is weak within the context of this discussion. It reads like a common troll play, an attempt to lead a mark down a rabbit hole. It shouldn’t be too difficult for you to do an internet search or draw on your own experience, especially given how intensely passionate about this issue you are.
Understand that I don’t play these games. This is me leaving you to your checkerboard. Take care.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
A very nuanced and level-headed response, thank you.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
I do agree with your point that we need to educate people on how to use AI in responsible ways. You also mention the cautious approach taken by your kids’ school, which sounds commendable.
As far as the idea of preparing kids for an AI future in which employers might fire AI-illiterate staff, that sounds to me more like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if AI education had been approached that way. It’s very different from the current move to include it broadly in virtually all classrooms without consistent guidelines.
(I believe I read the same post about the CEO, BTW. It sounds like the CEO’s claim was likely AI-washing, misrepresenting the actual reason for firing them.)
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom.
One of the important points is that there are no consistent standards or approaches toward AI in the classroom. There are almost as many variations as there are classrooms. It isn’t reasonable to expect a comprehensive list of all of them, and that’s neither the point nor within the scope of the discussion.
I welcome specific and informed counterarguments to anything presented in this discussion; I believe many of us would. I frankly find it ironic how lacking in “nuance or level-headed discussion” your own comment seems.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
I appreciated this comment; I think you made some excellent points. There is absolutely a broader, complex, and longstanding problem, and I feel that makes it even more crucial to consider seriously what we introduce into that vulnerable situation. A bad fix is often worse than no fix at all.
> AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system.
A crutch is a very simple and straightforward piece of tech. It can even just be a stick. What I’m concerned about is that AI is no stick, it’s the most complex technology we’ve yet developed. I’m reminded of that saying “the devil is in the details”. There are a great many details in AI.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.
- Comment on Brussels plots open source push to pry Europe off Big Tech 2 weeks ago:
They need to stick the landing. America will threaten and bully. I’ve also heard some are afraid of the cost and complexity of doing something like this. Hopefully they do realize the necessity of it and stay the course despite all of that.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
I share this concern.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
One of Big Tech’s pitches about AI is the “great equalizer” idea. It reminds me of their pitch about social media being the “great democratizer”. Now we’ve got algorithms, disinformation, deepfakes, and people telling machines to think for them and potentially also their kids.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, you’re essentially telling that student they’re supposed to use it. You can go out of your way to emphasize that they’re expected to use it the “right way” (a strange thing to try to sell students on, since there aren’t consistent standards on how it should be used), but we’ve already seen that students (and adults) often take the quickest route to the goal, which tends to mean letting the AI do the heavy lifting.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
Thank you. The American sources I referenced here seemed best suited to the topic, largely because of how informative they were; I suspect that’s partly due to the dominance of American Big Tech in classrooms. But if anyone has good information from another country (or from America) that differs from mine, I’m more than happy to hear it.
- Comment on A generation taught not to think: AI in the classroom 2 weeks ago:
Great to get the perspective of someone who was in education.
> Still, those students who WANT to learn will not be held back by AI.
I think that’s a valid point, but I’m afraid the desire to learn has a harder time winning that battle when what you’re fighting against is the norm, and when the way you’re being taught in the classroom looks like what everyone else is doing. Making it harder to choose to learn the “old hard way” still seems likely to result in fewer students making that choice.
- Submitted 2 weeks ago to technology@lemmy.world | 169 comments
- Comment on Self-hosting in 2025 isn't about privacy anymore - it's about building resistance infrastructure 2 weeks ago:
Thank you for kicking this hornet’s nest. There is a lot of great info and enthusiasm here, all of which is sorely needed.
We have massive and widespread attention paid to every cause under the sun by social and traditional media, with movements and protests (deservedly) filling the streets. Yet this issue, which is as central and crucial to the notion and practice of freedom itself as any rights currently being fought for (it intersects with each of them in very clear and direct ways), continues to be sidelined and given the tinfoil hat treatment.
Discussions around disinformation, political extremism, and even mental health cannot be had adequately without addressing our technical and digital context, which has been hijacked by bad actors: robber barons selling us ease, convenience, and promises of bright, shiny, utopian futures while conning us out of our liberty.
With society in widespread and rapid decline, and with the dramatic rise and spread of technologies like AI, there has never been a more urgent need to act collectively against the invasive practices violating our most fundamental human rights.
Those of you whose eyes are open to this crisis are needed. Your voices are too absent from the discussions surrounding the many problems and challenges we face at this critical moment. Public awareness is needed for any real hope of change to occur.
As many of you have pointed out, the most immediate step people need to take is disengaging from the products and services that are surveilling, exploiting, and manipulating us. Deprive them of both your engagement and your data.
Keep going, keep resisting, do the small things you can do. As the saying goes, small things add up over time. Keep going.
- Comment on 'Worst in Show' CES products include AI refrigerators, AI companions and AI doorbells 3 weeks ago:
Hilarious. I bite my tongue so often around these kinds of situations it has permanent tooth imprints in it. But you’re right, someone needs to figure out how to get them to stop tolerating this horrific nonsense.