Comment on Researchers surprised to find less-educated areas adopting AI writing tools faster
jrs100000@lemmy.world 1 day ago
An AI can produce content that is higher quality than the prompt it is given, particularly for formulaic tasks. I do agree that it would be nice if everyone were more educated, but a large portion of the population will never get there. If simply denying them AI was going to result in a blossoming of self-education, it would have already happened by now.
stickly@lemmy.world 1 day ago
It can’t ever accurately convey more information than you give it; it just guesses details to fill in. If you’re doing something formulaic, it guesses fairly accurately. But if you tell it “write a book report on Romeo and Juliet”, it can only fill in generic details about what people generally say about the play; it sounds genuine but can’t extract your thoughts.
Not to get too deep into the politics of it, but there’s no reason most people couldn’t get there if we invested in their core education. People just work with what they’re given; it’s not a personal failure if they weren’t taught these skills or don’t have access to ways to improve them.
And not everyone has to be hyper-literate; if daily life can be navigated at a 6th-grade level, that’s perfectly fine. Getting there isn’t an insurmountable task, especially if you flex those cognitive muscles more. The main issue is that current AI doesn’t improve these skills; it atrophies them.
It doesn’t push back, use logical reasoning, or seek context. It’s specifically made to be quick and easy, the same as fast food. We’ll be facing the intellectual equivalent of the diabetes epidemic if it sees widespread use.
jrs100000@lemmy.world 1 day ago
It sounds like you’re talking about use in education, then, which is a different issue altogether.
You can and should set your AI to push back against poor reasoning and unsupported claims. They aren’t very smart, but they will try.
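For anyone curious, “setting it to push back” is usually just a system prompt. A minimal sketch using the OpenAI Python client; the model name and the prompt wording are placeholders I made up, not a recommendation:

```python
# Minimal sketch: a system prompt that asks the model to challenge weak claims.
# Model name and prompt wording are placeholders, not a specific recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works here
    messages=[
        {
            "role": "system",
            "content": (
                "Challenge poor reasoning and unsupported claims. "
                "Ask for evidence before agreeing, and say when you are unsure."
            ),
        },
        {"role": "user", "content": "Everyone knows AI makes people smarter."},
    ],
)

print(response.choices[0].message.content)
```

Same idea works in the web UI via custom instructions; it just nudges the model, it doesn’t make it smart.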
stickly@lemmy.world 1 day ago
I mean, it’s the same use; it’s all literacy. It’s about how much you depend on it instead of using your own brain. It might be a mindless email today, but in 20 years the next generation won’t be able to read the news without running it through an LLM. They’ll have no choice but to accept whatever it says, because they never developed the skills to challenge it.
The models can never be totally fixed; the underlying technology isn’t built for that. It doesn’t have “knowledge” or “reasoning” at all. It approximates them by weighing your input against a model of how those words connect together and choosing a slightly random extension of them. Depending on the random seed and sampling settings, it might even give you a different answer on each run.
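To make that concrete, here’s a toy sketch of the sampling step with made-up next-token probabilities (not a real model); the same input can produce different continuations on different runs:

```python
# Toy illustration: the model only yields a probability distribution over
# possible next tokens, and the output is a (slightly random) sample from it.
import random

# Hypothetical probabilities for the next word after "The play is about ..."
next_token_probs = {
    "love": 0.45,
    "fate": 0.30,
    "family": 0.15,
    "Verona": 0.10,
}

def sample_next_token(probs, temperature=1.0):
    """Re-weight the distribution by temperature, then sample from it."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Two runs with identical input can pick different words.
print(sample_next_token(next_token_probs, temperature=0.8))
print(sample_next_token(next_token_probs, temperature=0.8))
```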
jrs100000@lemmy.world 1 day ago
Is that any worse than people getting their worldview from a talking head on 24-hour news, five-second video clips on their phone, or a self-curated selection of rage-bait propaganda online? The mental decline of humanity is perpetual and overstated.