Comment on It's Breathtaking How Fast AI Is Screwing Up the Education System
Natanael@infosec.pub 1 day ago
Because if you don’t know how to tell when the AI succeeded, you can’t use it.
To know when it succeeded, you must know the topic.
FourWaveforms@lemm.ee 1 day ago
I’m not sure what you’re implying. I’ve used it to solve problems that would’ve taken days to figure out on my own, and my solutions might not have been as good.
I can tell whether it succeeded because its solutions either work, or they don’t. The problems I’m using it on have that property.
shoo@lemmy.world 23 hours ago
The problem is offloading critical thinking to a black box of questionably motivated design. Did you use it to solve problems, or did you use it to find a sufficient approximation of a solution? If you can’t deduce why the given solution works, then whether your problem is actually solved is literally unknowable; you’re just putting faith in an algorithm.
There are also political reasons we’ll never get luxury gay space communism from it. General AI is the wet dream of every authoritarian: an unverifiable, omnipresent, first-line source of truth that will shift the narrative to whatever you need.
The brain is a muscle and critical thinking is trained through practice; not thinking will never be a shortcut for thinking.
Natanael@infosec.pub 23 hours ago
That says more about you.
There are a lot of cases where you cannot know if it worked unless you have expertise.
FourWaveforms@lemm.ee 3 hours ago
This still seems too simplistic. You say you can’t know whether it’s right unless you know the topic, but that’s not a binary condition. I don’t think anyone “knows” a complex topic to its absolute limits. That would mean they had learned everything about it that could be learned, and there would be no possibility of there being anything else in the universe for them to learn about it.
An LLM can help fill in gaps, and you can use what you already know to vet its answer, just as you would use the same knowledge to vet your own theories. You can verify its work the same way you’d verify your own. The value is that it may add information or some part of a solution that you wouldn’t have. The risk is that it misunderstands something, but that risk exists for your own theories as well.
This approach requires skepticism. The risk would be that the person using it isn’t sufficiently skeptical, which is the same problem as relying too much on their own opinions or those of another person.
For example, someone studying statistics for the first time would want to vet any non-trivial answer against the textbook or the professor rather than assuming the answer is correct. Whether the answer comes from themselves, the student in the next row, or an LLM doesn’t matter.