EndOfLine@lemmy.world 1 year ago
At the core of learning is students understanding the content being taught. Using tools and shortcuts doesn’t necessarily negate that understanding.
Using ChatGPT is no different, from an academic evaluation standpoint, from having somebody else do an assignment.
Teachers should already be incorporating some sort of verbal Q&A session with students to see whether their demonstrated in-person comprehension matches their written comprehension. Though in my experience, this very rarely happens.
dojan@lemmy.world 1 year ago
That assumes a person just prompts for an essay and leaves it at that, which, to be fair, is likely the issue. The thing is, the genie is out of the bottle and it’s not going back in. I think at this point it’ll be better to adjust the way we teach children things, and also to get to know the tools they’ll be using.
I’ve been using GPT and LLAMA to assist me in writing emails and reports. I provide a foundation, and working with the LLMs I get a good cohesive output. It saves me time, allowing me to work on other things, and whoever needs to read the report or email gets a well-written document/letter that doesn’t meander in the same way I normally do.
I essentially write a draft, have the LLMs rewrite the whole thing, and then there’s usually some back-and-forth to get the tone and verbiage right, as well as to trim away whatever nonsense the models make up that wasn’t in my original text. In effect, I act as an editor. Writing is a skill I don’t really possess, but now there are tools to make up for that.
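For the curious, a draft-then-edit loop like this can be sketched in a few lines of Python. This is purely illustrative, not what I actually run: `build_revision_request` is a hypothetical helper, and the commented-out call assumes the `openai` client library with an illustrative model name.

```python
# Hypothetical sketch of the draft -> LLM -> human-edit workflow.
# build_revision_request is an illustrative helper, not part of any library.

def build_revision_request(draft: str, tone: str = "professional but plain") -> list[dict]:
    """Build a chat-style message list asking an LLM to rewrite a draft.

    The system prompt pins the model to the author's own content, so the
    human editor mostly trims additions instead of restoring cuts.
    """
    return [
        {
            "role": "system",
            "content": (
                f"Rewrite the user's draft in a {tone} tone. "
                "Keep every factual claim from the draft and do not "
                "invent new ones; the author will edit your output."
            ),
        },
        {"role": "user", "content": draft},
    ]

# The actual call would look something like this (requires an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4o-mini",  # illustrative model name
#       messages=build_revision_request(my_draft),
#   )
#   revised = reply.choices[0].message.content
# The back-and-forth is then just appending follow-up messages and re-sending.

messages = build_revision_request("Q3 report draft: sales up 4%, churn flat.")
print(messages[0]["role"], len(messages))  # → system 2
```

The point of the system prompt is exactly the editing division of labour described above: the model handles structure and flow, while the human stays responsible for the facts.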
Using an LLM that way, you’re actively working with the text, and you’re still learning the source material. You’re just leaving the prose itself to someone else.