That was an interesting read, but I am not convinced that they understand the "problem" they are trying to address, which would also explain the vagueness of the title. Clearly they think something needs to change because of AI, but they have not explained why, defined what should change, or set out what a positive change would even look like. It makes the whole thing feel arbitrary.
At one point they suggest that telling people who are taking the exam after you what specifically is on the exam is not cheating, though their students seemed to think it is. If telling people is encouraged, then those taking the test first just have a harder task, and their results are more likely to reflect their actual knowledge of the subject. At that point, just give everyone the exam questions early. I had a professor who would hand out a study guide and pull exam questions exclusively from that guide with the numbers changed. It was basically homework, but you were guaranteed to have seen everything on the exam already, which was a great way to 1) make sure people fully understood the scope of the test and 2) relieve stress about testing. If they don't see a problem with only certain people knowing the exact questions and answers ahead of time, then I'm not sure they understand what cheating is.
Unrelated, but they also blame Outlook for why young people hate email. I had to use Outlook for a bit and it does suck, but my hatred of email is unrelated.
I'm glad they are experimenting with different methods of testing, but without knowing more about the class itself, this comes off as though it's just a filler class in a degree program and the test doesn't really matter because the students' understanding of the subject doesn't really matter. In another post they refer to the article, which was making the rounds a while ago, about how AI failed at running a vending machine. In it they lament that we're going to have to "prepare for that stupid world" where AI is everywhere. If you think we can still fight that, I don't think accepting AI as a suitable exam tool is the way to do it, even if you make students acknowledge hallucinations. At that point you're normalizing it. As they said, there will always be those students, and 2/60 is actually not bad for AI use, but the blog makes me question the content of the class more than anything else.
SnoringEarthworm@sh.itjust.works 1 day ago