> I'm happy to see that the top comment is about his 25% credit for I Don't Know. That willingness to fold with grace is something that gets lost with standardized testing
I'm afraid I'm going to disappoint you. After using the "I don't know = 25%" policy for fifteen years, I was finally convinced to abandon it. Not because of pressure from administration, but rather from an honest evaluation of actual student behavior.
The IDK policy was meant to reward self-awareness, but in practice it seems to punish lack of confidence instead. In particular, female students answered IDK more often than male students who scored similarly on the questions both groups answered in full. (I suspect the same is true of international and BIPOC students, but my rosters don't reveal which students those are.)
I've seen lots of students who lacked confidence get trapped in mind games, wasting time worrying about (and sometimes asking me or the TAs) whether their solution was worth more or less than IDK, instead of putting forward their honest best effort. In particular, I've seen students who were already struggling, who might have scored 30-50% on an exam question, "play it safe" by answering IDK instead, and then after seeing the solution say "I did know that!"
The last time I taught algorithms, five students (out of 300) took the three-hour final exam in fifteen minutes or less. They walked in, sat down, got their exam booklets, wrote their name on the first page, wrote IDK on every other page, handed in the exam and walked out. None of those students passed.
I expect the next time I teach algorithms, without the IDK policy, exam averages will be slightly HIGHER, not lower. (I'd have data already, but the pandemic clouds everything.) I saw a similar score increase years ago when I stopped dropping the lowest problem score on each exam.
> IIRC, he also announced that the top 5% of the class would be automatic (and the only) A+ grades, and the bottom 5% would be automatic F grades.
Oh god no. I've never used grade quotas; that's just evil. My usual policy is that students with course averages above 95% automatically get an A+, students with course averages below 40% automatically get an F, and intermediate grade cutoffs are determined by score distributions that ignore those outliers. (I plan to move to an absolute grading scale the next time I teach the class.) In practice, that usually means about 4-6% A+s and 2-3% Fs, but I don't set those percentages in advance.
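For what it's worth, the policy is easy to state as code. Here's a minimal sketch; the automatic A+/F thresholds match what I described above, but the rule for the intermediate cutoffs (mean/stdev bands over the non-outlier scores) is just a stand-in for illustration, not my actual curve:

```python
import statistics

def assign_grades(averages):
    """Map {student: course average} to letter grades.

    Averages above 95 are automatic A+, below 40 automatic F; the
    remaining cutoffs are derived from the score distribution with
    those outliers excluded (cutoff rule here is hypothetical).
    """
    grades = {}
    # Distribution statistics ignore the automatic A+/F outliers.
    interior = [a for a in averages.values() if 40 <= a <= 95]
    mu = statistics.mean(interior)
    sigma = statistics.stdev(interior)
    for name, avg in averages.items():
        if avg > 95:
            grades[name] = "A+"
        elif avg < 40:
            grades[name] = "F"
        elif avg >= mu + sigma:
            grades[name] = "A"
        elif avg >= mu:
            grades[name] = "B"
        elif avg >= mu - sigma:
            grades[name] = "C"
        else:
            grades[name] = "D"
    return grades
```

The point of the structure: the quota-free part is that nothing in the loop counts how many students already hold a given grade, so the 4-6% A+ / 2-3% F figures fall out of the scores rather than being imposed.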
Thanks for taking the time to reply to everyone on this thread, and especially for correcting my memory on grading; either I'm misremembering the course or the policy.
The impact of IDK is an interesting example of unintentional bias. Do you think it would be worth publishing or sharing with the academic world at large? It seems that if there are clear trends, they could guide grading policy elsewhere.