jefferickson's comments | Hacker News

It's a fantastic book, but it's not worth $186.65.


That tells us a lot. ;)


Well, I can tell you why I think my book is not good for that — it isn't designed to be that!

My book grew out of lecture notes that I wrote for my algorithms classes at Illinois. My students are almost exclusively juniors and seniors. In particular, they already have several semesters of programming experience, a full semester of discrete math, and a full semester of data structures (which includes things like sorting and searching and the basics of algorithm analysis). They are the audience I wrote the book for.

Equivalently, I have very little experience teaching those prerequisite classes, which makes me the wrong person to write a textbook about that more foundational material!

At a more basic level: Different people are going to find different sources more or less useful. Different authors are better matches for different readers' backgrounds, intuition, and needs. My book ain't gonna work for everyone, because no single book works for everyone.

I'm not familiar with Common Sense Guide — thanks for the suggestion! — but the other books listed above are all fantastic (for different reasons, and for different audiences).


Hi Jeff, in your opinion, what would be the best algorithms book for students taking algorithms in their freshman year who want to learn the subject properly and in depth?

It would really help me with many questions that I have.

Thanks.


Thanks for the kind feedback. I'm glad I could help!


> the style seems more targeted towards someone with a bunch of time reading through it slowly rather than gulping it down quickly like for most undergraduate courses.

Yep. That's intentional.

If you try to gulp this subject down quickly, you're much more likely to choke.


The book evolved from lecture notes that I've been publicly posting since at least 2005.


> And there it is, your argument comes down to gatekeeping

Bullshit. If you want to play basketball in the NBA, you have to practice your ass off to develop the necessary skills. If you want to be a successful car mechanic, you have to practice your ass off to develop the necessary skills. If you want to be a successful cook, you have to practice your ass off to develop the necessary skills.

The homework is a vehicle for practice. Effective practice is real work. Real work is hard. Therefore, the homework must be hard.


Thanks for the kind feedback!


Not any more; see my comment here: https://news.ycombinator.com/item?id=26096052


> I'm happy to see that the top comment is about his 25% credit for I Don't Know. That willingness to fold with grace is something that gets lost with standardized testing

I'm afraid I'm going to disappoint you. After using the "I don't know = 25%" policy for fifteen years, I was finally convinced to abandon it. Not because of pressure from administration, but rather from an honest evaluation of actual student behavior.

The IDK policy was meant to reward self-awareness, but in practice it seems to actually punish lack of confidence. In particular, female students answered IDK more often than male students with similar scores on questions that they both answered in full. (I suspect the same is true of international and BIPOC students, but my rosters don't reveal which students those are.)

I've seen lots of students who lacked confidence get trapped in mind games, wasting time worrying about (and sometimes asking me or the TAs) whether their solution was worth more or less than IDK, instead of putting forward their honest best effort. In particular, I've seen students who were already struggling, who might have scored 30-50% on an exam question, "play it safe" by answering IDK instead, and then after seeing the solution say "I did know that!"

The last time I taught algorithms, five students (out of 300) took the three-hour final exam in fifteen minutes or less. They walked in, sat down, got their exam booklets, wrote their name on the first page, wrote IDK on every other page, handed in the exam and walked out. None of those students passed.

I expect the next time I teach algorithms, without the IDK policy, exam averages will be slightly HIGHER, not lower. (I'd have data already, but the pandemic clouds everything.) I saw a similar score increase years ago when I stopped dropping the lowest problem score on each exam.

> IIRC, he also announced that the top 5% of the class would be automatic (and the only) A+ grades, and the bottom 5% would be automatic F grades.

Oh god no. I've never used grade quotas; that's just evil. My usual policy is that students with course averages above 95% automatically get an A+, students with course averages below 40% automatically get an F, and intermediate grade cutoffs are determined by score distributions that ignore those outliers. (I plan to move to an absolute grading scale the next time I teach the class.) In practice, that usually means about 4-6% A+s and 2-3% Fs, but I don't set those percentages in advance.
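The outlier-anchored cutoff policy above can be sketched roughly as follows. This is a hypothetical illustration, not Jeff's actual grading code; the function name, the threshold parameters, and the assumption that course averages arrive as a plain list of percentages are all mine.

```python
def split_by_outlier_anchors(averages, a_plus_cutoff=95.0, f_cutoff=40.0):
    """Split course averages into three groups per the policy described
    above: automatic A+ (above 95%), automatic F (below 40%), and the
    middle group whose intermediate letter-grade cutoffs are determined
    separately from its score distribution, ignoring the outliers.
    Thresholds and structure are illustrative assumptions."""
    a_plus = [s for s in averages if s > a_plus_cutoff]
    fails = [s for s in averages if s < f_cutoff]
    middle = [s for s in averages if f_cutoff <= s <= a_plus_cutoff]
    return a_plus, middle, fails


# Example: six course averages; two earn automatic A+, two earn
# automatic F, and the middle two are graded against each other.
a_plus, middle, fails = split_by_outlier_anchors([97, 96, 80, 50, 39, 30])
```

Note that no fixed percentage of students lands in each bucket; the group sizes fall out of the absolute thresholds, which is exactly what distinguishes this from a quota.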


Thanks for taking the time to reply to everyone on this thread, and especially for correcting my memory on grading; either I'm misremembering the course or the policy.

The impact of IDK is an interesting example of unintentional bias. Do you think it would be worth publishing or sharing with the academic world at large? If there are clear trends, it could serve as a guide for grading policy elsewhere.


> See also https://github.com/tayllan/awesome-algorithms for more learning resources, practice problems, visualizations, etc.

Oooo, nice. Thanks for the link!

