An attempted framework for solving whiteboard/Leetcode problems is helpful, and it should invite us to ask what computer science pedagogy is missing. Any CS program worth its salt includes data structures and algorithms courses - they are simply fundamental elements of the discipline.
And yet, why are these interviews so difficult? I don't think it's simply that many people have gone through these programs and failed to absorb or retain the material. Neither do I think there is a severe mismatch between the material covered and the interviews. Yes, some interview questions are awfully close to logic puzzles - having to know the two-pointer trick for detecting cycles, what CS program covers that industry-specific technique? - but others are fundamental applications of trees, graphs, dynamic programming, etc.
Could it just be that academic CS doesn't approach problem solving the same way these interviews do? Are there meta problem-solving techniques that these courses simply don't cover - heuristics to apply when approaching a general problem, before the topic is narrowed down? "Ah, this involves a tree; ah, this requires sorting; ah, we should use a hash table"?
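For reference, the two-pointer trick alluded to above is usually taught as Floyd's "tortoise and hare" cycle detection on linked lists; a minimal sketch, where the `Node` and `has_cycle` names are my own illustration:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Return True if the list starting at `head` contains a cycle."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next        # advance one step
        fast = fast.next.next   # advance two steps
        if slow is fast:        # the pointers can only meet inside a cycle
            return True
    return False
```

The heuristic being tested is exactly the pattern-matching step: recognising that "does this sequence repeat?" maps onto two pointers moving at different speeds.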
I'm absolutely baffled by some of the comments here. This is more like the difference between learning a language and learning the grammatical structures of a language. Once you have mastered the language, you simply don't care about the linguistic structures. When you learn English, you learn everything about verb placement, sentence composition, and a whole lot of other things. Now, if you ask me about the meaning of grammatical structures, I only know some of them because I'm learning Chinese and Japanese, but they're completely irrelevant to me when I'm thinking about a Germanic or Romance language. Worse yet, most of those structures I wouldn't even be able to name or identify nowadays.
There is a reason recent graduates perform better at these interviews than seasoned engineers. Even in computational math, where you cover all the basics of calculus 1-3 and a bunch of other things, and have to prove the theorems in the exams, it doesn't mean you recall all of it forever. It's like saying that if you really know how to use a fork, you could make one yourself.
Totally agree with you, and I don't even understand why people can't see your point. It's obvious enough. Grinding Leetcode for interviews is like memorizing all the English vocabulary needed for the SAT/GRE. Taking a CS course at university is like learning how to write an essay properly.
Asking engineers to sit Leetcode exams is like asking journalists to retake the SAT. Obviously they will need to study for a few months for such things.
You don't have a mental visualization of code execution? Quicksort, merge sort, binary search, hashmaps, etc. are just one image each, as easy to remember as the face of a friend. I don't see how anyone could forget that; there is no way I'll accidentally rearrange things and think the nose is above the eyes, and that is how silly it looks when people make mistakes in coding interviews.
Describing or visualizing the algorithm is one thing.
Identifying that you need some variation of that algorithm, and then writing flawless, optimal code that handles all corner cases in a 20-minute job interview, is an entirely different thing.
But being able to code up a bug-free solution to a well-defined problem seems very relevant to me when you apply for a software engineering job. If you can't do that, then what can you do?
I think that part is there to test soft skills; being able to work under stress and talk to a person is part of being a good teammate. Straining both at the same time is a feature, since it is much easier to fake soft skills when you aren't distracted by a technical problem.
It is much more stressful than a real work situation, true, but people also work much harder to appear nice and helpful in an interview setting, so a test of soft skills that is harsher than real-world situations seems appropriate.
There was a study conducted within the past year or so (it had a long discussion thread here, naturally) which found that candidates performed far better when allowed to solve a whiteboard puzzle in a room alone. Obviously, communication soft skills must be tested too. But perhaps the format can be tweaked so the candidate has some time to crack at a problem on their own prior to being asked to explain it.
The "niceness" of the interviewers is irrelevant. The power differential at stake sparks a survival instinct that leads to stress (they literally hold power over your future meal prospects). Though perhaps in others it sparks a sense of purpose, cool, and collectedness. Perhaps that is truly the 10x engineer.
I have. I just overdid the practice the first time and got good enough to place well in international competitions, so to me all these interview problems are really easy; I've never struggled with an interview problem since then (of course, since I always pass, there isn't much need to do many interviews, so I have less data than some others). I know most people won't do that, but you don't need to be nearly that good to pass interviews, so it should be possible, with modest effort, to get good enough that you never need to practice again.
My Google interviews were basically me coding up a solution in 10 minutes, then explaining what I did and proving that it works, with runtime analysis, etc. We spent the rest of the time talking about engineering problems, testing, what I did at previous jobs, and so on. I am rustier now, many years later, but still good enough that I don't fail; it might take 20 minutes instead of 10, but there is still room to spare. So the bar is being maybe half as fast as I am when I'm rusty - that doesn't seem overly high to me.
So if you're good enough to do well in international competitions, why are you even participating in this discussion? You're clearly an outlier who finds this easy, so you're not in a position to understand the challenges most people face.
Not every great engineer can be good enough at this stuff to do well in international-level competitions; by definition, that's a very small group.
I assume you're not on the same level as William Lin or tourist, so it would be like them wondering why you struggle on a particular problem when they can do every one easily.
But it takes like half a year to get that good; if you just practice a bit in college, you get there. I'm not sure why people complain so much. People just need to stop memorizing and instead practice understanding problems.
> I assume you're not on the same level as William Lin or tourist, so it would be like them wondering why you struggle on a particular problem when they can do every one easily.
They have spent more than 10x as much time on it as I have, though. I got to my level in about the six months I spent pivoting from math to programming; that isn't unreasonable effort for anyone - most computer science grads have spent more time learning algorithms than I had.
Your leetcode may be A grade but your empathy, self-reflection, and frankly critical thinking skills need a ton of work.
If you had realized that
* in 90+% of job openings, whiteboard leetcode interviews optimize for the wrong thing, and neither the candidate nor the interviewer should be honored for its inclusion in the process
* live coding exercises are just as much an exercise in psychology - your willingness to submit to unreasonable demands and their willingness to subject you to them
* negative discrimination (i.e. weed out requirements) create biases in your hiring process and ultimately skill gaps in your personnel base
You would potentially be self-aware enough not to post this cavalier and self-aggrandizing comment.
As the person who replied to you says, you have a lot to learn about being a good, empathetic, and kind human. I suggest that, for your own life, you take some time to work on that, since your technical skills are already good.
No I don't. AT ALL. It sounds to me like you've never encountered actual challenging engineering problems. If you have had anything to do with 3GPP, the algorithms used there are a lot more complex than these silly comp-sci interview algorithms. You don't really spend that much time thinking about things that should be considered basic vocabulary.
I also had countless experiences where I ended up either rewriting from scratch or massively refactoring code written by computer scientists who dumped all their algorithm ideas into code without understanding a thing about performance, the underlying infrastructure, or how to trace it.
Your sorting algorithm will mean jackshit when you spend all your time doing I/O, reading files in and out of memory, for example. Your super cool distributed algorithm means nothing if you don't understand CPU P-states (I guess that's irrelevant nowadays) or NUMA and didn't set up the machine properly.
I for one have opted out of this BS and will actively avoid mediocre programmers trying to boost their egos with cookie-cutter algorithm questions.
> It sounds to me like you've never encountered actual challenging engineering problems
I worked on low-level machine learning infrastructure and distributed algorithms at Google. I know pretty well what challenging engineering problems look like.
> No I don't. AT ALL.
You don't think your inability to visualize computation affects your ability to come up with solutions to problems at all?
Edit:
> Your sorting algorithm will mean jackshit when you spend all your time doing I/O, reading files in and out of memory, for example. Your super cool distributed algorithm means nothing if you don't understand CPU P-states (I guess that's irrelevant nowadays) or NUMA and didn't set up the machine properly.
I understand those things well; they aren't hard to learn. We run benchmarks using proper prod setups to see what works faster, and we know about CPU caches, memory overhead, file-reading speed, etc. The only difference is that I am fluent in algorithms and you aren't. I'm not sure why you'd think that I wouldn't know those things just because I know algorithms; it takes like a few months to master algorithms, leaving plenty of time to learn the other things.
I think it's easier to recall the "idea" of each algorithm than to visualize the flow of the code. There's usually also some key implementation detail that's helpful to remember:
Quicksort - pivot - i <= hi
mergesort - merge - auxiliary array
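To illustrate, the one-line mnemonic "mergesort - merge - auxiliary array" expands into something like this sketch (my own reconstruction in Python, not anyone's canonical version):

```python
def merge_sort(a):
    """Sort a list by recursively splitting and merging (the 'key idea')."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge step: `merged` is the 'auxiliary array' from the mnemonic.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whatever remains of either half
    merged.extend(right[j:])
    return merged

# merge_sort([5, 3, 1, 4, 2]) -> [1, 2, 3, 4, 5]
```

Everything except the mnemonic's two words is boilerplate you can re-derive on the spot.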
When I started practicing DSA, I would code these algorithms from scratch every week to try and memorize them. Then, as I did more general leetcoding, I realized that these are just solutions to problems, and there's no need to memorize the code exactly; just knowing the key idea is enough.
People dump on leetcode because they think it's memorizing solutions to problems, but it isn't practical to do that. It's more like memorizing one sentence per problem than memorizing a page of code. And when you see a new problem, you just adapt one of the "sentences" you memorized for a similar problem.
But if you've never looked at an efficient algorithm for dependency resolution, it's going to be impossible to come up with a good solution for a related problem in an interview.
> I realized that these are just solutions to problems, and there's no need to memorize the code exactly, just knowing the key idea is enough.
Right, my pictures are of what the code is supposed to do, not the code that executes it. Then I can take pieces of it and compose them with other things; you need some kind of intuition to do that, and for me that intuition takes the form of pictures.
But yeah, the trick to solving leetcode properly is not to grind leetcode, but to learn to be better than leetcode; that way, leetcode problems will feel trivial for the rest of your life.
It's one thing to visualize, another to recall at will. Your friend is someone you've spent sufficient time with, on at least a semi-regular basis, to form emotional and mental bonds - deep memories. How many times does a CS undergraduate re-implement sorting algorithms? Not that most interviews actually ask one to regurgitate them from memory.
> How many times does a CS undergraduate re-implement sort algorithms
You don't have to implement them; you just need to know the theory of why they work. Then you code them up based on that theory; it isn't hard to do at all.
For example, merge sort explains itself: you partition and merge. That is all you need to remember. A hashmap is the same: you use a deterministic function to label objects and put them into buckets, then you find them later in the bucket with the same label. These are the basics of the basics.
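That two-sentence description of a hashmap maps almost directly onto code; a toy sketch in Python (the class and method names are my own illustration):

```python
class ToyHashMap:
    """Minimal bucketed hash map: a deterministic function labels
    each key, and the label selects a bucket."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # hash() is the deterministic labeling function
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):   # scan only the one matching bucket
            if k == key:
                return v
        raise KeyError(key)
```

A real implementation also resizes and rehashes as buckets fill up, but that detail isn't needed to reconstruct the idea.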
Most such courses don't give a typical Leetcode problem in an exam with a 30-35 minute time limit, replete with continuous interruptions from the examiner.
It's a lot easier for me to do a Leetcode problem if I can explore and experiment a bit without having to verbalize my whole thought process.
Also, this is nothing unique to SW. I come from an engineering background, and we had to do calculus in almost every engineering course. Yet I'm sure within 12 months of graduation, most of my fellow grads would struggle with most calculus problems. I was once criticized by my peers for asking a basic calculus question while conducting an engineering interview.[1]
Just as with algorithms, you usually don't need calculus on the job. Looking at most of my SW development career, beyond the very basics (e.g. dictionary lookup is O(1) in Python), I probably needed algorithm knowledge on average once a year. And since almost none of my peers in most of the jobs have that knowledge:
1. Knowing it doesn't put me at an advantage within the company. It's a blind spot for everyone. Unless I happen to solve a serious business problem with that knowledge, which is very rarely the case (and likely why most people forget this knowledge).
2. My not knowing it won't put me at a disadvantage when it comes to career growth in the company.
This is the reality for most non-FAANG SW jobs. I suppose the one benefit of all these Leetcode interviews is that a lot more people have an incentive to review/learn this material.[2]
[1] Analogously, I was once criticized for being too tough when I asked a candidate to write a factorial function for a SW job.
[2] Although perhaps not really. Last week I interviewed a candidate who knew the theory really well - he understood complexity and seemed to understand data structures quite well too. But he couldn't write a basic function to split a comma-delimited string of numbers and return it as a list of integers. In both of the languages he claimed proficiency in, there is a standard function to split a string, and not only did he not know the functions, he had no idea of the concept. A classic example of textbook knowledge vs. experience (of which he had 3-4 years).
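For the curious, the screening task described in [2] is nearly a one-liner in Python (the function name is my own):

```python
def parse_ints(csv_line):
    """Split a comma-delimited string of numbers into a list of ints."""
    # str.split is the standard function in question;
    # int() tolerates surrounding whitespace, so "1, 2, 3" also works.
    return [int(part) for part in csv_line.split(",")]

# parse_ints("1, 2, 3") -> [1, 2, 3]
```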
Most people have to learn loads of information in a short time, and earlier material doesn't get practiced afterwards. In the span of a 5-year program, I learned trees at the end of year 1 and a bit in year 2. I didn't need them for anything I did past that. It wasn't until I sat down and practiced a little that I was able to think in trees again.
Not everyone covers the same things. Loads of Leetcode tests and interviews apparently cover linked lists and tricks with linked lists; I got zero exercises on linked lists. Some structures require specific approaches, so if you have trouble getting a foot in, you'll fail miserably at any question related to those structures, no matter how simple or difficult.
There are other obvious points, but CS is simply too large a field. What one person considers fundamentals is something another never uses, not even through libraries or frameworks. It's easier to prove that the student can learn what is deemed to be on an equal level with something else, and to give them the tools to learn more on their own should they need to.
This all assumes the student in question has actually proven to know the material of the course, and that the course covers some part of what people consider fundamentals. Things obviously change when you consider that some people get by despite never having proven they know the material, or that some CS courses are missing these topics.
I also can't help but feel there is some irony in CS programs accommodating an ever-increasing demand for practical skills by ditching CS fundamentals and the practice of them, only to be met with interviews that test CS fundamentals even though the work doesn't need any of it.
Why don't you think it's simply an issue of many people going through CS programs and failing to master the material? It's hard to imagine someone who did well in, say, Sedgewick's sequence at Princeton (which, by the way, is available for free on Coursera for anyone who didn't want to go to Princeton) struggling with entry-level tech interview questions.
Is it? If it's so straightforward, then why don't all of the people who are struggling with these questions simply take the Coursera course?
At the very least, these interviews require the applicant to be immediately familiar with any of the topics that might span a semester-long (or two-semester) course, so recall could be an issue.
You won't forget the basics of how to play chess either, even if you haven't played for 20 years, as long as you played many times back then. I don't think there is a significant difference here: if you learned something properly, you won't forget it.
The human mind requires practice to retain skill and proficiency. Period. People who were solid engineers and later became executives - 20 years down the line, many say they can no longer code. I've seen this many times.
Where are you getting this concept that humans never forget skills? That's completely false.
Come to think of it, the basics of playing chess are of insufficient complexity to compare to an algorithm interview. A better analogue is being able to mount basic strategies and sequences of moves. I'm not sure how the average player would recall those after not playing chess in a long time, if they'd instead been playing games derived from basic chess moves but with very different mechanics.
If you demand more complexity, then you need to take people with more skill. Do you think Magnus Carlsen would forget how to mount a basic defence if he didn't touch chess for 20 years? He wouldn't be as good, sure, but he wouldn't forget how to play; he would still beat most people.
My rule of thumb is that people remember things one or two layers below their max. If you learned calculus, you won't forget basic algebra. If you took a grad course on electromagnetism, you won't forget basic calculus. The same goes for algorithms: if you learned them once and never had a course that built on them to make more advanced algorithms, you will forget them. But once you start to see them as basic building blocks for other things, you won't forget.
So by this rule, if you just learned the rules of chess, you would forget them. But if you started trying to win chess games and viewed the rules as building blocks for strategies, then you would remember the rules of chess. Then you start to compose strategies, etc.
The problem, then, would seem to be that the majority of software work is no longer building things so much as jury-rigging together APIs and frameworks, which leads to disuse of the building blocks - and to their replacement with other components such as design patterns and commonly used SDKs and libraries.
> Where are you getting this concept that humans never forget skills? That's completely false.
I never said humans never forget skills; I said humans never forget skills they master. Most people never master much at all - maybe 90-99% of software engineers are in the never-mastered bucket. Which is why I get downvoted: most people never get good, and they get angry when you tell them they can work to improve.
That is true: most people don't master anything. The question is whether software engineers should be expected to master the interview material for any reason other than the interviews - whether it truly makes them better engineers who build better software. If so, then we arrive at my original question: why is current computer science education and training failing to convey that vital information? And how can this situation be remedied?
As for working to improve: given the supposedly excellent filtering capability of algorithmic interviews, if people were taught right and knew how to master the material - not to mention which parts of the field to master - then surely it would not be so difficult to improve (though this would render the filter ineffective). More engineers would have known from day one what to focus on (not that they don't at present, since these interviews are now broadly known to the public), and, having mastered the material, they would be passing the interviews easily.
I really do think college could be structured better, yes. I think algorithmic fluency helps in many areas, both in science and in industry, and the way college is taught isn't good for reaching that stage. If nothing else, it provides you with a good framework for how to think about and structure computation.
Is it in a language that one has not encountered in day to day work in a long time? Because if so, that is analogous to interviewees encountering an algorithmic problem they have not dealt with in a while.
> Yes, some interview questions are awfully close to logic puzzles - having to know the two-pointer trick for detecting cycles, what CS program covers that industry-specific technique?
My comp-sci degree covered this, and it isn't from a particularly prestigious university either. Most leetcode problems are just extensions of (or exact duplicates of) problems covered in my Data Structures and Algorithms class.
It seems like much of the skill involved in passing leetcode interviews is pattern recognition: quickly recognising the kind of problem and the likely tools for solving it.
Some universities happen to use similar materials in their CS courses, and their students happen to work for the big SV tech companies, which dust off those old algorithmic trick questions and ask them in interviews.
Not all schools picked up these tricks, unfortunately.
I completely gave up on giving whiteboard interviews after trying a couple of them. It's just so hard to get right, and even if you do, I'm not sure you gain much more useful information than you would from other interview styles.
Most college programs have very straightforward tests designed to be easy to pass, and students usually try to learn as little as possible, memorizing whatever material they can instead of internalising anything. This combination means that a large majority of students who graduate have a horrible understanding of the material they studied. And since students have a horrible understanding and try to memorize, if you design tests that aren't straightforward and easy, almost everyone will fail - so you can't do that...
And then these students graduate and think the paper is proof that they actually learned these things, but if you prod their knowledge, it all falls down, since there is no substance there.
> That is certainly a factor, and there is also an emphasis more on the mathematical proofs of algorithms than applications in such courses.
Proofs are great if you learn to write your own, but courses mostly just want you to memorize proofs, which isn't terribly useful. If your tests have a lot of problems where you need to write your own proofs, then it's a pretty good course, but in the normal case the test wants you to write down proofs from memory.