This is kind of silly. Actual senior developers get near-real-time feedback from the rest of the organization, and if their interpretation of a request is off, they get multiple attempts to satisfy the request. None of this is true of a coding exercise.

If you want to evaluate a candidate's ability to cut through ambiguity or manage sprawling scope, evaluate that and only that in a specific exercise. Don't just build a shoddy coding test and then rationalize its weaknesses by saying that good candidates will succeed despite the flaws of the test. That's both disrespectful and un-rigorous.



> Actual senior developers get near-real-time feedback from the rest of the organization, and if their interpretation of a request is off, they get multiple attempts to satisfy the request.

Junior developers get near-real-time feedback from senior developers who are supervising them. Senior developers have to be able to give that feedback, have to be able to anticipate user needs, and have to be able to run long-term projects where you may have to work for days, weeks, or months before receiving critical pieces of feedback.

> If you want to evaluate a candidate's ability to cut through ambiguity or manage sprawling scope, evaluate that and only that in a specific exercise.

This is a common mistake that I see inexperienced interviewers make. Trying to throw more detailed and specific exercises at candidates is a fool's errand at best, and at worst it means that you're putting candidates through additional tests (and your acceptance rate will suffer).

The main problem with ambiguity is that it appears unexpectedly. If you give someone an ambiguous problem and say, "Tell me what is ambiguous about this problem," then you're not testing what you want to know. What you actually want to know is whether candidates can recognize ambiguous problems without being prompted to recognize them--and the reason for this is that ambiguous problems are extremely common in real-world scenarios.

An ambiguous problem is not a trick or a trap. It is explicitly part of the interview process, and interviewees are given guidance that the problems they are given may not be precisely defined.

> Don't just build a shoddy coding test and then rationalize its weaknesses by saying that good candidates will succeed despite the flaws of the test.

Why do you say that the coding test is shoddy?

My observation is that a large percentage of candidates will succeed at coding tests if you give them an ambiguous prompt. In practice, they will either ask questions to resolve the ambiguity, or just pick a way to resolve it for the purposes of the test. This matches real-world work--you are often going to encounter ambiguous or incomplete problems.

If you want a precisely-specified coding problem, then go to Hacker Rank or Project Euler or something like that, or join a competitive programming team.


> This is a common mistake that I see inexperienced interviewers make. Trying to throw more detailed and specific exercises at candidates is a fool's errand at best, and at worst it means that you're putting candidates through additional tests

Have you tested this assertion with data? Because I’ve built interview pipelines several times now and the data I collected showed the exact opposite. The more specific a test was for the trait you wanted to select for, the better the results you’d get across all metrics, for both interviewers and candidates. After two decades of interviewing, it’s almost my defining characteristic of a good selection criterion.


How are you giving these additional, more specific tests? I would think that your acceptance rates would start dropping once you get past five rounds or so.

My personal experience is that people new to interviewing are the ones who think that making individual interviews more precise will improve the process; in practice, improvements to the overall interview process don’t come from making individual interviews better.


You have between 10 and 16 hours of time with a candidate, depending on the desirability of your job. I like to break it into: 1 hour of pitch/expectation setting, where the only screening is for ability to complete the process (language, appropriate background, etc.) and to catch obviously fraudulent candidates; 4 hours of take-home technical assessment (a programming project); 4 hours of soft-skill assessment (3 hours of prep and 1 hour of presentation is my favorite format); and 1 hour of meeting with the hiring manager.

But the format is not really the point. The point is to have a specific thing you are trying to discern with your filter, and to focus your efforts on making that the only thing you are judging on.


You have a budget for the number of hours you can make a candidate spend on work samples; it's the amount of time they'd spend in the interviews your tests are offsetting. This isn't complicated.


Unsurprisingly, I can report the same thing about the interview pipelines we're running at Fly.io. Not a week goes by where someone in our leadership team doesn't remark about how valuable the exercise we run specifically for this junior/senior scope-management/question stuff is.



