This is something your HR department should be very concerned about. If the questions you ask during an interview aren't useful for finding a good candidate, why are you asking them? This isn't just about wasted time, either: interviews are subject to strict employment laws, so asking the wrong question could land you in court.
I know when we wanted to do a coding test, they told us we needed to spend 6 months giving everyone a coding test and having it independently graded by someone not involved in the hiring process. Then, after people had worked here for 6 months, we would examine the actual results from those we hired and see if the tests predicted anything useful. (Or something like that - there is room in the scientific process for some variation.)
The bar below which HR has to be worried is not "we've scientifically determined that our interview questions lead to good on-the-job performance". There has to be some reasonable sense in which you could argue the interview filters for good candidates, but no one is requiring you run studies.
Google once did a retrospective study and found that interview scores for people we ended up hiring were not correlated at all with people's on-the-job performance. I'm pretty sure nothing really changed as a result of this. I think it's a combination of the industry, especially FAANG, being kind of "stuck" on these kinds of interviews, and a lack of clearly better alternatives (I think there are better alternatives but it's not like I can point to studies backing me up).
> I know when we wanted to do a coding test, they told us we needed to spend 6 months giving everyone a coding test and having it independently graded by someone not involved in the hiring process. Then, after people had worked here for 6 months, we would examine the actual results from those we hired and see if the tests predicted anything useful.
This is interesting but also way heavier weight than anything I've ever heard of. OOC where do you work? (Like vague description of kind of company, if you're not comfortable sharing the specific name).
> Google once did a retrospective study and found that interview scores for people we ended up hiring were not correlated at all with people's on-the-job performance.
This sounds like an unsound result. If you select based on a criterion, the correlation with that criterion is usually diminished, and sometimes even reversed, in the selected sub-population.
Say you select only very strong people to move furniture, then measure their performance. Because they're all strong, you won't observe that weak people are bad at it. Plus, some of those you selected were otherwise inferior candidates who only got in because they were very strong, which can even produce a reversed correlation. But if you dropped the strength test, you'd get many unsuitable hires (and suddenly find strength was strongly correlated with performance among the people you hired).
This is actually confirmed by real-world data: player weight in professional football and player height in professional basketball.
For offensive linemen in the NFL, there is no correlation between weight (which ranges from roughly 300 to 360 pounds) and overall performance. A "heavy" 350-pound player is no more likely to do well than a "light" 310-pound player. But nobody who weighs a mere 250 pounds could realistically make the cut or perform well at the highest level.
For basketball players there is no correlation between height and performance, and there are several standout examples of players under six feet, so there's no hard cutoff. But if you compare the height distribution of that subpopulation against the general population, you'll see an extremely strong height bias.
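This range-restriction effect is easy to demonstrate with a quick simulation. The model below is purely hypothetical (not data from any real hiring process, NFL roster, or NBA roster): performance is strength plus an unrelated skill, and "hiring" only the strongest candidates collapses the observable correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: performance = strength + an unrelated skill,
# so strength genuinely matters across the full applicant pool.
strength = rng.normal(0.0, 1.0, n)
other_skill = rng.normal(0.0, 1.0, n)
performance = strength + other_skill

# Correlation across all applicants: clearly positive.
r_all = np.corrcoef(strength, performance)[0, 1]

# Now "hire" only the strongest 5% and re-measure the correlation.
hired = strength > np.quantile(strength, 0.95)
r_hired = np.corrcoef(strength[hired], performance[hired])[0, 1]

print(f"all applicants: r = {r_all:.2f}")
print(f"hired only:     r = {r_hired:.2f}")  # much weaker
```

In this toy model the correlation is about 0.7 in the full pool but drops well below 0.5 among the hired, even though strength drives half of performance; a second selection criterion that trades off against the first can push it negative (Berkson's paradox).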
> This sounds like an unsound result. If you select based on a criterion, the correlation with that criterion is usually diminished, and sometimes even reversed, in the selected sub-population.
Yeah, that's very true, and I think it was part of why they didn't react to it much. What you really want is to track the people you rejected and see how well they're doing, but we don't have that data.
Still, naively I would have thought that someone who gets great marks across the board would be more successful at Google than someone who barely squeaks by, and I do think it's kind of telling that that's not the case. But maybe I'm just injecting my own biases about the interview process.
edit: This reminds me a lot of an informal study that found verbal and math SAT scores were inversely correlated, which seemed surprising until people realized each sample was drawn entirely from a single school. Since students at any given school probably had roughly similar total SAT scores (if theirs were lower they wouldn't have gotten in; if higher, they'd have gone to a more selective school), the variation you see within a school is inverse: the higher your math score, the lower your verbal score must have been to land at that school's "target" total.
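The within-school inversion can be reproduced with a small simulation (illustrative numbers only, not the actual study's data): math and verbal scores are independent in the population, but conditioning on a narrow band of combined scores, as one school's admissions effectively does, creates a strong negative correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical model: math and verbal SAT scores are independent
# across the whole population.
math = rng.normal(500, 100, n)
verbal = rng.normal(500, 100, n)
r_pop = np.corrcoef(math, verbal)[0, 1]  # roughly zero by construction

# "One school": only students whose combined score lands in a narrow
# band around that school's target end up enrolled there.
total = math + verbal
at_school = (total > 1150) & (total < 1250)
r_school = np.corrcoef(math[at_school], verbal[at_school])[0, 1]

print(f"population:    r = {r_pop:.2f}")
print(f"within school: r = {r_school:.2f}")  # strongly negative
```

The narrower the admitted band of total scores, the closer the within-school correlation gets to -1, because any point above the school's target in math must be offset by a point below it in verbal.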
At Google's scale, if they had an alternative basis for hiring, they could judge candidates by both methods, randomly use one or the other to make some of their hires, then compare performance over time and at least say whether there is a significant difference.
But as you note, the lack of obvious good alternatives is an issue, and we can't pretend there isn't an enormous difference among candidates. If we thought unfiltered candidates were broadly similar, then "hire at random, dismiss after N months based on performance" would be a great strategy, but I don't think anyone who has done much interviewing believes that would be remotely viable.
(Though perhaps the differences between candidates are smaller than interviewing suggests, since interviewees should be worse than the employed pool in general: bad candidates interview more because they leave jobs more often and take longer to get hired.)
> If we thought unfiltered candidates were broadly similar, then "hire at random, dismiss after N months based on performance" would be a great strategy, but I don't think anyone who has done much interviewing believes that would be remotely viable.
I know a fair number of companies that do essentially that: they hire contractors for 6 months, and at the end of the 6 months the good ones are offered a full-time position. The contracting company probably does some form of interview, but it's more interested in its 6 months of overhead billing than in finding quality candidates.
> since bad candidates interview more due to leaving jobs more often and taking longer to get hired
But there are also great people who interview badly.