Hacker News

It's not remotely that bad... You're leaving out important context here: what failed to replicate was not the GWASes themselves, but the selection signals inferred from them, which is a very different thing. Two-thirds of those replicated anyway, so they're doing much better than, say, social psychology or medicine. The third failed to replicate because the check that would have caught it, the sibling comparison (which was in fact done), incorrectly validated it due to a Plink software bug, the kind of error that could happen to literally any research result these days. The situation remains precisely as it was before: GWAS results are generally trustworthy, especially when validated by sibling comparisons, and human selection is pervasive.
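For anyone unfamiliar with why sibling comparisons are the gold-standard check here: a minimal simulation sketch, assuming entirely made-up toy numbers (the 0.3 effect, the family confound, the pair count are all hypothetical, not from any real GWAS). Within sibling pairs, shared family background cancels out, so the within-pair regression recovers the true effect even when the naive population regression is inflated by confounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 5000

# Hypothetical toy data, not real GWAS output.
family_g = rng.normal(size=(n_pairs, 1))            # shared family genetic factor
pgs = family_g + rng.normal(size=(n_pairs, 2))      # each sibling's polygenic score
confound = 0.5 * family_g                           # family environment correlated with PGS
pheno = 0.3 * pgs + confound + rng.normal(size=(n_pairs, 2))  # true PGS effect is 0.3

# Naive population-level regression: confounded upward by the family effect.
x, y = pgs.ravel(), pheno.ravel()
b_pop = np.cov(x, y)[0, 1] / np.var(x)

# Sibling comparison: regress within-pair phenotype differences on
# within-pair PGS differences; everything shared by siblings cancels.
dx = pgs[:, 0] - pgs[:, 1]
dy = pheno[:, 0] - pheno[:, 1]
b_sib = np.cov(dx, dy)[0, 1] / np.var(dx)

print(f"population estimate: {b_pop:.2f}, sibling estimate: {b_sib:.2f}")
```

The sibling estimate lands near the true 0.3 while the population estimate is biased upward, which is exactly why a bug that corrupts the sibling check (rather than the GWAS itself) can quietly "validate" a spurious signal.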


Can you point me to a reference on what happened with the third study? If this was caused by an actual Plink bug rather than incorrect usage, I need to verify that the bug has been fixed...



