Actually, any strength of effect is possible, from tiny but misreported as large, to huge but underreported. That is what it means for the studies to lack the statistical power to measure effect size. Reproducing the results and then averaging over all of them, with corrections for multiple comparisons and proper pooling (a meta-analysis), could give a valid estimate of the effect size.
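
As a minimal sketch of the pooling step (fixed-effect, inverse-variance weighting; the study numbers here are made-up illustration values, not real data):

    import numpy as np

    effects = np.array([0.8, 0.1, 0.4, -0.2])  # per-study effect estimates (hypothetical)
    ses = np.array([0.5, 0.3, 0.4, 0.6])       # per-study standard errors (hypothetical)

    weights = 1.0 / ses**2                     # precision weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")

More precise studies get more weight, which is exactly the information a single underpowered study cannot provide on its own.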

A p-value only tests whether the result is non-null (and even that only if the test is not circumvented, e.g. by p-hacking); it says nothing about how large the effect is.
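
To see why that matters, here is a small simulation (assumed setup: one-sample t-tests with a small true effect and small samples) showing that the studies which do clear p < 0.05 systematically exaggerate the effect:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect, n, sims = 0.2, 20, 10_000    # small true effect, underpowered samples

    sig_estimates = []
    for _ in range(sims):
        x = rng.normal(true_effect, 1.0, n)   # one simulated study
        t, p = stats.ttest_1samp(x, 0.0)
        if p < 0.05:
            sig_estimates.append(x.mean())    # the effect as it would be reported

    print(f"true effect: {true_effect}")
    print(f"mean reported effect among significant studies: {np.mean(sig_estimates):.2f}")

In this setup the significant subset reports an effect roughly two to three times the true one, with no misconduct anywhere.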

It would be good to produce a funnel plot of the effect sizes reported in those underpowered studies. Perhaps the ones showing small (but non-null) effect sizes do not get published.
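
A minimal sketch of such a funnel plot (the data are simulated, with a crude significance filter standing in for the hypothesized publication bias):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    true_effect = 0.2
    ses = rng.uniform(0.05, 0.5, 300)         # study precisions vary
    effects = rng.normal(true_effect, ses)    # observed effects scatter with their SE

    published = np.abs(effects / ses) > 1.96  # assume only "significant" results get published
    plt.scatter(effects[published], ses[published], s=10)
    plt.gca().invert_yaxis()                  # convention: most precise studies at the top
    plt.axvline(true_effect, linestyle="--")
    plt.xlabel("reported effect size")
    plt.ylabel("standard error")
    plt.show()

An unbiased literature fills the whole funnel symmetrically; a missing wedge of small, non-significant effects near the wide end is the signature of the bias suggested above.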


