
My concern -- and this is a total layman's observation -- is that if someone comes up with a good way to "spam" the academic research circuit, will we be able to tell? Most of us remember the recent Reinhart-Rogoff incident in which a massively flawed paper (think, Excel-based) wasn't challenged until a curious grad student took notice: http://phys.org/news/2013-04-excel-austerity-economics-paper...
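The Excel flaw in question was essentially an AVERAGE formula whose cell range stopped short of several rows. A toy sketch of that failure mode, with invented numbers (not the actual Reinhart-Rogoff data):

```python
# Invented growth figures for five hypothetical countries, illustrating
# how an average over a cell range that silently omits rows diverges
# from the average over the full data set.
growth = {
    "Country1": -0.5,
    "Country2": 2.1,
    "Country3": 2.5,
    "Country4": 2.6,
    "Country5": 3.0,
}

values = list(growth.values())

# Average over all rows -- what the analysis intended.
full_mean = sum(values) / len(values)

# Average over a range that stops three rows early -- the spreadsheet
# equivalent of an =AVERAGE(...) whose range excludes the last rows.
partial_mean = sum(values[:3]) / 3

print(full_mean, partial_mean)
```

The point is that nothing in the spreadsheet flags the discrepancy; only someone re-deriving the number from the raw rows would catch it.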

There are other issues in all of that, but for the current discussion, I think it's enough to argue that discovery of exaggeration (the equivalent of resume padding) is still a very difficult problem in academic papers and citations. Do researchers have the tools to methodically sift out the good from the meh? It doesn't seem so. Combine that with the lack of incentive (as seems to be the case in Reinhart-Rogoff) to disprove published findings, and you have a scenario in which gaming the system seems quite doable.



Reinhart-Rogoff was not submitted to a peer reviewed journal.


> Reinhart-Rogoff was not submitted to a peer reviewed journal.

Yet among its 400+ citations, it was self-cited in a paper submitted to the American Economic Review (see Google Scholar), which is peer reviewed and appears to have had a code submission policy since 2004 (see Wikipedia).

Perhaps if they had submitted the paper directly, it would have been turned down? That would make it an even better example of how to game citation counts, SEO/PageRank-style.
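The PageRank analogy can be made concrete with a toy sketch. The citation graph below is invented for illustration: papers A and B cite each other, forming a mutual-citation loop, while C cites A but receives no citations back. Under power-iteration PageRank, the loop inflates A's and B's scores relative to C's:

```python
def pagerank(links, damping=0.85, iters=100):
    """Iterative PageRank over a dict mapping node -> list of nodes it cites."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, cited in links.items():
            if cited:
                # Each paper splits its rank evenly among the papers it cites.
                share = rank[node] / len(cited)
                for target in cited:
                    new[target] += damping * share
            else:
                # Dangling node: spread its rank evenly across all nodes.
                for target in nodes:
                    new[target] += damping * rank[node] / n
        rank = new
    return rank

# Hypothetical citation graph: A <-> B loop, C cites A, nobody cites C.
citations = {"A": ["B"], "B": ["A"], "C": ["A"]}
ranks = pagerank(citations)
print(ranks)  # A and B end up ranked well above C
```

The same mechanism is why reciprocal link farms work against naive PageRank: the score rewards incoming links regardless of whether they represent independent endorsement.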



