> In the scientific world there are no spammers and there is no direct commercial advantage to creating a lot of nonsense papers that cite your own paper; also, there is some oversight in the world of science, and the people there have a reasonably high level of integrity.
Um...what? If it were anyone but the OP, who always writes with a lot of thoughtfulness and insight, I would've assumed the graf above is satire. Academic discovery and citation are very much being gamed; the only reason we don't notice it more is that academics don't have the same tools and infrastructure that web spammers do, and also that the world of academic research is not something the average person outside academia closely parses.
> The scientists who were recruited to appear at a conference called Entomology-2013 thought they had been selected to make a presentation to the leading professional association of scientists who study insects.
>
> But they found out the hard way that they were wrong. The prestigious, academically sanctioned conference they had in mind has a slightly different name: Entomology 2013 (without the hyphen).
And the institution of medicine has long been plagued with accusations of fake studies, underwritten by drug companies:
> LAST month, the Archives of Internal Medicine published a scathing reassessment of a 12-year-old research study of Neurontin, a seizure drug made by Pfizer. The study, which had included more than 2,700 subjects and was carried out by Parke-Davis (now part of Pfizer), was notable for how poorly it was conducted. The investigators were inexperienced and untrained, and the design of the study was so flawed it generated few if any useful conclusions. Even more alarming, 11 patients in the study died and 73 more experienced “serious adverse events.” Yet there have been few headlines, no demands for sanctions or apologies, no national bioethics commissions pledging to investigate. Why not?
>
> One reason is that the study was not quite what it seemed. It looked like a clinical trial, but as litigation documents have shown, it was actually a marketing device known as a “seeding trial.” The purpose of seeding trials is not to advance research but to make doctors familiar with a new drug.
I won't dig out examples, but I'd like to sketch the general game:
You need to have a good publication record (i.e., papers in journals and conference proceedings; some monographs with a prestigious publisher can also help). When you can't publish in high-impact journals/conferences, you lower your expectations and spread your papers over several venues; some _will_ publish you.
The next thing is to split your research results across many papers in order to have many publications; the differences between the papers are small (and you can cite yourself, i.e., the "bigger picture" of which each paper is a part).
That's, by the way, the same with grant applications: promise much, deliver only 30%, and save the rest for a follow-up (grant renewal).
Splitting results over several publications and grants, plus the usual academic behaviour (internal status games, academic nitpicking), delays research by 300%.
Then you can create citation cartels, in which you and your colleagues cite each other.
But it's not that researchers are evil; often it's the funding source that uses those metrics (e.g., publication count), which are then gamed.
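The cartel tactic works for the same reason link farms worked against early PageRank: recycled citations compound. Here's a minimal sketch of that effect with a toy power-iteration PageRank over a made-up citation graph — all paper names, edges, and parameters are hypothetical, chosen only to show a three-paper cartel outranking an honest paper that gets the same number of outside citations.

```python
# Toy model (all papers and numbers invented for illustration):
# a PageRank-style score over a citation graph, showing how a
# mutual-citation cartel (A, B, C cite each other) outranks an
# honest paper (H) that receives the same outside citations.

def pagerank(graph, damping=0.85, iterations=100):
    """Power-iteration PageRank; graph maps node -> list of cited nodes."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new = {node: (1 - damping) / n for node in nodes}
        for node, cited in graph.items():
            if cited:
                share = damping * rank[node] / len(cited)
                for target in cited:
                    new[target] += share
            else:
                # dangling node: spread its rank evenly over all nodes
                for target in nodes:
                    new[target] += damping * rank[node] / n
        rank = new
    return rank

graph = {
    "A": ["B", "C"],   # cartel: A, B, C all cite each other
    "B": ["A", "C"],
    "C": ["A", "B"],
    "H": [],           # honest paper, cites nobody in this toy graph
    "X": ["A", "H"],   # two outsiders each give one citation
    "Y": ["B", "H"],   #   to the cartel and one to H
}

ranks = pagerank(graph)
for paper in sorted(ranks, key=ranks.get, reverse=True):
    print(paper, round(ranks[paper], 3))
```

Even paper C, which gets no outside citations at all, ends up scoring well above H, because rank keeps circulating inside the cartel instead of leaking out. Any venue or funder that scores by raw citation-graph centrality inherits this weakness.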
My concern -- and this is a total layman's observation -- is that if someone comes up with a good way to "spam" the academic research circuit, will we be able to tell? Most of us remember the recent Reinhart-Rogoff incident, in which a massively flawed paper (think Excel errors) wasn't challenged until a curious grad student took notice: http://phys.org/news/2013-04-excel-austerity-economics-paper...
There are other issues in all of that, but for the current discussion, I think it's enough to argue that discovery of exaggeration (the equivalent of resume padding) is still a very difficult problem in academic papers and citations. Do researchers have the tools to methodically sift the good from the meh? It doesn't seem so. Combine that with the lack of incentive to disprove published findings (as seems to be the case with Reinhart-Rogoff), and you have a scenario in which gaming the system seems quite doable.
> Reinhart-Rogoff was not submitted to a peer reviewed journal.
Yet among 400+ other citations, it was self-cited in a paper in the American Economic Review (see Google Scholar), which is peer reviewed and appears to have had a code submission policy since 2004 (see Wikipedia).
Perhaps if they had submitted the paper directly it would have been turned down? That would make it an even better example of how to game citation count, SEO/PageRank style.