There is a worrying amount of fraud in medical research (economist.com)
296 points by martincmartin on Feb 23, 2023 | hide | past | favorite | 108 comments


As a reminder, peer review is not designed adversarially. It is not supposed to catch fraudulent authors before they publish---that's the point of replication (which happens rarely). Peer review is designed to ensure that the publication, as a packaged work of science, describes a valid experiment. You are necessarily assuming that the authors did what they said they did. As a peer reviewer, you're ensuring that what the authors said they did constitutes valid science.


In practice, peer review isn't even that. Most referees are not double-checking your statistical analysis. What they focus on is whether the research is interesting and whether it has appropriately considered the relevant literature. Even that is not always done carefully.

Here is an argument that peer review is basically a failed experiment: https://experimentalhistory.substack.com/p/the-rise-and-fall...


I think we're saying the same thing. You're trusting the statistical analysis done by the author. When I peer review, I check that the conclusions the paper draws are in line with its reported statistical analysis, and I check that the analysis used is the right one, but I don't check their math. And I'm not a statistician, so I'm not authoritative on the full space of statistical analyses anyway.


Who really has all the hours in a day to do it, too? I think it'll all come down to the journal's integrity. Academia will have to learn to negotiate the premiums of their library subscriptions for better editing/curation in each journal they subscribe to. It's already pay to play; might as well get your money's worth.


I'm burnt out on reviews. I did maybe ten last year but never spent less than a day on each, and sometimes two. In my own papers every reference fits to the best of my knowledge (knowledge informed by actually reading the cited material) and I follow best practices in statistical analyses, so I hold other papers to the same standard. It's just an unsustainable standard to hold. I'd like to see journals pay statisticians to focus on methodology, though I'm not sure honoraria are a solution for peer review. They would help me justify spending the time, but I reckon a subset of hyperprolific reviewers would half-ass it more than ever to maximise income per hour spent.


And often it is impossible for a reviewer to replicate the results even without fraud being involved: in studies involving DNA sequences, at least, the sequences are generally only added to GenBank (or EMBL, or whichever repository, depending on the country) once a paper is accepted, so reviewers don't have access to them.


In practice, peer review isn't even that. As fields of science have grown more specialized, qualified reviewers are peers who are your competition: you are either already in their club or somebody who needs to be kept out. You will not be helped in any science that threatens their funding. There are a lot of vested interests in every instance of medical research fraud.


Like many human inventions, it started out good and got worse as people got better at exploiting the flaws in the institution. Those exploits eventually get bad enough that they can escalate to creating new flaws to be exploited until the whole thing is captured. Most institutions are somewhere along that path.

My personal philosophy about such things is to treat Hanlon's razor as a boundary condition: the longer an institution has been around, the more likely the incompetence is actually just well-disguised malice.


It's imperfect, but what better system do we replace it with? It's easy to critique; it's hard to solve problems. Just like it's easy to find problems in any research, but hard to research and publish.


The best system we can currently observe is in ML.

Look at Stable Diffusion as an example. Incredible papers, such as DreamBooth, LoRA, and ControlNet, are:

1. Published on arxiv before peer review

2. Productionised within 2 weeks of paper release (peer review not needed)

3. Adopted rapidly by the community, which makes the tools easier to use.

4. Products built on such papers proliferate extremely rapidly within another few weeks.

In this system, peer reviews are worthless. The GitHub code quickly demonstrates whether a technique is useful, and community adoption rates replace citations as proof of a paper's power.

This is why AI art can progress at such insane rates, weeks from paper release to widespread productionisation.

Obviously, this won't work in most other domains, because there's no equivalent to mass consumer interest, open source communities, and low-cost experiments. But it does represent the ideal of an academic research paradigm.


> products built on papers

This is the key to everything else. There is built in reproducibility and amplification of new, functional ideas in the ML community.

For the most part in life sciences, papers are published to achieve current grant aims and write future grants that will be funded. You can be an academic and love your research area and be ultra-passionate about it, but at the end of the day, grants are the end product that you are working for.

Your science does not have to work or be replicated; all you need to do is publish papers that make grant reviewers think you are reliable enough not to waste federal grant money. Nobody on the grant review board has time to look carefully at whether your papers are fraudulent.

Let’s look at physics in the early 20th century, which progressed even faster than today’s machine learning research: massive upheavals and rapid advances in understanding our world, including four different models of the atom (among them the most correct one we still use today) and general relativity. What’s the difference from today’s life sciences? At the important epicenters of the day, working in the field meant 1) contributing new observations, or 2) directly testing somebody else’s theory with an experiment.

In today’s world, very rarely will somebody contribute new observations without an underlying motivation (get new grant money, advance current grant claims). And nobody has the time or resources to test other people’s ideas with new experiments. Why? Because research is expensive and you would need a grant to fund a replication, and no government body funds those grants.

Disclaimer: there are people in some fields of the life sciences doing good work.


The peer-review system is antiquated, developed before the internet and PowerPoint presentations. Anything that facilitates interaction between authors and other researchers will be a vast improvement.

Peer review doesn't catch fraud and is sometimes a political process. The best corrections I've gotten came after posting pre-prints to online forums, so I suggest that commenting on pre-prints is better than pre-publication peer review. I imagine a ranking system could highlight comments from trusted reviewers. Studies that no one wants to comment on were probably never going to be read, so there was never any reason to review them in the first place.
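A minimal sketch of the kind of ranking system suggested above — all names, trust scores, and comments here are hypothetical illustrative data, not a real service:

```python
# Hypothetical sketch: surface pre-print comments from trusted reviewers first.
# Trust scores would come from some reputation mechanism; here they are made up.

def rank_comments(comments, trust):
    """Sort (reviewer, text) pairs so comments from more-trusted
    reviewers appear first. Unknown reviewers default to trust 0."""
    return sorted(comments, key=lambda c: trust.get(c[0], 0), reverse=True)

trust = {"alice": 0.9, "bob": 0.2}  # assumed reputation scores
comments = [
    ("bob", "typo in eq. 3"),
    ("alice", "control group looks underpowered"),
    ("carol", "nice figures"),
]
ranked = rank_comments(comments, trust)
# alice's methodological comment is surfaced first, carol's (unknown) last
```

The hard part, of course, is where the trust scores come from; the sorting itself is trivial.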


Yeah, that's post-publication peer review, which I tend to gravitate towards myself; un-peer-reviewed papers in my field, posted on bioRxiv, are generally of pretty high quality. (This might change as preprinting becomes a fully established route in biology.)

Any peer review added to that might improve some things, but is it worth it to add ~6 months to the publication timeline for a marginally improved manuscript? I'd say no.


Widespread peer review is only ~50 years old.

Science did just fine before it, as it will after it's phased out.


Well, it was around but inconsistently applied. Einstein famously got angry when his 1936 work on gravitational waves was sent out for review ("I sent my paper to be published, not to be reviewed!"), and Watson & Crick's 1953 paper was going to be sent out for review, but their boss, Nobel Laureate Lawrence Bragg, phoned up the editor of Nature and said they needed to get it published ASAP.


Forty years ago there was already this sentiment, as best as I can recall the quote: "It is well known that as the scale of the research grows, peer review becomes less effective."


> It's imperfect, but what better system do we replace it with?

Pre-registered trials and/or arxiv + open science.


Science theatre


I don't think peer review is about ensuring anything. It's about spending some time to improve the quality of the paper to save time for everyone else. It's more about providing editorial services than about the reported work itself.

A research paper is supposed to be an honest report of best efforts to study a topic. If it's not, that's a problem that can't be solved by having a few people spend a few hours with the report. The paper is not the final word on anything anyway. If you read a paper expecting to learn something about the world, you are doing something wrong.

As a reviewer, you determine whether the paper is interesting and relevant to the venue. You report any issues you spot that should be corrected and any things you believe that could be improved. And if you get any ideas you feel like sharing, you may share them as well. And that's it.


> The paper is not the final word on anything anyway. If you read a paper expecting to learn something about the world, you are doing something wrong.

Can you expand on this? As a layperson, I'm wondering what would you read a paper for at all if not to learn something about the world? What's the point otherwise?


Papers report very early stages of knowledge formation. They typically contain an argument: what was done, why, what was expected to happen, what actually happened, how did the authors interpret that, and why. Rather than describing the world directly, a paper describes a belief about the world, and justifications for the belief.

Published results are often contradictory, because individual papers are unreliable. Something may have gone subtly wrong, the interpretation could lack nuances, some key understanding may still be missing, or the authors may have just been unlucky. When an expert reads many papers on related topics, the arguments shape their beliefs. Eventually a scientific consensus may emerge, which is the next (but still an insufficient) step towards reliable knowledge.


This is something I bookmarked (in part because I'm an admirer, but also because it made sense and squared with my limited experience of academic peer review):

https://twitter.com/TheSavageInMan/status/108350367715796172...

I think people get in trouble when they think academic publication has value in itself. Ideas become valuable when people care about them. Publication is one particular path to reach a community of readers but it doesn’t make your ideas matter.


Isn't this a kinda narrow view of what's valuable? Sure, it's valuable for your reputation or your academic career only if it becomes widespread.

But if something has predictive power, it's valuable no matter how many other people know it. In some cases, it's way more valuable when it is still unknown, because nobody has had the chance to capitalize on it yet.


I think you're talking about science in general, and Stefan is talking specifically about papers.


I'm not familiar with the person behind the tweet, so I don't know any more than what the quote says. But I'm saying that if an individual paper says a thing, and that thing ends up being useful, then how is that not valuable? It could end up being wrong, but so could anything else you pick up from anywhere.

In some instances, it won't end up being useful because it's only relevant to someone in a lab full of million dollar equipment. But that'll only be the case sometimes.


Some papers are super important! Some papers aren't. (from the ice age... to the dole age...) "Being a paper" isn't a very meaningful metric. That's all that's being said.


I think this is a problem if you take one paper and then you weight its conclusion at 100%. It's less so if you take all of the conclusions and realize they all have a weight (maybe 0). But I think that's true for every source of information.

My approach is to avoid framing things in terms of "now I know x is true", and instead look at it in terms of "someone believes (or wants me to believe) x is true". I then weigh all of those beliefs as I observe the world around me. I resign myself to never really knowing anything myself, but having some idea of what different schools of thought are on a given topic.

If you move your information gathering further along the chain, to maybe a textbook or some expert's Twitter feed, why should these things be more reliable? Even if they've correctly arrived at the consensus view, you still have to consider that informing you of the truth might not be their first priority. They could be after money, advancement in their field, or political agendas, and (though less likely) they could still be just plain wrong.

So they're more likely to know the truth, but you're still unable to evaluate whether they're giving it to you or not.


> As a reminder, peer review is not designed adversarially.

I beg to differ. Peer review is an adversarial process. The authors of the paper make statements and report findings, proving them by logic and/or data and analysis. The prior while reviewing a science paper is "wrong" until proven "correct". This also covers the accuracy and veracity of the data and analysis. The only thing peer review does not do is assign intent or blame: it is not the job of a reviewer to look for fraud when simple incompetence could explain it. But after reading this article I will add fraud to the list of failure modes I look for while peer-reviewing a manuscript.


Disagree. From my experience as a scientist there isn't a manual that says what peer review is and isn't supposed to be. Of course, some journals have such guidelines (not that all reviewers read them), and the editors who assign the reviews are free to ignore a rogue reviewer. But there is a set of explicit or tacit standards of good science, and any of it is fair game in peer review:

1. Validity of the experiment

2. Interestingness / novelty of the work

3. Appropriate choice of methods and correctness / believability of the results

4. Signs of outright fraud (suspicious figures, etc.)

Absolutely if I notice some weird Photoshop artifacts in a figure, or some other obvious sign of fake data, I'm going to call that out. I'm probably an outlier on this next one, but I've even been tempted to reject articles just for having egregiously, unreadably bad writing. I know this will be regarded as bias against people whose first language is not English, but if the writing is so bad that it stands in the way of making sense of the article, and the authors can't be bothered to get a decent editor, it's not a worthy contribution to the (English language) scientific literature.


I don't think anybody is denying that if something jumps out at you as hinky in a paper, you're going to note it --- in fact, if you believe the paper is fabricated, you'll probably do more than that, and raise a stink with the chairs of the PC as well.

The problem people have conceptualizing peer review is that reviewers can't reliably spot stuff; they simply don't have time to do it, and it isn't the premise of the exercise.


> "Going by these numbers, roughly one in 1,000 papers gets retracted [..] that something more like one in 50 papers has results which are unreliable because of fabrication, plagiarism or serious errors."

I'd say these are underestimates. Let me add that 80%+ of papers are useless. The only "value" they provide is to the person getting academically promoted and/or building their publication portfolio/cred.


> The only "value" they provide is..

As far as I can see this is mostly an incentives problem. Bureaucratic control of academic hiring has ended up emphasizing the short term measurable (e.g. #of pages published, so-called impact factors, etc.) over the long term, with pretty predictable results.

Medical research in particular is fraught with another set of problems; the default clinical pathway gives a weak at best grounding in science, and even the MD/PhD programs have been gamed to some degree. There are definite counterexamples (lots!) but there are also a lot of clinicians with incentive to produce research but little skill in it and even less time available...


I think zooming further out, the incentive for academia is to churn out degrees + get grants. Getting a PhD is supposed to require doing something that nobody has done before. Most people getting PhDs/in academia are not actually good enough to do this (it’s really hard! There’s not a ton of low hanging fruit, and you’re “competing” with many others), which is why we end up with tons of garbage papers nobody will ever care about.

Medical research may be performed by MDs but the incentives are still basically the same: papers are resume builders. It looks better when applying to a fellowship/job to have a nice publication history. Obviously the best case is to have worked on some really groundbreaking stuff - still really hard - but the next best case is to have a ton of meh publications, since that beats having a few publications, or no publications.

In medical programs the grant thing is a lot bigger too, because there is, rightfully, tons of money to throw at that area. You need grants as an academic to progress. You won’t keep getting grants if you take grants and then don’t publish anything, so even if nothing good comes out of it, you need to publish something. That incentivizes fraud in the worst case and noise in the best. The better-best case would be if academia were more open to accepting null results.


I wish the extent to which a paper is well written (honest, clear, appropriate for the audience) was enough. Sure, I want scientist to "have a nose" for finding interesting results, but I don't like the framing that a solid piece of work is any less because it happened to not demonstrate a useful result.

I'll add that I would like to see a LOT more work that synthesizes and analyzes other work; i.e. literature reviews and meta-analyses.


That’s what most of those papers I am calling garbage are: they are not necessarily wrong but they are not interesting. A meta analysis is cheap and easy to do for even an undergrad, and doesn’t require any special insight or foresight.

I think the bigger problem is we have too many people trying to chase after the highest tier of academic achievement relative to what that tier “should” be or was in the past. It’s benchmarked on novelty, but most people doing research are never going to produce any worthwhile novel results - in some cases it’s just bad luck but in most I think it is just lack of aptitude.

Research is not supposed to just be a resume checkmark, and a PhD isn’t just supposed to be some structured degree program where you can get “on rails” and churn out papers to get a degree proving you have above-average intelligence. But that’s what it is, and it generates tons of noise, while cheapening the value of a PhD.

If I were emperor of the world I’d split research into two tiers where one is more focused on basic science: investigative studies, verifying results, writing papers with solid structure, applying stats. This would be what most people get, and could soak up demand for people getting PhDs with no intention of becoming academics (ie to immigrate, to qualify for some kind of job). And then a second tier of research for the wickedly skilled researchers who are producing novel results and really moving the field forward.

Right now that second group, in most science disciplines, is who progresses in academia anyway. But tons of people are “doing research” and getting advanced degrees based on research in a way that is more like a “basic scientist training program” rather than “moving forward a field of study.”


> Research is not supposed to just be a resume checkmark, and a PhD isn’t just supposed to be some structured degree program where you can get “on rails” and churn out papers to get a degree proving you have above-average intelligence. But that’s what it is, and it generates tons of noise, while cheapening the value of a PhD.

Ah, this seems like the emotional core of your concern. I appreciate the honesty. You are referencing an ideal that you think has been lost. I've seen this get mentioned frequently, and I tend to believe it.

Still, one has to admit this emphasis runs the risk of appearing elitist and/or egotistical. Foundational work is important, yes. So are applications. I'm seeing strong signs of the zero sum game battle for resources and recognition here in your comment, a quality that often makes academic institutions particularly nasty and self-defeating.


I do think there is a tinge of elitism, for sure, but what I mean more practically is that a lot of research is being churned out that is not incentivized by actually wanting to do good research. I fundamentally believe you get what you incentivize, and the incentives for research lean too far towards building careers and resumes outside of research, to deleterious effect.

The elitism you are picking up is my sense that academia is losing its value by becoming a bureaucracy machine with low standards, rather than a way to produce novel insights. It doesn’t make sense to build a road to nowhere just because it’s funded, looks good on a resume, and makes it look like you know how to build roads.

Can a meta-analysis be valuable? Sure. Should 1 million people be submitting meta-analyses to journals? No, because almost none of them will be read, and they are unlikely to be of high quality.

My proposal is simply to make a scientist track that is not indexed on novelty but instead on conducting good research, a focus on the scientific method and applying it to a field, etc. This should satisfy the desire to produce PhDs for basic governmental and educational jobs that for whatever reason (even if artificial scarcity) need some kind of barrier to advancement or knowledge of how to conduct research. Most people going from academia to elsewhere will not be working on the exact niches they studied anyway. And it would at least keep the treadmill of people chasing the “highest degree” or random publications for their resumes at bay by removing the requirement for novelty.


I follow what you're saying about the bureaucratization of many educational programs.

Some questions about the "scientist track" idea. I don't yet see or understand a clear way to separate it from what exists now. It seems like there's a lot of background knowledge necessary to do work in a particular field.

I think there might also be some ambiguity in "novelty in research". Scientists are not primarily looking for novelty alone, but for novelty combined with ...

1. theories with better descriptive or prescriptive power

2. interesting applications


> A meta analysis is cheap and easy to do for even an undergrad, and doesn’t require any special insight or foresight.

This is too harsh, too narrow, and too judgmental.

This only mentions the skill required, but neglects to account for the benefits to the body of knowledge (literature).

I reject the idea that a scientific paper's impact should be judged on a curve according to how difficult the work was:

1. This is premised on a false notion of scarcity.

2. Breakthroughs can seem obvious to a lucky person in the right place and time.

3. Others may ascribe "genius-level" status to such lucky people, but this elevation is often unwarranted and not useful.

4. The time delay before impact and benefit is uncertain. Often work takes time to be synthesized, appreciated, applied, and so on.

5. What is {important, relevant, useful} can change over time. Having only a narrow definition misses the point.


Some of that 80% of papers are also valuable to politicians and bureaucrats pushing biased narratives on the public. No matter what position they want to take they can cherry pick some low-quality research to justify it with a veneer of "science".


How are we defining “useless” here? Papers that are wrong and fraudulent? Or is it broader and any paper we currently can’t do anything with or yields a negative result?


Did anyone use the paper for anything?

If not, it's useless, i.e., it lacks any utility.


If you were to ask experts in a given subfield which papers are reliable, I'm sure they could tell you. The problem is that there's no process in science for expert consensus to make it out to doctors and laypeople.

People assume that peer review means a paper is good, which couldn't be farther from the truth. Science journalists aren't any better, they care more about hype than consensus. Honestly, it's dangerous to give a random peer reviewed article to someone who doesn't have broad knowledge of the field.

Maybe we need middle-ground journals that publish review articles at the level of a Scientific American reader?


> People assume that peer review means a paper is good ...

"Conclusions: Parachute use did not reduce death or major traumatic injury when jumping from aircraft in the first randomized evaluation of this intervention."

https://www.bmj.com/content/363/bmj.k5094

see the "Peer review" https://www.bmj.com/content/363/bmj.k5094/peer-review


Please tell me this is satire


From the first review:

>"While this trial illustrates many important points about participant screening and recruitment in RCTs along with the need for equipoise, we feel it is important to emphasise that this is not intended to undermine the use of RCTs, but to illustrate some of the issues that may arise.

We felt another important message from this paper is that if you make a decision based on abstract alone, this may lead you to an incorrect assumption, so this trial illustrates very well the importance of reading the whole paper."


The abstract is quite explicit about the limitations of the study, namely, that all the people without a parachute jumped from a very low altitude (mean of 0.6m) and that the results should not be extrapolated to jumping out of planes, which are much higher than that.


The BMJ is a very real and serious journal. This article is from the “Christmas issue”, which is a little different…obviously.

It is meant to be lighthearted and tongue in cheek - but no less serious of science. It often exists as a form of satirical critique of bad habits of science and science communication.

There are flaws in that model though that have been noted. Prime among them is that nothing clearly identifies an article as from the Christmas issue online, and even if it did that may not carry any meaning for a reader unfamiliar with the journal.

https://www.bmj.com/about-bmj/resources-authors/article-type...


The BMJ Christmas issue is full of funny (but maybe also serious) articles each year. My favorite is an analysis of reported virgin births [1].

[1] https://www.bmj.com/content/347/bmj.f7102


It’s not a given experts know what papers are reliable. Here’s a paper from Genentech https://pubmed.ncbi.nlm.nih.gov/15385631/ with 500 citations which my advisors swore is reliable because “they trust the authors.” Don’t mind the fact that the central thesis of the article is barely supported by the last figure in which they conveniently average data from completely different sets of experiments for each time point.

And did I mention my advisors released a drug into the market just recently? Lol.


Journals could open replication wings? I'd rather read about experiments that have been validated by an independent third party at this point.


Part of the problem is the lack of prestige in replication. What gets researchers promoted is finding novel methods, so most researchers don’t want to spend their precious time on replication.


There is prestige in what gets you recognition, so if journals and departments started recognizing contributions to replication research, like giving them dedicated space in journals, then the prestige will follow.


Fair enough, but the incentives don’t align. Novel research attracts funding, so even if replication gains prestige in the future, the business side of research will still run counter to it. A researcher would still gain more prestige from novel work that attracts substantial funding.


Most novel findings don't replicate, so a researcher at Yale can gain plenty of prestige by debunking the sensationalized findings from Harvard, for instance.


>Most novel findings don't replicate

This is very context dependent, so such a broad generalization probably goes too far. The “soft” sciences seem to have a much bigger replication crisis.

But that point aside, your Yale vs. Harvard example doesn’t fix the funding issue I alluded to.


That’s pretty bang on (biochemist).


That's one of the intended purposes of review journals like Annual Reviews (https://www.annualreviews.org/). There are some pretty big practical issues with them though:

1. Sometimes the latest review on a subject was written 2/5/10 years ago. Depending on how quickly the field is moving, that review could be perfectly acceptable, completely obsolete, or anywhere in between.

2. Sometimes the author themselves is just out of date with the field. It's difficult to identify this without deep prior familiarity.

3. Paywalls.

I don't know what the solutions are beyond "making experts accessible for questions" and other public outreach things.


I maintain that the best analogy for life sciences research is naval exploration. Some people tell tales of their great voyages but never set foot on a boat. Some sailed, yet found nothing, and still spin a story to avoid embarrassment. And some actually found America. Science is obsessed with these stories enshrined in journal articles, which are elevated to the status of artefacts of knowledge, complete with Excel spreadsheets in the supplementary info. Creating such artefacts determines everything in your scientific career. That is the actual problem in my opinion. Of course people will want to fake these powerful status symbols.


I recall reading -somewhere - about a similar problem in psychology journals. The problem there is worse, in terms of correctness, because the journals don’t publish negative results, including when the negative results disprove a previously published paper.

In medical journals, though, the problem is worse because it is more likely to kill someone.


A lot of medical paper fraud is in psychology papers. They are notorious for badly set up studies with intentionally leading questionnaires. No amount of peer review seems to improve them, and a journal somewhere will always publish them.

The volume is quite staggering as well: ~40 psychologists are responsible for an insane number of papers, all with the same methodological problems, purporting to show their approach can cure everything. Apparently every disease is a psychology problem. In practice no one is getting better from their proposed and trialled treatments, and patients repeatedly complain about it, but they are everywhere in Europe and the vast majority of doctors believe in these specialists, even though the papers are universally reviewed as very low quality even when they aren't faking data or manipulating the stats to produce an outcome.


> A lot of the medicine paper fraud is in Psychology papers.

Psychology is not medicine, it’s an academic discipline. Psychiatry is the branch of medicine that deals with mental health.


A common misconception; here is one of the famous 40, at King's College:

https://www.kcl.ac.uk/people/professor-trudie-chalder

They probably shouldn't be in medicine, certainly not running departments in hospitals, but they are.


There is a reproducibility (replication) crisis in psychology. Much of what psychologists accepted as settled for years has turned out to be bunk. While there are some researchers doing excellent work, much of the field remains no more scientific than phrenology.

https://www.theatlantic.com/science/archive/2018/11/psycholo...


This is really confusing the messenger with the message. The problem is not limited to psychology, it's that psychology historically and currently is where meta-science usually happens (modern meta-analysis has its roots in educational and clinical psychology):

https://en.wikipedia.org/wiki/Replication_crisis

https://www.nature.com/articles/533452a


> In medical journals, though, the problem is worse because it is more likely to kill someone.

Right, with psychology, you're more likely to kill yourself


I'm advocating for "You keep what you kill" rules in science.

If you disprove a paper or prove a study cannot be replicated, you get the funds of the scientist, subtracted from his/her current funding. Make bad science fund the good science and make de-replication a for-profit endeavour. There can be all the funding in the world for quack science, but if it can be debunked and is debunked, it will finance real science.


As a thought experiment I think that’s brilliant. As an actual plan though I think it would be terrible. There are so many ways to fail to reproduce an experiment. Incentivising the failure to reproduce is just a very out there idea.


How would this stop fraud? Wouldn't there be a possibility of endless cycles of fraud?


It would be far more effective if you got their citation count instead.


But the untruth of one system makes the truth of the other not a given. So transferring funds makes more sense, as it just shifts the chance of being true towards the more "scientific" part of the system. It also adds lag to the system, so you do not get constantly switching reward functions.


Oh I’m in no way suggesting it’s truth, just that it’s a more tangible reward in current system of academia.

I should have added the inevitable /s. When I transitioned from grad student to faculty, one thing that struck me most was when faculty candidates were discussed in my first year and the first slide about all of them was a histogram of citations. You can hear it all you want but seeing it just crushed my soul a little bit.


So basically, a disprove-citation hierarchy auto-redirects to the theory the disproving author favours? Or just his main work on the topic? What if he has none? I'm just playing with the idea to see where it goes. I'm sorry if I sounded critical or harsh previously; I'm just trying to wrap my head around this.



Retraction Watch [1] is a great source of additional examples of unethical behavior in scientific research.

[1]: https://retractionwatch.com/


WOW. I was reading on that webpage that there are even ads selling authorship of scientific papers.


Since we're on the topic of bad science, the first figure in that article (Pants on Fire) is a pretty bad one. The units on the Y-axis are not specified (strike one), but it looks to be the _unnormalized_ number of papers retracted each year.

The yearly retraction _rate_ (i.e. the number of retracted papers per number of papers published) is what is relevant here. The number of journals and papers has exploded since '96.
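To make the point concrete, here is a toy calculation with invented counts (the 1996/2021 figures below are hypothetical, chosen only to illustrate the normalization, not real publication data):

```python
def retraction_rate(retracted, published):
    """Retractions per paper published in a given year."""
    return retracted / published

# Hypothetical counts: raw retractions triple between the two years,
# but so does the number of papers published.
rate_1996 = retraction_rate(400, 1_000_000)
rate_2021 = retraction_rate(1_200, 3_000_000)

print(rate_1996, rate_2021)  # identical rates despite a tripled raw count
```

A chart of the raw counts would show an alarming threefold rise, while the rate, the quantity that actually measures whether the literature is getting worse, is flat.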


This blog post from the BMJ wasn't particularly comforting when published in the middle of covid: https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-hea...

Richard Smith was the editor of The BMJ until 2004.

Competing interest: RS was a cofounder of the Committee on Publication Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office.


I've been in medical research for 5 years and saw so much fraud and incompetence that my guess is 20% of it is honest work.

But have hope, we're gonna fix it: https://cancerdb.com/


If you look at why papers are found out as fraudulent and retracted, it's usually very dumb mistakes, such as the examples from the article: copying data and/or text from elsewhere, making up numbers that are obviously implausible, and cloning/photoshopping figures or parts thereof, even within the paper. Given how easy it is to avoid these beginner mistakes, the percentage of fake data must be so much higher than the actual retractions, especially from the paper mills, where faking data is done by professional fraudsters.


This looks like a good opportunity to ask a question I've had for a long time. In what field(s) of research is it respectable to ask, "Has this paper been peer reviewed?"

I've spent nearly four decades working mostly in academia with scientists, engineers, social scientists of various degrees of rigor, and even occasional humanities types. I don't think I have ever been in a group where someone did or would raise the issue of peer review when discussing the quality of a report.

In all groups I've associated with, you are expected to know enough about your field to be able to assess for yourself the quality of a paper as you read it. Relying on some anonymous reviewer's judgement to justify your acceptance of a report would cast serious doubt on your own judgement.

I've been trying to note just who raises this issue and they appear to be most often in life sciences/medicine. I've never worked in that general area, but my impression is that knowledge of statistics and methodology is rather weaker than in other scientific fields. My (perhaps uncharitable) theory is that peer review has become a gateway for automatic acceptance because the average technical expertise of readers is so low that they cannot evaluate for themselves.


I feel this is an area where AI could be helpful in recognizing suspected fraudulent, or potentially just poorly studied research.


I used to follow a blog (possibly retraction watch? I can't remember now) where they would go through basic biology papers and find photoshopped/edited images and ask the author about them. Sometimes this would lead to retractions etc.

They did mention other people had tried and failed to build tools for this, but the current state of the art was drinking-a-coffee-and-looking-at-it-real-hard. This was before the current explosion of AI, so maybe it's different now?

Anyone else able to chime in?

Edit: Found it! https://scienceintegritydigest.com


Why do you say that?


I've talked about an experience someone in my family had before. Straight up fabrication in a lab, and it wouldn't be really detectable until you attempt to replicate and then check the pictures after.

https://news.ycombinator.com/item?id=25926188


P-hacking more generally has been an issue for some time.

It's difficult to trust almost any study, even if you find parts of it to be reliable.

Take one or a few stats courses to find out how easy it is to smudge data with no one being the wiser. It's a real problem.
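One of the simplest forms of p-hacking is measuring many outcomes and reporting whichever crosses p < 0.05. A minimal sketch of how badly this inflates false positives (the outcome count, alpha, and trial numbers here are arbitrary choices for illustration):

```python
import random

random.seed(0)

def hacked_experiment(n_outcomes=20, alpha=0.05):
    """One 'study' of a treatment with zero real effect.

    Under the null hypothesis a p-value is uniform on [0, 1], so we
    simulate each outcome's p-value directly instead of running a
    full test. The 'researcher' reports success if ANY outcome
    happens to cross the significance threshold.
    """
    return any(random.random() < alpha for _ in range(n_outcomes))

trials = 10_000
false_positives = sum(hacked_experiment() for _ in range(trials))
print(false_positives / trials)  # ~0.64, far above the advertised 0.05
```

With 20 outcomes the analytic false-positive probability is 1 - 0.95^20 ≈ 0.64, so roughly two thirds of these null studies would "find" an effect, which is why uncorrected multiple comparisons are so easy to hide and so hard for a reviewer to spot.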


For the reasons you say, I regard papers that report p-values without effect sizes to be at most interesting, but probably irrelevant for making actual real-life decisions.


I think the one piece people miss is that scientists don't assume the literature is reliable.

When I worked in the lab, it was well known that a good part of the literature cannot be replicated.

On top of that, specific journals and labs have reputations for less-than-quality research. If a paper comes out, it's not assumed to be true until proven otherwise (usually by being expanded upon by others). The only time that's different is if it comes from certain labs that are known to have a good track record.

So it's not like shoddy research is causing other scientists to waste time, except maybe the time spent reading the paper.


I wonder if journals, especially medical ones, could be sued if they don't react and retract studies in time once notified of problems, and patients get ineffective or even detrimental treatment based on those papers.


96% vaccine effectiveness! Into absolute uselessness when deployed to the masses.

As someone who believed them and took the vaccine, that was the last time I'm trusting medical research. I won't be fooled again; from now on, research won't be enough to make any personal decision. I'll wait until independent evaluation in the real world happens and there are enough anecdotes to see that it really works as advertised.

The entire coronavirus debacle has corroded trust in the medical establishment to zero. I think it's justified and we've just been ignorant of the fraud; it took a lie so big and obvious to really wake people up.


Isn't peer review supposed to catch stuff like this? I mean, that's supposedly the "gold standard" for studies, right?


No, peer review is absolutely not the "gold standard" for studies. Replication is closer to the gold standard, but replication is a scientific project all its own. Peer review is a sanity check before allowing a research contribution to become part of the conversation in its field.

(I've done some academic reviewing, but this is a point I've shoplifted from Stefan Savage).


Peer review is not the "gold standard" but rather table stakes; if some paper can't even pass that, it's probably not worth even skimming the abstract.


The funding gotta go somewhere somehow.


They only realize it now?


The major journals have absolutely no accountability. In any other market, if the product doesn't work or harms someone the company goes out of business or the maker is sued. Not so in journals. So, why do we accept it? Because there's no other way for the layman to determine what makes a good professor, because by definition, they are smarter than us (or at least they're supposed to be), and so we (the general public) are not able to tell if they are good at what they do or not.

So - the answer we have is peer review, which is just the foxes guarding the henhouse. There's no other solution that's been proposed that makes any sense in a self-reinforcing market manner. Having some post-docs suddenly become concerned about this and hire a bunch of undergraduates to start combing through Excel spreadsheets will be useful until everyone loses interest. The price of a can of Coca-Cola doesn't stop being useful when people lose interest - it's market priced by millions of customers every minute of every day.

Until there's a solution to this problem that makes sense this will keep happening over and over again.


Prediction markets may be an option: https://www.pnas.org/doi/10.1073/pnas.1516179112

Similar to how charities (ostensibly) can be rated by Charity Navigator, and colleges (ostensibly) can be rated by US News, the credibility of various studies (and the journals that publish them) can be measured.


could start by paying journal reviewers


This is an underrated idea. Putting a very smart and motivated person on the other side of the problem is better than any static set of incentives that can be gamed.


Some sort of discounted consultant rate might be justified... the author pays the reviewers rather than the random publishing fees. Want your paper reviewed? Pay $2k.


It matters to people who are actually doing science because they replicate findings in their own labs before relying on them. Hopefully they will get control of this problem, because it's incredibly wasteful to chase red herrings that were never actually true findings in any lab. Labs that try to stand on previous work without verifying it are standing on a house of cards.


> In any other market, if the product doesn't work or harms someone the company goes out of business or the maker is sued.

Not strictly true; here in the UK you won't get refunds for drug treatments that don't work if seeing a private GP (or vet). That's why the NHS exists; it's harder to sue the NHS in those conditions.

NICE, who decide what drugs to use, is veiled in secrecy unless you can attend one of their public meetings. Just as it's impossible to attend every court case, it's a form of resource burning which only rich entities can afford.

If you do challenge any drugs, you'll get passed from pillar to post until you end up in the govt's lap and they make the rules up as they go along. It's very hard to get any recourse unless you can afford expensive lawyers who know the technicalities to use as an approach vector.


No surprise here. Just put together a bunch of data and make up the statistics. No one's the wiser. "Now hurry and let me sell Oxycontin to children" -FDA


This is highly misleading.

The FDA requires a lot more than a few medical papers. They have their own guidelines for studies, and you have to follow them to the letter. They also have investigators present to make sure data isn't mislabeled or incorrect.

The Oxycontin issue wasn't that the medicine wasn't safe; it's that it was deemed safe by the FDA for its intended use, then it was pushed to doctors to go beyond its intended use.

The FDA was too late to respond to this, but it was undetected by the majority of the medical field because people went to the streets for re-ups, and doctors didn't want to admit fault. This is mostly due to improper regulation in how drug companies are allowed to interact with doctors, and the lack of healthcare resources.


"How FDA Failures Contributed to the Opioid Crisis" https://journalofethics.ama-assn.org/article/how-fda-failure...

1. Failure to Properly Enforce Marketing Regulations

2. Failure to Obtain Evidence of Long-term Safety and Effectiveness

3. Failure to Manage Conflicts of Interest

And my personal favorite:

> An FDA official who led the approval of OxyContin got a $400,000 gig at Purdue Pharma a year later

Special mention:

> A former senior U.S. Drug Enforcement Administration official who spent three decades at the DEA, specializing in preventing the diversion of prescription drugs like OxyContin, is now paid to advise one of the largest opioid manufacturers in the country, Purdue Pharma


Yes, I read that part. It doesn't counter any of my points, they actually enforce them.

Oxycontin was safe to use for its intended purposes, but it was not used that way. It wasn't the approval process that was problematic, just the follow-up.


Who would you trust? People who have been doing it their whole lives but have an interest in lying to you, or correct-sounding reasoning and the wisdom of billions of people over millennia?


Wisdom like what? "Sky daddy solves everything"?

For the record, there's lots we can learn from ancient and indigenous knowledge, but let's not pretend that everything would be better if we cast off the whole lot of modern society.



