20 Comments

We really do have to talk some time. This is awesome.


Nice summary. I don't see a link to the original preprint you're discussing, though? Also, what areas do you have in mind when you say: "There are areas where fraud is almost unknown in medical science"?

author

WHOOPS! Thanks for pointing that out, I totally forgot to link to James' preprint!

I think most areas where auditability is required have a low rate of fraud. Trials of novel pharmaceuticals where independent bodies look at the raw participant data. Large multicentre studies across countries where an independent statistician (or more than one) reviews results from multiple places and will notice any issues. Even smaller studies in areas like cancer research where every patient is carefully monitored and it's very hard to pretend that you have more people enrolled than you see in your clinic. Oversight is the biggest fraud prevention device, because if more people see the data then more people can notice issues!


From my years in molecular biology, it was quite the opposite: my lab and competing labs operated on a system of *mistrust*. We treated everything others published as definitely wrong, and they treated our research the same way. We repeated their experiments and wrote up how wrong their conclusions were. They repeated ours and wrote how wrong we were.

An outsider following the network of publications on the topic could think all that research was "wrong", but it was actually all fundamentally correct and mutually supporting across the labs, polishing the theory with every year of seemingly vicious back-and-forth. Science is truly powered by our nerd drive to correct each other, the same thing that powers Wikipedia.

But we also all knew who had sold their integrity, abandoned the pursuit of knowledge, and published paid BS in MDPI journals. No lab would bother with anything they wrote, let alone rely on their conclusions in our own work. But again, it's unfortunately very hard for the public, or even for scientists working on unrelated subjects, to spot those internal, often unspoken, dynamics.


We know that very few researchers are willing to provide raw data alongside their papers, and although they cite confidentiality for participants, in reality the overwhelming majority of these datasets can and should be anonymised. But that's just the manipulation/fraud side of things. When we get to sheer, unbridled incompetence, that's got to be off the charts, especially in the social sciences, where all kinds of bullshittery gets past ethics review, gets funded, gets published, and ends up crapping all over the poor clients via therapists who fervently believe in the gold-plated guru they've invested themselves in.


Are you aware of Ioannidis (2005, https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124)? Because publication is generally conditional on a “positive” finding, the false-positive rate among published research will be several multiples of the nominal 5% threshold, even with the best of effort and intention. Add incompetence and fraud and you realise most of it is quackery. Nevertheless, the scientific method is our best tool for narrowing the boundaries on “truth”… fallible and adversely incentivised researchers make vigilance and verification all the more important.
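To make the arithmetic concrete, here is a minimal Python sketch of that conditional-probability argument. The prior probabilities, power, and alpha below are illustrative assumptions, not figures from the paper:

```python
# Sketch of the Ioannidis-style calculation: among published "positive"
# findings, what fraction are actually true? Assumes a simple model with
# a prior probability that a tested hypothesis is true, fixed power,
# and a 5% significance threshold; numbers are illustrative only.

def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value: P(hypothesis true | significant result)."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:<5} PPV={ppv(prior):.2f} "
          f"false-positive share={1 - ppv(prior):.2f}")

# prior=0.5 -> PPV ~0.94; prior=0.1 -> ~0.64; prior=0.01 -> ~0.14.
# The rarer true effects are in a field, the more of its published
# "positive" findings are false, before any fraud enters the picture.
```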


A clear example of scientific fraud that has not been retracted is the PACE trial, published in The Lancet in 2011. The outcome measure for recovery was lowered from the original protocol (SF-36 of 60 or higher at publication vs. 85 or higher in the protocol), so that the threshold for "recovered" fell below the study's inclusion criterion (SF-36 of 60 or higher counted as recovered, while 65 or lower counted as sick enough to be included). All to boost the researchers' preferred treatments. Read more about it here: https://me-pedia.org/wiki/PACE_trial
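A quick illustration of that overlap, using the thresholds quoted above (the participant score is hypothetical, purely to show the logic):

```python
# Thresholds as described in the comment above: under the revised
# definition, the "recovered" threshold overlaps the inclusion criterion.
ENTRY_MAX_SF36 = 65      # SF-36 of 65 or lower: sick enough to be included
RECOVERED_MIN_SF36 = 60  # SF-36 of 60 or higher: counted as "recovered"

score = 62  # hypothetical participant
print(score <= ENTRY_MAX_SF36)      # True: eligible to enrol
print(score >= RECOVERED_MIN_SF36)  # True: already counts as "recovered"
# The same score satisfies both conditions, so a participant could meet
# the trial's recovery definition on the day they entered it.
```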

author

PACE is actually a good example of how much of this is about grey areas. The researchers in this case switched their outcome measures after they had access to some of the data, which is known as Hypothesizing After the Results are Known (HARKing). Reanalyzing under the original protocol produced similar but statistically insignificant results. This is one of those bad research practices that, depending on the situation, some people will still defend. And while it reduces confidence in the trial's conclusions, I don't think it's accurate to term HARKing fraud in the same sense as the work of, say, Yoshitaka Fujii, who admitted to creating datasets out of thin air. In the case of PACE, they did an actual trial, and no one disputes that the data is real; people instead dispute which analysis of the data is correct.


Thank you very much for your reply, where you make clear what mistakes were made in the PACE study and why it was wrong to call it fraud in the same sense as the case of Fujii.

But unfortunately it is still the case that many patients, not only in the UK but in other countries as well, including Norway where I live, are denied disability benefits because they have not tried cognitive behavioral therapy and graded exercise therapy. In most user surveys of treatments, carried out in different countries and at different times, these are the treatments where most patients report worsening. That is also why the NIH's "Pathways to Prevention" report on ME/CFS concludes: "Specifically, continuing to use the Oxford definition may impair progress and cause harm." The Oxford definition was the one used in the PACE trial.

That is why it is important for the patients that the study be retracted, which probably will not happen as long as Richard Horton is editor of The Lancet.


I'm sorry, but your 14% questionable-studies figure does not come anywhere near supporting that "most of science is fake", as claimed by your title. However, such slipshod writing on your part seems to prove your thesis.

author

That is quite literally not what the title says.


Today friends of mine told me that research papers are so fraudulent that we can't believe anything. They seem to be putting their faith in faith, or instinct, or 'nature', or something.

One tells me that he watched a video from a 'reliable' source claiming that Pasteur faked his experiments.

I asked if this means they no longer trust canned goods that have been heat treated.

They are convinced that no one has ever proven the existence of viruses, and that the germ theory of infection is just an idea that "they" enforce without evidence.

It's nice having friends.


Most of the time, someone in the colleague pool is aware... politics, as anywhere.


The Carlisle study you mentioned at the beginning of your article (https://doi.org/10.1111/anae.15263) is, in and of itself, shoddy research IMHO.

As a research engineer at a top biomechanics research center, I am thoroughly confused as to why the author manually reviewed tabular data. The tabular data should have been analyzed by automated statistical methods. The author readily admits he missed data until the 3rd or 4th iteration and, even more astonishingly, used exceptionally primitive tools for his analysis. Our center employs statisticians and epidemiologists for statistical analysis. After 20 years in this field, I cannot come up with a reason for manual statistical analysis by an MD, particularly when the subject at hand is shoddy science.
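For what it's worth, here is a minimal sketch of the kind of automated check I mean. It is not Carlisle's actual pipeline; the table, variable names, and numbers are made up, and it covers only the simplest case (a two-arm trial with published baseline means and SDs):

```python
# Automated screen of a baseline table in a two-arm RCT. Under genuine
# randomisation, p-values for baseline differences between arms should
# be roughly Uniform(0, 1); values clustered near 1 ("too similar") or
# near 0 are both worth a closer look. Data below are hypothetical.
from scipy import stats

# Per-variable summary stats: (mean_A, sd_A, n_A, mean_B, sd_B, n_B)
baseline_rows = {
    "age":    (54.2,  8.1, 60, 54.0,  8.3, 60),
    "weight": (71.5,  9.9, 60, 71.6,  9.7, 60),
    "sbp":    (128.4, 11.2, 60, 128.3, 11.4, 60),
}

pvals = []
for name, (m1, s1, n1, m2, s2, n2) in baseline_rows.items():
    _, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2)
    pvals.append(p)
    print(f"{name}: baseline p = {p:.3f}")

# Test the collection of baseline p-values against uniformity.
# (With only a handful of variables this is badly underpowered; a real
# screen would pool across many trials, as Carlisle's papers do.)
ks_stat, ks_p = stats.kstest(pvals, "uniform")
print(f"KS test against Uniform(0,1): p = {ks_p:.3f}")
```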

author

If you believe you can do better than one of the most renowned scientific sleuths, who is responsible for many hundreds of retractions of fake studies, then by all means do give it a go. The field needs more people.


Attack the message, not the messenger. I would expect better from a fellow researcher. Your response is why I rarely engage outside of professional forums.

Do you believe manual, iterative review of tabular data is rigorous analysis? If so, you are an outlier within epidemiology and statistical mathematics. The author even admits he "missed things" until the 3rd or 4th review. Why are you defending it?


Is there a breakdown anywhere of which areas of medicine are the worst, and which are the least fraudulent? A league table perhaps?

My impression is that areas afflicted by pseudoscience and integrative/CAM medicine rank highly in this regard, but that might be a biased conclusion on my part.


I don’t doubt there’s a problem, but it’s hard to envision how this could be so high for experimental science that requires a lot of investigators. There’s a whole team of people, and then everyone tries to replicate, so fraud would also raise a red flag there. Everyone is talking to everyone else; they would have to collude to fake the data, wouldn’t they? This is how a number of the frauds have been exposed, isn’t it? The PI does fishy stuff and the others working on it report him or her. But there could be fudging, and there are low-quality journals that just post fabricated studies because they are not interested in any of the standards, only in making money for the journal. With so many review processes in medicine, it’s probably not that easy to put forward too many fraudulent results?

The issue doesn’t seem to be primarily one of quantity but of impact. Many studies have no impact, but there are so many studies. The really bad thing would be getting research money when you are not on the level; as for your study actually making a bunch of things happen, it’s not so clear, is it?

author

Fraud is indeed much less common in places where there are many investigators who all have access to the data. But there are many areas of medicine, like anaesthetics, where it's not uncommon to see only one or two researchers in charge of all of the numbers. And what we tend to find is that those areas of research have a very large quantity of dodgy shit, while things like novel pharmaceutical research have far fewer fake studies.


Oh, dear! Wow.
