[TECH]:: Why most published research findings are false
|The claim that 'most published research findings are false',
while demonstrably true, is so utterly contrary to intuition
and to what we think we know about research methods,
statistical analysis and more as to be rejected out of hand
by many. This sounds like an April Fools hoax or similar -
it's not. Most published research findings are false.
This subject, and the report that it is based on, is of
crucial importance to any engineers, scientists or others
who are interested in understanding how accurate or
believable the results of apparently well conducted
research may be, and how much they can be depended on to
reflect actual reality. It can be and has been shown that
the majority of claims and research results, even in top
class peer reviewed journals, are in fact incorrect.
In 2005 a seminal analysis with the title "Why most
published research findings are false" was published in
PLoS Medicine (2:e124) by the epidemiologist John P. A.
Ioannidis (Department of Hygiene and Epidemiology,
University of Ioannina School of Medicine, Ioannina,
Greece, and Institute for Clinical Research and Health
Policy Studies, Department of Medicine, Tufts-New England
Medical Center, Tufts University School of Medicine,
Boston, Massachusetts). This paper (published as an
"essay") was essentially well received by the scientific
community and (AFAIAA) no major attempts have been made to
refute its claims. I am aware of some subsequent papers
which appear to take exception to its completeness in some
areas, but what I have seen seems more an attempt to join
the band-wagon than to destroy it.
I'll write a brief summary here.
This is necessarily a generalisation and to some extent
overdone - better that you get the point and read the
report than "feel safe".
The intent of the following points is well supported by the
original analysis and AFAIK no major claims have been made
to refute them. If you do research, read research papers,
or depend on research results, then you really want to look
into these results. They apply especially in the medical
research field for reasons commented on in ref 1 below,
would probably be equally or more true in softer* or soft*
science areas (cognitive, psychology, theological, general
biological) and are still highly applicable, if possibly
less severe, in the hard science areas.
[[My rough definitions: Soft - human mind, behavioural,
mental etc. Softer - biological and living systems. Hard -
depends on core 'laws of physics'.]] [[No denigration
intended - just trying to scope applicability.]]
- Based both on actual analysis of results AND studies of
how results are arrived at, MOST published research results
are false.
- Small studies are more liable to be false.
- Even small studies with excellent statistical support are
liable to be false.
- When many studies are done in a field the chances of false
results being produced grow until it becomes almost certain
that every major hypothesis is covered by reports claiming
to support it.
- Journals tend to accept papers "going against the flow"
only when they make large and grand contrary claims.
- Better results are obtained by very large studies, or by
many coordinated but independent studies of the same basic
premise using the same definitions and approaches.
- Studies which examine another researcher's hypothesis are
more liable to be correct than those which examine the
researcher's own hypothesis.
- All the factors one may suggest as causes of inaccuracy
do cause it, and more besides. Attributions of the effect
of perceived bias on results often enough prove true.
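The "many studies" point above can be put in rough numbers.
Assuming each independent test of a relationship that is actually
null has false-positive rate alpha (the conventional 0.05), the
chance that at least one study in a field reports a positive grows
quickly with the number of studies. A minimal sketch (the function
name is mine, not from the paper):

```python
# Probability that at least one of n independent studies of a
# true-null hypothesis reports a (false) positive result, when
# each study uses significance threshold alpha.
def prob_any_false_positive(n, alpha=0.05):
    return 1.0 - (1.0 - alpha) ** n

# With alpha = 0.05, twenty independent studies already give a
# roughly 64% chance of at least one false positive, and ninety
# studies push it past 99%.
for n in (1, 20, 90):
    print(n, round(prob_any_false_positive(n), 3))
```

This is of course an idealisation (real studies in one field are
far from independent), but it shows why a "hot" field almost
inevitably produces published support for every major hypothesis.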
Corollaries from the original report:
Corollary 1: The smaller the studies conducted in a
scientific field, the less likely the research findings are
to be true.
Corollary 2: The smaller the effect sizes in a scientific
field, the less likely the research findings are to be true.
Corollary 3: The greater the number and the lesser the
selection of tested relationships in a scientific field, the
less likely the research findings are to be true.
Corollary 4: The greater the flexibility in designs,
definitions, outcomes, and analytical modes in a scientific
field, the less likely the research findings are to be true.
Corollary 5: The greater the financial and other interests
and prejudices in a scientific field, the less likely the
research findings are to be true.
Corollary 6: The hotter a scientific field (with more
scientific teams involved), the less likely the research
findings are to be true.
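The corollaries all fall out of the paper's simple model of
post-study probability. As I read it, the positive predictive
value (PPV) of a claimed finding depends on the pre-study odds R
that the probed relationship is true, the significance threshold
alpha, and the power 1 - beta. A hedged sketch of that model (my
transcription; check the original before relying on it):

```python
def ppv(R, alpha=0.05, beta=0.2):
    """Post-study probability that a claimed positive finding is
    true, given pre-study odds R that the relationship is real,
    false-positive rate alpha, and false-negative rate beta
    (statistical power = 1 - beta)."""
    true_pos = (1.0 - beta) * R   # true relationships correctly flagged
    false_pos = alpha             # null relationships incorrectly flagged
    return true_pos / (true_pos + false_pos)

# Well-powered confirmatory study of a plausible hypothesis:
print(round(ppv(R=1.0), 3))      # PPV is high
# Exploratory search where only ~1 in 1000 probed relationships
# is actually true (e.g. untargeted gene-disease scans):
print(round(ppv(R=0.001), 3))    # a "positive" is almost surely false
```

The corollaries then read off directly: small studies lower the
power (raise beta), exploratory fields lower R, and both drive
the PPV of a published positive result down.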
1. While the paper (ref 2 below) is not too complex
mathematically and is moderately easy to read, it's not as
clear to the layman as it could be. A good starting
commentary can be seen at
*** READ THIS COMMENT FIRST ***
2. The original paper is (gratifyingly) available for
free online under a creative commons licence.
Two versions at
3. Ioannidis has previously identified statistical problems
with high-throughput techniques such as microarrays that
can lead to gene-disease predictions being no better than
chance (see the Dec. 20, 2004, issue of The Scientist). He
has also followed the fate of research findings to quantify
their falsification rate, demonstrating, for example, that
five of the six most cited epidemiological studies since
1990 have already been refuted (JAMA, 294:218-28, 2005).
4. Supporting comments
"He has done systematic looks at the published literature
and empirically shown us what we know deep inside our
hearts," said Muin Khoury, director of the National Office
of Public Health Genomics at the U.S. Centers for Disease
Control and Prevention. "We need to pay more attention to
the replication of published scientific results."
5. PLoS Medicine
PLoS Medicine is a peer-reviewed, international, open-access
journal publishing important original research and analysis
relevant to human health.
6. Various on this result
Wall Street Journal Sep 14, 2007
7. OK blogs thereon
... Sometimes it's OK for results to be wrong ...
... more & bigger studies is better ... [[But he
already says that ]]
9. Gargoyle sez
|On Mon, 21 Jul 2008 14:50:15 +1200, Apptech wrote:
:: Corollary 4: The greater the flexibility in designs,
:: definitions, outcomes, and analytical modes in a scientific
:: field, the less likely the research findings are to be true.
Well, this seems plain logical to me. If you accept that there is no
such thing as absolute certainty (Heisenberg, was it?) then you can
only opt for 'that which seems most likely at the time given the
information available'. Empirically this would also be seen to be
true, when one considers all the scientific 'facts' that have been
found, only for someone else to discover 8 months later that they're
probably wrong, and in all that time a gazillion people around the
world have declared themselves fitter/better etc.
:: Corollary 5: The greater the financial and other interests
:: and prejudices in a scientific field, the less likely the
:: research findings are to be true.
I think this comes about due to the way funding is provided,
especially for Universities. Much research is done off the back of
other research, so you only need an error in the original research
for it to be carried through.
As 'them' in charge are more likely to allocate funds only for those
projects that are likely to succeed or get big headlines, thereby
amassing more money through government grants, I wouldn't be
surprised.
In fact, despite those who will leap in, I don't deride people who
think they may have, or have, discovered things such as Perpetual
Motion.
1. it depends on the definition of PM,
2. every now and then some bright spark changes some of the laws of
maths and physics, so there is no guarantee that one day someone
won't have an 'Einstein moment' and discover the apparently
immutable laws of physics aren't.
I'd be interested in seeing that paper about TMA's; until recently I
provided equipment and software for researchers who use that method.
cdb, btech-online.co.uk on 21/07/2008 colin
Web presence: http://www.btech-online.co.uk
> :: Corollary 4: The greater the flexibility in designs,
> :: definitions, outcomes, and analytical modes in a scientific
> :: field, the less likely the research findings are to be true.
> ...If you accept that there is no such thing as absolute
> certainty, (Heisenberg was it?)
If you pressed me, I'd say no. But who can tell for sure?
I think the point he is making is that increased freedom in
asking the actual question leads to a greater chance that
the answers are wrong. I.e. where the method of study and
the definitions and accepted results that define success
and failure are tightly constrained, the answers are more
likely to be correct than when the opposite is the case and
each researcher is free to establish their own guidelines.
In one of the other papers I cited (one of the two wannabees
I think) someone suggests that society accepts being fed
incorrect results if the subject is important enough (or was
that unimportant enough?). AFAIR it was 'important enough'
which seems the opposite of what I'd expect. In skimming I
may have missed his point.
|Contrary to intuition? We are dealing with ventures into the
qualitatively unknown, of the sort that often are also previously
unimaginable, and you never know when.
And you're applying to it the most flexible and inventive device we've
ever seen, able to generate dozens of hypotheses and ad-hoc models a
minute, and immediately rely on and use any of them, if only to save
its life, or its research grants.
And you find it unintuitive they're mostly wrong?
In my line of work there are several methods used to minimize that:
- Data analysis has to quantify all the sources of noise, background
events and all other possible causes of measurement inaccuracy,
including how they might combine forces to generate artifacts
masquerading as new findings.
- The measurement data itself is often inaccessible to the researchers
until the last moment of the analysis. It is kept in a "locked box"
until all the software and procedures are in place, and only after
they have been satisfactorily demonstrated on simulation and control
data is it run on the real data, to prevent people from optimizing
the code and trying to "improve their accuracy" after seeing
potentially biasing results.
- In order to claim a discovery, the researchers must show that the
chance that the outcome was a result of a statistical fluke is less
than one in ten million.
- In large experiments the results usually spend another year or so
undergoing internal scrutiny by other experiment members who are not
part of the original team. This is usually the harshest step, where
every impossible scenario is checked. Nobody likes wrong results
published under his name.
- Peer reviews are another step. No one would like to see you fail more
than the competition.
- Results of a single experiment are never accepted, no matter how
convincing they look. That is why on the LHC ring there are two large
experiments, identical in their requirements, and as different as
possible in their technology and implementation. And still this is
considered a sad state of affairs. Originally there was supposed to be
the SSC with its different energy range, location (and therefore
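For context on the "one in ten million" figure above: particle
physics usually quotes significance in Gaussian standard
deviations (sigma), with roughly five sigma as the conventional
discovery threshold. A rough standard-library sketch of the
conversion (the exact threshold quoted varies between
experiments):

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a significance of
    `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

# The 5-sigma discovery convention corresponds to a tail
# probability of roughly 3e-7 (about 1 in 3.5 million);
# "one in ten million" sits a little above 5 sigma.
for s in (3.0, 5.0, 5.2):
    print(s, one_sided_p(s))
```

Contrast this with the 0.05 (one in twenty) threshold usual in the
fields Ioannidis analyses; the gap between the two conventions goes
a long way toward explaining the difference in false-finding rates.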
And we expect to be wrong. What is often perceived as the conservatism
of the scientific establishment has its basis in the exact opposite.
There wouldn't be any point to this job if not to break current theories
and establish new, truer ones. But the human mind being what it is
(with the only thing more susceptible to self-deception being groups
of human minds), exhilarating as any result might be, it takes years
from initial publication until it is accepted (and the prizes
granted), and only then, after it has been thoroughly cross-checked,
might we expect to be wrong only half of the time.
On Mon, Jul 21, 2008 at 02:50:15PM +1200, Apptech wrote:
On Mon, Jul 21, 2008 at 07:20:48PM +1200, Apptech wrote:
> In one of the other papers I cited (one of the two wannabees
> I think) someone suggests that society accepts being fed
> incorrect results if the subject is important enough (or was
> that unimportant enough?). AFAIR it was 'important enough'
> which seems the opposite of what I'd expect. In skimming I
> may have missed his point.
It appears to me that indeed society's tolerance to bovine solid waste
is strongly and positively correlated with the importance of the
subject. It is clearly visible even here on the list, where people are
quick to correct others' mistakes when it comes to e.g. component
parameters, but when presented with evidence that the troposphere might
be on the brink of thermal runaway, you'd be hard pressed to find
anyone who will bother to investigate in which direction it might
I'd even say that rational discussion is deferred on practically all
topics which are very important - energy, pollution, economic and
social disparity, race and gender inequality, the exponential nature of
technological progress, radical life extension, ownership laws of
information (i.e. subsets of the natural numbers), etc. etc. On most
of these it is almost impossible to even find raw facts of decent
quality.
The reason is probably that even though democracy and liberalism have
been quite successful in the past couple of centuries, the basic
instincts of the common man are still those of a slave. We expect to be
lied to when at war, we expect to be lied to when interested parties are
involved (compare medical journals to mathematical, for example), etc.
Often we only discover something to be important by first noticing
the hand waving and verbal acrobatics. Often our first protest isn't in
defence of honesty - we resent when being lied to _needlessly_.
(Admittedly the middle east might be a vantage point that introduces
some bias in itself...)
> Contrary to intuition? ...
> And you find it unintuitive they're mostly wrong?
You don't know me overly well, do you ? :-).
(I have no problem with that, I'm about half a world away).
>> ... while demonstrably true, is so utterly contrary to
>> ... as to be rejected out of hand by many.
The wording was chosen with more care than may be obvious.
I'm trying to slip through mental filters here, not just on
list but also for one chosen BCCee. And in that case there
are several aspects being addressed at once. Important ones
that I want to try and get the subject's "foot in the door"
and not rejected out of hand without thought.
BUT the list was my main audience. It is very easy for
people to throw up their hands in disgust and walk away from
such grandiose and sweeping claims, so a certain amount of
preparation is required before just springing the subject on
the unwary. If I'd just provided a subject line and a web
link it would have taken me much less effort. Hopefully this
way maximises the audience.
Lessons learned from this may/should serve people a
I did / do like your list of checks and balances.
A shame they are not (or, alas, cannot) be applied in a few
other 'sciences' and areas of endeavour that I won't even
name here for risk of starting a conflagration.
|On Mon, 21 Jul 2008, cdb wrote:
> 1. it depends on the definition of PM,
> 2. every now and then some bright spark changes some of the laws of
> maths and physics, so there is no guarantee that one day someone will
> a 'have an Einstein' moment, and discover the apparently immutable
> laws of physics aren't.
I am not surprised most papers are wrong. That's the nature of
science, throwing theories out there and letting them fight to the
death. :-)
As far as overturning the laws of physics, for a long time now we have
just been adding more details on to the laws, not overturning them.
The laws of orbital motion are still good enough for NASA to use in
plotting spacecraft. Einstein didn't blow the old laws of gravity
away; the results of both theories are nearly identical. And when the
next one comes along (quantum gravity maybe?) I fully expect it will
give results agreeing nearly exactly with Newton as well, just
differing at the far end of the scale.
Now.. it's all still just observation and theory, so yes, someone COULD
come up with a theory that turns something major upside down. But looking
at the progression of science, it seems pretty unlikely.
PS. I would be thrilled and ecstatic to read that some major physical
law just got proven wrong, as would most every researcher. I just
don't see it happening. Kind of like my feelings on winning the
lottery. Would be nice, but the chances are too low to waste money
buying a ticket on. :-)
> I did / do like your list of checks and balances.
> A shame they are not (or, alas, cannot) be applied in a few
> other 'sciences' and areas of endeavour that I won't even
> name here for risk of starting a conflagration.
Maybe we could mosey on over to the [OT] tag and discuss 'science
areas' that are truly bogus, such as psychology and psychiatric
medicine.
> > A shame they are not (or, alas, cannot) be applied in a few other
> > 'sciences' and areas of endeavour that I won't even name
> here for risk
> > of starting a conflagration.
> > Russell
> Maybe we could mosey on over to the [OT] tag and discuss
> 'science areas' that are truly bogus, such as psychology and
> psychiatric medicine.
Is that you, Tom Cruise?
For further interest, and in relation to the more recent PIClist
thread regarding mobile phones and cancer, please see also
> The claim that 'most published research findings are false',
> while demonstrably true, is so utterly contrary to intuition
> and to what we think we know about research methods,
> statistical analysis and more as to be rejected out of hand
> by many.
I read this paper and the authors make some pretty amazing leaps of
logic. This does not mean they are wrong, and I believe they are
probably right; still, it makes me realize what a cat's bag of
confusion most information packets are these days. Probably for the
past 10,000 years as well.