thunderblog

 

davesmith_au - the lighter side of eu

 
 

Peer Review or Poor Review? - You Decide
by Dave Smith

April 24, 2011
 
“If peer review was a drug it would never be allowed onto the market”
 
In 2008 we published a thunderblog which highlighted some misgivings about peer-reviewed science. More recently, a paper has been published in a peer-reviewed journal which examines and explains many of the pitfalls of peer review. The paper, "Classical peer review: an empty gun" [1] by Richard Smith, is a study focused on medical peer review, but one which applies to all scientific disciplines.
 
From Smith's study:
'If peer review was a drug it would never be allowed onto the market,' says Drummond Rennie, deputy editor of the Journal of the American Medical Association and intellectual father of the international congresses of peer review that have been held every four years since 1989. Peer review would not get onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws.
[Emphasis added - DS]
 
Yet, to my continuing surprise, almost no scientists know anything about the evidence on peer review. It is a process that is central to science - deciding which grant proposals will be funded, which papers will be published, who will be promoted, and who will receive a Nobel prize. We might thus expect that scientists, people who are trained to believe nothing until presented with evidence, would want to know all the evidence available on this important process. Yet not only do scientists know little about the evidence on peer review but most continue to believe in peer review, thinking it essential for the progress of science. Ironically, a faith based rather than an evidence based process lies at the heart of science.
Much ado is made about the value of peer review to science, yet could it be that such faith is unwarranted? If science is about having evidence to support the claims made, do we really have any evidence to support the system which itself underpins the sciences? It seems "No" is the only truthful answer to that question.
... Drummond Rennie writes in what might be the greatest sentence ever published in a medical journal: 'There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.'
One could easily be forgiven for thinking Rennie was talking about cosmology, astrophysics, particle physics, solar physics or any other discipline which allows conjecture and bias to trump good scientific method. In fact, if this is the case in the medical sciences, where people's lives and quality of life are at stake, how much more is it likely to be the case in disciplines where the only real stakeholders are those whose qualifications and continued employment depend on the promotion of a particular paradigm, however speculative?
We have little or no evidence that peer review 'works,' but we have lots of evidence of its downside.
Frequently, those who spend their (spare?) time touring the forums and comment threads of science "news" sites will see demands for peer-reviewed papers to back up dissenting claims. These demands are most commonly made by those with a close bond to the status quo, or what could be termed "conventional wisdom". This is in spite of the fact that there is little evidence of the efficacy of the process which is supposed to vet published material.
 
Smith has identified six key areas within which the current peer review system has serious issues:

1. Cost

Firstly, it is very expensive in terms of money and academic time. ... The Research Information Network has calculated that the global cost of peer review is £1.9 billion [10]. The cost in time is also enormous, and many scientists argue that time spent peer reviewing would be better spent doing science.
Not only is the cost of the system outrageous in relation to its benefits, if any; there is also the cost to individuals who seek to have their material peer reviewed. It can cost hundreds to thousands of dollars to successfully submit a paper for review. Whilst such costs are usually covered when one's research is funded, independent and unaffiliated researchers are left to find the funds themselves, often out of their own pockets. This alone creates a strong bias toward establishment science.

2. Slow

Secondly, peer review is slow. The process regularly takes months and sometimes years. Publication may then take many more months. A friend of mine, a fellow of the Royal Society, has written a paper that I think very important for global health. As I write, it is still unpublished after two years of being reviewed by several 'top' journals. None of the reviewers have raised a major flaw with the study.
The often unnecessary delays in publication can hamper further study related to the work. A researcher could easily be working on several papers at once, each to some degree riding on the successful publication of a previous paper. If the initial research is rejected, the subsequent work becomes redundant. This is terribly inefficient.

3. Lottery-like

Thirdly, peer review is largely a lottery. Multiple studies have shown how if several authors are asked to review a paper, their agreement on whether it should be published is little higher than would be expected by chance [11].
What justification can there be for continuing with a process which, whilst costly and slow, is also little more beneficial than rolling dice? This is perhaps one of the most damning flaws of the system.
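To see what "agreement little higher than would be expected by chance" means in practice, here is a minimal sketch in Python using invented accept/reject verdicts from two hypothetical reviewers (illustrative only, not data from the studies Smith cites). It computes Cohen's kappa, a standard measure of how much two raters agree beyond what chance alone would produce: a kappa near 1 means near-perfect agreement, while a kappa near 0 means the reviewers might as well be rolling dice.

    # Toy illustration only: invented verdicts from two hypothetical reviewers
    # on ten submissions; not data from Smith's paper or the studies it cites.
    reviewer_a = ["accept", "reject", "accept", "accept", "reject",
                  "accept", "reject", "accept", "reject", "accept"]
    reviewer_b = ["accept", "accept", "reject", "accept", "reject",
                  "reject", "reject", "accept", "accept", "accept"]

    def cohens_kappa(a, b):
        """Observed agreement corrected for the agreement expected by chance."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n   # raw agreement rate
        labels = set(a) | set(b)
        expected = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
        return (observed - expected) / (1 - expected)

    print(f"kappa = {cohens_kappa(reviewer_a, reviewer_b):.2f}")   # prints kappa = 0.17

Here the two reviewers agree on 6 of the 10 verdicts, but because each accepts roughly the same proportion of papers, chance alone would already account for agreement on about 5 of them, so the kappa is only about 0.17. That, in essence, is the "lottery" the studies above describe.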

4. Does not detect errors

A fourth problem with peer review is that it does not detect errors. At the British Medical Journal we took a 600 word study that we were about to publish and inserted eight errors [13]. We then sent the paper to about 300 reviewers. The median number of errors spotted was two, and 20% of the reviewers did not spot any. We did further studies of deliberately inserting errors, some very major, and came up with similar results.
One of the catch-cries of those who espouse that peer-reviewed papers are the only scientifically valid sources of information is that the process ensures the 'correctness' of the research. Clearly, this is not the case. Again it is worth mentioning that this study is based on medical peer review, where errors could have broad and tragic consequences. If errors are not detected in the medical peer-review process, is it any more likely that they would be identified in other disciplines?

5. Bias

The fifth problem with pre-publication peer review is bias. There have been many studies of bias - with conflicting results - but the most famous was published in Behavioural and Brain Sciences [14]. The authors took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realise that they had already published the paper, and eight of the remaining nine were rejected - not because of lack of originality but because of poor quality. The authors concluded that this was evidence of bias against authors from less prestigious institutions. Most authors from less prestigious institutions, particularly those in the developing world, believe that peer review is biased against them.
Bias in peer review, whilst often claimed, is frequently played down, especially by adherents of establishment science. The studies Smith cites demonstrate clearly that peer review is biased in favor not only of establishment views but, even more distastefully, of the perceived prestige of the organization with which the researchers are affiliated.

6. Easily abused

Finally, peer review can be all too easily abused. Reviewers can steal ideas and present them as their own or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened.
This is perhaps the least recognized of the flaws of peer review, but in many respects it is possibly the most important. If the publication of research is open to being either plagiarized or blocked by a competitor, people whose ideas are sound yet controversial will be less likely to submit their research to peer review. Those who do risk their work being stolen, or being discredited without good reason.
 
Recently, in the discipline of climate science, it was revealed that a group of scientists sought not only to suppress dissenting work but also to discredit a journal in its entirety, a clear-cut case of both bias within and abuse of the peer-review system.
 
So where does this leave us? The public at large have, quite understandably, grown suspicious of many of the sciences. One result is an explosion of websites questioning much of what we are led to believe. But with much information comes much disinformation, and it is becoming more difficult to draw the line between sound criticism of a popular paradigm and dissent for the sake of dissent.
 
In his study Smith refers specifically to pre-publication peer review, the review of papers before publication, which is by far the most common route to scientific recognition. As an answer to the problems identified above he suggests that post-publication review may be more successful at identifying sound science. The idea would be to allow publication of everything (within reason) and thus let the many hundreds of interested scientists and publications decide what is important, rather than just one or two making that decision.
 
This author is of the opinion that several other measures should also be considered to help improve the current peer-review system. Reviewers could be identified, whilst authors and their affiliations retain anonymity until publication. This would have the two-fold effect of allowing the research to stand on its own merits and adding accountability for the reviewers. All comments regarding a paper could be published, to allow third parties to evaluate any criticisms offered.
 
After all, under the current system, who reviews the reviewers?
 
Dave Smith.
 

References:

1. Smith R: Classical peer review: an empty gun. Breast Cancer Research 2010, 12(Suppl 4):S13. doi:10.1186/bcr2742
 
 
"The Cosmic Thunderbolt"

YouTube video, first glimpses of Episode Two in the "Symbols of an Alien Sky" series.
 

 

And don't forget: "The Universe Electric"

Three ebooks in the Universe Electric series are now available. Consistently praised for easily understandable text and exquisite graphics.
 
 
 
SITE SEARCH
 
 
 

 
  This free site search script provided by JavaScript Kit  
 
SUBSCRIBE
 
  FREE update -

Weekly digest of Picture of the Day, Thunderblog, Forum, Multimedia and more.
 
 
*** NEW DVD ***
 
  Symbols of an Alien Sky
Selections Playlist

 
 
E-BOOKS
 
 
An e-book series
for teachers, general readers and specialists alike.
 
 
VIDEO
(FREE viewing)
 
  Thunderbolts of the Gods

 
 
PREDICTIONS
 
  Follow the stunning success of the Electric Universe in predicting the 'surprises' of the space age.  
 
MULTIMEDIA
 
  Our multimedia page explores many diverse topics, including a few not covered by the Thunderbolts Project.  
 
OUR VISITORS:
 
   
 
 

 
davesmith_au
Dave Smith (davesmith_au) is an independent researcher and Managing Editor of the Thunderblog.
