I recently saw an article in the popular press about problems in universities generally. The author is not a scientist but a university scholar in a non-scientific field; one of his points was that pressure to publish and peer review are perceived as obstacles to excellence in his own field. I find similar problems in science, including the ornithological sciences and university-based science programs.
I have personally noticed a disturbing number of published papers in mainstream journals that I find shockingly weak, biased, and unreliable. These include products of ornithological science (raptor migration and raptor genetics research originating with UC Davis' genetics lab in collaboration with the Golden Gate Raptor Observatory). I wonder whether part of the problem is that such papers are reviewed by peers who are too close to the authors, in the sense that the reviewers and editors are largely uncritical of important aspects of the science involved.
As an example, I recently read three papers, published in three different journals, that were components of a recent PhD dissertation in conservation genetics as related to birds. All three contained what appeared to be excellent laboratory work in genetic sequencing and analysis, which may have matched the expertise of the peer reviewers. But all three suffered from egregious sampling errors in the field work that preceded the lab processing. Sampling was variously nonrandom, nonrepresentative of the populations being assessed for gene flow, inadequately distributed among the regions under comparison, and strongly biased among the geographic regions being assessed. For instance, one object of the study was to assess regional genetic traits of subpopulations of multiple widespread subspecies, but instead of randomly sampling subjects known to reside in those separate regions, the vast majority of samples were conveniently gathered from birds of unknown origin as they happened to be in transit at migration study sites. There was documented reason to believe that such transient birds regularly travel between the regions under analysis during the sampling period, so there was no likelihood whatsoever that the sampling could accurately represent the local regions being compared. This was "convenience sampling," or biased sampling, at its worst. A careful reading of the results clearly showed this bias in the study itself, yet the study presented its results as accurate and even made management and conservation recommendations on the basis of the flawed analysis!
A major concern of mine is that peer-reviewed publications set precedents for research to follow. An author of weak research often builds future work, which may be similarly flawed, on one of his or her earlier publications. Other researchers may cite or adopt poor methodology, or accept weak findings, simply on the ground that they were published. Many authors pick and choose portions of the published literature to support their own studies, methods, and research premises without reading the entire papers and discovering for themselves the weaknesses and flaws they are duplicating. Scientists are so busy nowadays that publications are often read, but rarely studied in depth in any critical or questioning manner.
My fear is that peer reviewers may share the same mindset as authors, focusing on technical aspects of science, such as statistical and laboratory methods, while not looking closely enough at research concepts, study design, field methodologies prior to data analysis, and so forth. At minimum, peer review ought to be more of an adversarial process than it seems to be. I often get the impression that many editors and peer reviewers are more interested in spell checking than in in-depth analysis of the quality of the science. In this age of growing reliance on technology and statistics, many peer reviewers may not fully understand the technical aspects of publications either. Because publication of science invites replication of that science, one bad publication invites more of the same.
Somehow these problems in peer review desperately need to be minimized and corrected. An additional problem that I can mention only briefly is the reluctance of journal editors to address published error. Not long ago, a colleague and I became concerned about poor science in a mainstream journal and contacted the editor to request an opportunity for rebuttal. The editor reluctantly offered us a tiny allotment of space compared with the original publication, but then refused altogether to publish the rebuttal because the original authors would not agree that our rebuttal was justified! In this case, I know that worldwide experts on the species involved agreed with my colleague and me about the problematic quality of the science, yet the journal editors sided with the authors, who have never published before or since on the species involved. Peer review and editing must be of high quality in our system of sharing scientific knowledge.
This is a matter that deserves serious attention, in my opinion.
Stan Moore
San Geronimo, CA