Thursday, August 19, 2010

Organized Science and Its Dirty Little Secret

If there is a recognized deity in the world of organized science, one of its names is “Peer Review”. Before the results of any serious scientific investigation are published in a reputable journal, an associate editor farms out the submitted paper to “peers” of the hopeful author(s) (in a few cases the editor may reject a submission without review). Much of the time this process works fairly well; the reviewers evaluate the would-be contribution on its novelty (has somebody else produced the same result?), evidence (and further testability), coherence (do the data support the interpretation?), and citation of appropriate sources, and they make a recommendation: accept as is (rare), accept with modifications (more common), or go back to the salt mines and here’s why (most common). The editor may solicit numerous reviewers and, from the one or two reviews typically returned, makes his or her decision on the submission.
In the case of this particular scientist, the rejected papers stack higher than the accepted, but, in most cases, the rejecting reviewers were probably correct. If my contribution was truly original but rejected, it was because the argument was incoherent or the illustrations were inadequate. In one case a supposedly anonymous reviewer cited a key paper I had missed. (I subsequently sent him a thank-you note, as the Word document containing his review included his name under File > Properties; he never responded, but he has been known to be a little testy over the years.) And I made sure I referenced that paper in subsequent submissions.
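(An aside for the curious: in a modern .docx file, the fields Word shows under File > Properties live in a small XML part inside the document, which is really just a ZIP archive. The sketch below is a hypothetical illustration using only Python’s standard library, not anything from the actual review I received; note that an older binary .doc stores these properties differently, so this route applies only to .docx files.)

```python
# Minimal sketch: read the author metadata Word displays under
# File > Properties from a .docx file. A .docx is a ZIP archive,
# and the fields live in the docProps/core.xml part.
import sys
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by docProps/core.xml (OOXML core properties + Dublin Core)
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def author_metadata(path: str) -> dict:
    """Return the creator / last-modified-by fields embedded in a .docx file."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    return {
        "creator": root.findtext("dc:creator", default="", namespaces=NS),
        "last_modified_by": root.findtext("cp:lastModifiedBy", default="", namespaces=NS),
    }

if __name__ == "__main__":
    # Usage (file name is illustrative): python docx_meta.py review.docx
    print(author_metadata(sys.argv[1]))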
I have two research interests – one is applied and work-related. On occasion, with company permission, I’ll present some of it at a professional meeting and in its proceedings. The other is more academic; it has little practical application for a Fortune 500 company, and (still with company permission) I submit occasional papers and abstracts to journals and meetings. Even before I joined my present employer I had prepared a very long, elaborate manuscript attempting to connect a bunch of phenomena: anomalous volcanism, deformation in the interior of continents, and the motion of tectonic plates over the past 130 million years. Two reviewers responded (and identified themselves – always a nice touch). One reviewer was mildly supportive but indicated her reservations about the coherence of my argument. The other reviewer, whose early work I had cited favorably, was much harsher: I was approaching the research the wrong way. I was going top-down – from data to inference; his preference is to go from inference (partial differential equations) to data. And the editor indicated the submission was much too big for publication in any case.
I was grateful in a way that the big manuscript was rejected, for it gave me the motivation to expand the paper into a monograph. Besides, an improved plate reconstruction model had emerged, fitting my data even better than before. In big science, certain monographs can get published by reputable firms without extensive peer review; the firms make their profit from sales to research libraries and specialists in the field. I prepared my prospectus and submitted it successively to several firms, one of which agreed to publish it. In the middle of fleshing out the manuscript and preparing the illustrations, I had an epiphany. I saw a new way to interpret the data that NO ONE had recognized before. In my exuberance I went so far as to coin a new term to capture the inference (probably a mistake) and sent the opus off. Email interaction with a patient copy editor led to the book’s publication (yes, it’s still in print and now in paperback and Kindle™). And now it was time to promote it.
I prepared a variety of brief summaries for a variety of science news outlets – the avenues that most of my peers read faithfully. Rejected, rejected, ignored… we’re reviewing your submission. Inquiry… we’re still reviewing. Inquiry… embarrassment… Finally, one promotional article was accepted and published in the most important newsletter (hooray!). A few months later, a couple of government research scientists commented on the paper in the same newsletter – favorably.
Time to develop the idea further… New evidence was coming to light, so I wrote a prospective review article. I sent it to a commercial peer-reviewed journal (because I didn’t want my company to have to pay publication charges for a long paper). The journal had just adopted a new technology that combined Word documents and illustrations into a single PDF for review. Unfortunately, the technology didn’t handle my illustrations well – they shrank to postage stamps when printed and were virtually unintelligible. The individual illustrations were still accessible to reviewers, inconveniently, as twenty-some separate downloads.
So, back came the reviews: the illustrations were “illegible,” “incoherent!” Why didn’t you cite these papers? And, from the associate editor – don’t bother resubmitting this. I easily identified the supposedly anonymous reviewer behind the citation complaint. I had not explicitly cited his work precisely because the data I showed were inconsistent with his numerous contributions (you might say I cited him implicitly) and, more importantly, because he had left a lot of critical evidence out of his publications (did it not fit his model?). In fairness, the non-cited reviewer is in an unenviable position, you see. He has not had a tenured position since receiving his terminal degree; he is on “soft money.” Further, the line of research he has followed from his dissertation forward is built on a core computer program written by his academic forebears, onto which he has attached further refinements. While the computer code is presumably accessible (I’ve never tried to get it), it hasn’t had much competition over the years, and it’s not clear whether his and his collaborators’ results have ever been reproduced by independent investigations. In any case, the code produces results that do not match the data I assembled. Which should be trusted: data or code? More troubling was the appearance, just a few months later, of a review article based on that code whose coauthors included the non-cited reviewer and, more significantly, the very associate editor who had handled my manuscript. Oops, I surely submitted that manuscript to the wrong place!
One more time, another very different manuscript is sent off. The editor can’t find willing reviewers. He apologizes, but persists. Finally, a couple of reviewers come through, and the paper is accepted with a clear order: I must discuss other models, such as the one I had explicitly left uncited in the particularly unfortunate earlier manuscript. So, I add my arguments against the other models, and the paper is published.
Yes, most of the time peer review works. But sometimes things go south. When bad science is submitted (incoherent, illogical, unsupported…), it should be rejected. But when a paper goes out for review and it treads on current orthodoxy or, more critically, on the livelihood of others… watch out! Even the pride of the tenured can be an insurmountable barrier to publication. The current turmoil over peer review in climate science (not my field) is not limited to that narrow branch of science and technology. The god of “Peer Review” can have its name taken in vain just as can the one true Deity. There is ultimately only one Truth, and we cannot claim to own and understand it all, scientist by scientist. A little humility, please…
