The Little Study in the Forest of Reviews (a modern Brothers Grimm Tale)

When reading the very first page of a scientific paper, one is usually provided several dates:
The submission date — the day, month, and year when the authors forced their brain-child into the Forest of Reviews, so it may be judged and, when deemed valuable, eventually cleansed of as many errors as possible by the friendly (or hostile) Wizards of the Forest (colloquially known as peers).
The revision date — the day, month, and year when the authors (more or less) obligingly submitted their revised (or re-revised, or re-re-revised) version to the Lord of the Forest (colloquially known as editor) handling the paper.
The decision date — the glorious day, month, and year when the Lord decides that the child has grown to be a man (or woman; the German word for study is grammatically feminine) and can be released into the World, so that it may inspire the common people (or at least get you an extension, new grant money, etc.).
And since we live no longer in a medieval age but in a digital one, there is now also the date when the paper first surfaces online (unless you used a pre-print server such as arXiv, bioRxiv, or PeerJ PrePrints to get your news out before it could get a bit lost, or vanish, in the Forest of Reviews).
Here’s an example from my own research (Grimm & Denk 2012): Received 13 April 2011; received in revised form 2 November 2011; accepted 16 January 2012; available online 26 January 2012.
It must have been quite a rugged road: seven moons during which our brain-child was purified and became better and better. Right? Well, no. We needed hardly a month to do the revision, but the editor’s decision came only on October 10th (six months after submission). And it was not the revision itself that took us a month, but writing a detailed point-by-point response to the many critiques of one of the anonymous peers, considered by some an expert on using plant proxies for palaeoclimatology.
[I remember us sitting in the office till late, trying to figure out what he was trying to criticise. We even got into fights about it.]
Thanks to review-process confidentiality, this response is lost in the Impermeable Fog.
[It would be, but I leaked it because of the way our study was presented in Hoorn et al. 2012. It includes several interesting, but also revealing, points, e.g. the expert peer’s assumption that trees in the mountains of China occur today outside their actual climatic niche!]
What happened in the six months between original submission and the submission of our revised version? Even as authors, we don’t know. Because ... you know, the Fog again.
[The bush drums from distant lands, beyond the Forest of Reviews, whispered something about a peer who wanted the paper to be rejected and who, being the main addressee of our critique, had to be heard (for fairness). This particular Wizard of the Forest might have needed half a year to make up his mind and, ultimately, a brainstorming meeting at an international yearly gathering of his magic circle (and others) to find reasons to turn down our study. The meeting took place on the 1st–4th of October that very year, and only thereafter could he hand in his devastating report. That was the week before we got our decision, but this was pure coincidence, and the story is not true, and, like Ygritte said to Jon Snow, we know nothing.]
Why reporting review return dates should be obligatory

Below is a bar chart I used as the last slide in one of my last talks, entitled “Living under the bridge – tales of a geologist-geneticist”, about the (scientific) benefits and (professional) perils of cross-disciplinary research.
My research connected quite different fields of science (molecular phylogenetics, non-trivial data analyses, palaeobotany, biogeography, and dating), so parts of our papers (or entire papers) must have been alien to not a few of our peers. Editors not rarely struggled hard to find peers at all. In a few cases, the editors simply forgot to decide on the paper (and we eventually forgot to inquire about its status every month). As a consequence, these papers spent a lot of time roaming alone in the Forest (red bar), although there was very little to do to make them publishable (green bar).
But we also had other papers for which to-the-point reviews and editor decisions came quickly, but it took us (or just me) a substantial time to revise them. With the review process being confidential at most journals, it is impossible to know whether a long review phase was due to the authors taking a long time to revise, the peers to report, or the editors to make a decision. But shouldn’t this information be visible, as a service to authors deciding where to submit their papers?
Realising this problem, some journals such as Systematic Biology now give the date when the reviews were received, and not the day the editor received the revised version. This allows hopeful authors to find a quick and efficient receptacle for their work, and is generally much more author-friendly than the standard approach.
But there is a catch, and Systematic Biology (SB) is a good example.
I participated in three papers published in SB, and here’s the information you’d be able to see:
- Renner et al. (2008; SB still used the traditional date set): Received 01 November 2007; revision received 28 January 2008; accepted 15 July 2008 — undetermined time under review (I can’t remember exactly, too long ago, but less than three months, obviously)
- Potts, Hedderson & Grimm (2014): Received 31 May 2013; reviews returned 19 July 2013; accepted 2 August 2013 — 1½ months under review
- Grimm et al. (2015; a paper I’m particularly proud of, because an old and well-merited palaeontologist, and stone-age cladist, called it “methodological flim-flam”): Received 7 October 2014; reviews returned 20 November 2014; accepted 1 December 2014 — 1½ months under review
Fact is, we submitted Potts et al. (2014) to SB on November 2nd, 2011, 1½ years before it was published. The reviews were returned 3½ months later with the verdict “Reject, resubmission encouraged” (one generally positive, one very negative review). We resubmitted over half a year later (January 24th, 2013; the delay was due to various reasons, including scientific ones) and waited another two months for a review report and the same decision (“Reject, resubmission encouraged”). This round, we did not do a lot except a bit of re-texting and a lot of rebutting the opinions of the critical anonymous expert (acting as peer in the first and second rounds), who “still had not been impressed” by our second, much rewritten version (or our approach in general). So, quick it was, but only in the final round.
For Grimm et al. (2015), we only circled once, not twice (first submission on July 6th, 2014; it took a bit over three months until we got the first decision, “Reject, resubmission encouraged”).
From these (my) experiences, two things can be learned.
If you get a “Reject, resubmission encouraged” from the editor of Systematic Biology, a quite high-flying and prestigious journal, DO IT! It simply means that the editor wants to publish your paper, but it may take some time to get it fit. So, he/she needs you to come back to avoid a line like this: Received 2 November 2011; reviews returned 15 May 2012, 2 April 2013, and 19 July 2013; accepted 2 August 2013. It may not always work out – this paper of ours (my first first-author paper) was rejected by a long-gone editor after we handed in our revised version (pretty much the finally published version) – but it’s (still) worth the shot. The Impact Factor may be going down, but it is still much higher than that of most other journals in this sector of science-space.
The other is: clearly, the best solution is to simply make the review process transparent. Then everyone who is interested in the review process (e.g. to decide where to publish) can see how long the different steps took, and also the reasons why (e.g. a lot had to be done, or it took a long time to find peers because of the paper’s particular topic).