The grey zone between obvious and less obvious scientific crankery

A tweet pointed me to a post with an interesting title, "How to spot palaeontological crankery" by Mark Witton, which includes (in its second part) "10 Red flags and pointers for spotting crank palaeontology" for non-experts. As an expert, I cannot help but note that most of the ten points also apply to proper palaeontological science.

[27/2 – added some links and info]

Witton's "red flags" can easily be applied to what has been and is published in (usually "confidentially" peer-reviewed) journals, and to what I experienced in my 15+ years as a taxpayer-funded professional scientist.

Red Flag 1. The creation of a problem to solve

In science, we euphemistically call this "hypothesis testing", and the easiest papers to publish are those testing a hypothesis that is either trivial or deals with the little data at hand (you can find a few examples in my bad science category). An important point to keep in mind as authors, when writing introductions, is to make sure we create a problem that our data will solve. We can't just write: "Oh, we found the material, and took a look at it, because no-one else did so far, and this is what we found."

Red Flag 2. Avoidance of conflicting data or fields of study 

... is widespread in professional science and particularly easy in a rather soft science like palaeontology. A nice example, long tolerated by peer-reviewed scientific journals, is the so-called "Coexistence Approach" (see here and here). Many published phylogenies are further proof. I would even argue that the common practice of relying exclusively on parsimony strict consensus cladograms (Please stop using cladograms!) to infer the phylogeny of extinct organisms is, literally, the "avoidance of conflicting data". See also my upcoming post on the Genealogical World of Phylogenetic Networks (goes online March 4th).

Red Flag 3. Over-confidence 

It should be as Witton writes, and not a few of us do keep a critical distance from our own research. But career-wise, over-confidence is an asset, especially when writing papers that need to pass the Forest of Reviews. There are basically three sorts of peers:
  1. the I-don’t-give-a-shit type;
  2. those who want to kill your paper (for various reasons); and 
  3. those who, altruistically, (try to) do their job (you usually get nothing out of it).
Type 3 is still common enough, but Type 1 is becoming more and more common, while Type 2 has always been there. Now, we usually don't know who it will be or was, because most journals use "confidential" peer review. Type 1 is unproblematic for the publication process (speaking as an author afraid of the Forest's many beasts), but Types 2 and 3 need a closer look. If you discuss the limitations of your data in the submitted draft from all possible angles, Type 2 will use this against you to turn your paper down. Type 3 may have further thoughts. This can be beneficial for the study, because it helps to elaborate on a deficit; but it can also be detrimental, because they may come up with hard-to-satisfy requests.
On the other hand, if you just pretend everything is fine, the killers need to find the weaknesses themselves (they usually don't have time to waste, so they come up with something easy to rebut), and the proper peers will tell you what you missed.

Red Flag 4. An embarrassment of scientific riches

Just check out the publication records of highly productive vertebrate palaeozoologists. This is what the public takes most interest in, as seen from the examples used in Witton's post: it says "palaeontological crankery", but what is meant is crankery and fraud in vertebrate palaeozoology. Then count the number of (first-time-)studied individuals behind each paper. You need to over-sell what you have, and, in my experience, vertebrate palaeozoologists know how to play that card (which is why they have the highest paid-palaeontologists-per-known-fossils ratio in all of palaeontology).

An anecdote from my time at the NRM in Stockholm. There was an EU-funded exchange programme (SYNTHESYS) that allowed researchers to come to our museum; people from all across the EU and beyond handed in applications, our committee ranked them and granted them a number of days (the total number of days was limited by the amount of money we would get from the EU for the programme; I think we got 300 or so). The most hilarious application was that of a primate palaeozoologist (100+ publications, very famous member of what we call the "hominid mafia") who wanted three full weeks to study a single partial lower jaw of an already known primate species, claiming this would be a huge leap forward for primate science and his next high-flying paper (it being no holotype, he could simply have requested a loan). In contrast, the palaeobotanists and invertebrate palaeozoologists usually applied for one, at most two, weeks to look at entire collections (never looked at before) with hundreds of fossils, and were much more modest (not rarely, too modest) regarding what this means for science.

Red Flag 5. An abundance of self-citation

Show me a paper from a nigh-expert in palaeontology without an abundance of self-citations! Even those who contributed a lot to science cannot help it. A nice example of one-sided self-citation is a review ("redux") on early angiosperm fossils published in one of Nature's many offshoots (see my comment), co-authored not by one, but by three nigh-experts. And in the more fringe palaeosciences, where the paid-palaeontologists-to-unstudied-fossils ratio is much worse than for dinos and hominids, it may be inevitable, because there is often only one active researcher left studying that particular group. An abundance of self-citations is thus a very bad criterion for those without insight into the field.
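As an aside, the self-citation share is at least easy to quantify. Here is a minimal, hypothetical sketch (all names and references below are made up for illustration, not taken from any real paper):

```python
# Hypothetical sketch: fraction of a paper's references that share
# at least one author with the paper itself ("self-citations").

def self_citation_share(paper_authors, references):
    """Return the fraction of references sharing an author with the paper."""
    authors = set(paper_authors)
    self_cites = sum(1 for ref_authors in references
                     if authors & set(ref_authors))
    return self_cites / len(references)

# Made-up example data:
paper_authors = ["A. Expert", "B. Coauthor"]
references = [
    ["A. Expert"],                # self-citation
    ["C. Other", "B. Coauthor"],  # self-citation
    ["D. Unrelated"],
    ["E. Someone", "F. Else"],
]
print(self_citation_share(paper_authors, references))  # 0.5
```

Of course, as argued above, a high share by itself proves little when only one researcher is left working on a group.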

Red Flag 6. Knowing your authors 

See e.g. the example above (Red Flag 5). Witton writes: "In science, what is said matters more than who says it, but when a questionable claim is made the integrity of the author can be a useful indicator of credibility. Whether we like it or not, reputation matters." It indeed matters a lot who said what. But reputation is a double-edged sword, because it can compensate for a lack of quality. For instance, even a dead American co-author with some reputation in the field (and not even in palaeontology) can get a poorly designed (and written) paper through peer review.

Names are game-changers, especially in a relatively soft science like palaeontology, where one has to work with suboptimal and strongly limited data. Beliefs and schools of thought have played an important role in palaeontology from the very start (e.g. the long resistance against Darwin's and Wallace's abominable theory; the clinging to parsimony strict consensus cladograms as the only valid means to infer phylogenies). In the old days, most ground-breaking theories came from cranks: everyone knows Galileo, Darwin, Wallace and Wegener. Wegener was not only a crank in his time but also unqualified for what he did: telling geologists, highly merited members of science academies who back then believed the Earth is cooling and constantly shrinking to explain it all, how it really works. [Apparently some modern-day cranks and frauds still believe it; how else could dinosaurs have become so big, if the Earth was not much bigger and, hence, gravity much weaker?]
But there are more: Franz Hilgendorf was expelled from the University of Tübingen because he dared to include a heretical evolutionary tree in his 1863 Ph.D. thesis, one of the first such trees ever conceived and published (three years later). And he kept on doing things that many palaeontologists still consider heretical, such as using a network instead of a tree as the basis for an evolutionary hypothesis (as a fellow heretic, I know how certain nigh-experts react to such an insane idea; but mine got published, too, and in proper journals, except for this one; it's just harder, not impossible).

Knowing your authors may help you avoid complete nutballs (like creationists, or people who use dinosaurs to infer a shrinking Earth's "palaeogravity"), but just because a big name pops up in the author list doesn't mean the research is of high quality. However, a good indication is where the big name stands: if the big names are in the middle, flanked by a lot of names you've never heard of, some caution may be warranted.

Red Flag 7. Misleading credentials and other trickery 

A good point, because this is something professional palaeontologists can't really afford. Still, one should not be overly impressed by the status or position a scientist holds. Like in any other job, there is more than one way to get to the top in fundamental science. Hard work helps a lot; playing your cards right, preying on the research of others, and associating yourself with big names don't hurt either. But if an M.D. is telling you how climate change works (many of Fox Fake News' "experts" have some academic title; it's pretty easy to buy one, here's the first hit when you duckduck "U.S universities selling Ph.D.", or you run into a complete fraud like Trump University), or a former highly merited isotope geochemist turned politician (the English Wikipedia has the basic info and links, too) whose research revolved mostly around dating ancient rocks, it's likely nonsense. Science has fragmented into millions of tiny bits; to even start to understand the complexity of processes, you need to be a specialist. And for really complex stuff, you need very different specialists working together (which is still rare; hence, a pseudo-science like the above-mentioned "Coexistence Approach" could establish itself as a European "standard" for palaeoclimate reconstruction, although every ecologist, evolutionary biologist, and vegetation scientist will tell you that it cannot work). Stories like that of Wegener are very unlikely to happen these days. Science was much smaller back then.

Red Flag 8. A predilection for criticism and personal attacks of scientists

I can't find the link anymore, but there used to be a page listing animal names (mostly; some plants) that had no other purpose than to humiliate a scientific opponent. A recent example of the sport (though not directed at a competing scientist, and very fitting) is Dermophis donaldtrumpi (its Twitter account), a recently discovered primitive amphibian from Panama that lives in darkness. A company bought the right to name it (a clever idea to fund your research) and chose the name because its mode of living reminds one of the character of the U.S. president, who also puts his head in the sand when it comes to global warming.
During my time as a professional scientist, I got a couple of reviews (officially anonymous, of course, but you usually have an idea where they come from, or the editor, embarrassed by the tone of the report, leaks the info) that boasted fine pieces of personal attack and unfounded criticism, in one or another case just because those peers didn't like the institute I hailed from.

But there is indeed a difference between professionals and fraudsters. Professionals usually (but not always) do the nasty thing (criticising without cause) secretly, e.g. hiding behind peer-review confidentiality. Those doing it in the open and excessively are usually frauds. Or ex-scientists, but then not without facts, and less excessively. It can take more time to prove somebody wrong than it took to publish something wrong; the first hurdle is usually getting the data (hence Red Flag 11 below).

Red Flag 9. The Galileo Gambit

Who fears the Spanish Inquisition? The Galileo Gambit is rare, but occasionally found in (very) elderly scientists. At my German alma mater, we had two cases who played that card during my time (I started in 1992 and left Germany in 2008). One was a highly intelligent theoretical physicist/chemist who went a bit crazy; his most spectacular action was probably running through town wearing the yellow Star of David used in fascist Germany to denounce the Jews. The other was a famed, long-retired palaeontologist who didn't understand that Nature and Science no longer wanted to publish his big theories based on little tangible data (like they used to), and who used the Studium Generale to complain about his persecution. But in both cases it was rather entertaining, and no real harm was done (except what they did to themselves).

Red Flag 10. Beware of Big Palaeo! 

Probably a good criterion, especially to sort out the kind of conspiracy theorists who rant about "liberal science". "Mainstream science" is the counterpart to "mainstream media". But as in the latter case, there is a grey zone between unfounded and valid critique. The publish-or-perish pressure favoured the establishment of science consortia, and like all big organisations, they are not invulnerable to stagnation and corruption. As in the media, a "mainstream bias" exists, but it struggles hard to stop the proper publication of contradictory views.

As somebody who has published quite a few cross-disciplinary, out-of-the-box studies (GoogleScholar), I had more than one sparring round with peers arguing that we should follow the beaten path (which is nothing else than "mainstream science") and should not criticise scientific "standards" (e.g. the example of Red Flag 2, our critique of the Coexistence Approach). In addition to the CA syndicate, which forces its pseudo-scientific method on all NECLIME members ("We all know it's nonsense, but as long as they pay for my research and travels, what should we do?"), I learned about e.g. the "hominid mafia" and the "stomata density mafia" (they are not organised like the actual mafia, and especially the "hominid mafia" is a very heterogeneous bunch, but some of their techniques are).

The Willi Hennig Society (founded in 1980, thirty years after Hennig published his Kladistik, although we really should distinguish between cladistics and Hennig) has long protected the Faithful against heresies like "phenetic" distance approaches (things I did a lot) and probability methods (which I also fancied). Just check out Dan Graur's anecdote, Once Upon a Time at a WHS Meeting, the comments on it by leading members of the WHS, and the reply by Joe Felsenstein, I think I should make some comments ..., who was for a long time the WHS's incarnation of all evil.
That's why the tree space in which parsimony, the WHS's Holy Grail, is bound to fail (the more data you add, the more confidently it finds the wrong tree) while probability methods can still get it right, is called the "Felsenstein Zone". The tree space in which parsimony succeeds (as does any other tree inference method, distance-based or probabilistic) is the "Farris Zone", after the WHS's demigod James Farris (I saw the Mighty One a few times during lunch, sitting at the next table; I never saw a single person daring to address him, and I never dared either; as a tiny hellish minion, I was sure he'd burn me to ashes).

It's highly interesting what stories people you never met before can tell you about the science business when you are an outspoken, open-minded person (although it is not good for a scientific career; professional science, like any other part of human society, doesn't really like a Nestbeschmutzer, German for "one who dirties their own nest" [Linguee]). Tell your dark stories from the science business (I always used comical rage, mixing hard facts with swearwords), and you will hear theirs.
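The long-branch-attraction effect behind the Felsenstein Zone can be shown with a toy example. The following sketch uses a hand-crafted, hypothetical four-taxon alignment (not real data) and a minimal Fitch parsimony count over the three possible unrooted quartets; convergent changes on the two long branches (A and C) make parsimony prefer the wrong tree:

```python
# Toy demonstration of long-branch attraction: parsimony joins the two
# long branches (A, C) even though the true tree is AB|CD.

def fitch_union(a, b):
    """Fitch step: intersect if possible (cost 0), else unite (cost 1)."""
    inter = a & b
    return (inter, 0) if inter else (a | b, 1)

def quartet_score(pairing, site):
    """Parsimony length of one site on the unrooted quartet ((i,j),(k,l))."""
    (i, j), (k, l) = pairing
    left, c1 = fitch_union({site[i]}, {site[j]})
    right, c2 = fitch_union({site[k]}, {site[l]})
    _, c3 = fitch_union(left, right)
    return c1 + c2 + c3

# Taxa A, B, C, D (indices 0-3); true unrooted tree: AB|CD,
# with A and C sitting on long branches.
alignment = (
    ["GGAA"] * 10   # changes supporting the true tree AB|CD
  + ["CTCT"] * 16   # convergent changes on the long branches A and C
  + ["GAAA"] * 5    # parsimony-uninformative noise
)

topologies = {
    "AB|CD": ((0, 1), (2, 3)),   # the true tree
    "AC|BD": ((0, 2), (1, 3)),   # the long branches joined
    "AD|BC": ((0, 3), (1, 2)),
}

scores = {name: sum(quartet_score(p, s) for s in alignment)
          for name, p in topologies.items()}
best = min(scores, key=scores.get)
print(scores)                    # {'AB|CD': 47, 'AC|BD': 41, 'AD|BC': 57}
print("Parsimony picks:", best)  # AC|BD, i.e. the wrong tree
```

The more such convergent sites the long branches accumulate, the more decisively parsimony picks the wrong topology, which is exactly the inconsistency the "Felsenstein Zone" label refers to.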

But eventually, we always found a non-predatory peer-reviewed journal that published our heretical (to some, e.g. members of the WHS) ideas. It just takes more effort and time to pass the peer review, and maybe a couple of years until they are finally accepted.

[Figure: my GoogleScholar citation profile]

It can, however, be tedious and annoying to see what others can still publish in sheer ignorance of what has been shown and refuted; which is the main reason I quit professional science and now indulge in blogging about science here and on the Genealogical World of Phylogenetic Networks.

But there is one very important thing Witton missed that can help to differentiate between fraudulent, biased, and proper science, and it also works as a measure for the inexperienced.

Red Flag 11. Check the data documentation 

Do the scientists in question provide their data as open data? You may not be able to check the quality of the data yourself, but any scientist who puts the data online sends a clear message:

I'm not afraid somebody else will be looking at it. 

Transparency is always the best criterion to distinguish between pseudo-science and real science.

What differentiates frauds and pseudo-scientists (what Witton calls, a bit misleadingly from a historical perspective, "cranks") from normal scientists is probably the number of flags:

0–1 Red Flags: No to little worries.

2–3 Red Flags: Biased science
(not uncommon in rather soft science fields like palaeontology)

4–5 Red Flags: Bad science
(i.e. science for the trash bin, but still done by scientists)

6–7 Red Flags: Pseudo-science
(it may still look like science to the inexpert eye, but it's not; this is the most dangerous stuff)

8+ Red Flags: Frauds and nutballs
(can be very amusing, because so insane, that no one with common sense would fall for it)
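For what it's worth, the rubric above can be sketched as a trivial function (the thresholds and verdict labels are exactly those listed):

```python
# The post's red-flag rubric as a minimal lookup function.

def crankery_verdict(red_flags: int) -> str:
    """Map a count of raised red flags to the post's verdict scale."""
    if red_flags <= 1:
        return "No to little worries"
    if red_flags <= 3:
        return "Biased science"
    if red_flags <= 5:
        return "Bad science"
    if red_flags <= 7:
        return "Pseudo-science"
    return "Frauds and nutballs"

print(crankery_verdict(2))  # Biased science
```

Naturally, the counting itself is the hard part; the flags are judgement calls, not checkboxes.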
