I have to admire how the fully, semi- or quasi-predatory publishers never run out of new ideas to lure the needy (stuck in single-blind peer review), the unwitting (younglings who don't know better) or the cheating (pseudo-scientists and those on the payroll of, e.g., the pharmaceutical or medicinal industry) to their allegedly scientific and peer-reviewed publication platforms.
Another treasure, an offer one cannot possibly reject, found in my junk folder. The "Oasis Publishing Group" wants to cherish my "outstanding contribution to the scientific community" with an autobiography.
... asked the SZ these days, inviting readers to an online discussion (powered by Disqus). Annoyingly, my comment (here is my Disqus profile) was flagged as "spam" and not published (so far), presumably because of the links to the Political Compass (non-profit) and to our Genealogical World of Phylogenetic Networks blog (we don't earn any money with it either; long live algorithm-assisted censorship). So here is the full text (slightly modified).
As my first Easter Egg post, I advertised the most important French hebdomadaire, weekly newspaper, Le Canard enchainé (lit. the Chained Duck), something you can't find in the virtual world. This year it is something more peculiar, literally a virtual (version of our) world. Well, worlds, the bygone ones.
A search led me to a question on ResearchGate (RG) posed five years ago: How can I interpret bootstrap values on phylogenetic trees built with maximum likelihood? Quite a bunch of people answered it but, to my mind, only provided the easy answers, not the critical ones.
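For readers unfamiliar with the mechanics behind those values: nonparametric bootstrapping resamples the alignment columns with replacement, re-runs the inference on each pseudo-replicate, and reports how often a grouping is recovered. A minimal sketch of that idea, using a toy alignment and a deliberately naive "inference" step (closest pair by Hamming distance) in place of a real maximum-likelihood tree search — all names and data here are illustrative assumptions, not anyone's actual pipeline:

```python
import random
from itertools import combinations

random.seed(42)

# Toy alignment: four taxa, ten sites (purely illustrative).
alignment = {
    "A": "AAGTCCAGTA",
    "B": "AAGTCCAGTT",
    "C": "AGGTACAGCA",
    "D": "TGGTACTGCA",
}

def closest_pair(aln):
    """Stand-in for tree inference: the pair of taxa with the
    fewest mismatched sites (Hamming distance)."""
    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))
    return min(combinations(sorted(aln), 2),
               key=lambda p: hamming(aln[p[0]], aln[p[1]]))

def bootstrap_support(aln, n_reps=1000):
    """Resample alignment columns with replacement n_reps times and
    return the fraction of replicates recovering the original pair."""
    n_sites = len(next(iter(aln.values())))
    original = closest_pair(aln)
    hits = 0
    for _ in range(n_reps):
        cols = [random.randrange(n_sites) for _ in range(n_sites)]
        resampled = {t: "".join(s[c] for c in cols) for t, s in aln.items()}
        if closest_pair(resampled) == original:
            hits += 1
    return hits / n_reps

print(closest_pair(alignment))       # ('A', 'B')
print(bootstrap_support(alignment))  # a support value between 0 and 1
```

The critical point hiding in such a sketch is exactly what the easy answers gloss over: the support value measures how consistently the data, as resampled, back one inference under one model — not the probability that the grouping is historically true.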
A tweet pointed me to a post with an interesting title, "How to spot palaeontological crankery" by Mark Witton, which includes (in the second part) "10 Red flags and pointers for spotting crank palaeontology" for non-experts. As an expert, I cannot help but note that most of the ten points also apply to proper palaeontological science.
Recently, my favourite journal (PeerJ), policy- and handling-wise, picked a half-rotten apple, sharing the fate of other science-before-profit publishing projects such as the Public Library of Science and Frontiers-in: the more people jump into the boat, the higher the chance that peer review fails. But thanks to peer review transparency, we can see why.
Modern science thrives on pretension. We can't just publish something interesting; we always feel compelled to argue why it's important and to stress its ground-breaking novelty. On the other hand, everyone can use computers, and those computers can run fancy analyses provided you have some data. And they always get it right, so why should editors and reviewers bother about the results?
I found a new Twitter account by the Spectator Index posting funny lists based on polls, studies etc., such as: how long you have to work to buy a burger. One from last week was a Gallup poll asking (U.S.) Americans how they judge the ethics of professionals. A nice piece of unscripted satire.