Peer review transparency reveals scientific provincialism

Recently, my favourite journal, PeerJ, picked a half-rotten apple in terms of policy and handling, sharing the fate of other science-before-profit publishing projects such as the Public Library of Science and Frontiers: the more people jump into the boat, the higher the chance that peer review fails. But thanks to peer review transparency, we can see why.

There's no need to do what you can't

Modern science thrives on pretension. We can't just publish something interesting; we always feel compelled to argue why it's important and to stress its ground-breaking novelty. On the other hand, everyone can use computers, and those computers can run fancy analyses provided you have some data. And computers always get it right, so why should editors and reviewers bother with the results?

In Nurses We Trust (and elect the opposite)

I recently found a Twitter account, the Spectator Index, that posts entertaining lists based on polls, studies, and the like. For example: how long you have to work to buy a burger. One from last week was a Gallup poll asking (U.S.) Americans how they judge the ethics of various professions. A nice piece of unscripted satire.