
Science Mag sting of OA journals: is it about Open Access or about peer review?

Yesterday, Science Magazine ran a story on a sting in which a fake article passed peer review at 157 Open Access journals. In the article “Who’s afraid of peer review?” John Bohannon, a contributing correspondent to Science, tells how he concocted a fake paper and submitted it to hundreds of Open Access journals. The fake paper contained very obvious methodological flaws and errors. Of the 255 submissions that went through the entire process from submission to acceptance or rejection, 157 were accepted and 98 rejected. That seems very serious, and it is. It is actually not that surprising, considering the many predatory publisher start-ups. But before drawing any conclusions, one should consider the following points.

1) No control group. The flawed paper was submitted to Open Access journals only. Bohannon says he considered a broader sting including journals from all types of publishers, but decided to focus on Open Access publishers to save time. This means there is no baseline against which to weigh the results. Are these low publishing standards typical of Open Access journals? We cannot tell. There is something rotten in the state of Open Access publishing, but is it any better in the world of traditional publishers? We don’t know. It is like evaluating students’ papers from a science class and concluding that many papers by male students were subpar: what is the use of that knowledge without also evaluating the papers written by female students? Of course you might say that the baseline is zero, that any number of failing journals above that is unacceptable. But then the only conclusion you can draw is that this or that journal is bad, not that Open Access journals in general have poor quality control.

2) Bias in selection. Bohannon’s selection of journals to submit his paper to is based on the Directory of Open Access Journals (DOAJ). Even if you consider this list complete, which it is not, the selection Bohannon made is very skewed. Only journals with fees (article processing charges) were considered, thereby throwing out 75% of the list (leaving only 2054 of 8250 titles). This introduced a bias, because publishers who are just in it to make easy money are concentrated in this fee-charging group, not in the 75% that do not charge. Bohannon then threw out all journals not aimed at general, biological, chemical or medical science. That meant another reduction of 85% (from 2054 to 304 titles), introducing a second bias, because the pressure to publish is highest in these fields, making them prime targets for ‘predatory’ publishers. In short, by making these selections, chances are high that Bohannon (unintentionally) focused on a group with relatively many rotten apples; the arithmetic is worked out in the sketch below.
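To make the arithmetic of both points concrete, here is a minimal Python sketch. It reproduces the selection-funnel percentages and the sting’s acceptance rate from the figures quoted in this post; the subscription-journal comparison rate at the end is purely invented, because that missing baseline is exactly the point of the first objection.

```python
# Illustrative arithmetic only. The DOAJ funnel figures (8250 -> 2054 -> 304)
# and the sting outcomes (255 submitted, 157 accepted) are taken from the
# post; the subscription-journal rate below is a made-up placeholder.

doaj_total = 8250      # OA journals listed in DOAJ at the time
fee_charging = 2054    # kept after filtering on article processing charges
in_scope = 304         # kept after filtering on general/bio/chem/med scope

print(f"dropped by the fee filter:      {1 - fee_charging / doaj_total:.0%}")  # ~75%
print(f"dropped by the scope filter:    {1 - in_scope / fee_charging:.0%}")    # ~85%
print(f"share of DOAJ actually sampled: {in_scope / doaj_total:.1%}")          # ~3.7%

submitted, accepted = 255, 157
print(f"acceptance rate in the sting:   {accepted / submitted:.0%}")           # ~62%

# Without a control group there is no second number to compare against:
# a 62% acceptance rate would be damning next to, say, 10% at subscription
# journals, and far less remarkable next to 50%.
hypothetical_subscription_rate = 0.50  # invented for illustration only
print(f"hypothetical subscription rate: {hypothetical_subscription_rate:.0%}")
```

The sketch also makes visible how narrow the sample really was: fewer than 4% of the journals listed in DOAJ ever saw the fake paper.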

By the way, the article makes good reading and contains some interesting detail. For instance, the results are not clear-cut. Some of the Open Access journals with failing peer review are owned by reputable traditional publishers such as Elsevier, Sage and Kluwer. And some Open Access journals did perform rigorous review and rejected the paper, such as PLoS One and a journal from Hindawi. And if you dive into the interactive map with all the results, email correspondence, bank account locations and more, you will find a lot of interesting material that lays bare how predatory publishers try to gain or maintain their status. Bohannon should be complimented for revealing these patterns.

So, this study shows that there are 157 Open Access journals with failing peer review. That’s it, and that is serious enough. To me it also shows the potential benefit of finally opening up peer review: that way the rotten journals could be weeded out much more easily, although there are more important reasons for introducing open peer review. But that is another story.

Some other reactions to the Science Magazine article:

Peter Suber | Björn Brems | Kausik Datta | Michael Eisen | Martin Paul Eve | The Economist | OASPA | Scholarly Kitchen | Retraction Watch | John Hawks | Guardian | NPR | Inside HigherED | LA Times | Curt Rice | Der Spiegel | Chronicle HE | Ernesto Priego | Gunther Eisenbach | Fabiana Kubke | Zen Faulkes | Mike Taylor | Libération | Rue89 | Le Monde | Nu Wetenschap | Nieuwsblad.be | Independent | Toronto Star | Lenny Teytelman | Tagespiegel | McBlawg | Jason Koebler | Salon.com | Bradley Fikes | Michael Schoon | SPARC | Paula Callan, ABC Science | Tracey Brown | Ernesto Priego, id. on LSEimpact blog | Steven Novella | Dan Vergano at Nat. Geographic | Stevan Harnad | MSN nl | The Telegraph India | Jacob Kastrenakes on The Verge | Michelle Meyer (Harvard Law) | Brian Wood (comparison with impact factors) | Bob O’Hara in The Guardian | Philip Moriarty at physics focus | Ulrich Herb | Gavia Libraria (on alternatives) | Gary Daught | Alessandro Delfanti | Science @ORF | Adam G. Dunn | Fernando Blanco | chat session with John Bohannon, Michael Eisen and David Roos | Graham Steel on Figshare blog | Neurobonkers I and II | NRC 1 but later 2 (for UU only) | De Volkskrant 1 but later 2 (for UU only) | Mark Liberman on Language Log | DOAJ (second reaction) | John Bohannon (reaction to the reactions)

@jeroenbosman

[a slightly different and updated version of this post has been published in Dutch in the Utrecht University online magazine DUB: http://www.dub.uu.nl/artikel/opinie/science-testte-vooral-slechte-tijdschriften.html]

16 responses to “Science Mag sting of OA journals: is it about Open Access or about peer review?”

  1. Roos

    Hi Jeroen, thanks for your work and for sharing it with us! By the way, your analysis would fit well in the science supplement of NRC… They did report on the Science article, but without any analysis attached. Regards, roos

  2. Matt Cockerill

    The non-random sampling is one of several *major* problems with this “research”, and it is actually worse than you indicate above. The larger publishers such as BioMed Central, Hindawi, Elsevier and Sage had dozens of appropriate journals he might have submitted the article to, but he chose only one per publisher. How did he choose which one? He doesn’t say, other than that he selected based on appropriate subject area. That provides huge scope for conscious or unconscious experimenter bias to creep in, e.g. picking the weakest-looking journals in order to generate the desired result. Certainly the BioMed Central journal to which the spoof article was submitted (and which rejected it) was not an obvious choice in terms of scope and may have been chosen for other reasons. Also, choosing one journal per publisher is in any case a poor way to sample the journal space, as there is such a long tail of very small publishers that the result becomes unrepresentative.

  3. Jeroen Bosman

    Thanks for that reaction, Matt. I had not even realized that, but it is so true. Even within the selection of English-language, fee-charging, life science OA journals, a random choice of journals might have given completely different results. Of course he chose this approach in order not to raise suspicion, but Bohannon should at least have raised this issue of extreme bias in his article.

