The statistics they cite are indeed indicative of serious issues in the open system they implemented: only 5% of authors who submitted work during the trial agreed to have their papers opened to public comment, and only 54% of those papers (38 of a total of 71) received substantive comments. But I have to wonder whether the experiment wasn’t rigged from the beginning, destined for a predictable failure because of the trial’s constraints.
First, no real impetus was created for authors to open their papers to public review; as I noted back in June, the open portion of the peer-review process was wholly optional and had no bearing whatsoever on the editors’ decision to publish any given paper. And second, no incentive was created for commenters to participate in the process; why go to all the effort of reading and commenting on a paper if your comments serve no identifiable purpose?
What I want to ask at this point is what MediaCommons can learn from the ostensible failures of Nature’s experiment. How can we develop a successful open peer-review process with adequate author and reviewer buy-in?