Getting Serious About the Online Part of Research Online

crossposted from MediaCommons:

Today’s Inside Higher Ed features an opinion piece by Sara Kubik, urging academics to “get serious” about online forms of research publication.

While it once made sense to equate print with quality, it’s time to embrace newer forms of communication as valid. If they need academically sound forms of verification and procedures for citation, let’s get to work.

I could not agree more — and yet it’s important to note, in the comments that follow, one of the reasons why such getting-serious is easier said than done: in response to Kubik’s insistence that online publishing would help to alleviate the horrific time-lags between the completion of research and its dissemination, Sandy Thatcher, Director of Penn State University Press, responds by saying that it’s peer review that takes so long, and thus the digital won’t speed things up all that much.

This kind of response is precisely the reason a project like MediaCommons is so necessary, I believe: if we are really going to get serious about online scholarly publishing, we have got to get outside the paper-based model of what publishing is. What Thatcher’s response misses (and what I’ve attempted to follow up with in a comment myself) is that it makes no sense to port paper-based procedures into a digital publishing process. In conventional publishing, peer review has to come before publication, due to the material scarcities involved, whether the limited number of pages that can be published in a journal or the limited number of volumes that can be published by a press. These scarcities do not obtain in networked environments; there are no limitations on the number of texts, or the length of texts, that can be published. What is scarce, instead, is time and attention. What we need is a peer review process that makes the best use of those scarce resources, rather than one that imports paper-based models of gatekeeping.

What I’ve been arguing for some time now is that we need to let everything be published, and transform peer review into a post-publication filtering process. Right now, a monograph that will only reach a dozen interested readers simply can’t be taken on by a traditional press — but why shouldn’t that monograph be able to find its dozen readers online? Isn’t it imaginable that those dozen readers might gradually, through their resulting publications, persuade many more that they’d overlooked something important in that original monograph?

So open the floodgates. Let’s develop a system that helps that dozen readers find the texts they’re looking for, and vice versa. And in the process, let’s crowd-source peer review. Right now, the process is slow in no small part because of how the “peers” involved are determined — they’re a very small number of hand-selected, overworked, and undercompensated readers. Why shouldn’t we allow any reader who genuinely engages with a text to become a “peer”? In so doing, we not only spread the labor of peer review out in a more just fashion, but we also recognize that readers and readings change, and thus that review should be an ongoing, rather than a one-time-only, process.

I’m hopeful that MediaCommons, by creating a new publishing process from the ground up, might be able to help transform our ideas about online publishing, to help us work with rather than against the net-native modes of producing “authority.” But in order to do so, we need your help. Publish things here, whether as blog posts or uploaded documents. Help us imagine the projects we should be taking on. Give us your feedback about the site, its structure, the features you’d like to see, and how we might develop and implement a genuinely peer-to-peer review process.

We’re getting serious. We hope you will, too.

5 thoughts on “Getting Serious About the Online Part of Research Online”

  1. Why shouldn’t we allow any reader who genuinely engages with a text to become a “peer”?

    I think this is the crux, and I think it can’t simply be overlooked. I think Slashdot and Kuro5hin and others give examples of why “just anyone” or even “most interested” are not great models.

    The majority does a really good job of getting important stuff wrong. Do I want a peer group that still believes there were WMDs in Iraq deciding whether this works?

    Wikipedia is the counterexample: it does pretty OK, but I think that’s because there is a bit of built-in selection beyond self-interest.

  2. Absolutely — but there’s a lot of space between the current mode of peer-selection, in which two to three people make the decision on behalf of all of the rest of us, and “just anyone.” And I don’t mean, in invoking crowd-sourcing, that judgment and responsibility go out the window. The problems with the kinds of systems you mention are precisely the reason that I argue (in a chapter I hope you’ll get to see soon) that in a peer-to-peer review system, the most important element is less the review of the texts involved, than the review of the peers, such that readers come to know which other readers’ judgments they trust. I don’t mean to oversimplify what’s undoubtedly going to be a complex, difficult process, but I do believe that the crux issue in creating genuinely functional digital scholarly publishing systems is going to be the degree to which they engage their communities in an ongoing process of self-creation. And making peer review everyone’s responsibility is one aspect of that process.

  3. I like the “instant review by peers” instead of “peer review” concept, but there are some issues to be worked out. The first is the syndrome in which reviewers of academic books sometimes post reviews without much regard for the actual quality of the book. You mentioned readers who “genuinely engage” with a book: who decides what will constitute this genuine engagement?

  4. Again, I think that determining what constitutes genuine engagement, what makes a good review, and so forth — all of these things would have to be subject to community-determined standards, and assessed by the community at large. I’m not trying to avoid answering the question by deferring everything to “the community,” but rather to indicate that scholars, as a community, already have a series of processes through which we assess one another’s work and hold one another accountable; making those processes more transparent — and even more, making those processes themselves subject to the same kind of assessment and accountability — seems to me a necessary component of the future of peer review in digital environments.

  5. This sounds very interesting and exciting.

    Not sure I’m too worried about the great unwashed leading us astray. “The majority does a really good job of getting important stuff wrong.” But the elites do/did an equally good job.

    But, surely, the real question is why provide an in-house voting system for anyone? If what we’re worried about is a ranking or filter system that makes one paper more accessible or authoritative than another, just don’t rank or filter. But do allow some sort of brief association, maybe (“so and so flagged this as interesting”).

    Or maybe not. I only care about the reviews of reviewers I trust. I usually get referred to papers/webposts via others’ bibliographies, references and hotlinks. My online community is fully capable of alerting me to online finds. (Yes. I’m sure I miss things, but that’s my fault, my risk.) If I want to promote a paper or post, I talk about it in my blog, on Facebook, through group emails, etc. to others in my community.

    Will quality win out in a web-based marketplace of ideas? Probably not. But material and ideas will become available in ways they haven’t for a long, long time.
