The Future of the University Press

My friends at MPublishing have released a new issue of the Journal of Electronic Publishing, guest edited by the director of the University of Michigan Press, Phil Pochoda, and including extremely insightful essays from a number of key thinkers in contemporary scholarly publishing. Jen Howard reports on the issue for the Chronicle, thinking through the issue’s collective implications.

I’m very pleased to see that so many of the predictions and recommendations contained in the JEP issue align with my own, from the final chapter of Planned Obsolescence. It’s my expectation that there will be fewer of what we currently think of as “university presses” in the coming years, as the failure of the press-as-revenue-center model spreads, but that there will be a great increase in university publishing services, as more and more institutions realize that if they are going to require their faculty to publish, they’ve got to take responsibility for providing the means of publication. These publishing services are, true to that label, likely to focus more on a broad range of services and less on producing physical objects for sale. And they’re going to have to be supported, at the faculty level, by real innovation in thinking about how published work is evaluated, and at the administrative level, by an actual functioning budget rather than an expectation of cost recovery.

All of the essays in the JEP issue are worth attending to; I hope that faculty and administrators will do so, and will press these conversations forward.

Peer-to-Peer Review and Its Aporias

Over the course of last week, a huge number of friends and colleagues of mine posted links and notes on Twitter and around the blogosphere about Mike O’Malley’s post on The Aporetic about crowdsourcing peer review.

It probably goes without saying that I’m in great sympathy with the post overall. I’ve invested a lot of time over the last couple of years in testing open peer review, including the experiment that we conducted at MediaCommons on behalf of Shakespeare Quarterly, which has been written about extensively in both the Chronicle of Higher Education and the New York Times. And of course there was my prior experiment with the open review of my own book manuscript, whose first chapter focuses in great detail on this new model of peer review, and which has been available online for just over a year now.

It’s gratifying to see other scholars getting interested in these wacky ideas about reinventing scholarly publishing that I’ve been pushing over the last several years. In particular, the entry of scholars who are relatively new to the digital into these discussions confirms my sense that we’re at a tipping point of sorts, in which these new modes, while still experimental, are generating enough curiosity in mainstream academic circles that they’re no longer dismissed out of hand.

All that said, I do feel the need to introduce a few words of caution into these discussions, because the business of open peer review isn’t quite as straightforward as simply throwing open the gates and letting Google do its thing. O’Malley argues that Google “is in effect a gigantic peer review. Google responds to your query by analyzing how many other people — your ‘peers’ — found the same page useful.” And this is so, up to a point; Google’s PageRank system does use inbound links to help determine the relevance of particular pages with respect to your search terms. But what relationship PageRank bears to the category of folks you might consider your “peers” — however democratically you construct that term — needs careful consideration. On the one hand, Google’s algorithm remains a black box to most of us; we simply don’t know enough about how its machine intelligence self-adjusts to take it on faith as a reliable measure of scholarly relevance. And on the other, the human element of PageRank — the employment of Search Quality Raters who evaluate the relevance of search results, and whose evaluations then affect the algorithm itself — and the fact that this human element has been kept so quiet, indicate that we haven’t yet turned the entire business of search on the web over to machine intelligence, that we’re still relying on the kinds of semi-secret human ratings that peer review currently employs. [1]
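To make that inbound-link mechanism concrete, here is a minimal sketch of the logic that PageRank popularized. It is emphatically not Google’s actual algorithm, which is proprietary and far more elaborate; the pages and links below are invented for illustration.

```python
# A toy version of the PageRank idea: a page's score is divided among
# the pages it links to, and the process repeats until scores settle.
# The link graph here is hypothetical; dangling pages simply pass no score along.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling page: nothing to distribute
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] = new_rank.get(target, 0.0) + share
        rank = new_rank
    return rank

# A tiny "scholarly web": essay_c draws the most inbound links, so it scores highest.
links = {
    "essay_a": ["essay_c"],
    "essay_b": ["essay_a", "essay_c"],
    "essay_c": ["essay_a"],
}
print(pagerank(links))
```

Even in this toy form, the point holds: the scores reflect who links to what, not whether those linkers are in any meaningful sense one’s peers.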

To put it plainly: I am absolutely committed to breaking scholarly publishing of its dependence on gatekeeping and transforming it into a Shirkyesque publish-then-filter model. No question. But our filters can only ever be as good as our algorithms, and it’s clear that we just don’t know enough about Google’s algorithms. O’Malley acknowledges that, but I’m not sure he goes quite far enough there; the point of opening up peer review is precisely to remove it from the black box, to foreground the review process as a discussion amongst peers rather than an act of abstracted anonymous judgment.

That’s problem number 1. The second problem is that peer review as we currently practice it isn’t simply a mechanism for bringing relevant, useful work into circulation; it’s also the hook upon which all of our employment practices hang, as we in the US academy have utterly conflated peer review and credentialing. As a result, we have a tremendous amount of work to do if we’re going to open peer review up to crowd-sourcing and/or make it an even partially computational process: we must simultaneously develop credible ways of determining the results of that review and, even more importantly, ways of analyzing and communicating those results to other faculty, to administrators, and to promotion and tenure committees, such that they will understand how these new processes construct authority online. It’s clear that the open peer review processes that I’ve been working with provide far more information than does the simple binary of traditional peer review’s up-or-down vote, but how to communicate that information in a way that conventional scholars can hear and make use of is no small matter.

And the third issue, one that often goes unremarked in the excitement of imagining these new digital processes, is labor. Most journal editors will acknowledge that the hardest part of their job is reviewer-wrangling; however large their list of potential peer reviewers may be, a tiny fraction of that list does an overwhelming percentage of the work. Crowdsourcing peer review presents the potential for redistributing that labor more evenly, but it will remain only potential unless we commit ourselves to real participation in the work that open peer review will require. It’s one thing, after all, for me to throw my book manuscript open for review — a process in which I received nearly 300 comments from 44 unique commenters — but what happens when everyone with such a manuscript uses a similar system? How much time and energy are we willing to expend on reviewing, and how will we ensure that this work doesn’t end up being just as unevenly distributed as is the labor in our current systems of review?

This difficulty is highlighted by the fact that many of the folks who have written excitedly about the post on The Aporetic are people who know me and my work, and yet were not commenters on my manuscript. Not that they needed to be, but had they engaged with the manuscript they might have noted the similarities, and drawn relevant comparisons in their comments on this later blog post. This is the kind of collaborative connection-drawing that will need to live at the forefront of any genuinely peer-to-peer review system, not simply so that the reviews can serve as a form of recommendations engine, but in order that scholars who are working on similar terrain can find their ways to one another’s work, creating more fruitful networks for collaboration.

There are several other real questions that need to be raised about how the peer-to-peer review system that I hope to continue building will operate. For instance, how do we interpret silence in such an open process? In traditional, closed review, the only form of silence is a reviewer who fails to respond; once a reviewer takes on the work of review, she generally comments on a text in its entirety. In open review, however, and especially one structured in a form like CommentPress, which allows for very fine-grained discussion of a text section by section and paragraph by paragraph, how can one distinguish between the silence produced by the absence of problems in a particular section of a text, the silence that indicates problems so fundamental that no one wants to point them out in public, and the silence that results from the text simply having gone overlooked?

And that latter point raises the further question of how we can keep such a peer-to-peer review system from replicating the old boys’ club of publishing systems of yore. However much I want to tear it down, the currently existing system of double-blind peer review was in no small part responsible for the ability of women and people of color to enter scholarly conversations in full; forcing a focus on the ideas rather than on who their author was or knew had, at that time, a profoundly inclusive effect.

That blind review is now at best a fiction is apparent; that it has produced numerous flaws and corruptions is evident. It’s also clear from my work that I am no apologist for our current peer review systems.

But nonetheless: I’d hate to find us in a situation in which a community of the like-minded — the cool kids, the in-crowd, the old boys — inadvertently excludes from its consideration those who don’t fall within their sphere of reference. If, as I noted above, our computational filters can only ever be as good as our algorithms, the same is doubly so in a human filtering system: peer-to-peer review can only be as open, or as open-minded, as those who participate in it, those whose opinions will determine the reputations of the texts on which they comment and the authors to whom they link.

[1] Most of this information came to me through a conversation with Julie Meloni, who also pointed out that for a glimpse of what a purely machine-intelligence driven search engine might produce, we can look at the metadata train wreck of Google Books. For whatever reason, Google has refused to allow the metadata associated with this project to be expert-reviewed, a situation that becomes all the more puzzling when you take the Search Quality Raters into account.

The Stein Taxonomy

Bob Stein, founder of the Institute for the Future of the Book and key supporter of MediaCommons, has posted a provocation entitled “Proposing a Taxonomy of Social Reading,” in conjunction with his presentation at the Books in Browsers gathering, which wrapped up yesterday. It’s great to get this glimpse of what took place there, and to have a venue for further discussion of this text in the form for which it argues.

To Read: How Not to Run a University Press

In the category of things that I used to post to the blog that now land on Twitter instead: the link. In an effort to maintain a better archive for myself, I’m experimenting with moving these things back here again.

Today, Chris Kelty’s post on Savage Minds, “How Not to Run a University Press (or How Sausage Is Made)”. In this post, Kelty thinks through the reported demise — or, more accurately, the institutional doing-in — of Rice University Press. Among the issues he raises, perhaps the most significant is the university’s refusal to understand that publishing requires actual labor and financial support:

If you judge the experiment in digital publishing on these facts, it’s sure to look like a failure, but the failure is not in the vision or ideas articulated by the press, but a simple failure to maintain good business judgement. It speaks volumes about how university administrators and many others (including many academics) see academic publishing: as something where no labor is required, only a great big print-a-book machine, a warehouse and some stamped envelopes.

This assessment resonates strongly for me, as in chapter 5 of Planned Obsolescence I focus on the role of publishing within the university, and the university’s responsibilities with respect to publishing. My fear is that universities will take on this responsibility without committing resources to it, assuming (as Rice appears to have done) that because the new mode of publishing is digital, it must be cheap.

The fact is that while the costs involved in publishing can be reduced in some areas, the costs of labor cannot — and, if anything, digital publishing requires more, and more kinds of, labor.

This is perhaps not the moment at which institutions want to hear that they have to make additional investments in something that feels optional, but they really need to hear this:

  • If you expect your faculty to publish, you must provide the means for them to do so.
  • If you expect scholarly publishing to turn a profit, or even break even, you may want to stop holding your breath.
  • If you allow commercial entities to take over scholarly publishing, because they can afford to do so, you must expect their predatory, monopolistic practices to encroach on the access you have to your own faculty’s work, and to diminish the impact that their work can have both inside and outside the academy.

There is no solution to this conundrum except for institutions to recognize that they must become responsible for supporting scholarly communication, and that this support will require treating the technologies and the labor involved in publishing as part of the institution’s infrastructure.

Anthologize

I’m way more pressed for time than I’d like right now, finishing up a bajillion details involved in moving myself and a subset of my stuff across the country for the next ten months, but I want to be sure to take a second to note the absolute awesomeness of Anthologize, the new WordPress 3.0 plugin developed by the One Week | One Tool workshop, sponsored by the NEH’s Office of Digital Humanities. The plugin is designed to take you from blog to book — or, even better, from many blogs to many kinds of book-like outputs. I’ve only just begun playing with it, but can easily imagine it becoming a key part of my Intro to Digital Media Studies class, and I can also see its utility in repurposing thematically linked blog posts in more permanent, more “official” form.

Huge congratulations to the Anthologize team, and I look forward to watching — and participating in — the project’s further development.

MediaCommons, Shakespeare Quarterly, and Open Review

[Crossposted from MediaCommons.]

Today’s Chronicle of Higher Education brings us a wonderful article from Jennifer Howard, exploring our recent experiment in open peer review, conducted on behalf of the eminent journal, Shakespeare Quarterly. This review process, which is at the heart of MediaCommons Press’s experiments in new modes of publishing for scholarship, has been so successful for SQ that, as the article notes, the journal’s editors plan to use it again for future special issues.

One interesting point in the article is the comparison between the Nature experiment with open review conducted in 2006 — an experiment declared by its editors to have been a “failure,” and used by many in scholarly publishing since then as evidence that open review can’t work — and the SQ review. Howard notes one participant’s sense that “the humanities’ subjective, conversational tendencies may make them well suited to open review — better suited, perhaps, than the sciences,” and yet, of course, the humanities have in general been very slow to such experimentation.

We at MediaCommons are extremely proud to be taking the lead in developing new models for transforming scholarly communication in the humanities, and we’re thrilled to have had the opportunity to work with a journal as important as Shakespeare Quarterly, modifying the open review process that we used (and advocated for) with my own Planned Obsolescence for the journal’s needs. Thanks to SQ’s editors, and especially special issue editor Katherine Rowe, for making such a successful experiment possible.

We very much look forward to collaborating with scholars, journals, and presses on future such projects!

What a Press Can Add in the Age of DIY Publishing

What follows is a rough transcript of the talk I gave this past weekend at the annual meeting of the Association of American University Presses. The panel was organized and chaired by Eric Zinner, Assistant Director and Editor-In-Chief at New York University Press, and the presentations before mine were by Monica McCormick, Program Officer for Digital Scholarly Publishing at New York University Press, and Shana Kimball, Co-Director of the Scholarly Publishing Office at University of Michigan Libraries. Their presentations had focused on library-press collaborations, and Monica in particular had mentioned the difficulty she had with hearing press representatives refer to what they do (in contrast to what libraries do) as “real” publishing, pointing out the equal realness of library-based publishing initiatives. I began by connecting my remarks to that comment, saying that authors themselves are producing a number of online publishing ventures that are similarly real, and that need to be treated as such if they’re going to be adequately understood.

—–

I come to the question about digital publishing that we’re discussing today from the perspective of an author, rather than a publisher, which is to say that your mileage as editors and publishers will no doubt vary. But I want to begin by being clear that we are in the age of DIY publishing, even in scholarly circles. More and more journals are being founded on platforms like Open Journal Systems, which allow their scholarly editors to do the work they have done all along, while making the results of that work freely and openly available to the scholarly community and the broader world beyond. And more and more scholars are developing online presences via platforms like blogs that allow them to reach and interact with an audience more quickly, more openly, and more directly, without the intermediary of the press.

Open Access Publishing and Scholarly Values (part three)

There’s a fascinating exchange developing right now around open access publishing and the reasons scholars might resist it, beginning with Dan Cohen’s post, Open Access Publishing and Scholarly Values, which he wrote for the Hacking the Academy volume, a crowd-sourced book he and Tom Scheinfeldt are editing (to be published by the University of Michigan Press’s Digital Culture Books). Dan argues for the ethical — as well as the practical — imperative for contemporary scholars to publish their work in openly distributed forms and venues.

Stephen Ramsay then published a response, Open Access Publishing and Scholarly Values (continued), in which he points out that the ways we substitute what we now understand as “peer review” for real evaluation and judgment by our peers, particularly at the stage of tenure and promotion reviews, so overwhelm this ethical/practical imperative that we never really get to the stage of deciding whether publishing openly could be a good thing or not.

I’ve left a comment on that response, which got lengthy enough that I thought I’d reproduce and expand upon it here. Steve writes, in the latter paragraphs of his post,

The idea of recording “impact” (page hits, links, etc.) is often ridiculed as a “popularity contest,” but it’s not at all clear to me how such a system would be inferior to the one we have. In fact, it would almost certainly be a more honest system (you’ll notice that “good publisher” is very often tied to the social class represented by the sponsoring institution).

My response to this passage begins with a big “amen.” At many institutions, in fact, the criteria for assessing a scholar’s research for tenure and promotion include some statement about that scholar’s “impact” on the field at a national or international level, and we treat the peer-review process as though it can give us information about such impact. But the fact of an article or a monograph’s having been published by a reputable journal/press that employed the mechanisms of peer review as we currently know it — this can only ever give us binary information, and binary information based on an extraordinarily small sample size. Why should the two-to-three readers selected by a journal/press, plus that entity’s editor/editorial board, be the arbiters of the authority of scholarly work — particularly in the digital, when we have so many more complex means of assessing the effect of/response to scholarly work via network analysis?

I don’t mean to suggest that going quantitative is anything like the answer to our current problems with assessment in promotion and tenure reviews — our colleagues in the sciences would no doubt present us with all kinds of cautions about relying too exclusively on metrics like citation indexes and impact factor — but given that we in the digital humanities excel at both uncovering the networked relationships among texts and at interpreting and articulating what those relationships mean, couldn’t we bring those skills to bear on creating a more productive form of post-publication review that serves to richly and carefully describe the ongoing impact that a scholar’s work is having, regardless of the venue and type of its publication? If so, some of the roadblocks to a broader acceptance of open access publication might be broken down, or at least rendered break-down-able.
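As a purely illustrative sketch of what such network-based description might look like (the works and citations below are invented, and networkx is simply one readily available tool), even a small citation graph yields several complementary measures rather than a single up-or-down verdict:

```python
# Hypothetical post-publication metrics over a tiny citation network.
# An edge A -> B means "work A cites work B"; all titles are invented.
import networkx as nx

citations = [
    ("article_1", "monograph_x"),
    ("article_2", "monograph_x"),
    ("article_2", "article_1"),
    ("blog_post_y", "monograph_x"),
]
graph = nx.DiGraph(citations)

# Raw citation counts alongside one possible centrality measure.
centrality = nx.pagerank(graph)
for work in graph.nodes:
    print(work, "cited by:", graph.in_degree(work),
          "centrality:", round(centrality[work], 3))
```

The numbers themselves matter less than the fact that they can be read, interpreted, and narrated, which is exactly the kind of work humanities scholars are trained to do.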

There seem to me two key imperatives in the implementation of such a system, however, which get at the personnel review issues that Steve is pointing to. The first is that senior, tenured scholars have got to lead the way not just in demanding the development and acceptance of such a system but in making use of it, committing ourselves to publishing openly because we can and worrying about the “authority” or the prestige of such publishing models later. And the second is that we have got to present compelling arguments to our colleagues about why these models must be taken seriously — not just once, but over and over again, making sure that we’ve got the backs of the more junior scholars who are similarly trying to do this work.

It comes back to the kinds of ethical obligation that both Dan and Steve are writing about — but for the reasons Steve articulates, the obligation can’t stop with publishing in open access venues, but must extend to working to develop and establish the validity of new means of assessment appropriate to those venues.

The Late Age of Print, Audio Edition

From Ted Striphas comes news of an exciting project: the crowd-sourced production of a text-to-speech audiobook version of his fantastic book, The Late Age of Print. Ted has opened a wiki for the project, through which interested volunteers can help him clean up the text for audio conversion. Instructions and details are available on the wiki.

This is an exciting project, not least for its attempt to manage the labor involved in creating a public resource that will be given away freely. I hope you’ll get involved.

Two Bits of Recent Work

I’ve got that cringing feeling that I haven’t been getting enough work done lately, but I at least have a few links to remind myself otherwise.

First, both the slides and the audio of the talk I gave at the University of Michigan a few weeks ago are now online, courtesy of Deep Blue.

And second, an article about the open review experiment with Planned Obsolescence is coming out in the next couple of weeks in Information Systems Quarterly. I’m particularly happy to share this, as the managing editor emailed it to me saying explicitly that I had the right to share it as I wish, including depositing it in my institutional repository or posting it on my blog. So here it is!