Tools and Values

I’ve been writing a bit about peer review and its potential futures of late, in an essay solicited for a forthcoming edited volume. Needless to say, this is a subject I’ve spent a lot of time considering, between the research I did for Planned Obsolescence, the year-long study I worked on with my MediaCommons and NYU Press colleagues, and the various other bits of speaking and writing I’ve done on the topic.

A recent exchange, though, has changed my thinking about the subject in some interesting ways, ways that I’m not sure that the essay I’m working on can quite capture. I had just given a talk about some of the potential futures for digital scholarship in the humanities, which included a bit on open peer review, and was getting pretty intensively questioned by an attendee who felt that I was being naively utopian in my rendering of its potential. Why on earth would I want to do away with a peer review system that more or less works in favor of a new open system that brings with it all the problematic power dynamics that manifest in networked spaces?

In responding, I tried to suggest, first, that I wasn’t trying to do away with anything, but rather to open us to the possibility that open review might be beneficial, especially for scholarship that’s being published online. And second, that yes, scholarly engagements in social networks do often play out a range of problematic behaviors, but that at least those behaviors get flushed out into the open, where they’re visible and can be called out; those same behaviors can and do take place in conventional review practices under cover of various kinds of protection.

It was at this point that my colleague Dan O’Donnell intervened; by way of more or less agreeing with me, Dan said that the problem with most thinking about peer review began with considering it to be a system (and thus singular, complex, and difficult to change), when in fact peer review is a tool. Just a tool. “Sometimes you need a screwdriver,” he said, “and when you do, a hammer isn’t going to help.”

Something in the simplicity of that analogy caught me up short. I have been told, in ways both positive and negative, that I am a systems-builder at heart, and so to hear that I might be making things unnecessarily complicated didn’t come as a great shock. But it became clear in that moment that the unnecessary complications might be preventing me from seeing something extremely useful: if we want to transform peer review into something that works reliably, on a wide variety of kinds of scholarship, for an array of different scholarly communities, within a broad range of networks and platforms, we need a greatly expanded toolkit.

This is a much cleaner, clearer way of framing the conclusions to which the MediaCommons/NYU Press study came: each publication, and each community of practice, is likely to have different purposes and expectations for peer review, and so each must develop a mode of conducting review that best serves those purposes and expectations. The key thing is the right tool for the right purpose.

This exchange, though, has affected my thinking in areas far beyond the future of peer review. In order to select the right tool, after all, we really have to be able to articulate our purposes, which first requires understanding them — and understanding them in a way that goes deeper than the surface-level outcomes we’re seeking. In the case of peer review, this means thinking beyond the goal of producing good work; it means considering the kind of community we want to build and support around the work, as well as the things we hope the work might bring to the community and beyond.

In other words, it’s not just about purposes, but also about values: not just reaching a goal, but creating the best conditions for everyone engaged in the process. It’s both simpler and more complex, and it requires really stopping to think not just about what we’re doing, but what’s important to us, and why.

If you’ll forgive a bit of a tangent: I mentioned in my last post that I’d been reading Jim Loehr and Tony Schwartz’s The Power of Full Engagement, which focuses on developing practices for renewing one’s energy in order to be able to focus on and genuinely be present for the important stuff in life. I only posted to Twitter, however, the line from the book that most haunted me: “Is the life I am living worth what I am giving up to have it?”

At first brush, the line produces something not too far off from despair: we are always giving up something, and we frequently find ourselves where we are, having given up way too much, without any sense of how we got there or whether it’s even possible to get back to where we’d hoped to be.

But I’ve been working on thinking of that line in a more positive way, understanding that each choice that I make — to work on this rather than that; to work here rather than there; what have you — entails not just giving up the path not taken, but the opportunity to consider why I’m choosing what I’m choosing, and to try to align the choice as closely as possible with what’s most important.

In the crush of the day-to-day, with a stack of work that’s got to be done RIGHT NOW, it can be hard to put an ideal like that into practice. And needless to say, the opportunity to stop and make such choices is an extraordinary privilege; thinking about “values” in the airy sense that I’m using it here becomes a lot easier once things like comfort, much less survival, are already ensured.

But this is precisely why, I think, those of us in the position to do things like create new programs, or publications, or processes, need to take the time to consider what it is we’re doing and why. To think about the full range of tools at our disposal, and to select — or even design — the ones that best suit the work that is actually at hand, rather than reflexively grabbing for the hammer because everything in front of us has always looked like a nail.

So, an open question: if peer review is genuinely to work toward supporting our deeper goals — not just getting the work done, but building the future for scholarship we want to see — what tools do we need to have at our disposal? What of those tools do we already have available, even if we’ve never used them for this purpose before, and what new tools might we need to imagine?

16 thoughts on “Tools and Values”

  1. Hi Kathleen,

    I enjoyed reading your post. I agree that Dan O’Donnell’s description of peer review as a tool rather than a system is a useful one. At the NEH, of course, we do a lot of peer review, namely to help us choose grant awardees. Our system (and I use that term deliberately here) is described in some detail on our website. At its heart, it involves flying scholars to DC for in-person meetings to discuss the merits of applications. Based on the many joint grant programs my office does, I’d say our system is quite similar to those used by other grantmaking agencies and councils in the US and abroad.

    Like a lot of our peer agencies, though, we’ve been discussing moving away from in-person meetings as a cost-saving measure: that is, using teleconferencing or videoconferencing as a cheaper alternative to the in-person meeting.

    But this pressure to save money has led to some interesting discussions about peer review itself; I find myself asking what, exactly, is “essential” to the peer review process. In other words, if we take as a given that in-person peer review can be replaced wholesale with teleconferencing, one has to wonder what other changes could be made. (I don’t take that as a given, by the way.)

    I was struck by your line “for an array of different scholarly communities, within a broad range of networks and platforms, we need a greatly expanded toolkit.” Should a funder have a monolithic system for all kinds of peer review? Or should we be considering different tools for different types of grant programs? I was recently at a THATCamp and heard Michael Edson from the Smithsonian do a short presentation on radical new approaches for peer review. Michael suggested funders consider running peer review panels using techniques borrowed from Scrum, which is a team collaboration method used largely by people in software development. I smiled at the notion of explaining that one to my colleagues, but nevertheless suggested to Michael that we talk more about his idea, as I do sometimes wonder if there aren’t entirely new approaches (plural) that might be used for different kinds of reviewing tasks. Here at the NEH, we review grants that run the size gamut from very tiny to pretty small (sorry, couldn’t resist). They are for a wide variety of projects, from documentary films to solo-authored books, to archaeological digs, to software development projects. So I do wonder if we shouldn’t have more tools at our disposal for reviewing ranges of projects in the best possible way?

    So I certainly welcome any discussion on this topic (or ideas anyone has about how funders might further discuss these kinds of issues). Thanks for posting.

    1. Hi, Brett. Thanks so much for engaging with the post and for raising these questions! I can absolutely imagine the pressures to reduce costs in an agency like yours by eliminating the in-person panel meetings, but my sense is that replacing those meetings with videoconferences isn’t an ideal alternative.

      Part of what’s so useful about those meetings is their push-and-pull, the tensions that arise and get worked out as panels figure out who else is in the room, how those others read, where their values lie, and so forth. I can imagine that some form of online engagement could be used to replicate, or even augment, that process, but I’m not sure that the literal replacement of one form of synchronous conversation with another will do it.

      I like the idea of using techniques borrowed from Scrum, not least in that multiplicity of approaches that it makes possible. What if one program within a funding agency — particularly a program meant to reward risk-taking forms of experiment — were to test out a range of tools each year, and thereby make some recommendations to the agency at large? For one particular funding round, readers might be asked to use a password-protected site to annotate applications; perhaps annotations would only be visible to their authors at first, but would be opened up to the panel in general once complete. Panelists could then be encouraged to discuss the responses with one another in that same asynchronous space, which might permit all the voices involved to develop, rather than just those who are best at thinking quickly. You’d also in the end have detailed commentary that you could refer to in writing not just recommendations for funding but also feedback for applicants.

      This is just an idea based on a couple of tools I’ve already used. No doubt other new tools will open new possibilities. I’d love to keep talking with you about how you might experiment with them!

  2. Thank you for this piece, Kathleen! You have given me much to think about today. My mind is rushing with ideas and the possibilities for work at the level of departments, the library, specific journals, the deans and professional organizations.

    The link between venue and peer review approaches remains particularly knotty for me, though. When a few journals matter more in X or Y fields, and they have no vested interest in changing their model, what do we do? Elite departments seem to favor the status quo, since they stand to benefit the most from the ruminating machine. Those fighting the good fight (without a good sense of the odds) at the Ivies and flagships would like to believe that a subsequent trickle down is not our only hope. Notice the bind. For the trickle down to work, we must hold a position of prestige; for us to hold a position of prestige, we must dominate the prestigious journals…

    You say values, and I agree, hoping a breath of fresh air would blow, but what if we found conflicting values? What if what we are ultimately afraid of is avowing prestige, the one value to rule them all? I can bear witness to many free of that bug, but I would be blind not to see a larger group paying tribute to The Prestige, both consciously and in unmindful practice, at all levels of the stack. As you imply, we seem to be living in times when folks forgot the dance of means and ends, to the point where argumentation has no hold. “It is what it is,” is our motto. Recently, I had a brilliant scholar stall in her weak defense of the print monograph and revert to the following sentence: “but I want my mother to see my name on a cover.” Now how can I go against such values?

    Maybe I’m looking at this all gloom-like in my pre-coffee stupor, caught by your words somewhere in between pragmatism and systems thinking. Considering I’m not one to back down from a position of privilege to effect change for shared rational values, I will finish my comment with hope—Hope that we can follow those taking risks on the margins of Mordor.

    1. I love this comment, Alex, particularly in light of my thoughts this morning about being wrong. Yes, we are afraid of letting go of the markers of prestige to which we are accustomed. I think we’re also afraid of venturing opinions and ideas that haven’t already been made as bulletproof as possible. The risk involved in being seen to be wrong — whether in something under open review or in a review conducted in the open — is huge…

  3. Thoughtful post, and I agree that any form of peer review is a tool, not a universally applicable system. But to complicate the analogy, tools do more than just undertake a singular function – they also create byproducts (like sawdust), or enable processes that themselves are desirable (or not) outcomes.

    Blind peer review can create sawdust of hierarchical discipline, paranoia & reticence, and unaccountable mean-spiritedness. It might work as a tool to improve the quality of scholarship, but not without its costs. Open peer review might produce all of those types of sawdust too, but it can produce another effect that blind peer review cannot: dialogue. In my experiences with open peer review, the benefit of such public conversation has outweighed any other byproducts, and might even be viewed as an end in and of itself beyond the (attempted) gatekeeping function of peer review.

    1. As you might guess, I completely agree. One of the best side effects of the open review processes I’ve participated in has been the ways they’ve aerated otherwise unspoken assumptions by conducting those conversations in the open and by inviting a wider range of participants.

  4. I can see the appeal of understanding peer review as a tool insofar as it simplifies a complex activity and makes changing that activity appear less daunting.

    Of course, tools themselves are never as simple as they may seem.

    In addition to the points that Jason Mittell makes above, tools are always bound up in complex ways with the products they produce. This makes it impossible to separate the tool from the result the tool produces. Just as means are always bound up with ends, and processes with products, so too are tools intimately bound up with the things they make.

    Our approach with the Public Philosophy Journal is almost the exact opposite. Rather than thinking of peer review simply as a tool, we are trying to take it seriously as an academic activity, as itself a practice of scholarship.

    This allows us to think further about the values that animate the practice — values of fairness, generosity, and rigor, to name three — and it suggests one of the main limitations of closed peer review systems: they hide from public view an enormous amount of excellent scholarship. The scholarship of peer review is compelling and important because it is fundamentally dialogical. It unfolds always as a response to an argument or position. This anchors it in the work of others and makes the scholarship of peer review collaborative at heart.

    The work that goes into thoughtful peer review has long been hidden from view; and its invisibility means that it can count neither toward tenure nor toward one’s broader scholarly reputation. Open peer review, as we envision it at the Public Philosophy Journal, will count toward both tenure and reputation, but more fundamentally, it will be designed to help cultivate disciplinary communities capable of enriching the work of those who participate.

    1. Thanks for this comment, Chris! You’re of course right about the relationship between the tools and what they produce, and you’re right to turn our attention back to the actual act of using the tools. I am quite excited about the ways that your journal is working to think about peer review as itself an act of scholarship — a key reframing, I think, if our review practices are not only to be sustainable but energizing. I’ll look forward to hearing more about how things develop!

