I find myself in that state again, in which I have a particular writing task — in this case a talk — with a pressing deadline, one that’s pressing enough that I really need to be working on it whenever I have time to write. (Being a talk, its deadline really can’t be blown.)

But for a whole series of reasons I won’t dig into too much right now, I’m struggling with the talk. It’s taking far longer to write than it should, and it’s just painful to work on. And so, as it drags on, the things that have been pushed aside in order to work on the talk are getting pushed further and further aside, and more deadlines are beginning to loom.

I’m caught in that eternal dilemma: put aside the most pressing thing in order to work on less pressing stuff that I might actually be able to knock off the list, but run the risk of not getting the talk done, or at least not getting it right? Or press on with the talk, hope a breakthrough comes quickly, and let the less pressing stuff continue to wait?

I have never found a satisfying solution to this particular kind of stuckness. What do you do when you’re caught in this deadline double bind?

Train of Thought

The funniest part of yesterday’s post — at least it’s funny to me — is how it got written: on my iPhone, on the subway. I remembered yesterday that, back when I started posting here semi-regularly again in the early summer, I began by jotting down some thoughts in this way, often standing with one elbow hooked around a pole, trying to keep my balance. I’d finish the posts started this way once I got in front of my computer. So I thought I’d try it out again, and yesterday’s post was the result.

Could my train of thought literally be a train of thought?

It’s more likely that these bursts of productivity on the train have to do with getting myself to start thinking before I get to my computer — in an environment with no network connectivity, where external circumstances often make it a good idea to pull inward and divert your attention from your surroundings.

I usually manage that by listening to French podcasts, which require a certain kind of concentration, but writing — perhaps a couple of quick paragraphs during the trip downtown — works even better, not least for helping to train my focus where I need it before I get to the office.

It’s easier to stay focused once I get there if I arrive with an idea already clearly in mind — one of those lessons that I think I need to relearn often.

Annals of Comment Spam

A few days back, I tweeted an amusing bit of comment spam I’d received that morning.

But there’s amusing comment spam and then there’s amusing comment spam. I’m not going to reproduce it here, but yesterday I received a comment that could conceivably have slipped past me, had Akismet not caught it. The comment was left on a recent travel-related post, and it related a travel anecdote, asking for advice on how to handle a somewhat bemusing interpersonal issue. And while my post seemed a strange place to ask that particular question, the story was well-enough written, and the concern seemingly sincere enough, that I might have let it get through. Akismet, however, flagged the address that the commenter left in the URL field, and so into the filter it went.

I find myself both relieved and troubled. While it would be great to get fewer comments telling me how helpful and brilliant and pretty and useful my blog posts are (or alternately that I should really work a bit harder on them), those are quite easily spotted and dispatched. If spammers start actually taking the time to ask substantive questions and post them in plausible places, will it become increasingly difficult to recognize spam when we see it?

It occurs to me that in fact I probably wouldn’t have missed the spammish nature of this particular comment, precisely because I didn’t recognize its author — even if I had been taken in by the tale, I wouldn’t have been ready to engage with the teller. Something in that leaves me both relieved and dissatisfied. On the one hand, I’m glad that relationships and the communities they create can help us weed out bad actors in networked spaces. On the other hand, if we find ourselves shutting folks we don’t (yet) know out of our conversations, how can those communities continue to develop?

Lies, Damned Lies

The elevators in our office building have these little monitors built into them, on which are displayed random tidbits of pseudo-news and other glossy distractions. Because god forbid we should be bored on the ride to the third floor.

Anyhow, the other day as I was leaving the office, the monitor was showing a little infographic that showed a steep decline in the number of hours per week Americans are working — from 40-point-something in 2009 to 34-point-something in 2011. The other person on the elevator and I looked at one another and both said “that can’t be right,” but there was no context other than “the number of hours per week Americans are working” and a series of numbers associated with years. We wondered at the time whether “Americans” meant all adult Americans or adult Americans with jobs, such that the steep decline indicated more people out of work. As I wandered off toward the subway, it hit me that even if the figures referred to adult Americans with jobs, the steep decline could indicate the growing part-time-ification of the workforce — in which case the drop suggests a growing under-employment problem, and not that Americans are opting to spend more time in the Hamptons.
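The arithmetic behind that last possibility is easy to sketch. Here’s a quick illustration — the numbers are invented for the sake of the example, not actual labor statistics, and `average_hours` is just a hypothetical helper — showing how the average can fall from near 40 to near 34 without a single full-time worker cutting back a minute, simply because the part-time share of the employed grows:

```python
# Hypothetical illustration (invented numbers, not BLS data):
# the average workweek can drop sharply even if no full-time
# worker cuts back, when the part-time share of workers grows.

def average_hours(part_time_share, full_time_hours=42.0, part_time_hours=20.0):
    """Weighted average of weekly hours across all employed workers."""
    return (1 - part_time_share) * full_time_hours + part_time_share * part_time_hours

# With 10% of workers part-time, the average sits near 40...
print(round(average_hours(0.10), 1))  # 39.8
# ...but with 35% part-time, it falls to around 34.
print(round(average_hours(0.35), 1))  # 34.3
```

Same full-timers, same part-timers, same hours per person — only the mix changed, and the headline number plunged.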

This stupid infographic has annoyed me since I saw it, not least because it demonstrates everything that’s wrong with the ways that statistics get used by the mainstream media: Look! A number! It must Mean Something! What gets missed, of course, is that the gap between numbers and meaning can only be bridged by interpretation, that such interpretation requires serious critical and analytical skills, and that while numbers may have a demonstrable basis in empirical reality, what they mean is not at all evident, and many interpretations of them may simply be wrong.

All of this has me thinking about some of the claims that have been leveled at the digital humanities in recent days, in particular that it’s a mode of “processing” texts that attempts to bring literary studies fully into the empirical, by counting rather than reading. And sure, there is some work in the field whose results look awfully numeric. But by and large, while the wide range of work covered by the umbrella term “digital humanities” has one foot in the digital — in the kinds of tools and methods many have associated with scientific or engineering or other empirically-oriented fields — the best of it has the other squarely in the humanities, and in the kinds of questions and concerns and modes of analysis and interpretation that arise out of those fields. And it seems to me today that one key role for a “worldly” digital humanities may well be helping to break contemporary US culture of its unthinking association of numbers with verifiable reality, by demonstrating the ways that such numbers only open the process of meaning-making.

The Public Scholar’s Two Bodies

I started this blog as an assistant professor, under conditions that were never fully pseudonymous but were perhaps semi-veiled, at least by the fact that very few people knew me, and even fewer of those who did knew anything about blogs. All of my colleagues, that is to say, were looking in the other direction, and so I was able to say more or less what I wanted. Only gradually did this odd collection of writings and reflections come to be associated with a professionally known me.

Even after that, it seemed perfectly reasonable for the persona I inhabited on the blog to be a bit personal, to think through problems I was actually facing, to be at times a bit worried and not entirely secure. I was, after all, an assistant professor, in an online community composed primarily of other assistant professors, and thinking in public through the anxieties associated with that role was part of the point — we were using the blog format to demonstrate to one another that however isolated we may have felt, we were not alone.

Nearly ten years have gone by, however, and I’ve not only been tenured and promoted (twice!) but I’ve moved into a new position, one that calls on me to take on a new kind of leadership role. And those changes now have me reassessing the kinds of writing that I can — that I should — be doing in a space like this one. A post like yesterday’s, exploring some concern that I’ve got about my relationship to my work, can leave me feeling overexposed today in ways that it never would have eight years ago. Or even five years ago: even after I was tenured I felt that it was important to model a way of being a scholar that didn’t hide the messy process of working out ideas behind the polished completeness they eventually take on, that didn’t disavow the insecurities and anxieties of academic life in favor of a self-doubt-free public persona.

But in my new role, I’m increasingly aware that there are two Kathleen Fitzpatricks in the world: on the one hand, one that’s taken on a form of public service, that represents a large and important organization, that has a mission focused on something bigger than myself, and on the other, one that’s… just me. It’s something a bit more than the usual public/private divide; it’s a split between a self that speaks with a voice that’s larger than itself, and a self that seems always too small, too local, ever to be spoken for publicly.

And so while I still find myself wanting to push back against what I’ve always found to be a pretty gendered mode of being an academic — always projecting confidence, being convinced of one’s rightness, putting forward arguments that are never anything other than unimpeachable — and instead model a kind of self-questioning that I am convinced is necessary for real intellectual and personal growth, I now increasingly wonder whether I can or should continue to do so as myself. There are questions to be asked about that mode of writing in and of itself, of course — is it possible to take on a project of open self-questioning without falling into an equally gendered mode of self-doubt and insecurity? — but there are also pressing concerns to be raised about whether the kinds of introspection the blog has long inspired in me can co-exist at all with the public role I have now chosen to occupy.

This wouldn’t be the first time I’ve announced an attempt to reconcile the blog with this public persona, and that I haven’t managed to do so yet bespeaks the difficulty of the project. But — in a stroke of what’s either meaningful irony or mere coincidence — I’m actually writing right now, for a public venue, about the importance of taking the work that gets done on scholarly blogs seriously. And that juncture, or disjuncture, depending on your view of it, has me thinking about the changing function of the public platform at the various stages of a career, the ways in which we all produce different voices at different moments, and the degree to which a coherent self can ever speak, or be spoken.

More Than Mere Polyester Would Suggest

Earlier this year, I attended a conference at which I was given a really nice fleece jacket. Really nice.

I’ve known for a while that I harbor a somewhat extreme love for this fleece jacket; it’s become my comfy home top layer of choice, getting some wear pretty much every day. But I hadn’t realized quite how important it was to me, I guess.

Because I was surprised this morning to awaken from a dream about the fleece jacket. I was on a very small plane, and there wasn’t room for my stuff in the cabin, so they took everything from me — including the fleece jacket — and put it in some exterior compartment of the plane, kind of like the luggage compartment of really big buses.

When we landed, what they handed back to me was not my fleece jacket. It looked like it, but it was a small, and I was pretty sure mine was a medium. I tried it on to check, and while the small did fit, it wasn’t quite as comfortable as I remembered my fleece jacket having been. And then I remembered the slight oddity about how my zipper pulls are attached, and realized that I could tell if this fleece jacket was mine by checking those pulls.

This was not my fleece jacket.

So I circled back around to where the rest of the passengers — maybe a hundred of them, which is weird considering how small that plane was — were waiting for their bags, and asked who had handed me this fleece jacket. Everybody pointed to one guy, who at first seemed to be about ten feet tall, and then appeared to be sitting up on a high shelf. I yelled up to him about the fleece jacket dilemma, and tossed up the one that wasn’t mine. He looked around and tossed me back down… a t-shirt. One I’d never seen before.

When I woke up, I was in negotiations with him to at least get back the small fleece jacket, if I couldn’t actually get mine. And was surprised by the level of relief I felt upon discovering that it was only a dream.

I am super curious what this fleece jacket — which I am wearing as I write — has come to stand in for in my unconscious. Whatever it is, I’m clinging to it pretty fiercely.


One of the things that I find fascinating just about every time I travel around Europe is the music playing in the background in restaurants, bars, hotels, stores, and so forth. It’s not terribly surprising that a bunch of it is American pop music, of course, but I’m frequently caught off-guard by what American pop music is playing.

I wouldn’t pay it much attention at all, I think, if it were relentlessly current — the stuff that’s being pressed on all of us, all the time — but what I hear here is often oddly dated, and yet not anything that would fall into the category of obvious “classics” that could simply fade into the background. There was one summer in Paris, for instance, when we heard George Benson’s Give Me the Night everywhere we went. And not just one song off the album, which might have rotated onto some weird retro playlist, but the entire album.

Here in Prague, it’s Tracy Chapman’s 1988 eponymous album. In one bar, it played start to finish, but I’ve also heard selections from it in at least three other places here, including our hotel’s lounge — and not just “Fast Car,” but several other singles as well.

What’s that all about? How is it that a 22-year-old album rotates back into currency this far from its origins?

On the Impossibility of Naive Reading

The recent New York Times Opinionator column by Robert Pippin, “In Defense of Naive Reading”, has had me thinking for the last week or so. I knew I wanted to respond right away, but I wasn’t sure how, exactly; there’s an awful lot in the post that I’m quite sympathetic to, and yet something in it rubbed me exactly the wrong way.

Part of the irritation arises from the degree to which the humanities as they’ve been studied for the last several decades are under attack. Again. (Including from within.) Pippin himself begins with the culture wars of the 1980s, a grim reminder of the repeated cycles within which academic practices within the humanities, and particularly within literary studies, come under scrutiny, especially in times of economic crisis. There’s no doubt a degree of “here we go again” in my annoyed response.

But there’s more to it than that, because I think there’s more at work in Pippin’s critique than any kind of simple attack on those silly humanists. “In Defense of Naive Reading” bears deep connections to a proliferating set of arguments calling for a revaluation of amateur experiences of literary reading, arguments for which I have a tremendous amount of sympathy; Ted Striphas’s The Late Age of Print: Everyday Book Culture from Consumerism to Control and Jim Collins’s Bring on the Books for Everybody: How Literary Culture Became Popular Culture are two of the most thoughtful texts in this category. Even more broadly, however, Pippin’s argument connects to the anti-institutional “outta my way, prof!” rallying cry of Anya Kamenetz’s DIY U and YouTube’s An Open Letter to Educators. And it’s a precariously fine line from valorizing extra-academic reading experiences to dismissing scholarly work on literary subjects as wasteful, pointless, and worthy of elimination.

So I find myself in the somewhat perplexing position of wanting to make a strong argument on behalf of public engagement with the materials of humanities research, and especially literature, while at the same time defending the importance of scholarship in the field, including that scholarship that involves a kind of discourse of the sort that might exclude the uninitiated. I want to defend the kind of close reading that Pippin celebrates, but I also want to defend the theory that he dismisses. The question, of course, is how to do both of these things at once, which then turns into a larger question: What is the function of literary scholarship, and how does it inform or distinguish itself from reading-in-general?

A key aspect of literary scholarship, and the part that perhaps most informs what goes on in the literature classroom, has to do with making what seems obvious appear strange instead, requiring the reader to step back from something that seems familiar and look at it from a new angle. The point is less to get the reader to think in some particular different way about the object than it is to get her to think differently about her own perspective with respect to that object.

And the key aspect of that endeavor is getting her to recognize that she has a perspective in the first place, one that is, by definition, non-neutral. And it’s this that makes me most want to argue with Pippin: not that I want to dismiss or displace the close, careful wrangling with primary texts, but instead to insist that no such reading can ever be naive, except in a not-so-faintly pejorative sense.

Every reading presupposes a theory, even where that theory is about the transparency of representation or about the existence of a text with defined borders. “Close reading” isn’t just careful reading with attention to detail; it’s a theoretical argument about where a text’s meaning is to be found, how it can be understood, and, perhaps most importantly, who is responsible for having put it there.

In that sense, the refusal of theory is not just a refusal of difficulty or abstruseness, but instead a refusal to lay perspective bare, or even to admit that there is perspective involved in the reading process in the first place. And lest it need be said: the admission of perspective in the reading process is not a slippery slope to some mythical anything-goes mode of postmodernist free-for-all. There is still evidence, analysis, and argument required in defending any particular interpretation of a text. But the point is that there is no singular, correct, perspective-free interpretation of a text.

In that sense, the value of literary theory has been in helping scholars and students tease out not how to read, but how they do read, how a lifetime of encounters with particular kinds of representations trains us to understand future texts. And, not incidentally, to help students think about other potential readings, and what they might reveal about the default positions of our culture.

The challenge for literary scholars, I would argue, is not to return to the kind of naive, untheorized reading that Pippin seems to espouse, but instead to find ways to express the significance of theoretical insights to a wider audience. That is to say that we should neither dig in our heels on the issue of difficulty, nor give up the kinds of work that we have taken on, but instead that we need to find better ways to convey — to our students, of course, but even more importantly to the reading public at large — why the work we do matters, and why the ways that we do that work matter as well.

In a time of crisis such as we now face, dismissing that public as anti-intellectual would be an enormous mistake — but so would be giving up on the kinds of rigor that much theoretical discourse can produce. The trick lies in finding ways to bring a broader audience into our arguments, and finding ways to make those arguments that demonstrate to that audience why they should care about them, and about the future of our fields.

[P.S.: Just as I finish this, I see that my friend and colleague Kevin Dettmar has posted about the same article. Great minds, etc.]

Why I Can’t Wait to Get My Hands on My New iPad, All You Haters Notwithstanding

So yes, I did pre-order an iPad, or actually pre-reserve one with my college’s bookstore. And I intend to pick it up first thing tomorrow morning. And I absolutely cannot wait.

This is not a cool thing to admit in at least some of the circles I travel in. The open source/open content folks I know are understandably concerned about the iPad’s status as a tethered device, closed to programs and content not Apple-approved. I get that, and I’m concerned about it, too. At least for the couple of hours it’ll take before somebody posts a jailbreak for it, the iPad will be a closed system.

Except: there’s that web thing. While web apps on the iPhone haven’t been quite as flexible as one might like them to be, those difficulties have been due at least in part to the restrictions on browser window size, and in part to the inconvenient crashiness of Safari. I have no sense, of course, that the latter problem will be fixed on the iPad, but the former certainly will be. And not having to use restricted mobile versions of web apps might change the game entirely; using Gmail in all its non-mobile glory might make me not care that it’s a web app. And as more and more of the stuff I do becomes browser-oriented, there’s decreasing cause for me to be concerned about the restrictions Apple places on the app store.

The other concern that many folks I know have voiced is that the iPad isn’t just tethered; in Jonathan Zittrain’s term, it’s appliancized. It’s a device primarily meant for consumption rather than production. And the more we allow our computers to devolve into appliances, the less likely they are to be generative devices, devices that allow for unexpected uses, for productive surprises, for hacking.

I agree with that logic, generally, but not as applied to the iPad in particular. The iPad is indeed primarily meant for consumption — which means that it can’t really replace the computer, and indeed shouldn’t. At least not yet, in any case; the iPad as it will be released tomorrow is a device that one can program for, but not yet a device that one can program on.

But that doesn’t mean that it will always be so. As Stephen Fry reminds us in his article in Time, the Mac was at its release “derided as a toy, a media poseur’s plaything and a shallow triumph of style over substance,” but the creativity that the Mac inspired transformed the landscape of personal computing; similarly, the iPhone was seen “as a plaything, but it transformed the smart-phone landscape.” None of us have any way of knowing what people will do with their iPads as yet, but don’t count ingenuity out. Engaging devices have a way of producing unexpected results.

I also take issue with the consumption/production divide that, as Matt Kirschenbaum pointed out this morning, is being reified by much of the technorati’s response to the iPad. On the one hand, I want to say “what’s so bad about consumption, anyhow?”; I’ve never been upset with my television for not allowing me to broadcast. And on the other hand, I also want to note the myriad ways that consumption has always led to production, has always been a necessary stage on the way to production. Writing is something we should all aspire to, but writing without reading is an impossibility. Devices that can provide for more engaging reading — and I mean that in the broadest sense, not just in the interaction with text but with images, audio, video, games — will inspire new kinds of writing, new kinds of creative production, in forms that we can’t as yet even imagine.

Play is inspiring. And as of tomorrow morning, I hope to be inspired, in new ways.

The Rise of the Landscape Web

I’ve noticed over the last couple of months that several of my favorite websites have been getting, well, wide. It’s become increasingly common, in fact, for me to find myself scrolling sideways as well as up and down when out there browsing, and frankly, it’s getting to be a bit annoying.

But with my entry (yes, at last!) into the ranks of those who are getting to play with the Google Wave preview, it hit me: the fundamental orientation of the web is changing. And Wave may well cement that change.

Here’s the thing. Early web pages were composed vertically, in portrait layout, partially because of the limitations of screen width and partially because of the rear-view mirrorism that caused us to think about these new digital forms as “pages.” That concept has proven surprisingly sticky: web “pages” scroll vertically to this day, and very few sites have played with the horizontal axis.

Enter Google Wave, however (and possibly, as its necessary precursor, Google Chrome, though being a Mac user I can’t really speak to that at all). Its three-column orientation demands horizontality — if the columns are too narrow, you lose a lot of the toolbar options, and everything just feels out of proportion.

So this makes me wonder, if Wave gets the kind of buy-in that the hype suggests, whether we’re seeing the fundamental orientation of the web switching from portrait to landscape — not that we won’t still be scrolling vertically rather than horizontally, but that the basic screen unit will be wider than it is tall.

This has deep implications for contemporary web design, I think, and not least for me; the other Planned Obsolescence works quite well in a wide window: you can stretch the main text and comments columns to be as wide as you would like. But it doesn’t work well here at all, as I’ve been using a fixed-width theme, and that ugly gray background block at right just gets bigger and bigger.

I’ll be curious to see whether this shift becomes — no pun intended — broader. Is the basic assumption of web layout becoming landscape? How do we organize a wider window?