Becoming Human

This brief presentation was part of a panel at MLA 2020 in Seattle entitled “Being Human, Seeming Human.” The panel brought together researchers from Microsoft with a couple of DH folks (me and Mark Sample) to talk about the history of research into artificial intelligence and conversational agents, some current experiments and challenges in the field, and the possibilities this work creates for literary artists today. The role I took on, as the last speaker in the session, was to raise some questions about how our engagements with these conversational agents might be affecting us.

My presentation required me to open with a mildly mortifying revelation: When I was young, I took a lot of unconscious cues for how relationships were supposed to work from the ways they were represented on television.

[Image: Screenshot from Dynasty]

This, perhaps needless to say, was a terrible mistake, which I discovered full-force the first time I ended an argument with an incisive, cutting one-liner and stormed out of the room. The person with whom I was arguing did not chase after me; there was no stirring emotional reunion. There was no sense in which I got to feel like I’d won. There was only a deep breach of trust, leading eventually to the loss of a relationship and the realization that so much of what I’d ingested as a child had been utterly wrong, that real connections between actual humans could not survive the kinds of dramatic behavior I’d been encouraged to think I was supposed to emulate.

This of course seems like a no-brainer now. Perhaps it’s just one of those things you shed in the process of maturing, but it’s hard for me today to imagine taking the relationships I see enacted onscreen to have much to do with my actual relationships in the world.

[Image: Screenshot from Friends]

I know I’m not alone in my prior mistake, though; I have a close family member who once confided in me that she had been sorely disappointed to discover that as an adult she did not develop a cluster of relationships like those portrayed in “Friends.” I understand her disappointment; I was similarly saddened to discover that the world was not inclined to serve as a receptive backdrop for my self-dramatization.

What does this have to do with the current state of the development and deployment of artificial intelligences and conversational agents in online environments?

[Image: Screenshot from Her]

Only this: as we engage with more and more non-human actors in technological environments, we may be prone to think of one another — and indeed ourselves — as less than human.

I want to be clear, though: like my failed understanding of the ways that relationships on television distorted and misrepresented actual emotional interactions among actually existing humans, the fault is not in the quality of the writing. “Better” television would not have produced a better understanding of human engagement.


Similarly, “better” conversational agents will not lead to more humane interactions online. The problem lies rather in a prior category error that makes it difficult for us to separate selves from self-representations. And it’s this category error that has led to what I increasingly think of as the failed sociality of social media.

That argument, in very brief, points to the ways that social media has promoted and benefited from a misunderstanding that mistakes connected individualism for real sociality.


Yes, we engage with one another’s self-representations on these platforms, but the engagements are not real sociality, any more than the self-representations are our actual selves. We are cardboard characters in a poorly imagined drama, often behaving toward one another in ways that real relationships cannot survive — in no small part because social media platforms are heavily based around and in turn feed our cultural tendency toward competitive individualism, a tendency that slides all too easily, inexorably, into the cruel.

This argument — that social media as we participate in it has never been and in fact could never be social — requires me in this presentation not only to acknowledge my somewhat mortifying childhood failures to discriminate between representations of relationships and actual relationships, but also to acknowledge my much more recent failures to think all the way through the potentials of the proliferating platforms we use for online interaction and the ways they might transform scholarly communication. My assumption in my earlier arguments was that such two-way, many-to-many communication would open up channels for new, better ways of working together. Today, I am far less sure. This is not to say that I want to abandon those platforms or the possibilities they present for communication, but it is to say that I now recognize the extent to which our networked interactions with one another are not going to transform the academy, much less our society, for the better until we become better humans. To the point of today’s conversation: a huge part of becoming better humans is bound up in how we recognize the humanity of others, and the representations we create of that humanity — whether dramatized on television or functionalized as conversational agents — not only draw heavily on our most unspoken assumptions about one another but also set the course for how we’ll treat one another in the future.

[Image: Code over a cyborg face]

Here’s the thing: what we’re producing in more human-seeming agents is in fact more human-representation-seeming agents, which is to say portrayals of our ideas about what “humans” are. In the case of conversational agents and other kinds of AIs, the emphasis is on intelligence — and intelligence, at least in the ways it can be modeled, is not the same thing as humanity. And perhaps that’s all fine as long as those agents remain tools. But countless examples, from adorable kids talking to Siri and Alexa, to trolls online tormenting bots like Tay, demonstrate the ways we all blur the lines in our interactions with these agents. And I don’t think there’s that much of a leap between trolls tormenting Tay and Gamergate, or revenge porn, or swatting, or any of the other innumerable ways that new technologies have facilitated the violent, racist, misogynist, dehumanizing treatment of people online.

[Image: Shadows of people on a beach]

So we have to ask some hard questions not just about the AIs and conversational agents being developed, and not just about the algorithms that allow us to interact with them, but also about the ways that we interact with one another on equally technologically mediated platforms. For what definitions of “human” are we building human-seeming agents, and why? If our models for the human mistakenly substitute intelligence for humanity, what becomes of emotion, of kindness, of generosity, of empathy? How do those absences in models for the human pave the way for similar absences in actual human interactions? And how does the consequence-free inhumane treatment of conversational agents encourage the continued disintegration of the possibilities for real sociality online?

65 responses to “Becoming Human”

  1. Paging @LMSacasas @SofiaCarozza @a_n_a_berg @mariachong @danieljbrunson @remingtontonar @DanielleMorrill and @juliagalef

    Yes, you’re an odd cohort to be grouped together. But I think you’ll all find something worthwhile in @kfitz‘s piece.

  2. Thank you, Kathleen, for your honest assessment. Time and space constraints prevent listing society’s ills caused by the dehumanization of communication. Depositories of digital knowledge cannot take the place of a mentor’s tacit knowledge. And journaling and letter writing are practically lost arts. We are like a blind person trying to lead another blind person. Only fools think abolishing printed matter is a good thing. (If you believe untrained eyes can navigate the Internet, do you also believe untrained eyes can fly an airplane, or perform heart surgery?) Social media’s “power” base continues to grow, despite evidence that its data can be easily corrupted. And fewer and fewer people know how to participate in “live,” face-to-face conversations.

  3. “If our models for the human mistakenly substitute intelligence for humanity, what becomes of emotion, of kindness, of generosity, of empathy?”

  4. Thank you! Everyone, read this

  5. Kathleen

    Here’s a circuitous attempt at an answer to your question “For what definitions of ‘human’ are we building human-seeming agents, and why?” Acts of kindness depend on obtaining the attention of those to whom we wish to be kind. I start with the transactional nature of kindness rather than intransitive empathy. If we believe there is a link between kindness and attention, we can entertain that cruelty is governed by a failure to gain attention. Those human-seeming agents must have as one of their purposes to help humans hone their skills at the phatic function of communication (testing the channel for connection) and the conative function (engaging the addressee).

    Remember that annoying paperclip guy (with the option to display as a mini Einstein) from Microsoft Office? That sound of tapping against glass to attract attention? Or spooky HAL in 2001: A Space Odyssey? Both are predicated on a relation to the machine as one relates to a servant. You can see how difficult it is to imagine a machine as being more than a servant.

    But doesn’t our future humanness depend upon being able to “animate” the world of artefacts in a fashion similar to how we are learning to view natural habitats as offering ecological services? By “animate” I do not mean to ensoul. I mean to treat the object or subject before us as a carrier of history and worthy of some attention. Ironically, to improve human-computer interaction, we on the human side may have to be kinder to things.

    Attention-giving does not come only in the form of intense tête-à-tête. It does involve a little bit of cognitive headroom (microseconds on each task). Whether we are dealing with an object, tackling a task, or contemplating an interaction with a person, our choices are do, dump, delegate, or defer, and to decide which of the four. Easy to map to a five-digit hand.

    The machine is a playmate in this ongoing game of micro-theatre. How? By offering moments of serendipity enabling us to live our lives with sprezzatura — grace in all the details and kindness to all.


  6. “…the extent to which our networked interactions with one another are not going to transform the academy, much less our society, for the better until we become better humans.”


