Here’s a circuitous attempt at an answer to your question: “For what definitions of ‘human’ are we building human-seeming agents, and why?” Acts of kindness depend on obtaining the attention of those to whom we wish to be kind. I start, therefore, with the transactional nature of kindness rather than with intransitive empathy. If we believe there is a link between kindness and attention, we can entertain the idea that cruelty is governed by a failure to gain attention. Those human-seeming agents must then have, as one of their purposes, helping humans hone their skills at the phatic function of communication (testing the channel for connection) and the conative function (engaging the addressee).

Remember Clippy, that annoying paperclip assistant (with the option to display as a mini Einstein) from Microsoft Office, and its sound of tapping against glass to attract attention? Or the spooky HAL in 2001: A Space Odyssey? Both are predicated on relating to the machine as one relates to a servant. You can see how difficult it is to imagine a machine as being more than a servant.

But doesn’t our future humanness depend upon being able to “animate” the world of artefacts, in a fashion similar to how we are learning to view natural habitats as offering ecological services? By “animate” I do not mean to ensoul. I mean to treat the object or subject before us as a carrier of history, worthy of some attention. Ironically, to improve human–computer interaction, we on the human side may have to be kinder to things.

Attention-giving does not only come in the form of an intense tête-à-tête. It does, however, require a little cognitive headroom (microseconds on each task). Whether we are dealing with an object, tackling a task, or contemplating an interaction with a person, our choices are to do, dump, delegate, or defer; deciding which of the four applies is itself a fifth act. The scheme maps easily onto a five-fingered hand.

The machine is a playmate in this ongoing game of micro-theatre. How? By offering moments of serendipity that enable us to live our lives with sprezzatura — grace in all the details and kindness to all.