We seem to be having a robot moment. Last month, The New York Times introduced us to Baxter, a robot designed not only to do assembly-line work but, with its roughly human form and big, widely spaced eyes, to appear less threatening to the workers it will presumably be replacing. (Baxter's creators say it is not meant to displace human workers, but they also note it can do menial tasks for about $4 an hour.) In the recently released film Robot & Frank, a robot working as a health aide to a retired jewel thief becomes his partner in crime. And if you've got a newish iPhone, ask Siri to "Open the pod bay doors," repeatedly. She gets irritated, and as long as you're not on a mission to Jupiter, it's pretty funny.
Now there's research from Northeastern University, the Massachusetts Institute of Technology, and Cornell University showing just how readily we ascribe human motives to robots, and evaluate them with the same subconscious tools we use on people. The researchers started by isolating some of the body language we rely on to decide whether someone is trustworthy.
For most of us, these gestures are slightly suspicious:
- Leaning back
- Touching your hands to each other
- Touching your face, especially your nose
- Crossing your arms
These gestures, by contrast, suggest genuine interest:
- Nodding your head as someone else is speaking
- Making eye contact (without staring)
- Leaning in slightly
- A bit of laughter
What the researchers from Northeastern found is a little scary: We maintain our reactions to body language, honed over eons, even if that body language is being exhibited by a robot.
The researchers didn't use just any old bot. The experiments were done using Nexi, a robot designed by MIT's Media Lab to be particularly sociable. I thought this was ridiculous until I watched a 24-second clip of Nexi introducing herself. (Although the researchers refer to Nexi as "it," in the clips I've seen, the voice is clearly female.)
Between her huge blue eyes, nodding head, and fluid hand gestures, Nexi makes Star Wars' C-3PO look like, well, a robot. Nexi blinks and nods her head appreciatively. I'm sure there are good reasons for this, but I find it somewhat duplicitous. If we're going to be talking to a robot, why not be honest about the fact that it's a robot? Marketing aside, why does a robot need to be cute? Why do Baxter's creators downplay references to 'programming' their robot, preferring to explain that it can be 'taught' certain tasks by humans? So we don't see it as a robot, I would think.
And why do both Baxter and Nexi have large, widely spaced blue "eyes"? Consider that predators generally have eyes that are close together, giving them excellent depth perception. Animals that are more often prey tend to have eyes that are far apart or on opposite sides of their heads, giving them a more panoramic view and a better chance to escape whatever's coming to eat them. Put us humans in front of a creature whose eyes are far apart, and we're a lot more comfortable than we would be in front of a close-eyed one who might see us as a tasty snack.
Trusting a Robot
In the experiment, no one had to worry about being eaten. Instead, each participant chatted with Nexi for 10 minutes. The conversational topics were restricted to those that Nexi (and sixth-grade essay writers) can handle easily: What did you do this summer? What do you like about living in Boston? In some cases, Nexi used no body language; she just sat there. In others, she gestured in ways that we often find suspicious: she leaned back, touched her hands to each other, touched her face, or crossed her arms.
When the researchers ran the experiment with two humans, rather than one human and one robot, the pairs got five minutes to chat instead of 10. The researchers figured the extra five minutes gave participants time to acclimate to Nexi.
Then the participants were asked to play a classic trust game with Nexi. They also filled out a questionnaire meant to determine how likable they found Nexi.
The trust game works like this: each partner gets four tokens, worth $1 each to the holder. Any token given to a partner doubles in value, to $2. In the most altruistic scenario, the two participants swap all their tokens, and each ends up with $8. But the best outcome for any individual is to keep all four tokens and hope the partner gives up all of theirs: the selfish player then walks away with $12 ($4 in kept tokens plus $8 in received ones), while the generous partner gets nothing.
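The payoff arithmetic above can be sketched in a few lines of Python (the token values come from the article; the function name and structure are our own illustration, not the researchers' code):

```python
def trust_game_payoff(tokens_kept, tokens_received, kept_value=1, given_value=2):
    """Payout for one player: kept tokens at face value ($1),
    received tokens at the doubled value ($2)."""
    return tokens_kept * kept_value + tokens_received * given_value

# Each player starts with 4 tokens.
# Mutual altruism: both give all 4 away, so each keeps 0 and receives 4.
assert trust_game_payoff(tokens_kept=0, tokens_received=4) == 8

# One-sided selfishness: keep all 4 and still receive the partner's 4.
assert trust_game_payoff(tokens_kept=4, tokens_received=4) == 12
# The generous partner keeps nothing and receives nothing.
assert trust_game_payoff(tokens_kept=0, tokens_received=0) == 0
```

The doubling is what makes the game a measure of trust: giving is collectively better, but only pays off for you if your partner reciprocates.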
Here's the thing: the study participants gave fewer tokens to Nexi when she showed negative body language. They also expected to receive fewer tokens from her. We won't know exactly how many tokens Nexi got until the research is published (it's pending publication now), but the working paper says Nexi's body language made a significant difference, even though Nexi, despite her charms, is clearly a robot that just does whatever she's programmed to do.
Interestingly, the negative body language didn't cause people to 'like' Nexi any less, at least on a conscious level. That could be because, in robots as in humans, there will always be individuals we like but don't trust with our money. Or it could be that the folks in the study were so surprised to be chatting with such an unusual robot that 'likability' was hardly an issue. It may take us much more than 10 minutes to get used to robots like Nexi. -- KW
Got a story idea? Think we're fabulous? Email us at more [at] onethingnew [dot] com, '@' us on twitter, or visit us on facebook. And spread the word. We need your help getting the word out about what we're up to!
Image courtesy of MIT Media Lab