May 2021 | Volume 22 No. 2
Cover Story
Unmasking the Machine
When a South Korean call centre decided to bring robots into the workplace, it asked Dr Sara Kim to study its employees’ response. The firm had told workers they would not be replaced, but it wanted to better understand their insecurities and their perceptions of these new ‘co-workers’. Dr Kim’s preliminary results offer lessons on how to integrate people with AI-powered technology.
The key variable was people’s tendency to humanise their robot co-workers. Some employees were more inclined to do this than others, and those who did felt more threatened by the technology. “Those who construe robots or digital agents to be more human-like tend to treat the robots like real people who can replace their job, whereas those who treat it like a machine are less likely to feel threatened,” she said.
For managers, the finding signals a need to resist the urge to put a smiling, winking face on a digital assistant or robot in certain environments. “There is some backfiring effect when human-like features are adopted for technological assistants,” she said. “The workplace is one environment where that can happen, but I think it extends to competitive atmospheres in general. Once the robot is seen as a competitor, you better not impose human-like eyes or mouth or head, or else it can evoke uncomfortable feelings like insecurity.”
An exception that she found proves the rule: if the company culture is highly collaborative and evaluates teams rather than individuals for promotion, human-like features on a robot will not necessarily be harmful. “But if the environment is competitive, you better have a box shape,” she said.
Thunder stolen
The findings echo earlier work by Dr Kim on user responses to digital assistants in computer games and education software that either had no human features (such as a plain laptop image) or had human features (such as a laptop superimposed with eyes and a smile). The assistant gave instructions or hints on playing a game or solving a maths problem.
“Practitioners assume that adopting these human forms can create friendly, warm images that help people interact more smoothly with digital agents and robots, and while that is often true, it isn’t always the case. That is the core of my research,” she said.
With computer games, she found that human-like characters dampened players’ enjoyment, possibly because players felt undermined. “A major reason why people play computer games is because they want to feel a kind of autonomy or ownership over the outcomes so they can feel good about themselves. That thunder seems to be a little stolen when human-like icons give hints,” she said.
The findings were more complex with education software because they depended on the subject’s own beliefs about intelligence. Working with an educational and developmental psychologist, Dr Kim tested college students who believed either that intelligence was fixed from birth or that it depended on effort. Those who believed intelligence could not be changed were negatively impacted by human-like digital assistants. “They felt bad about themselves and that they must be dumb, so they were not motivated to do the next task and they performed worse on it,” she said. “Non-human assistants did not make participants feel they were being judged, so they were not necessarily reluctant to get help from them.”
But sometimes human features are preferred
Dr Kim hopes to replicate the findings about educational software with young children, to determine the age at which they start to feel threatened by human-like figures. “Kids five or six years old might not have a problem with it,” she said. “I want to see when this belief starts to have an effect, although I don’t think it affects everyone.”
People’s feelings about their technological helpers can also create internal conflict when it comes to performing workplace duties. In medical settings, for instance, other researchers have shown that staff may feel their job, and even their sense of human identity, is threatened, so they may be reluctant to use robots even when the robots serve patients better, such as by helping them out of bed.
But Dr Kim is at pains to point out that there are instances when people prefer human-like characteristics in technology: patients, for instance, may prefer human-like robots. And in another of her studies, consumers derived more pleasure from products when they humanised them. “When you name your phone or your car, you tend to feel happiness from experiences with that product at a similar level to experiential products, which people usually prefer over material things,” she said.
That also gives rise to complex feelings about replacement. “Other researchers have shown that when people treat a product more like a human, they are less likely to discard it. If you want to discard it, it is better to treat the product like a tool and not give it a name,” she said.