Analysis

November 19, 2018

Chatbots: should they pretend to be human?

Chatbots are being used increasingly across Europe — should they tell us that we are chatting with a bot?


Layli Foroudi

5 min read

Chatbots are becoming increasingly popular among businesses across the world, but a debate has emerged over whether they should identify themselves as bots or pretend to be real human beings.

Before he built his chatbot, Sagar Gupta pretended to be one. He chatted with 40 users over three weeks to see if they were open to interacting with a bot — or what they thought was a bot. He didn’t change the way he wrote to make himself sound more mechanical, as he wanted the bot to “keep as friendly and human a style as possible.”

Now Gupta’s chatbot, Ally Chatbot (as in a supporter or friend, not short for Alison), is just over a year old. It provides advice via a pop-up chat on housing access, evictions and welfare for housing associations and the YMCA, and it is about to launch two pilots: one with the Sanctuary Trust, and one with Hackney Council as the first government housing advice chatbot. It was while working at an under-resourced YMCA homeless hostel in north London that Gupta noticed that the information residents needed was often quite straightforward. He thought a chatbot could help more people get the support they needed.

He’s not alone. Chatbots are being adopted by companies across industries in Europe, from banking to ecommerce to healthcare. According to a global survey by BI Intelligence, European countries were the most receptive to the idea of chatbots, with France being the most keen. In another survey by Oracle, 80% of people working in businesses in France, the Netherlands, South Africa and the UK said they had already used chatbots or planned to use them by 2020.

Unlike Gupta and his bot charade, Ally is very clear about who — or what — it/he/she is: a bot. “Hello! I’m a friendly robot, here to help with any issue you might have with your housing,” says Ally the Hackney Council chatbot, by way of introduction. Meanwhile, Ally the advisor bot for residents of Southern and Origin housing group properties sends a seal GIF followed by: “Sadly, I’m not actually a seal. I’m a robot — but a helpful robot.” (Honest, not funny.)

Ally’s conversation designer Jessica Moens — a former housing advice officer — feels that users should understand the service they are using. “People may be uncomfortable not knowing who they are speaking to,” she said, “so it works to create a sense of transparency… especially given the delicate nature of the subjects covered.”

Such transparency is going to be required by law in California. Last month, new regulation was passed, obliging chatbots to declare their non-humanity. The law was prompted by deceptive commercial and political bots that are causing harm — not helpful bots. But even so, problem-solving machines will need to self-identify as such when the law comes into effect in July 2019.

Encouraging transparency in AI can only be a good thing — if people are giving away information, they should know where it is going and who or what is receiving it. And as well as building trust, sometimes it actually helps to know you’re speaking to a bot. Kriti Sharma, an AI technologist at Sage whose bot Pegg acts as an accounting assistant for small companies, says that bots allow people to “ask stupid questions” that they may not ask a colleague.

Julia Shaw, co-founder of chatbot Spot, agrees. “It’s an asset not to be human,” she said, in conversation with Sharma at an FT event in London last night. Shaw’s chatbot Spot uses a cognitive interview to help individuals document workplace harassment — either to send anonymously to their employers, or to just have on record while they still remember it. With a bot, she said, people may reveal things and report experiences without fearing judgement.

When testing Ally chatbot on YMCA residents, Gupta also found that the anonymity offered by an app was an asset. “People don’t want to share all of their information. We [the housing advisors] used to struggle to build up trust because they also saw us as their landlord.”

Ailsa, another housing advice chatbot at Shelter Scotland, tells people who she (it?) is at the start of the chat, but this is an act of self-preservation more than anything. “We anticipated the fact that users would get frustrated. They’re in a tough situation, with heightened anxiety,” said Conrad Rossouw, digital manager at Shelter, adding that users still get frustrated with Ailsa, saying things like “you’re stupid”, “you’re not a real person” and “you’re not understanding what I’m saying!”

The question “are you a bot?” comes up regularly in the Shelter Scotland live chat transcripts (the live chat is run by an undeclared human). Rossouw thinks that people are comfortable interacting with bots, but they want to know so that they can tailor their questions.

But could it be something deeper, more existential? A 2016 study by Goldsmiths University and Mindshare found that 48% of Brits find it “creepy” when chatbots pretend to be human. In the same study, 25% said they would rather share sensitive information with chatbots than with humans.

My most recent interaction with a customer service chat was on the Barbican arts centre website. It was a seamless interaction: my questions were answered and my problem was solved, but I wasn’t sure whether I was speaking to a very good bot or a human.

I went back to the Barbican live chat and asked. Forty minutes later, Michael got back to me: “Hello I can confirm we are definitely human.”

Perhaps humans, and not just bots, should be identifying themselves?