Evan Wheeler at Botcon 2016

How to stop your chatbot falling into the “uncanny valley”

Although many of us carry a touchscreen computer in our pockets, an everyday occurrence that would have seemed like science fiction a decade ago, plenty of us still have an eye on the future.

Where, we ask, is our flying car? Where is our teleportation device? Heck, at this stage, where are our robots like Rosie from The Jetsons or Robby from Forbidden Planet? Where's R2-D2, dammit?

Instead we're expected to make do with autonomous cleaning robots or the likes of Siri and Cortana, which aren't exactly setting our world alight. Our robots, in short, could be better, and according to Unicef's CTO, Evan Wheeler, they may well be on their way to being so. We just have to get over our inherent fear of them.

The Second Chatbot Revolution

According to Wheeler, it's an exciting time for bot creators and users. We're in the middle of what he calls the second chatbot revolution. The first occurred at the turn of the century (you may remember Ask Jeeves), and then the bottom fell out of the dot-com boom. VC money dried up, precipitating an AI winter.

The reason this new chatbot revolution is different is that Natural Language Processing (NLP) and Machine Learning (ML) have improved vastly since the dawn of the noughties. Tools are largely open source, data is freely available, and the community creating bots is bigger and a lot more open. Also, in the past bot development was off-limits to anyone who didn't own a supercomputer, which isn't the case today.

The challenge facing many developers, Wheeler says, isn't a lack of tools or technology. It's the fact that human beings inherently mistrust bots, and by extension AI and robots, the more closely they come to resembling humans.

The Uncanny Valley

The uncanny valley is a term first coined by roboticist Masahiro Mori, and it's used to explain the revulsion or mistrust humans feel towards entities that are close to, but not quite, human. This may explain why some people have a fear of zombies or clowns, for example. We feel neutral towards robots that look nothing like us, such as assembly robots at a car plant or silver-screen droids like R2-D2 and WALL-E.

But a robot like Sophia, made by Hanson Robotics, sets our nerves on edge. Wheeler says that a robot like Sophia plonks us deep into the uncanny valley: humans pick up on qualities that are out of place, and our evaluation is unfavourable. In short, Sophia freaks us out and we don't trust it.

So what has all of this got to do with the chatbot revolution? Well, Wheeler says that a current trend in development is to try to create bots that ape human responses, not just in terms of the information passed from the bot to a human, but in terms of tone. The problem arises when the bot's mimicry veers too close to the real thing, expressing, say, sarcasm or sympathy. In these instances, humans react negatively to things that nearly look or sound like us; we don't trust them and we try to get away from them.

In short, when something tries to ape us, we feel it is trying to trick us.

Acceptance through Discoverability

Wheeler says that the process by which human interaction with technology becomes successful is called discoverability: it's basically how humans figure out how to use tech.

Discoverability can be broken down into five component experiences: affordances establish our relationship with an item, signifiers point us in the direction of its use, constraints establish boundaries and peak capabilities, mapping connects our actions to the item's functionality, and feedback dictates our future use. If all conditions are met, an ongoing relationship is established, which Wheeler calls a conceptual model.

For example, a chair can provide you with a place to sit (affordance), and you note that the seat appears to be where you plant your backside (signifier). Once seated, you won't be able to move too far (constraint), the motion of sitting is pretty easy (mapping), and you're comfortable once you're in it (feedback). So it's likely you'll use the chair again (conceptual model).

Work, damn you!

The other possible outcome to the above scenario is that one or more of the conditions isn't satisfied, and this results in failed expectations. When this happens, we humans lose our shit. As an example, think of the door-close button on an elevator. Have you only ever pressed it once? Or, after initially pressing it and seeing nothing happen (no feedback), do you then proceed to hammer it until the elevator doors close?

That's a completely human reaction, and the same thing happens with software. When a program or app doesn't behave as advertised, humans react badly. Chatbots are in a higher-risk category in this regard since, due to the uncanny valley effect, most of us don't like them to begin with. All of this makes bot development something of a minefield for creators.

Avoiding the uncanny valley

Wheeler says, however, that there are a couple of rules of thumb developers can bear in mind when creating bots. First, avoid uncanny behaviour; you don't mind a bot telling you your bank balance, but you wouldn't really want its consolations on the death of a loved one. Remorse and culpability simply aren't believable when expressed by a bot.
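To make that first rule concrete, here's a minimal sketch, not from Wheeler's talk, of how a bot might screen for emotionally sensitive topics and hand the conversation to a human rather than attempt sympathy. The phrase list and function names are illustrative assumptions; a production bot would use a trained classifier rather than keywords.

```python
# Illustrative sketch only: a crude rule-based screen that routes
# emotionally sensitive messages to a human agent instead of letting
# the bot attempt sympathy. The topic list and handler names are
# assumptions, not something described in Wheeler's talk.

SENSITIVE_PHRASES = {"died", "passed away", "funeral", "grief", "bereaved"}

def is_sensitive(message: str) -> bool:
    """Naive substring check; a real bot would use a trained classifier."""
    text = message.lower()
    return any(phrase in text for phrase in SENSITIVE_PHRASES)

def handle_routine_query(message: str) -> str:
    # Placeholder for the bot's normal intent handling (balance checks etc.).
    return f"Here's what I found for {message!r}."

def respond(message: str) -> str:
    if is_sensitive(message):
        # Don't fake remorse: say plainly that a human will take over.
        return "I'm connecting you with a member of our team who can help with this."
    return handle_routine_query(message)

if __name__ == "__main__":
    print(respond("What's my bank balance?"))
    print(respond("My father passed away last week."))
```

The detection method matters less than the routing decision: when a topic calls for genuine empathy, the bot gets out of the way rather than imitating it.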

Second, never make consumers start over. If an exchange between a bot and a human results in the latter having to restart an entire query process, development has failed.
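And here's an equally hedged sketch of what "never start over" can look like in practice: per-user session state that survives a misunderstood answer, so the bot re-asks a single question rather than wiping the user's progress. The slots, prompts and accepted answers are invented for this example.

```python
# Illustrative sketch only: per-user session state that survives a
# misunderstood answer, so the bot re-asks one question rather than
# forcing the user to restart. Slots, prompts and accepted answers
# are invented for this example.

QUESTIONS = [
    ("account", "Which account: current or savings?", {"current", "savings"}),
    ("period", "A statement for the past week or the past month?", {"week", "month"}),
]

class Session:
    def __init__(self):
        self.answers = {}  # persists across turns, so progress is never lost

    def pending(self):
        """Return the first unanswered (slot, prompt, allowed) triple, or None."""
        for slot, prompt, allowed in QUESTIONS:
            if slot not in self.answers:
                return slot, prompt, allowed
        return None

def handle_turn(session, user_input):
    step = session.pending()
    if step is None:
        return "All done. Anything else?"
    slot, prompt, allowed = step
    answer = user_input.strip().lower()
    if answer in allowed:
        session.answers[slot] = answer  # record progress and move on
        nxt = session.pending()
        if nxt:
            return nxt[1]
        return (f"Fetching your {session.answers['account']} account statement "
                f"for the past {session.answers['period']}.")
    # Failure path: re-ask only this question; earlier answers are kept.
    return f"Sorry, I didn't catch that. {prompt}"

if __name__ == "__main__":
    s = Session()
    print(s.pending()[1])             # bot opens with the first question
    print(handle_turn(s, "current"))  # answered: bot moves to the next question
    print(handle_turn(s, "banana"))   # misunderstood: only this question repeats
    print(handle_turn(s, "month"))    # the query completes without a restart
```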

A strong personality can be an asset for your bot, depending on what field it's deployed in. For example, the developers behind Nabu, a bot deployed as a medical assistant, had a Hollywood screenwriter create an entire back story for the AI in order to deepen its interaction with patients. This helped immensely with consistency, which in turn built trust between the bot and the patient. And since the bot was housed in a cute, cartoon-like shape, there was never any chance of patients descending into the uncanny valley.

Above all, Wheeler says, don't think of bots as replacements for humans. They're not, and at this stage they can't be. Think of them the same way you would a calculator: the device alone is not going to advance the field of maths, but it won't make errors.

Take a human and a bot and you have a powerful combination. Bots aren't replacements; they're force multipliers. That's how they should be used.
