July 30, 2019

Conversational Chatbots

Today, everybody is trying to build a chatbot. With an array of new platforms and software packages, the space has opened up for average joes to create the technology, removing the need for skilled developers. Developers can arguably do a better job with APIs and other connectivity tools, and I wouldn’t advise against hiring them, but for those just starting out, you can have a chatbot set up in minutes.

So, what’s the problem? If anybody can build a chatbot at low cost to facilitate a better customer experience and improve business efficiency, then why don’t they just go ahead and do it? The answer is that making an intelligent chatbot is still a huge challenge. Whilst a bot might be able to answer 100 questions about your company, it is rare for one to have the depth and contextual understanding to hold a conversation.

Think about how you would talk to a friend. If you try to talk to a bot in exactly the same way, chances are it won’t understand. Have a go with your Alexa or Google Home. They quickly get confused. Enriching the conversational side of chatbots is the only way they are going to move forwards. Without that, we risk chatbots becoming purely a marketing tool and not a driver of business growth. There are plenty of examples where companies build bots for the sake of having a bot.

Chatbots rely on data

To start, chatbots rely on data. When a user asks a question, they need to have a knowledge base capable of answering it.

The conversations humans have with bots are powered by machine learning algorithms that break down your messages using natural language processing (NLP) techniques and respond to your queries much like a human on the other side would.

At face value, when using a chatbot, it looks quite simple but there is a massive amount of work going on in the background.

A chatbot needs to understand what a human has given as an input. This is done using text classifiers, algorithms, neural networks, natural language understanding (NLU) and natural language processing (NLP). The bot will look for some sort of user intent and respond with an appropriate message that attempts to sound like an organic, natural reply.
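To make that concrete, here is a minimal sketch of the intent-matching step using scikit-learn. The training phrases and intent labels are invented for illustration; a real bot would be trained on far more examples.

```python
# A toy intent classifier: TF-IDF features plus logistic regression.
# Training phrases and intent labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "what time do you open",
    "when are you open until",
    "how much does delivery cost",
    "what are your delivery charges",
    "i want to cancel my order",
    "please cancel the order i placed",
]
intents = [
    "opening_hours", "opening_hours",
    "delivery_cost", "delivery_cost",
    "cancel_order", "cancel_order",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_phrases, intents)

# The bot maps a new message to the most likely intent,
# then picks a response template for that intent.
print(model.predict(["are you open on sundays"])[0])
```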

The chatbot is our interface, and all the hard work goes on in the processing stage between the chatbot and some sort of database. This will likely be a cloud service such as IBM Watson, Amazon Web Services or SQL Azure. The more data stored in that database, the better the responses to the user will be. In fact, whatever front-end you use for your chatbot becomes almost irrelevant, as everything is powered by the data that sits behind it.

The problem with many chatbots is that they are rule-based. They can only answer very specific questions, the ones they have been programmed to respond to. To create those chatbots, a dataset of pre-defined questions and answers is loaded into the database, and the bot works through a decision-tree-type system like the one below.
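As a rough illustration, a rule-based bot can be as simple as a hand-written keyword lookup; everything it can answer has to be anticipated in advance. The rules here are made up for the example.

```python
# A minimal rule-based bot: a hand-written table of keyword rules.
# Any message outside the pre-defined rules falls straight through.
RULES = {
    "opening": "We are open 9am to 5pm, Monday to Friday.",
    "delivery": "Standard delivery costs 3.99 and takes 3-5 days.",
    "refund": "Refunds are processed within 14 days of return.",
}

def rule_based_reply(message: str) -> str:
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I didn't understand that."

print(rule_based_reply("When do you open?"))          # matched rule
print(rule_based_reply("Can I change my address?"))   # falls through
```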

Whilst these bots can be useful in handling frequently asked questions and basic customer interactions, they are some way off creating a good customer experience. As soon as somebody deviates from the script, the bot will get confused and be unable to answer.

The most popular types of chatbot we see today go a step beyond this and tend to be what we would call retrieval-based. A retrieval-based chatbot uses machine learning algorithms to take what the user has said and find the nearest match to return an answer. It might not always be correct, but it is less likely to throw up something irrelevant, or nothing at all.
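One common way to implement that nearest-match step is to vectorise the stored questions and pick the closest one by similarity. The sketch below assumes TF-IDF vectors and cosine similarity; the Q&A pairs are invented.

```python
# A sketch of a retrieval-based bot: vectorise the stored questions,
# then answer with whichever one sits closest to the user's message.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("what are your opening hours", "We are open 9am to 5pm on weekdays."),
    ("how much does delivery cost", "Standard delivery costs 3.99."),
    ("how do i request a refund", "Refunds take up to 14 days to process."),
]
questions = [q for q, _ in qa_pairs]

vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def retrieve_reply(message: str) -> str:
    # The nearest match wins, even when nothing matches perfectly.
    scores = cosine_similarity(vectorizer.transform([message]),
                               question_vectors)[0]
    return qa_pairs[scores.argmax()][1]

print(retrieve_reply("what time are you open"))
```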

The best retrieval-based systems are loaded with a lot of data. Knowledge bases need to be continually updated to allow for new responses. Mitsuku, which is considered to be one of the most advanced conversational chatbots, has been loaded with over 300,000 different response patterns. That is a lot of different conversations, but it shows you the extent of what is required to create a useful bot.

The next level up from these methods are chatbots that can learn for themselves, known as generative chatbots. As we’ve said, the issue with a retrieval-based chatbot is that it always has to be updated. A generative chatbot learns from everything that a user asks it and continually updates its database. Sometimes, it might validate with a human that it has added items to the database correctly (semi-supervised learning), but it is possible to leave them to their own devices. A model for this would look something like the one below.

These chatbots first need to be trained using a huge amount of previous conversational data. Often, you will start with a retrieval-type platform to see what customers are going to ask. When there is enough data, usually thousands of conversations (some articles suggest millions), they are classified or categorised. Going forwards, when a new conversation occurs, the chatbot can generate its own classification for the text and return the best response without being programmed by a human to do so.

As so much data is needed for these generative chatbots, many businesses cannot do it accurately, which is why they revert to retrieval methods.

A mix of retrieval and generative models is the best solution for a semi-self-learning chatbot. Also called a “human in the loop” chatbot, the machine will prompt a human support agent when it encounters a conversation it doesn’t recognise and ask them to classify it appropriately. Over time, the human involvement should decrease as the volume of data increases.
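Here is a rough sketch of what that loop might look like, assuming a scikit-learn style classifier; the confidence threshold and the agent prompt are arbitrary choices for the example.

```python
# "Human in the loop": answer automatically when the model is confident,
# otherwise escalate to an agent and keep their label as training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = ["when do you open", "cancel my order", "how much is delivery"]
labels = ["opening_hours", "cancel_order", "delivery_cost"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(phrases, labels)

CONFIDENCE_THRESHOLD = 0.75  # arbitrary cut-off for this sketch

def ask_human_agent(message: str) -> str:
    # Stand-in for a real support-agent interface.
    return input(f"How should '{message}' be classified? ")

def handle(message: str) -> str:
    probs = model.predict_proba([message])[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return model.classes_[probs.argmax()]
    # Not confident: escalate, then fold the human's label back in so
    # the share of escalations falls as the volume of data grows.
    label = ask_human_agent(message)
    phrases.append(message)
    labels.append(label)
    model.fit(phrases, labels)
    return label
```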

One major limitation of chatbots to date has been conversational flow, or multi-layered conversations. It is another reason why fully generative methods are not yet mainstream.

Multi-layered conversations

To date, one of the biggest limitations with chatbots has been in multi-layered conversations. Let’s take a look at a practical example using Google and Bing search engines.

If I go onto Google and search for “How old is Tom Jones?” I am presented with the results.

If I try to continue the conversation and ask where he lives, I get a useless response.

We all assume that Google has state-of-the-art algorithms to enhance our searches, but this shows that its search is in no way conversational. However, look what happens when I use Bing.

The Bing search engine has recognised that I’ve asked a follow-up question, even though I haven’t specified it is about Tom Jones in my keywords. Clearly, they have some sort of conversational, multi-layered AI going on in the background. That said, if you try a third search it still gets confused, so it certainly isn’t quite there yet, but there is potential for understanding.

The point here is that if businesses want to use chatbots, they need to be multi-layered, as that is how humans interact with each other. The aim would be to have multiple intents working together to create a conversation, like the flow below from Amazon.

Some chatbot frameworks like Action.AI are now starting to incorporate these methods into their technology, and it is making for a far more natural conversation. See the image below, where the app fully understands that the customer has carried on the previous conversation.
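To show the idea in miniature, here is a toy example of carrying context between turns: remember the last entity mentioned and substitute it for pronouns in the follow-up, much like the Tom Jones searches above. The capitalised-words heuristic is a crude stand-in for real named-entity recognition.

```python
# A toy take on multi-layered conversation: remember the last named
# entity and rewrite pronouns in follow-up questions with it.
import re

context = {"last_entity": None}

def resolve(message: str) -> str:
    # Crude heuristic: treat a run of two or more capitalised words
    # as an entity (real systems use named-entity recognition).
    match = re.search(r"[A-Z][a-z]+(?: [A-Z][a-z]+)+", message)
    if match:
        context["last_entity"] = match.group()
    elif context["last_entity"]:
        message = re.sub(r"\b(he|she|they|it)\b", context["last_entity"],
                         message, flags=re.IGNORECASE)
    return message

print(resolve("How old is Tom Jones?"))    # remembers "Tom Jones"
print(resolve("And where does he live?"))  # -> "And where does Tom Jones live?"
```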

Summary

Building a chatbot is very simple, and it is true that anybody can do it. The key takeaway from this post is that building a quality chatbot takes time and data. It is important to collate everything that your customers might ask and train your chatbot appropriately before deployment. There is little worse than a technology that doesn’t work. This is why truly conversational chatbots are still few and far between, as companies strive to gather enough data to develop an efficient platform. The next decade could be huge as businesses start to reach that point.
