The Importance of Intents and Context in Chatbots

Humans are good at conversation because they can understand the intent of any statement, as well as the context in which the statement is placed.

Today’s chatbots are equipped with an intent recognition engine. Using machine learning, statements are mapped to intents, so the chatbot can handle countless variations of the same statement. Why is this technique useful? The following example illustrates it:

Suppose you want to ask a company for its contact details. Every person will phrase the question differently. The following are a few possible variations:

  • Please share your contact details
  • Where do you reside
  • Where are you located
  • Can you share your location
  • Please provide your address
The intent behind all of the above questions is “asking address”. So an answer can be mapped to the intent rather than to every individual question. This makes the chatbot much easier to manage.
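As a rough sketch of this idea (not Cere Labs’ actual implementation), the Python snippet below trains a tiny intent classifier with scikit-learn. The utterances, intent names and answers are invented for illustration; a real engine such as Wit.ai or Watson Conversation would be trained on many more examples, but the principle is the same: the answer is attached to the intent, not to each phrasing.

# Minimal intent-recognition sketch using scikit-learn (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each utterance is labelled with an intent.
utterances = [
    "Please share your contact details",
    "Where do you reside",
    "Where are you located",
    "Can you share your location",
    "Please provide your address",
    "What services do you offer",
    "Tell me about your products",
    "What do you do",
]
intents = [
    "asking_address", "asking_address", "asking_address",
    "asking_address", "asking_address",
    "asking_services", "asking_services", "asking_services",
]

# One answer is mapped to each intent, not to every possible question.
answers = {
    "asking_address": "You can reach us at our Mumbai office.",
    "asking_services": "We build AI-powered chatbots.",
}

# Train a simple bag-of-words classifier that maps statements to intents.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# An unseen phrasing of the same question should map to the same intent.
query = "Could you give me your address"
predicted_intent = model.predict([query])[0]
print(predicted_intent, "->", answers[predicted_intent])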

Before intent recognition engines came into existence (the most popular today being Wit.ai, recently acquired by Facebook), chatbots were based purely on the Artificial Intelligence Markup Language (AIML). AIML was essentially a question-answering template language, and a lot of manual intervention was needed to make a chatbot sound intelligent. Because it relied on pure pattern matching against the question text, it had a basic flaw: it could not handle the countless variations of a question.
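To see why pure pattern matching falls short, here is a toy Python sketch standing in for AIML-style templates (the patterns and replies are made up, and this is not real AIML syntax): an exact lookup can only answer the phrasings it was explicitly written for.

# Toy illustration of template-style pattern matching (AIML-like, not real AIML).
templates = {
    "WHERE ARE YOU LOCATED": "We are located in Mumbai.",
    "PLEASE PROVIDE YOUR ADDRESS": "We are located in Mumbai.",
}

def answer(question: str) -> str:
    # Matching is purely lexical: the question must match a stored pattern exactly.
    return templates.get(question.strip().upper(), "Sorry, I don't understand.")

print(answer("Where are you located"))   # matches a template
print(answer("Where do you reside"))     # same intent, no template, so it fails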

Context identification is still a difficult problem, and one of the core problems the AI community is working on. It is argued that once this problem is solved, AI may have reached the level of Artificial General Intelligence, meaning a chatbot would be able to match human capabilities.

Intent recognition engines such as Wit.ai, Watson Conversation and the Microsoft Bot Framework have changed the chatbot scene. Today it is easy for anyone to develop a chatbot by training it on various intents. Context identification, however, is handled by such engines through a manual process of creating stories or dialogues: the person training the chatbot has to write out the various dialogue flows so that the chatbot can understand context.
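As a hedged sketch of what such a hand-written dialogue flow might look like (the states, intents and prompts below are invented for illustration; each engine has its own dialogue format), the flow can be thought of as a small state machine where the current state carries the context:

# A minimal hand-written dialogue flow: each state names the intents it expects
# and where the conversation should go next (illustrative states and prompts).
dialogue_flow = {
    "start": {
        "prompt": "Hi! How can I help you?",
        "transitions": {"asking_address": "give_address",
                        "asking_services": "list_services"},
    },
    "give_address": {
        "prompt": "We are located in Mumbai. Anything else?",
        "transitions": {"asking_services": "list_services"},
    },
    "list_services": {
        "prompt": "We build AI-powered chatbots. Anything else?",
        "transitions": {"asking_address": "give_address"},
    },
}

def step(state: str, intent: str) -> str:
    # Move to the next state if the intent is expected here; otherwise stay put.
    return dialogue_flow[state]["transitions"].get(intent, state)

state = "start"
state = step(state, "asking_address")
print(dialogue_flow[state]["prompt"])  # context is carried by the current state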

We at Cere Labs, a Mumbai-based Artificial Intelligence startup, have developed successful chatbots that use intent recognition. After training the chatbots on various intents, they have started sounding intelligent and are able to answer most of our users’ questions. Watch this space as we elaborate on the process of intent recognition.

Let your users talk to your website by using chatbots. You can talk to our chatbot on Cere Labs’ Facebook page - https://www.facebook.com/cerelabs/
