Kore’s proprietary natural language processing (NLP) technology detects user intent in a given context and extracts entities with high accuracy by processing user inputs against three different engines.
NLP Powered Virtual Assistants
Natural language processing is a computer’s ability to understand and process human language. In the realm of virtual assistants, NLP is used to determine a user’s intention, extract information from an utterance, and carry on a conversation with the user in order to execute and complete a task.
Utterances refer to anything a user says. With virtual assistants, utterances can consist of multiple sentences processed individually, in a logical order, or simultaneously based on the overall request of the user.
Intents refer to what a user wants to accomplish. Most intents are simple, discrete tasks like “Find Product”, “Transfer Funds”, “Book Flight”, and are typically described with a verb and noun combination. These types of intents will initiate a dialog with the user to capture more information, fetch and update data from remote systems, and inform the user of progress.
The goal of intent recognition is to match a user utterance with its correctly intended task or question. At Kore, we determine a user’s intention through several different training models that define the combinations of words that typically indicate an intent.
Entities are anything defining, shaping, or modifying the intent of the user, and are required to carry out the intent, such as dates, times, and locations. Entities are usually assigned names like “dateTime,” and sometimes referred to as slots.
The goal of entity extraction is to identify elements needed to complete the task. These elements can be simple items like numbers and dates to complex items like addresses and airport names to user-defined domains such as product categories. Out of the box, the Kore Platform supports the identification and extraction of 20+ system entities.
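To make the intent/entity split concrete, here is a minimal, hypothetical sketch of entity ("slot") extraction with regular expressions. The entity names `amount` and `dateTime` and the patterns are illustrative assumptions, not Kore's actual system entities or API:

```python
import re

# Hypothetical illustration (not Kore's actual API): given an utterance,
# pull out simple "system entity" values such as an amount and a date/time.
def extract_entities(utterance):
    entities = {}
    # A currency amount like $250 or $19.99 (pattern is an assumption)
    amount = re.search(r"\$\d+(?:\.\d+)?", utterance)
    if amount:
        entities["amount"] = amount.group()
    # A date-like token: "today", "tomorrow", or an ISO date
    date = re.search(r"\b(today|tomorrow|\d{4}-\d{2}-\d{2})\b", utterance, re.I)
    if date:
        entities["dateTime"] = date.group(1)
    return entities

print(extract_entities("Transfer $250 to savings tomorrow"))
# {'amount': '$250', 'dateTime': 'tomorrow'}
```

A production engine resolves far more than this (relative dates, locales, composite entities), but the shape of the output, named slots filled from the utterance, is the same.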
Multiple NLP Engine Approach of Kore
Kore takes a unique hybrid approach to understand user intent. We use a machine learning model-based engine, a semantic rules-driven model, and a domain taxonomy and ontology-based model. This approach allows our virtual assistants to not only understand a user’s input with a high degree of accuracy, but also to intelligently handle complex human conversations.
Machine Learning Engine
The machine learning (ML) engine uses statistical modeling and deep neural networks to train an intent prediction model from a set of sample sentences for each intent.
The ML model evaluates every training utterance against each bot task and plots it into one of four quadrants for that task: true positive (true +ve), true negative (true -ve), false positive (false +ve), or false negative (false -ve).
Key Features:
- Uses a deep neural network-based text classification algorithm with features including n-grams, entity marking, lemmatization, stop-word exclusion, and synonyms
- Uses conditional random fields for named entity recognition (NER) and extraction, with an option to use deep neural network-based NER
- Trained using sample utterances for each intent and its entities
- Supports supervised learning to monitor the bot performance and manually tune where required
- Can be visualized and fine-tuned to get the best outcome
- Allows auto training of utterances from the user's conversation
- Customizable Machine Learning pipeline
- Supports unsupervised ML to build analytics on the usage of intents, flows, dropouts, and so on
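The core idea of the ML engine, training an intent classifier from sample utterances, can be sketched in a few lines. This is a deliberately simplified bag-of-words nearest-centroid model with stop-word exclusion; the real engine uses deep neural networks and richer features (n-grams, lemmatization, synonyms), so treat every name and threshold here as an assumption:

```python
from collections import Counter
import math

# Minimal sketch, assuming a tiny stop-word list and bag-of-words features.
STOP_WORDS = {"a", "an", "the", "to", "my", "i", "want"}

def features(utterance):
    # Bag-of-words with stop-word exclusion (lemmatization, n-grams,
    # and synonyms are omitted for brevity)
    return Counter(w for w in utterance.lower().split() if w not in STOP_WORDS)

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Sample utterances per intent (hypothetical training data)
TRAINING = {
    "Transfer Funds": ["transfer money to my account", "send funds to savings"],
    "Book Flight": ["book a flight to london", "reserve a plane ticket"],
}

# One feature centroid per intent, built from its sample utterances
centroids = {intent: sum((features(u) for u in utts), Counter())
             for intent, utts in TRAINING.items()}

def predict(utterance):
    scores = {i: cosine(features(utterance), c) for i, c in centroids.items()}
    return max(scores, key=scores.get)

print(predict("send money to my savings account"))  # Transfer Funds
```

Plotting each training utterance's prediction against its labeled intent is what produces the true/false positive/negative quadrants described above.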
Fundamental Meaning Engine
The fundamental meaning (FM) model considers parts of speech and inbuilt concepts to identify each word in the user utterance and relate it with the intents the virtual assistant can perform. It creates a form of the input with the canonical version of each word in the user utterance.
Key Features:
- Deterministic model that uses semantic rules and language context to determine the intent match
- Can be trained using synonyms, built-in and custom concepts and patterns
- Scores using various semantic rules including:
- Parts of speech
- Word match, word coverage, word position
- Sentence structure
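The deterministic, rules-driven character of the FM engine can be sketched as a scorer that combines word match, word coverage, and word position after canonicalizing synonyms. The rule weights, synonym table, and scoring formula below are illustrative assumptions, not Kore's actual rules:

```python
# Hypothetical FM-style scorer: intent names are verb + noun phrases, and an
# utterance is scored against them with semantic rules rather than a trained
# model. Weights (0.5 / 0.3 / 0.2) are arbitrary for illustration.
SYNONYMS = {"send": "transfer", "money": "funds", "cash": "funds"}

def canonical(word):
    # Reduce each word to a canonical form via a synonym table
    return SYNONYMS.get(word, word)

def fm_score(utterance, intent_name):
    words = [canonical(w) for w in utterance.lower().split()]
    intent_words = [w.lower() for w in intent_name.split()]
    matches = [w for w in intent_words if w in words]
    if not matches:
        return 0.0
    word_match = len(matches) / len(intent_words)  # how much of the intent matched
    coverage = len(matches) / len(words)           # how much of the utterance it explains
    # Position rule: intent words appearing earlier in the utterance score higher
    position = 1.0 - min(words.index(m) for m in matches) / len(words)
    return 0.5 * word_match + 0.3 * coverage + 0.2 * position

print(fm_score("send money to savings", "Transfer Funds"))
```

Because the rules are deterministic, the same utterance always produces the same score, which is what makes this engine trainable with just names, synonyms, concepts, and patterns rather than large utterance sets.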
Knowledge Graph Engine
The knowledge graph (KG) model enables you to create a hierarchical structure of key domain terms and associate them with context-specific questions and their alternatives, synonyms, and machine learning-enabled classes.
Features:
- Turns static frequently asked questions (FAQ) text into an intelligent, personalized conversational experience
- Uses domain terms and relationships
- Requires less training
- Supports word importance and produces fewer false positives for terms marked as mandatory
- Can fall back on ontology-weighted features whenever the ML engine is unsure
- Automatic conversational dialog for resolving appropriate answer
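A rough sketch of the KG idea: FAQs hang off a hierarchy of domain terms, term synonyms are normalized, and a question only qualifies when its mandatory path terms appear in the utterance. The graph, synonym table, and overlap scoring below are invented for illustration and are not Kore's data model:

```python
# Hypothetical knowledge-graph-style FAQ matcher. A path of domain terms
# (e.g. banking -> card) guards a set of questions; all path terms are
# treated as mandatory, which reduces false positives.
FAQ_GRAPH = {
    ("banking", "card"): {
        "How do I block my card?": "Call support or block it in the app.",
        "How do I renew my card?": "Renewed cards ship automatically.",
    },
}
TERM_SYNONYMS = {"credit card": "card", "debit card": "card", "account": "banking"}

def match_faq(utterance):
    text = utterance.lower()
    for phrase, term in TERM_SYNONYMS.items():
        text = text.replace(phrase, term)  # normalize domain-term synonyms
    words = set(text.split())
    best, best_overlap = None, 0
    for path, questions in FAQ_GRAPH.items():
        if not all(term in words for term in path):  # mandatory-term check
            continue
        for question, answer in questions.items():
            q_words = set(question.lower().rstrip("?").split())
            overlap = len(q_words & words)
            if overlap > best_overlap:
                best, best_overlap = (question, answer), overlap
    return best

print(match_faq("how do I block my banking card"))
```

Even this toy version shows why the approach needs little training: the domain terms and their relationships do most of the disambiguation work.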
Ranking and Resolver: To Determine the Winning Intent
The ranking and resolver engine of the Kore.ai Platform determines the winning intent for a user utterance by ranking the results from all three engines.
Features:
- Determines the best possible intent match based on the scores from all the models
- Definitive and possible matches from each engine are ranked against each other
- In the case of a non-conclusive match, a disambiguation dialog is triggered
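The resolver's behavior can be sketched as combining per-engine scores, classifying the winner as definitive or possible, and falling back to disambiguation when the top candidates are too close. The thresholds, the max-score combination rule, and the return shape here are all assumptions for illustration:

```python
# Hypothetical ranking-and-resolver step. Engines each report scores per
# intent; the best combined score wins unless the race is too close.
DEFINITIVE = 0.8   # assumed threshold for a "definitive" match
MARGIN = 0.05      # assumed margin below which intents are ambiguous

def resolve(engine_scores):
    # engine_scores: {engine_name: {intent: score}}
    combined = {}
    for scores in engine_scores.values():
        for intent, score in scores.items():
            combined[intent] = max(combined.get(intent, 0.0), score)
    ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
    (top, top_score), *rest = ranked
    if rest and top_score - rest[0][1] < MARGIN:
        # Non-conclusive: ask the user which intent they meant
        return ("disambiguate", [top, rest[0][0]])
    if top_score >= DEFINITIVE:
        return ("definitive", top)
    return ("possible", top)

print(resolve({"ml": {"Transfer Funds": 0.92, "Book Flight": 0.10},
               "fm": {"Transfer Funds": 0.85}}))
# ('definitive', 'Transfer Funds')
```

The disambiguation branch corresponds to the dialog mentioned above: rather than guessing between near-tied intents, the assistant asks the user to choose.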
Conversational Intelligence Through NLP
Human conversations are never simple and straightforward. Context is the key to understanding what a person meant in the current input. While natural language understanding (NLU) engines help identify intents and entities, choosing the right intent or entity in the given context is what allows a virtual assistant to handle a conversation well. Kore’s multiple-NLP-engine approach enables virtual assistants to respond to complex conversations.
Advantages of Using Kore’s Multiple NLP Engine
The individual engines have many specialized capabilities but also have their own limitations. Kore’s proprietary NLP technology overcomes the weakness of any one individual NLP model. The three engines complement each other with different perspectives. Their results are correlated and resolved to accurately identify intents. This method is unique to Kore, while most other solutions depend solely on one.
| Capability | Only ML Engine | Only FM Engine | Only KG Engine | Multiple Engine Approach (Kore.ai) |
|---|---|---|---|---|
| Learning based on sample user utterances | ✓ | | | ✓ |
| Minimal training using task names and patterns | | ✓ | ✓ | ✓ |
| Use of synonyms | | ✓ | ✓ | ✓ |
| Use of canonical forms | | ✓ | | ✓ |
| Use of grammar and parts of speech | | ✓ | | ✓ |
| Decisions based on domain term priority and relationships | | | ✓ | ✓ |
Benefits of Using Kore NLP Technology
- Virtual assistants interpret customer utterances accurately, with fewer false positives
- Virtual assistants communicate with users comprehensively
- Virtual assistant communication can be personalized per user and per channel
- Virtual assistants are faster to train thanks to a low-code user interface
- Virtual assistants require less training data to decipher natural language
- Developers can resolve development gaps and false positives faster
- Developers can debug and improve the machine learning model using various demystification tools