Multi-Model NLU to Deliver Flawless Understanding
Natural Language Processing (NLP) enables machines to understand and process human language. In virtual assistants, NLP is used to discern a user's intention, extract information from a statement, and carry on a conversation to fulfill a task.
Kore.ai introduces a multi-model NLU approach and offers multiple models to accurately detect user intent within a given context and precisely extract entities.
Zero-Shot Model
Employ the Zero-Shot Model to make your virtual assistants instantly effective with little or no training. This approach integrates with the OpenAI GPT-3 model to process customer requests efficiently, including identifying intents and extracting entities.
All of this is accomplished without providing any training to the virtual assistant, which greatly reduces the training effort and gives your customers the finest conversational experience possible.
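For illustration, here is a minimal sketch of zero-shot intent detection, assuming the public OpenAI Python client and a hypothetical intent list and prompt; it is not Kore.ai's actual integration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INTENTS = ["Book Flight", "Cancel Booking", "Check Refund Status"]  # illustrative intent names

def zero_shot_intent(utterance: str) -> str:
    """Ask the LLM to pick the closest intent without any task-specific training."""
    prompt = (
        "Classify the user message into exactly one of these intents: "
        + ", ".join(INTENTS)
        + f'.\nUser message: "{utterance}"\nIntent:'
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in model name; the platform's integration may differ
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(zero_shot_intent("I need a cheap ticket to London next Friday"))
```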
Few-Shot Model
The Few-Shot Model uses Kore.ai's custom fine-tuned LLM to handle customer requests. It delivers greater consistency in responses and makes it easy to provide additional training. Only about a tenth of the usual training is needed to put it into operation.
The model is both robust and secure, as it does not share data with third-party sources and does not require any additional costs for activation.
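Kore.ai's fine-tuned LLM is proprietary, but the general idea behind few-shot intent detection can be sketched with sentence embeddings: a handful of labeled examples per intent is enough to match new utterances by similarity. The model name and examples below are illustrative only:

```python
from sentence_transformers import SentenceTransformer, util

# A handful of labeled examples per intent -- far fewer than classic ML training needs.
FEW_SHOT_EXAMPLES = {
    "Pay Bill":      ["I want to pay my bill", "settle my outstanding balance"],
    "Book Flight":   ["book a flight to London", "I need a plane ticket"],
    "Check Balance": ["what is my account balance", "how much money do I have"],
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

def few_shot_intent(utterance: str):
    """Return the intent whose examples are most similar to the utterance."""
    query = model.encode(utterance, convert_to_tensor=True)
    best_intent, best_score = None, -1.0
    for intent, examples in FEW_SHOT_EXAMPLES.items():
        vectors = model.encode(examples, convert_to_tensor=True)
        score = util.cos_sim(query, vectors).max().item()
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(few_shot_intent("I'd like to clear this month's invoice"))
```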
Training Data Suggestions
Creating effective training data is a huge task and requires continuous effort. Not anymore!
The Kore.ai XO Platform's smart integration with language models offers training data suggestions to cover the many variations end users may use. In addition to speeding up the overall training effort, it provides a wide range of training utterances to improve the intent detection rate.
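As a rough sketch of how such suggestions can be generated with a language model (assuming the public OpenAI client and a hypothetical prompt; the platform's own integration may work differently):

```python
from openai import OpenAI

client = OpenAI()

def suggest_training_utterances(intent: str, seed: str, n: int = 5):
    """Ask an LLM for paraphrases of a seed utterance to enrich an intent's training set."""
    prompt = (
        f"Generate {n} different ways a customer might phrase the intent '{intent}'. "
        f"Example phrasing: '{seed}'. Return one phrasing per line."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]

print(suggest_training_utterances("Pay Bill", "I want to pay my bill"))
```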
The XO Platform takes a unique hybrid approach to understand user intent. It uses a machine learning model, a semantic rules-driven model, and a domain taxonomy and ontology-based model. This approach allows the virtual assistants to not only understand a user’s input with a high degree of accuracy, but also to intelligently handle complex human conversations.
The machine learning (ML) engine uses statistical modeling and deep neural networks to train an intent prediction model from a set of sample sentences for each intent.
The ML model evaluates all the training utterances against each task and plots each one into one of four quadrants for that task: True Positive (True +ve), True Negative (True -ve), False Positive (False +ve), or False Negative (False -ve).
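These quadrants can be read as a per-task confusion matrix. A minimal sketch, with hypothetical intent names:

```python
def quadrant(utterance_intent: str, predicted_intent: str, task: str) -> str:
    """Place a training utterance into one of the four evaluation quadrants for a given task."""
    expected = (utterance_intent == task)    # does the utterance belong to this task?
    predicted = (predicted_intent == task)   # did the model predict this task?
    if expected and predicted:
        return "True Positive"
    if not expected and not predicted:
        return "True Negative"
    if not expected and predicted:
        return "False Positive"
    return "False Negative"

# Example: an utterance labeled "Pay Bill" that the model predicts as "Book Flight"
print(quadrant("Pay Bill", "Book Flight", task="Pay Bill"))     # -> False Negative
print(quadrant("Pay Bill", "Book Flight", task="Book Flight"))  # -> False Positive
```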
The fundamental meaning (FM) model considers parts of speech and inbuilt concepts to identify each word in the user utterance and relate it to intents the virtual assistant can understand. It creates a form of the input with the canonical version of each word in the user utterance.
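As a rough analogy, lemmatization with part-of-speech tags produces a canonical version of each word; the sketch below uses spaCy purely for illustration and is not the FM engine's internal implementation:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline, used here only as an example

def canonical_form(utterance: str) -> str:
    """Reduce each word to its lemma and keep its part of speech."""
    doc = nlp(utterance)
    return " ".join(f"{token.lemma_}/{token.pos_}" for token in doc if not token.is_punct)

print(canonical_form("I was paying my bills yesterday"))
# e.g. "I/PRON be/AUX pay/VERB my/PRON bill/NOUN yesterday/NOUN" (exact tags depend on the model)
```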
The Knowledge Graph (KG) model enables you to create a hierarchical structure of key terms and associate them with context-specific questions and their alternatives, synonyms, and machine learning-enabled classes.
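A toy sketch of the idea, with an invented hierarchy of key terms, synonyms, and attached questions:

```python
# A toy Knowledge Graph: key terms arranged in a hierarchy, each node carrying
# synonyms and the context-specific questions attached to it. All values are illustrative.
knowledge_graph = {
    "term": "Banking", "synonyms": ["bank"], "questions": [],
    "children": [
        {"term": "Accounts", "synonyms": ["account", "savings account"],
         "questions": ["How do I open an account?", "What is the minimum balance?"], "children": []},
        {"term": "Cards", "synonyms": ["credit card", "debit card"],
         "questions": ["How do I block my card?", "How do I raise my card limit?"], "children": []},
    ],
}

def match_questions(node, utterance, path=None):
    """Walk the hierarchy and return (path, question) pairs whose key terms appear in the utterance."""
    path = (path or []) + [node["term"]]
    hits = []
    terms = [node["term"].lower(), *[s.lower() for s in node["synonyms"]]]
    if any(t in utterance.lower() for t in terms):
        hits += [(" > ".join(path), q) for q in node["questions"]]
    for child in node["children"]:
        hits += match_questions(child, utterance, path)
    return hits

print(match_questions(knowledge_graph, "I lost my debit card, how do I block it?"))
```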
The Ranking and Resolver engine determines the winning intent by scoring and reconciling the candidate intents the other engines return for a user utterance.
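A simplified sketch of intent resolution, with made-up engine scores and an averaging strategy that stands in for the platform's actual ranking logic:

```python
# Hypothetical candidate scores from the three engines for one utterance.
candidates = {
    "ML": {"Pay Bill": 0.82, "Check Balance": 0.41},
    "FM": {"Pay Bill": 0.75},
    "KG": {"Check Balance": 0.66},
}

def resolve(candidates):
    """Pick the winning intent by averaging each intent's score across the engines that proposed it."""
    totals = {}
    for engine_scores in candidates.values():
        for intent, score in engine_scores.items():
            totals.setdefault(intent, []).append(score)
    return max(totals, key=lambda i: sum(totals[i]) / len(totals[i]))

print(resolve(candidates))  # -> "Pay Bill"
```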
The Kore.ai XO Platform provides a simple no-code UI to train your intelligent virtual assistants (IVAs), making training for intents and entities easy.
Machine Learning allows you to train your intents/tasks with sample data; the platform learns from them using deep neural network models that accurately predict more varied user inputs. For example, "I want to pay" can be a sample utterance for the Pay Bill intent.
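A toy illustration of training an intent model from sample utterances; it uses scikit-learn's small feed-forward network rather than the platform's DNN architectures, and the utterances are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; real intents need far more utterances.
utterances = ["I want to pay", "pay my bill", "settle my invoice",
              "book a flight", "I need a plane ticket", "fly me to London"]
intents = ["Pay Bill", "Pay Bill", "Pay Bill",
           "Book Flight", "Book Flight", "Book Flight"]

# TF-IDF features feeding a small feed-forward network.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(utterances, intents)

print(model.predict(["can I pay my phone bill?"]))  # expected: ["Pay Bill"]
```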
The Named Entity Recognition (NER) model allows you to mark entities in user utterances and train the assistant to identify them accurately. System entities such as Date, Number, Currency, and Country are identified out of the box. NER training also helps predict custom or domain-driven entities such as Account Type or Card Type. Kore.ai lets a VA developer choose the ML model most relevant to their data by offering cutting-edge DNN-based models such as LSTM, CNN, and KAEN, and a third-party ML engine can be integrated if required. The assistant can also be configured to auto-learn from successfully executed tasks.
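The platform's NER models are DNN-based, but the idea of marking custom entities can be illustrated with a simple gazetteer lookup; the entity types and values below are invented:

```python
import re

# Illustrative gazetteers for domain-driven entities.
CUSTOM_ENTITIES = {
    "AccountType": ["savings", "checking", "current"],
    "CardType":    ["credit card", "debit card", "prepaid card"],
}

def extract_entities(utterance: str):
    """Return (entity_type, matched_text) pairs found in the utterance."""
    found = []
    for entity_type, values in CUSTOM_ENTITIES.items():
        for value in values:
            if re.search(rf"\b{re.escape(value)}\b", utterance, re.IGNORECASE):
                found.append((entity_type, value))
    return found

print(extract_entities("Move money from my savings account to pay my credit card"))
# -> [("AccountType", "savings"), ("CardType", "credit card")]
```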
The Knowledge Graph engine supports a large set of intents in the form of questions. This allows you to import large volumes of information quickly, group and annotate the key terms, add synonyms, and provide alternate queries. There is also the option to build a hierarchy.
Alternatively, you can download the Ontology generator from GitHub to automatically generate a knowledge graph for your virtual assistant. The Knowledge Extraction engine extracts information from unstructured documents and uses it to understand queries.
Use the Fundamental Meaning engine to train idiomatic or command-like sentences. The engine uses a semantic approach to understand grammar and language nuances, empowering virtual assistants to associate synonyms with task names. This mapping improves intent recognition rates and, when combined with patterns, can be used to train VAs to recognize idiomatic expressions.
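A simplified sketch of synonym- and pattern-based matching for idiomatic phrasings, with invented configuration; the real FM engine uses a much richer semantic model:

```python
import re

# Hypothetical synonym and pattern configuration for one task.
TASK_SYNONYMS = {"Pay Bill": ["pay", "settle", "clear"]}
TASK_PATTERNS = {"Pay Bill": [r"\bsquare (up|away) .* (bill|invoice)\b"]}

def fm_match(utterance: str):
    """Match an utterance to a task via synonyms of the task name or idiomatic patterns."""
    text = utterance.lower()
    for task, synonyms in TASK_SYNONYMS.items():
        if any(re.search(rf"\b{s}\b", text) for s in synonyms):
            return task
    for task, patterns in TASK_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return task
    return None

print(fm_match("Can I square up my phone bill here?"))  # idiomatic phrasing -> "Pay Bill"
```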
Traits
Traits are specific entities, attributes, or details that users express in their conversations. It is common for users to provide background information while describing a scenario in natural conversation. The Traits feature uses this background information to identify the user's intent and drive the conversational flow accordingly.
For instance, if "I am looking for a low-cost option to London" is the user's utterance, then using the Traits feature, the virtual assistant can identify the intent as flight booking.
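A minimal sketch of the idea, with invented trait definitions that map background details to an intent:

```python
# Each trait is recognized from background phrases and mapped to the intent it hints at.
TRAITS = {
    "travel_budget": {"phrases": ["low-cost", "cheap", "budget"], "intent": "Book Flight"},
    "destination":   {"phrases": ["to london", "to paris"],       "intent": "Book Flight"},
}

def intent_from_traits(utterance: str):
    """Infer an intent from traits the user mentions in passing, even without an explicit request."""
    text = utterance.lower()
    for trait in TRAITS.values():
        if any(phrase in text for phrase in trait["phrases"]):
            return trait["intent"]
    return None

print(intent_from_traits("I am looking for a low-cost option to London"))  # -> "Book Flight"
```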
Each NLP model has many specialized capabilities, but each also has its own limitations. To overcome these weaknesses, Kore.ai combines multiple NLP engines. The three engines complement each other with different perspectives, and the results are correlated and resolved to accurately identify intents. This method is unique to Kore.ai, while most other solutions depend solely on one NLP engine.
| Capability | Only ML Engine | Only FM Engine | Only KG Engine | Multiple Engine Approach (Kore.ai) |
|---|---|---|---|---|
| Learning based on sample user utterances | | | | |
| Minimal training using task names and patterns | | | | |
| Use of synonyms | | | | |
| Use of canonical forms | | | | |
| Use of grammar and parts of speech | | | | |
| Decisions based on domain term priority and relationships | | | | |
Guided Training
Virtual assistants perform best when you train them with accurate and adequate data. To help you provide superior training, the XO Platform provides proactive validations. It guides you to improve the VAs' performance by constantly validating the NLU model and alerting you with error and warning notifications.
The update focuses on validations related to intent training with the ML and FM engines: untrained intents, inadequate training, utterances that do not qualify for any intent, utterances that predict an incorrect intent, utterances that predict the expected intent with low confidence, incorrect intent patterns, short training utterances, incorrect entity annotations, and more.
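A minimal sketch of the kind of checks involved, with made-up thresholds; the XO Platform's actual validation rules are more extensive:

```python
# Illustrative thresholds for guided-training style checks.
MIN_UTTERANCES = 5
MIN_UTTERANCE_WORDS = 3

def validate_intent(intent: str, utterances: list):
    """Return warning messages for an intent's training data."""
    warnings = []
    if not utterances:
        warnings.append(f"'{intent}' is untrained: add sample utterances.")
    elif len(utterances) < MIN_UTTERANCES:
        warnings.append(f"'{intent}' has inadequate training ({len(utterances)} utterances).")
    for u in utterances:
        if len(u.split()) < MIN_UTTERANCE_WORDS:
            warnings.append(f"Short training utterance for '{intent}': \"{u}\"")
    return warnings

print(validate_intent("Pay Bill", ["pay", "I want to pay my bill"]))
```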
Choose Your Own NLU (CYON)
Are you migrating from a different platform to the Kore.ai XO Platform? This one’s for you! You can now start using external NLU data within the XO Platform and quickly migrate the training. It will save you a significant amount of time and effort.
Learn how Kore.ai products and solutions can drive ROI, reduce expenses and increase revenue.