Action task – Action tasks collect, modify, and post information in systems of record, like scheduling an appointment, searching for a product, or updating critical information.
Adapter – Adapters are an advanced method for making pre-trained AI models adaptable to new tasks without complete retraining. These modules save time, money, and resources by efficiently repurposing existing models for different tasks in areas like natural language processing, computer vision, and robotics.
Administration & analytics – The Kore.ai Platform provides enterprise-grade visibility and control of all enterprise bots, user groups, and security, as well as a commitment to compliance in even the most highly regulated areas like healthcare and financial services.
AI Copilot – An AI copilot is a type of artificial intelligence system that is designed to work alongside humans to help them with their tasks. AI copilots can be used in a variety of different industries and applications, such as software development, customer service, healthcare, etc. AI copilots typically use a combination of machine learning and natural language processing to understand the context of the user’s task and provide relevant suggestions and assistance.
Alert task – Alert tasks deliver timely, relevant, and personalized information to customers and employees directly from enterprise systems. Bots poll the system for user-requested updates in real-time.
Annotation – Annotation is the process of labeling data with additional information to help machine learning algorithms understand and learn.
API – An API, or Application Programming Interface, is a set of definitions, protocols, and tools for building application software. It helps developers by essentially providing the building blocks for a program.
Artificial Intelligence (AI) – AI is the development of computer systems that are able to perform tasks that normally require human-like intelligence, like decision making, speech recognition and understanding, translation between languages, and more.
Artificial General Intelligence (or “AGI”) – Artificial General Intelligence (AGI) refers to an AI system that possesses a wide range of cognitive abilities, much like humans, enabling them to learn, reason, adapt to new situations, and devise creative solutions across various tasks and domains, rather than being limited to specific tasks as narrow AI systems are.
Associative Memory – Associative memory refers to a system’s ability to store, retrieve, and process related information based on connections between elements, enabling it to efficiently identify and use relevant data for decision-making.
Auto-NLP – A term we use at Kore.ai to describe our synonym-based approach to natural language processing. It allows chatbots to understand intent variations right out of the gate, making them language-enabled “automatically.”
Automatic Message Formatting – The pre-programmed responses for tasks built into Kore.ai’s NL engine.
Automated Speech Recognition (ASR) – Our Platform can integrate an Automated Speech Recognition engine to enable bots to process voice-driven interactions and communicate outside of traditional text-based interfaces.
Amazon Web Services (AWS) – A subsidiary of Amazon.com which provides on-demand cloud computing platforms to people and enterprises.
Bots (also see Chatbot) – Short for robots: computer programs that act as intelligent intermediaries between people, digital systems, and Internet-enabled things.
Bot Builder Tool – Kore.ai’s web-based Bot Builder allows enterprises to customize chatbot use cases, channels, tasks, security, and more. It also provides a framework in which developers can design, test, and deploy chatbots in a consistent process at scale.
Bot Framework – A framework is a skeleton that provides some basic building blocks and generic functionality for building chatbots (like ML/ NLP or a Dialog Builder), but requires additional user-written code or other third-party services (to match the functionality of an actual platform). Frameworks often are composed of piecemeal components from different vendors.
Bots Platform – Kore.ai’s Platform is the only enterprise-grade chatbot platform offered as a SaaS which gives enterprises every component necessary for designing, developing, deploying, and managing AI-rich chatbots.
Bot Connector – For enterprises with systems behind firewalls, the bot connector is a tool that allows you to securely access the data in those systems for use in a bot.
Causal AI – Causal AI is a type of AI that can understand and model the causal relationships between variables. This is in contrast to traditional machine learning, which is typically concerned with finding statistical correlations between variables.
Channel – A channel is another word for any of the various communication platforms where a bot can live such as SMS, email, mobile apps, websites, messaging apps and more.
ChatGPT – A chat interface built on top of GPT-3.5. GPT-3.5 is a large language model developed by OpenAI that is trained on a massive amount of internet text data and fine-tuned to perform a wide range of natural language tasks. Example: GPT-3.5 has been fine-tuned for tasks such as language translation, text summarization, and question answering.
Cisco Spark – Cisco’s all-in-one communication platform, which is also a supported channel for the Kore.ai bots and an integration partner for chatbot development.
Cloud Connector – Provides an agent that runs behind your enterprise firewall and acts as a bridge to facilitate secure data exchanges between on-prem systems and Kore.ai’s cloud-based infrastructure.
Component reusability – The ability for developers to use components they’ve already built in the Bot Builder, like APIs, synonyms, tasks, etc. and apply them to other bots.
Controllability – Controllability is the ability to understand, regulate, and manage an AI system’s decision-making process, ensuring its accuracy, safety, and ethical behavior, and minimizing the potential for undesired consequences.
Conversational AI – A subfield of AI that focuses on developing systems that can understand and generate human-like language and conduct a back-and-forth conversation. Example: A chatbot that can understand and respond to customer inquiries in a natural and human-like manner.
Conversational Commerce – A term coined by Chris Messina in 2016, which is another way of describing how digital economies will be driven by text- and voice-based interfaces and experiences.
Conversational UI – Another way of describing text and voice-based interfaces, which don’t require graphical elements for use, like Amazon’s Alexa or Apple’s Siri.
Cognitive Services (aka Conversational and Cognitive Services) – A collection of separate APIs, SDKs, and services (that run on a cloud infrastructure like Azure) which developers can use to build intelligence into apps and/or to construct a chatbot that can leverage AI capabilities. Unlike a platform, charges typically apply separately for each service and each bot created.
Data Augmentation – Data Augmentation is a technique used to artificially increase the size and diversity of a training set by creating modified copies of the existing data. It involves making minor changes such as flipping, resizing, or adjusting the brightness of images, to enhance the dataset and prevent models from overfitting.
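The flipping and brightness changes described above can be sketched in a few lines of plain Python. This is a minimal illustration using nested lists as images; real pipelines would use a library such as torchvision or albumentations, and the 0–255 pixel range is an assumption.

```python
# Minimal data-augmentation sketch: each "image" is a 2-D grid of
# pixel intensities in 0-255. Purely illustrative, not a real pipeline.

def flip_horizontal(image):
    """Mirror each row, producing a left-right flipped copy."""
    return [row[::-1] for row in image]

def adjust_brightness(image, delta):
    """Shift every pixel by `delta`, clamping to the valid 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in image]

def augment(image):
    """Return the original plus two modified copies, tripling the data."""
    return [image, flip_horizontal(image), adjust_brightness(image, 40)]

original = [[10, 200], [30, 250]]
augmented = augment(original)  # three variants of one training example
```

Each call keeps the label of the original example while changing its appearance, which is what lets augmentation grow a training set without new annotation work.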
Data retention – The continued storage of an organization’s data for compliance or business matters.
Deep learning – A subfield of ML that uses neural networks with multiple layers to learn from data. Example: A deep learning model that can recognize objects in an image by processing the image through multiple layers of neural networks.
Deployment – The process of publishing a bot to a communication channel where it will be engaged by users.
Deterministic Model – A deterministic model follows a specific set of rules and conditions to reach a definite outcome, operating on a cause-and-effect basis.
Developer experience (DevX) – Developer experience (DevX) is the experience that developers have when building, maintaining, and testing software. It encompasses all aspects of the developer’s workflow, from the tools they use to the processes they follow to the culture of the team they work in.
Dialog task – Dialog tasks are advanced tasks that developers design with logic-driven business processes and pre-established workflows. Bots key off the primary request intent to accomplish the task at hand, then go above and beyond to execute sub-intents and additional workflows.
Dialog Builder – The Kore.ai Dialog Builder gives designers and developers the flexibility to manipulate the entire dialog process of a bot interaction and string together complex workflows in a GUI-based tool.
Diffusion models – A type of generative model that learns to generate data by gradually adding noise to a blank canvas and then removing the noise in a controlled manner.
Discriminative Models – Models that classify a data example and predict a label. For example, a model that identifies whether a picture is a dog or a cat.
Encryption – The process of converting information or data into a code, especially to prevent unauthorized access.
End-to-end – A way of describing the Kore.ai Bots Platform which signifies that it includes all the component features to take enterprises from the very beginning of the chatbot development process, through deployment and management.
Enterprise analytics – The central dashboard within the Kore.ai Platform where administrators can get visibility into key metrics, pull detailed reports, and track bot usage (i.e. number of executed tasks, most popular channels, most active users, user enrollment, etc.)
Enterprise bots store – A bot store that an enterprise sets up for a select group of users to access any custom-built bots.
Entity – Entities are the fields, data, or words the developer designates are necessary for a chatbot to complete the user’s request. An entity could be a date, a time, a location, a description or any number of things.
Entity extraction – This is the process by which the Kore.ai NL engine identifies words from a user’s utterance to ensure all available fields match the task at hand. If the chatbot needs an entity to complete the task after initial extraction, it will prompt the user for it.
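The extract-then-prompt flow described above can be sketched with simple pattern rules. The regexes and the `required` entity list here are illustrative assumptions, not the actual Kore.ai NL engine.

```python
import re

# Hedged sketch of rule-based entity extraction: find required entities
# in an utterance and report which ones the bot still needs to ask for.
PATTERNS = {
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "time": re.compile(r"\b\d{1,2}:\d{2}\b"),
}

def extract_entities(utterance, required=("date", "time")):
    """Return (found entities, missing entity names) for the utterance."""
    found = {}
    for name in required:
        match = PATTERNS[name].search(utterance)
        if match:
            found[name] = match.group()
    missing = [name for name in required if name not in found]
    return found, missing

found, missing = extract_entities("Book a room on 2024-05-01")
# "time" is missing, so the bot would prompt the user for it next.
```

In a production engine the extraction step would be model-driven rather than regex-driven, but the control flow (extract, check required fields, prompt for gaps) is the same.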
FAQ – The primary data source chatbots use to pull information to complete knowledge tasks. Support for website-based and document-based knowledge sources is coming soon to the Platform.
Fine Tuning – The process of adapting a pre-trained model to a specific task by training it on a smaller dataset. For example, an image classification model trained on general intersection pictures can be fine-tuned to detect when a car runs a red light. At Moveworks, we’ve been fine-tuning LLMs for enterprise support for years.
Foundation Model – Foundation models are a broad category of AI models which include large language models and other types of models such as computer vision and reinforcement learning models. They are called “foundation” models because they serve as the base upon which applications can be built, catering to a wide range of domains and use cases.
Framework – See Bot Framework.
Generative AI – Generative AI models create new data by discovering patterns in data inputs or training data. For example, creating an original short story based on analyzing existing, published short stories.
Generative adversarial networks (or “GANs”) – GANs are a powerful type of neural network capable of generating new, never-seen-before data that closely resembles the training data.
Glip – A supported channel for chatbots built on Kore.ai’s Bots Platform.
GPT (Generative Pre-trained Transformer) – GPT stands for Generative Pre-trained Transformer. It is a family of large language models developed by OpenAI. GPT models are trained on a massive dataset of text and code, and can be used to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
Graphic User Interface (GUI) – A visual way of interacting with an app or system, such as buttons, images, windows, icons, menu forms, and more.
Grounding – Grounding is the process of anchoring artificial intelligence (AI) systems in real-world experiences, knowledge, or data. The objective is to improve the AI’s understanding of the world, so it can effectively interpret and respond to user inputs, queries, and tasks. Grounding helps AI systems become more context-aware, allowing them to provide better, more relatable, and relevant responses or actions.
Guardrails Framework – The Guardrails Framework is a set of principles and practices for developing and deploying AI systems in a responsible and ethical way. It is designed to help organizations identify and mitigate the risks associated with AI, and to ensure that AI systems are used in a way that benefits society.
Hallucination – Hallucination refers to a situation wherein an AI system, especially one dealing with natural language processing, generates outputs that may be irrelevant, nonsensical, or incorrect based on the input provided. This often occurs when the AI system is unsure of the context, relies too much on its training data, or lacks a proper understanding of the subject matter.
Hosting – Enterprises have the choice of hosting the Kore.ai Bots Platform on prem or in the cloud via AWS.
Information and Communication Technologies (ICT) – ICT refers to technologies that provide access to information through telecommunications. It is similar to Information Technology (IT), but focuses primarily on communication technologies. This includes the Internet, wireless networks, cell phones, and other communication mediums.
Information task – Information tasks lookup data or pull reports based on specific parameters and quickly return easy-to-consume results that are convenient for users.
Instruction-tuning – Instruction-tuning is an approach where a pre-trained model is adapted to perform specific tasks by providing a set of guidelines or directives that outline the desired operation.
Interface – A shared boundary across which two or more separate components of a computer system exchange information.
Intelligence (see also Platform Intelligence) – All the capabilities provided to developers who use the Kore.ai Platform to create AI-powered chatbots including how to use contextual data, memory, NLP, machine learning (both supervised and unsupervised), sentiment analysis, and more.
Intent – The few essential words that describe what the user wants the chatbot to do, usually a verb and a noun such as: Find an ATM, Create an event, etc.
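The verb-plus-noun idea above can be illustrated with a toy keyword matcher. The intent table and utterances are made up for this sketch; a real NLU engine uses trained models rather than exact keyword lookup.

```python
# Illustrative intent detection: map verb-noun keyword pairs to intents.
INTENTS = {
    ("find", "atm"): "FindATM",
    ("create", "event"): "CreateEvent",
}

def detect_intent(utterance):
    """Return the first intent whose verb and noun both appear."""
    words = set(utterance.lower().split())
    for (verb, noun), intent in INTENTS.items():
        if verb in words and noun in words:
            return intent
    return None  # no intent recognized; the bot would ask to clarify

detect_intent("Please find the nearest ATM")  # matches FindATM
```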
Knowledge task – Knowledge tasks take user questions and query a pre-defined set of information to rapidly find the right answers, such as business hours of operation or specific policy questions.
Kore.ai – Founded in 2014 in Orlando, FL to help enterprises build and use conversational, intelligent chatbots for a variety of use cases.
LangOps – Language operations, also known as language engineering, is the field of computer science that deals with the development and deployment of language models and other NLP technologies. LangOps engineers work to ensure that language models are trained on high-quality data, that they are deployed in a scalable and reliable way, and that they are used to solve real-world problems.
Large Language Model (or “LLM”) – A type of deep learning model trained on a large dataset to perform natural language understanding and generation tasks. There are many famous LLMs like BERT, PaLM, GPT-2, GPT-3, GPT-3.5, and the groundbreaking GPT-4. All of these models vary in size (number of parameters that can be tuned), in the breadth of tasks (coding, chat, scientific, etc.), and in what they’re trained on.
Live agent handoff – The ability of a chatbot to seamlessly take a conversation from any channel and bring in a human agent. This function is especially useful for areas like service and support and ITSM.
LLM Orchestration – LLM orchestration is the process of coordinating the use of multiple large language models (LLMs) to achieve a common goal. This can be done for a variety of reasons, such as to improve the accuracy or performance of a task, or to allow multiple LLMs to work together on a complex task.
Low-code – Low-code is a visual approach to software development that enables faster delivery of applications through minimal hand-coding.
Logic programming – A type of programming which is largely based on formal logic and is the building block for complex chatbot dialogs and workflows.
Managed Services Provider (MSP) – Most often an IT provider that manages and assumes responsibility for providing a defined set of services to its clients either proactively or as the MSP determines that services are needed. They typically offer a range of services such as network maintenance, hardware repair, help-desk, email management, and anything else that requires a day-to-day administrator to keep running. They may need to purchase technology from a third-party provider, a cost which is bundled with their services and passed on to their end customer. MSPs operate on longer-term annual or multi-year contracts, and the tenure of their relationship is open-ended.
Memory – Bots can remember actions, data, and contextual details to maintain conversation continuity and take helpful actions. The developer can designate how long the bot remembers information as either short-term or long-term memory.
Multi-layer authentication – A method of access control in which the user must provide several separate authentication factors before being granted access to data. The Kore.ai Platform supports multi-layer authentication for bot access.
Multimodal Language Model – Similar to a large language model, a multimodal language model is a type of deep learning model trained on large datasets, but unlike large language models which are only trained on textual data, multimodal language models are trained on both textual and non-textual data. This allows them to generate responses to multiple input modalities, like text, images, audio, and video.
Multiple intents – When a user gives a complex request to a chatbot which requires the bot to process and prioritize two or more intents at once.
Natural Language Ambiguity – Natural language ambiguity refers to situations where a word, phrase, or sentence can have multiple meanings, making it challenging for both humans and AI systems to interpret correctly.
Natural Language Generation (or “NLG”) – A subfield of AI that produces natural written or spoken language.
Natural Language Training – The processes in which you refine a chatbot’s ability to understand and process NL requests, and test accordingly. It can be done by adding synonyms to the chatbot’s vocabulary via the Kore.ai Bot Builder, or training with complete utterances via machine learning. You can learn more about how to NL train a bot by watching How To Build A Chatbot In 5 Minutes.
Neural networks – A machine learning model inspired by the human brain’s structure and function that’s composed of layers of interconnected nodes or “neurons.” Example: A neural network that can recognize handwritten digits with high accuracy.
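The layers-of-interconnected-neurons idea can be shown with a tiny forward pass. The weights below are hand-picked for illustration; a real network learns its weights from data.

```python
import math

# Toy two-layer neural network forward pass with fixed weights.
def sigmoid(x):
    """Squash any real number into the (0, 1) range."""
    return 1 / (1 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # Hidden layer: each neuron takes a weighted sum of the inputs
    # and passes it through a nonlinearity.
    hidden = [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
              for weights in w_hidden]
    # Output layer: weighted sum of the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

w_hidden = [[0.5, -0.5], [0.3, 0.8]]  # two hidden neurons, two inputs
w_out = [1.0, -1.0]
y = forward([1.0, 0.0], w_hidden, w_out)
# y is a probability-like value between 0 and 1
```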
No-code – No-code is an approach to designing and using applications that doesn’t require any coding or knowledge of programming languages.
N-Shot Learning – Zero/Single/Few shot learning are variations of the same concept – providing a model with little or no training data to classify new data and guide predictions. A “shot” represents a single training example. Fun fact: Within the GPT prompt, you can ask for “N” examples to improve the accuracy of the response.
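The "shots" above are simply labeled examples embedded in the prompt. A hedged sketch of how such a prompt is assembled (the example sentiment pairs and the Input/Label format are assumptions, not a required convention):

```python
# Build a few-shot prompt: N labeled examples followed by the new query.
def build_few_shot_prompt(examples, query):
    """Format (input, label) pairs plus the query into one prompt string."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nLabel:")  # the model completes this
    return "\n\n".join(lines)

shots = [("great product", "positive"), ("broken on arrival", "negative")]
prompt = build_few_shot_prompt(shots, "works perfectly")
# Zero-shot is the same call with examples=[]: the model gets no shots.
```

Passing this string to an LLM lets the model infer the classification task from the two shots alone, without any gradient updates.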
Omni-channel (deployment) – The process of building one chatbot that is “channel agnostic” (meaning the bot can live in any channel), and deploying it to the communication channels of your choice. Omni-channel bots can be accessed in more than one place and can carry conversation context across channels.
OpenAI – The organization that developed ChatGPT. More broadly speaking, OpenAI is a research company that aims to develop and promote friendly AI responsibly. Example: OpenAI’s GPT-3 model is one of the largest and most powerful language models available for natural language processing tasks.
Optimization – The process of adjusting the parameters of a model to minimize a loss function that measures the difference between the model’s predictions and the true values. Example: Optimizing a neural network’s parameters using a gradient descent algorithm to minimize the error between the model’s predictions and the true values.
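The gradient-descent example mentioned above can be made concrete with a one-parameter least-squares fit. This is a deliberately minimal sketch; the learning rate and step count are illustrative.

```python
# Gradient descent on L(w) = (1/n) * sum((w*x - y)^2): repeatedly step
# the parameter w opposite the gradient until the loss is minimized.
def gradient_descent(xs, ys, lr=0.01, steps=500):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dL/dw, derived analytically from the squared-error loss.
        grad = (2 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # move w against the gradient
    return w

# Data generated from y = 3x, so w should converge to (almost exactly) 3.
w = gradient_descent([1, 2, 3, 4], [3, 6, 9, 12])
```

Training a neural network applies this same update rule simultaneously to millions of parameters, with the gradient computed by backpropagation.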
Parameter-efficient Fine-tuning (or “PEFT”) – Parameter-Efficient Fine-Tuning, also known as PEFT, is an approach that helps you improve the performance of large AI models while optimizing for resources like time, energy, and computational power. To do this, PEFT focuses on adjusting a small number of key parameters while preserving most of the pretrained model’s structure.
Patterns – Patterns are word combinations that indicate a certain intent or entity.
Pervasive Cloud – Pervasive cloud, also known as ubiquitous cloud, is a vision of a future where cloud computing is so pervasive that it is transparent to users and applications. In a pervasive cloud environment, users and applications would be able to access cloud computing resources from anywhere, at any time, without having to worry about the underlying infrastructure.
Platform architecture – A visual descriptor of the various Platform features and how they interact with one another.
Pilot – The stage after proof of concept where a chatbot’s tasks are published and the bot is deployed to a select group of users for testing.
Pre-training – Training a model on a large dataset before fine-tuning it to a specific task. Example: Pre-training a language model like ChatGPT on a large corpus of text data before fine-tuning it for a specific natural language task such as language translation.
Probabilistic Model – A probabilistic AI model makes decisions based on probabilities or likelihoods.
Prompt Chaining – Prompt chaining is a technique used in generative AI to create more informative and comprehensive outputs. It involves feeding the output of one prompt to the input of the next prompt, creating a chain of prompts and outputs. This allows the AI model to leverage its knowledge and understanding of the previous prompt to generate a more relevant and informative response.
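The feed-output-into-next-prompt loop can be sketched as below. The `llm` function here is a toy stand-in (it just keeps the first five words); in practice each call would go to a real model API.

```python
# Prompt chaining sketch: each step's output fills the {previous}
# slot in the next step's prompt template.
def llm(prompt):
    """Toy stand-in for a model call; real code would hit an LLM API."""
    return " ".join(prompt.split()[:5])

def chain(prompts, initial_input):
    """Run the templates in order, threading output into input."""
    result = initial_input
    for template in prompts:
        result = llm(template.format(previous=result))
    return result

steps = [
    "Summarize: {previous}",
    "List key points from: {previous}",
]
output = chain(steps, "a long customer support transcript about billing")
```

The chain structure is what matters: step two never sees the raw transcript, only step one's output, which is how intermediate reasoning gets composed.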
Prompt Engineering – Identifying inputs — prompts — that result in meaningful outputs. As of now, prompt engineering is essential for LLMs. LLMs are a fusion of layers of algorithms and, consequently, have limited controllability with few opportunities to control and override behavior. An example of prompt engineering is providing a collection of templates and wizards to direct a copywriting application.
Reasoning – AI reasoning is the process by which artificial intelligence systems solve problems, think critically, and create new knowledge by analyzing and processing available information, allowing them to make well-informed decisions across various tasks and domains.
Recursive Prompting – Recursive prompting is a strategy for guiding AI models like OpenAI’s GPT-4 to produce higher-quality output. It involves providing the model with a series of prompts or questions that build upon previous responses, refining both the context and the AI’s understanding to achieve the desired result.
Reinforcement Learning – A type of machine learning in which a model learns to make decisions by interacting with its environment and receiving feedback through rewards or penalties. GPT uses reinforcement learning from human feedback. When tuning GPT-3, human annotators provided examples of the desired model behavior and ranked outputs from the model.
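The reward-and-penalty loop above can be illustrated with a tiny two-action example. The reward values, learning rate, and epsilon-greedy exploration here are all illustrative assumptions, far simpler than RLHF on a language model.

```python
import random

# Minimal reinforcement-learning sketch: learn action values from
# rewards using an epsilon-greedy policy over two fixed actions.
REWARDS = {"good": 1.0, "bad": -1.0}  # feedback the environment returns

def train(episodes=200, lr=0.1, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {"good": 0.0, "bad": 0.0}  # value estimate per action
    for _ in range(episodes):
        # Mostly exploit the best-looking action, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(list(q))
        else:
            action = max(q, key=q.get)
        # Move the estimate toward the observed reward.
        q[action] += lr * (REWARDS[action] - q[action])
    return q

q = train()
# After training, the agent values "good" well above "bad".
```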
Responsible AI – Responsible AI refers to the approach of creating, implementing, and utilizing AI systems with a focus on positively impacting employees, businesses, customers, and society as a whole, ensuring ethical intentions and fostering trust, which in turn enables companies to confidently scale their AI solutions.
Response formatting – The process by which a developer customizes the responses a bot will give during an interaction. Responses can be formatted to be natural language only, or include GUI elements like buttons, forms, images, etc.
Robotic Process Automation (RPA) – A tool for automating manual, time-consuming, complex, rule-based workflows and functions for back-end IT administrative work. While it may sound similar to AI-chatbots, we explain all the nuances and differences in our whitepaper, Robotic Process Automation Software Robots & Chatbots: What’s Different, What’s Similar, and What’s Next?
SAP Solutions – Bots built by Kore.ai to specifically integrate with SAP’s most popular systems, such as SuccessFactors, Concur, S/4Hana, SAP Hybris, and more.
Sentiment analysis – Beyond completing tasks, Kore.ai built bots can understand a user’s mood throughout a conversation. Our NLP engine scores sentiment based on connotation, word placement, and modifiers. Developers can use these scores to trigger custom flows to improve bot-to-user communication, or bring in human agents as needed.
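A hedged sketch of the connotation-and-modifier idea follows. The word scores and the negation rule are illustrative assumptions, not the actual Kore.ai scoring model.

```python
# Lexicon-based sentiment scoring: sum per-word scores, flipping a
# word's connotation when it is preceded by a negator.
LEXICON = {"great": 2, "good": 1, "bad": -1, "terrible": -2}
NEGATORS = {"not", "never"}

def sentiment_score(utterance):
    words = utterance.lower().split()
    score = 0
    for i, word in enumerate(words):
        value = LEXICON.get(word, 0)
        # A preceding negator flips the word's polarity.
        if i > 0 and words[i - 1] in NEGATORS:
            value = -value
        score += value
    return score

sentiment_score("the service was not good")   # negation flips +1 to -1
```

A developer could then route on the score, e.g. trigger a live agent handoff whenever it drops below some threshold.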
Sequence Modeling – A subfield of NLP that focuses on modeling sequential data such as text, speech, or time series data. Example: A sequence model that can predict the next word in a sentence or generate coherent text.
Smart Bots – A group of functionally specific bots built on the Kore.ai Platform that we designed with quick deployment and time-to-value in mind. These bots, which include Banking, Service & Support, IT Help Desk, Commerce, Sales, and SAP, come NL-enabled and with a pre-determined set of AI-rich tasks. They are also customizable.
Software Development Kit (also see Mobile SDK, Web SDK) – Tools or resources that help developers create websites and apps and customize elements of the UI.
Stacking – Stacking is a technique in AI that combines multiple algorithms to enhance overall performance. By blending the strengths of various AI models, stacking compensates for each model’s weaknesses and achieves a more accurate and robust output in diverse applications, such as image recognition and natural language processing.
Supervised learning – A type of machine learning in which a model is trained on labeled data to make predictions about new, unseen data. Example: A supervised learning algorithm that can classify images of handwritten digits based on labeled training data.
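A compact instance of learning from labeled data is the 1-nearest-neighbor classifier sketched below: prediction simply copies the label of the closest training example. The 2-D points and labels are made up for illustration.

```python
# 1-nearest-neighbor: a minimal supervised learner over labeled
# (features, label) pairs of 2-D points.
def predict(train_data, point):
    def dist(a, b):
        # Squared Euclidean distance is enough for comparing neighbors.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    features, label = min(train_data, key=lambda pair: dist(pair[0], point))
    return label

labeled = [((0, 0), "cat"), ((10, 10), "dog")]
predict(labeled, (1, 2))   # closest to the "cat" example
predict(labeled, (9, 8))   # closest to the "dog" example
```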
Synthetic data – Synthetic data is artificially generated data used to train AI systems, especially on tasks where real-world data is scarce or difficult to obtain.
Synonyms – Word variations for intents or entities that developers can add to a chatbot’s synonym library to give it a wider and more accurate range of natural language understanding.
Testing – The step-by-step processes of testing request chaining, intent and sub-intent recognition, entity extraction, conversation flow, and more.
Tokenization – The process of breaking text into individual words or subwords to input them into a language model. Example: Tokenizing a sentence “I am ChatGPT” into the words: “I,” “am,” “Chat,” “G,” and “PT.”
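The subword split in the example above can be reproduced with a greedy longest-match tokenizer over a toy vocabulary. The vocabulary below is hand-picked to mirror the example; real models learn their vocabularies (e.g. via byte-pair encoding).

```python
# Greedy longest-match subword tokenization over a toy vocabulary.
VOCAB = {"I", "am", "Chat", "G", "PT", " "}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character as its own token
            i += 1
    return tokens

tokenize("I am ChatGPT")  # "ChatGPT" falls apart into "Chat", "G", "PT"
```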
Training data – The data, usually in the form of utterances, that is fed to a bot in order to train its natural language understanding.
Transformer – A type of neural network architecture designed to process sequential data, such as text. Example: The transformer architecture is used in models like ChatGPT for natural language processing tasks.
Trusted AI – Trusted AI is the practice of developing and deploying AI systems that are reliable, secure, and fair.
Twitter – A supported channel for Kore.ai bots.
Use case – The various ways chatbots can be applied for employee and customer facing tasks. Check out our Top 30 defined use cases for chatbots.
Universal bot – A bot that has the power to communicate with other bots to complete tasks on its behalf.
Unstructured data – Unstructured data and documents, in this instance, refers to sources that are typically text-heavy and free-flowing. Such documents or data can still contain dates, numbers, and facts, but they lack a pre-defined data model or structure and overall consistency. The Bots Platform supports semantic search against unstructured data and the training of bots from unstructured documents.
Unsupervised learning – A type of machine learning in which a model is trained on unlabeled data to find patterns or features in the data. Example: An unsupervised learning algorithm that can cluster similar images of handwritten digits based on their visual features.
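The clustering example above can be sketched with a tiny 1-D k-means: the algorithm alternates between assigning unlabeled values to their nearest centroid and recomputing each centroid as its cluster's mean. The data and starting centroids are illustrative.

```python
# Compact 1-D k-means: unsupervised grouping of unlabeled numbers.
def kmeans_1d(values, centroids, iterations=10):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assignment step: each value joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Update step: each centroid becomes its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 5.0])
# The two natural groups (low values, high values) emerge with no labels.
```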
Value Added Reseller (VAR) – A value-added reseller offers third party software and hardware to the end user at a markup, along with some combination of procurement consulting, configuration, and customization services. They generate revenues through a combination of flat-rate fees per license, and billable hours, but their engagement is finite.