Glossary

Stay up to date with industry terms

The AI industry moves fast, and so do the terms used to describe it. This glossary helps you stay up to date with the evolving market language.

A2A (Agent-to-Agent)
A2A is how autonomous AI agents communicate and collaborate directly, delegating tasks, sharing knowledge, and solving problems together, enabling distributed decision-making and multi-agent workflows without constant human oversight.
Action task
An Action Task is a predefined, trigger-based AI task that executes automatically, handling routine actions like notifications, record updates, or workflow launches instantly, without manual effort.
Adaptive RAG
Adaptive RAG (Adaptive Retrieval-Augmented Generation) is a method that dynamically adjusts how and when external information is retrieved based on the complexity of a query. Instead of using a fixed retrieval process, it determines whether retrieval is needed and selects the appropriate strategy, such as no retrieval, single-step retrieval, or multi-step retrieval. This improves efficiency, reduces unnecessary computation, and ensures better response quality across different query types.
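To make the routing idea concrete, here is a minimal sketch of an adaptive retrieval router. The heuristic rules and strategy names are illustrative assumptions; a production system would typically use a trained classifier or an LLM judge to pick the strategy.

```python
def classify_query(query: str) -> str:
    """Toy heuristic: route by query complexity (a real system
    would use a trained classifier or an LLM judge)."""
    words = query.lower().split()
    if len(words) <= 3 and "?" not in query:
        return "no_retrieval"   # e.g. greetings, chit-chat
    if any(w in words for w in ("compare", "versus", "why", "how")):
        return "multi_step"     # needs iterative retrieval
    return "single_step"        # one lookup is enough

def adaptive_rag(query: str) -> str:
    """Dispatch to the strategy chosen for this query."""
    strategy = classify_query(query)
    if strategy == "no_retrieval":
        return f"answer('{query}') without retrieval"
    if strategy == "single_step":
        return f"retrieve once, then answer('{query}')"
    return f"retrieve-reason loop, then answer('{query}')"
```

The key design point is that retrieval cost is only paid when the query's complexity warrants it.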
Agent embeddings
Agent Embeddings are vector-based representations of an agent's role, skills, and context, enabling intelligent task routing and matchmaking by helping the system identify the best-suited agent for any goal.
Agentic AI
Agentic AI refers to autonomous AI systems that combine reasoning, planning, memory, and adaptability to make decisions, execute complex tasks across tools, and solve problems proactively with minimal human input.
Agentic applications
Agentic Applications are AI-powered software systems where autonomous agents take actions, make decisions, and adapt in real time using LLMs, computer vision, and reinforcement learning with minimal human guidance.
Agentic memory
Agentic Memory enables AI agents to store and recall short-term context and long-term knowledge, maintaining continuity, personalizing responses, and supporting informed decision-making across complex, multi-step workflows.
Agentic mesh
Coined by McKinsey, Agentic Mesh is a flexible, vendor-agnostic AI architecture where multiple autonomous agents collaborate across tools, systems, and models to make decisions and adapt in real time at enterprise scale.
Agentic orchestration
Agentic orchestration is the process of coordinating AI agents, automation systems, and human inputs across workflows to achieve a defined outcome. It manages how tasks are distributed, executed, and monitored, ensuring agents operate within business rules and governance frameworks. This allows enterprises to run complex, multi-step processes with greater control, adaptability, and visibility.
Agentic RAG
Agentic RAG combines Retrieval-Augmented Generation with agent autonomy, enabling agents to actively decide what to retrieve, how to interpret it, and when to act, supporting context-aware, adaptive multi-step workflows.
Agentic reasoning
Agentic Reasoning enables AI agents to break down complex goals, make context-driven decisions, learn from outcomes, and self-correct, transforming them from reactive tools into proactive, autonomous problem-solvers.
Agentic workflows
Agentic Workflows are dynamic task sequences planned and executed by AI agents with minimal human input, adapting in real time based on context and outcomes to achieve goals efficiently and autonomously.
Agentic X
Agentic X brings reasoning, planning, and autonomy into any application or domain, enabling systems to independently manage complex tasks, adapt on the fly, and coordinate actions without constant supervision.
Agent orchestrator
Agent Orchestrator dynamically assigns tasks to the right AI agents based on goal and context, coordinating multi-agent collaboration to ensure smooth interaction, conflict-free execution, and efficient workflow completion.
Agent planner
Agent Planner breaks down high-level goals into executable steps, generating multi-step action plans based on intent, context, and available tools, enabling agents to act with foresight and adapt autonomously.
Agent platform (Kore.ai)
Agent Platform is enterprise-grade infrastructure for building, deploying, and orchestrating sophisticated AI agents at scale, supporting autonomy levels from guided assistants to fully independent systems tailored to business needs.
Agent reasoning
See Agentic Reasoning.
Agent traceability
Agent Traceability tracks and audits how AI agents make decisions, including model calls, tools used, and contextual inputs, providing transparency that supports governance, compliance, and error resolution.
Agent-washing
Agent-washing, per Gartner, is branding simple bots or rule-based tools as Agentic AI without true autonomy, reasoning, or orchestration, misleading buyers with the "agent" label on systems that cannot think, plan, or adapt.
AI agents
AI Agents are intelligent software entities that independently break down goals into tasks, execute them using tools or APIs, and deliver results with minimal human input, adapting across systems in real time.
AI analytics
AI Analytics encompasses tools and dashboards that monitor AI performance across interactions, intent detection, and resolution outcomes, helping businesses assess accuracy, identify bottlenecks, and optimize workflows for business goals.
AI augmentation
Augmentation enriches AI models with external context, real-time data, or tools to produce more accurate, relevant outputs, turning general-purpose models into domain-aware, task-specific assistants without retraining.
AI copilot
AI Copilot is a context-aware assistant that works alongside users, offering real-time suggestions, automating repetitive tasks, and surfacing relevant insights to boost productivity and streamline workflows.
AI for process (Kore.ai)
AI for Process uses process mining, cognitive task modeling, and reinforcement learning to optimize workflows, manage exceptions, and enforce compliance, enabling agents to autonomously adapt execution based on real-time data.
AI for service (Kore.ai)
AI for Service integrates agentic AI, multi-modal NLP, and adaptive reasoning to automate customer interactions across voice and digital channels, supporting intent-driven automation, agent augmentation, and scalable omnichannel self-service.
AI for work (Kore.ai)
AI for Work is an enterprise productivity framework using context-aware AI agents for knowledge retrieval, task automation, and workflow optimization, enabling semantic reasoning and decision intelligence across business functions.
AI governance
AI governance refers to the frameworks, policies, and controls used to manage how AI systems are developed, deployed, and monitored. It ensures that AI operates in a secure, compliant, and ethical manner by addressing risks such as bias, data privacy, and accountability. AI governance provides oversight and auditability, which are critical for enterprise adoption.
AI hallucination
A hallucination occurs when AI generates confident but factually incorrect output, leading to misleading answers or flawed actions. Grounding and validation are essential to keeping responses accurate and reliable.
AI safety
AI Safety is the practice of designing AI systems to operate securely, ethically, and aligned with human values, preventing bias, misuse, and unintended actions through governance, monitoring, and human oversight.
AI simulation
AI Simulation uses synthetic environments to train and test AI models safely, allowing agents to learn through trial and error, explore complex scenarios, and refine behaviors without real-world consequences.
AI supercomputing
AI Supercomputing is high-performance infrastructure built to train and run large language models and generative AI workloads at scale, delivering the speed needed for complex reasoning and real-time enterprise inference.
AI TRiSM
AI TRiSM, coined by Gartner, is a framework for managing AI trust, risk, and security, ensuring model governance, fairness, reliability, robustness, and data protection throughout the AI lifecycle.
Alert task
Alert Task is an AI-triggered response to anomalies or threshold breaches, instantly notifying the right people or systems about suspicious activity, errors, or performance drops without manual monitoring.
Anthropomorphism
Anthropomorphism is attributing human traits like emotions or consciousness to AI systems. While making interactions feel natural, it creates false expectations by blurring the line between what AI appears to do and what it truly understands.
API
API is a set of rules and protocols enabling AI systems to communicate with other software, apps, or databases, acting as a bridge that allows data and actions to flow between tools automatically.
Artificial General Intelligence (AGI)
Artificial General Intelligence is theoretical AI capable of understanding, learning, and applying knowledge across any task like a human, unlike today's specialized systems. It remains the long-term ambition of the AI field.
Artificial Intelligence (AI)
Artificial Intelligence is a branch of computer science focused on creating machines that mimic human reasoning, learning, and decision-making, enabling automation that adapts and improves over time without explicit programming.
Automated Speech Recognition (ASR)
ASR is technology that converts spoken words into written text in real time, powering voice input in apps, IVR systems, and virtual assistants to enable machines to understand and respond to human speech.
Auto-NLP
Auto-NLP is a toolkit that automates key natural language processing tasks like text classification, sentiment analysis, and intent detection with minimal manual setup. It’s ideal for teams who need fast, reliable NLP results without building custom pipelines from scratch.
Autonomous agents
Autonomous Agents are AI systems that independently plan, act, and learn to achieve goals without human direction, breaking down tasks, making real-time decisions, and adapting to handle complex, multi-step processes.
Autoregressive model
An autoregressive model is a machine learning technique that predicts the next value in a sequence by learning from the values that came before it. The core idea is simple: what happens next is influenced by what has already happened. By identifying patterns in historical data, the model can make informed predictions about what comes next.
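The idea can be shown with the simplest autoregressive model, AR(1), which predicts each value as a linear function of the one before it. This is a from-scratch sketch using ordinary least squares; the toy doubling series is an assumption chosen so the fit is exact.

```python
def fit_ar1(series):
    """Fit y[t] = a * y[t-1] + b by ordinary least squares."""
    x = series[:-1]   # previous values
    y = series[1:]    # next values
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

series = [1, 2, 4, 8, 16]          # each value doubles the last
a, b = fit_ar1(series)
next_value = a * series[-1] + b    # predict the next value: 32.0
```

Because the series follows an exact rule (each value is twice the previous one), the fitted model recovers a = 2, b = 0 and predicts 32 as the next value.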
Basic RAG
Basic RAG enhances language models by retrieving relevant information from external sources to generate more accurate, grounded responses, supplementing pre-trained knowledge with real-time or domain-specific context.
Benchmark
A benchmark is a standardized evaluation that measures AI model performance on tasks like reasoning or language understanding, enabling consistent comparison, progress tracking, and identification of strengths and weaknesses.
BM25
BM25 is a keyword-based retrieval algorithm that ranks documents by matching search queries using term frequency and document length, making it fast and effective for classic information retrieval tasks.
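A minimal implementation of the BM25 scoring formula follows, using the common smoothed IDF variant. The tiny corpus and the default parameters k1 = 1.5 and b = 0.75 are conventional illustrative choices.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with BM25."""
    avg_len = sum(len(d) for d in corpus) / len(corpus)
    n_docs = len(corpus)
    score = 0.0
    for term in query_terms:
        tf = doc.count(term)                              # term frequency
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        denom = tf + k1 * (1 - b + b * len(doc) / avg_len)    # length-normalized
        score += idf * (tf * (k1 + 1)) / denom
    return score

corpus = [
    ["refund", "policy", "for", "orders"],
    ["shipping", "times", "and", "rates"],
]
scores = [bm25_score(["refund"], d, corpus) for d in corpus]
```

The document containing the query term scores higher; documents with no matching terms score zero, which is why BM25 alone misses semantically related but differently worded content.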
Chain of Thought (CoT) prompting
Chain of Thought Prompting encourages AI models to reason through problems step by step before answering, rather than jumping to conclusions, producing more accurate and explainable outputs.
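The difference between a direct prompt and a chain-of-thought prompt is easiest to see side by side. The wording below is one common phrasing ("Let's think step by step"), not a required template.

```python
# Two versions of the same question; the CoT variant asks the model
# to show intermediate reasoning before committing to an answer.
direct_prompt = (
    "Q: A store sells pens at $3 each. How much do 4 pens cost?\n"
    "A:"
)

cot_prompt = (
    "Q: A store sells pens at $3 each. How much do 4 pens cost?\n"
    "A: Let's think step by step.\n"
    "1. Each pen costs $3.\n"
    "2. We need 4 pens, so the total is 4 x 3 = 12.\n"
    "Final answer: $12"
)
```

In few-shot settings, worked examples like `cot_prompt` are placed before the real question so the model imitates the step-by-step structure.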
Chunking
Chunking breaks large documents into smaller, meaningful pieces so AI can understand and retrieve them more efficiently, making RAG and enterprise search smarter and faster.
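A simple chunking strategy splits text into fixed-size windows with overlap, so content cut at a boundary still appears intact in a neighboring chunk. The sizes below are illustrative; real pipelines often chunk by tokens, sentences, or document structure instead of characters.

```python
def chunk_text(text: str, size: int = 40, overlap: int = 10):
    """Split text into fixed-size character chunks with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Chunking breaks large documents into smaller pieces for retrieval."
chunks = chunk_text(doc, size=30, overlap=5)
```

Each chunk shares its first characters with the tail of the previous chunk, which reduces the chance that a retrieval query misses an answer split across a boundary.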
Cloud connector
A Cloud Connector is a plug-and-play integration linking AI systems to third-party cloud apps like CRMs or databases, enabling seamless data exchange and real-time actions without custom code or middleware.
Cognitive services
Cognitive Services are pre-built AI capabilities for tasks like speech recognition, image analysis, and language translation, enabling teams to add intelligent, human-like abilities to applications without building models from scratch.
Component reusability
Component Reusability means building AI elements like intents, prompts, or connectors that can be reused across agents or applications, speeding up development, ensuring consistency, and eliminating redundancy when scaling AI.
Composable AI
Composable AI builds AI capabilities as modular, reusable blocks, including agents, tools, and workflows, enabling enterprises to scale, customize, and adapt AI for various use cases without starting from scratch.
Confidence score
A Confidence Score indicates how certain an AI is about its prediction or response, typically shown as a percentage, helping determine whether to proceed, seek clarification, or escalate to a human.
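The proceed/clarify/escalate decision described above is often a simple threshold policy. The thresholds below are illustrative assumptions, not standard values; each deployment tunes them against its own accuracy and risk requirements.

```python
def route_by_confidence(prediction: str, confidence: float) -> str:
    """Toy routing policy based on model confidence (0.0 to 1.0)."""
    if confidence >= 0.85:
        return f"proceed: {prediction}"       # high confidence: act
    if confidence >= 0.50:
        return "ask user to clarify"          # medium: confirm intent
    return "escalate to human agent"          # low: hand off
```

For example, `route_by_confidence("cancel_order", 0.92)` proceeds automatically, while the same prediction at 0.40 is escalated.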
Context engineering
Context Engineering is designing AI systems to capture and apply real-world context like user identity, intent, and history, enabling agents to respond more intelligently, personalize actions, and handle complex workflows accurately.
Context router
A Context Router directs requests to the right agent, model, or workflow using contextual signals like user intent, system state, and confidence levels, ensuring accurate, relevant responses without unnecessary handoffs.
Contextual embedding
Contextual Embedding transforms words or data into numerical vectors that capture meaning based on surrounding context, helping AI distinguish nuances like "bank" as a financial institution versus a riverbank for smarter retrieval and reasoning.
Contextual intelligence engine
A Contextual Intelligence Engine collects and analyzes signals like user roles, conversation history, and business rules, giving AI agents memory, situational awareness, and the ability to make smarter, adaptive decisions.
Context window
The Context Window is the amount of information a language model can process at once, measured in tokens. Larger windows enable more coherent, context-aware outputs across long conversations, document summarization, or multi-turn reasoning.
Controllability
Controllability refers to how well AI behavior can be guided or constrained through boundaries on responses, tool usage, and tone, ensuring alignment with business rules, compliance standards, and safety in enterprise settings.
Conversational AI
Conversational AI enables machines to interact with humans using natural language across text, voice, and messaging, combining language understanding, intent recognition, and dialogue management to automate support and execute tasks.
Conversational UI
A Conversational UI enables user interaction through natural language instead of buttons or forms, making experiences more intuitive across chat interfaces and voice assistants, especially for complex or dynamic tasks.
Data augmentation
Data Augmentation expands training data by generating or tweaking existing examples, like rephrasing sentences or adding noise, improving model robustness and performance without collecting large amounts of new data.
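One lightweight text-augmentation technique is synonym replacement. The tiny synonym table below is a made-up example; real pipelines use thesauri, embedding neighbors, or LLM paraphrasing.

```python
import random

# Illustrative synonym table (an assumption, not a standard resource).
SYNONYMS = {"quick": ["fast", "rapid"], "help": ["assist", "support"]}

def augment(sentence: str, rng: random.Random) -> str:
    """Create a training variant by swapping known words for synonyms."""
    words = [
        rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
        for w in sentence.split()
    ]
    return " ".join(words)

rng = random.Random(0)  # seeded for reproducibility
variants = {augment("please help with a quick refund", rng) for _ in range(10)}
```

Each variant preserves the sentence's intent while changing surface wording, which helps an intent classifier generalize beyond exact phrasings.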
Data preprocessing
Data Preprocessing cleans and formats raw data by removing errors and standardizing text, ensuring AI models receive structured, consistent inputs they can effectively learn from.
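A minimal preprocessing pass might strip markup remnants, lowercase the text, and normalize whitespace. The exact steps vary by pipeline; this is one common baseline.

```python
import re

def preprocess(text: str) -> str:
    """Basic cleanup: strip HTML tags, lowercase, collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)       # drop markup remnants
    text = text.lower()                        # normalize case
    text = re.sub(r"\s+", " ", text).strip()   # normalize spacing
    return text

clean = preprocess("  <p>Hello   WORLD</p> ")
```

The raw string `"  <p>Hello   WORLD</p> "` becomes the clean, consistent input `"hello world"`.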
Data retention
Data Retention defines how long user and system data is stored before deletion, ensuring compliance with privacy laws like GDPR and HIPAA and preventing sensitive information from being kept longer than necessary.
Deliberation engine
A Deliberation Engine enables AI agents to pause and evaluate options before acting, choosing the most effective path rather than defaulting to the first available option, especially useful in complex workflows.
Dense retrieval
Dense Retrieval uses vector embeddings to find semantically similar information rather than matching keywords, enabling more relevant search results in RAG systems and enterprise search applications.
Deployment
Deployment is when an AI system goes live for real users, whether integrated into a chatbot, voice assistant, or internal tool, marking the point where performance and reliability truly matter.
Deterministic model
A Deterministic Model always produces the same output for the same input, making it ideal for consistency and traceability in legal, financial, or safety-critical workflows where predictability matters most.
Dialog builder
A Dialog Builder is a visual workspace for designing conversation flows using drag-and-drop tools, enabling teams to build smart, functional bots without requiring developer involvement at every step.
DialogGPT (Kore.ai)
DialogGPT is an intelligent orchestration engine that powers natural, multi-turn conversations by autonomously managing intent detection, task execution, and ambiguity resolution, combining generative models with domain knowledge at scale.
Dialog task
A Dialog Task is a guided conversation path designed to complete a specific goal, like scheduling a meeting or checking an order, connecting user inputs to backend systems through logical, action-driven steps.
Domain-Specific Language Model (DSLM)
A DSLM is a language model fine-tuned for a specific industry like healthcare or banking, understanding domain-specific terminology and context to deliver smarter, more relevant responses.
Edge AI
Edge AI runs directly on local devices like phones or kiosks rather than in the cloud, enabling fast, private, real-time decisions even in environments with limited connectivity.
Embedding models
Embedding Models convert language into numerical vectors, enabling AI to make connections based on meaning rather than keywords, powering smarter search, retrieval, and reasoning across applications.
Embeddings
Embeddings are numerical representations of words or data that capture meaning, enabling AI to find similar content, rank search results, and maintain context across multi-step tasks.
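Similarity between embeddings is typically measured with cosine similarity. The 3-dimensional vectors below are toy assumptions (real embeddings have hundreds or thousands of dimensions), chosen so related concepts point in similar directions.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional embeddings (illustrative values only).
refund = [0.9, 0.1, 0.0]
return_item = [0.8, 0.2, 0.1]
weather = [0.0, 0.1, 0.9]

related = cosine_similarity(refund, return_item)
unrelated = cosine_similarity(refund, weather)
```

Because "refund" and "return_item" point in similar directions, their similarity is high, while "weather" scores near zero, which is how embedding search finds related content without shared keywords.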
Encryption
Encryption scrambles data so only authorized parties can access it, protecting passwords, personal information, and business data as AI systems store and transfer it.
Enterprise AI
Enterprise AI is AI built for business, designed to operate securely and at scale across departments and systems, handling real business logic while maintaining compliance and accountability.
Enterprise RAG
Enterprise RAG combines intelligent retrieval with LLM-generated responses, pulling answers from internal knowledge bases and documents while ensuring accuracy, brand alignment, security, and full traceability.
Enterprise search
Enterprise search is the ability to retrieve and surface relevant information from across an organization’s data sources, including documents, databases, and applications. It uses indexing, retrieval models, and often AI to deliver accurate and context-aware results. Enterprise search helps employees access knowledge quickly, improving productivity and decision-making.
Entity
An entity is a specific piece of information the AI is trying to extract, like a person's name, a date, or an account number. Think of it as a key detail that makes a vague request actionable.
Entity extraction
Entity Extraction identifies and pulls key details from user inputs, like "March" and "invoice" from a sentence, helping AI accurately route tasks and understand what the user needs.
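A minimal rule-based extractor illustrates the idea. The patterns and the `INV-` invoice format are made-up assumptions; production systems typically use trained named-entity recognition models rather than regular expressions.

```python
import re

# Illustrative patterns; real extractors are usually learned, not hand-written.
PATTERNS = {
    "date": r"\b(?:January|February|March|April|May|June|July|"
            r"August|September|October|November|December)\b",
    "invoice_id": r"\bINV-\d{4,}\b",
}

def extract_entities(text: str) -> dict:
    """Return every match for each named pattern."""
    return {name: re.findall(pattern, text) for name, pattern in PATTERNS.items()}

entities = extract_entities("Please resend invoice INV-20391 from March.")
```

The extracted details ("March", "INV-20391") are what turn a vague request into a routable, executable task.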
Ethical AI
Ethical AI means building systems that are fair, responsible, and aligned with human values, avoiding harmful bias, protecting privacy, and ensuring AI is deployed in ways that go beyond just business goals.
Explainable AI (XAI)
Explainable AI reveals why an AI made a decision rather than leaving it opaque, building trust in high-stakes industries like finance and healthcare where transparency and accountability are essential.
FAQ
In AI, an FAQ refers to pre-trained question-answer pairs used by virtual assistants to deliver fast, accurate responses to common queries without requiring full conversations or complex workflows.
Federated learning
Federated Learning trains a shared AI model across many devices or servers that keep their data local, exchanging only model updates rather than raw data, enabling collaborative learning while preserving privacy.
Few-Shot learning
Few-shot learning enables AI models to understand new tasks from just a handful of examples provided in the prompt, eliminating the need for retraining and offering speed and flexibility at scale.
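Few-shot prompting is just prompt assembly: the task is taught through a handful of in-prompt examples. The sentiment task and example texts below are illustrative assumptions.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a prompt that teaches the task through in-prompt examples."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("I love this product", "positive"), ("Terrible service", "negative")],
    "The checkout was quick and easy",
)
```

The prompt ends mid-pattern (`Sentiment:`), so the model completes it with a label, no retraining or fine-tuning involved.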
Fine-Tuning
Fine-tuning trains a general AI model on specific data to align it with a particular tone, vocabulary, or industry, making it more accurate and relevant for targeted use cases.
Foundation models
Foundation Models are large, general-purpose AI models trained on massive datasets, adaptable to many tasks like summarization, question answering, or classification through fine-tuning or prompting.
Frontier models
Frontier Models are the most advanced AI systems available, pushing the boundaries of reasoning, planning, and autonomous action. Typically massive and multimodal, they remain in research or tightly controlled release.
Generative AI
Generative AI refers to systems that create content like text, images, or code by learning patterns from data, generating new, dynamic output in real time rather than selecting from pre-set options.
GPT (Generative pre-trained transformer)
GPT is a family of generative language models pre-trained on massive datasets, capable of understanding and generating human-like text for applications ranging from chatbots to summarization and AI agents.
Graph of Thoughts (GoT)
Graph of Thoughts (GoT) is an advanced reasoning framework that extends structured thinking beyond linear or tree-based approaches. It represents reasoning as a graph, allowing multiple interconnected paths and dependencies between ideas. This enables more flexible and efficient exploration of complex problems where relationships between steps are not strictly hierarchical.
Graph-RAG
Graph-RAG combines retrieval-augmented generation with knowledge graphs, understanding relationships between data points rather than pulling isolated chunks, improving reasoning, context, and relevance in generated answers.
Grounding
Grounding ensures AI agent outputs are based on trusted sources like enterprise documents or real-time data rather than guesswork, giving agents a reliable foundation for factual, relevant, and safe responses.
Guardrails framework
Guardrails Framework sets boundaries around what AI can say or do, ensuring outputs are safe, compliant, and on-brand by blocking certain content, guiding tone, or restricting tool access.
Human in the loop
Human in the Loop keeps a person involved in AI decision-making for oversight, approvals, or intervention, balancing automation with control in workflows where accuracy, judgment, or compliance matter.
Hybrid search
Hybrid Search combines keyword-based and semantic search to retrieve both exact matches and meaning-based results, delivering more relevant, complete answers especially for open-ended or complex queries.
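One common way to combine the two signals is a weighted score fusion over normalized keyword and semantic scores. The documents, scores, and the alpha = 0.5 weighting below are illustrative assumptions; real systems often use reciprocal rank fusion or tuned weights instead.

```python
def hybrid_score(keyword_score, semantic_score, alpha=0.5):
    """Weighted fusion of keyword and semantic relevance (both in [0, 1])."""
    return alpha * keyword_score + (1 - alpha) * semantic_score

# Normalized scores per document for the query "refund policy".
docs = {
    "refund_policy.pdf": {"keyword": 0.9, "semantic": 0.8},
    "returns_faq.md":    {"keyword": 0.1, "semantic": 0.7},  # related meaning, few exact keywords
}
ranked = sorted(
    docs,
    key=lambda d: hybrid_score(docs[d]["keyword"], docs[d]["semantic"]),
    reverse=True,
)
```

The FAQ document still ranks despite weak keyword overlap, because its semantic score carries weight, which is exactly the gap hybrid search closes.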
Hyperparameter tuning
Hyperparameter tuning optimizes settings that control how an AI model learns, like learning rate or model size, improving accuracy, speed, and reliability without changing the model's core architecture.
In-Context Learning (ICL)
In-Context Learning enables AI models to understand and handle new tasks by reading examples within the prompt, without retraining, making it ideal for custom tasks and dynamic use cases.
Indexing
Indexing organizes and stores data so AI can quickly search and retrieve it, ensuring documents, transcripts, or knowledge articles are structured for fast, accurate information retrieval.
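The classic structure behind fast keyword lookup is an inverted index, which maps each term to the documents containing it. This is a bare-bones sketch; real search engines add tokenization, stemming, and positional information.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {1: "refund policy overview", 2: "shipping policy details"}
index = build_inverted_index(docs)
```

A query for "policy" now resolves in one dictionary lookup to documents 1 and 2, instead of scanning every document's text.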
Ingestion
Ingestion is the process of importing external data like documents, PDFs, or knowledge base articles into an AI system, making content searchable, retrievable, and usable in conversations or workflows.
Instruction-Tuning
Instruction-tuning trains AI models to follow human instructions effectively, teaching them to respond in expected ways, whether answering clearly, summarizing concisely, or taking action when asked.