AI Glossary

Curated boardroom definitions for AI terms that sit between the vendor narrative and technical reality. Each entry delivers a precise definition, the gap between marketing and substance, and the question that should actually be asked in the boardroom.

Agentic AI

AI systems that autonomously plan and execute multi-step tasks across multiple tool calls.

AI Act (EU AI Act)

EU Regulation 2024/1689 governing AI systems, with staggered application through 2027 and separate obligations for general-purpose AI models.

AI Red Teaming

Structured, adversarial testing of an AI system against security, bias, hallucination and misuse patterns.

Context Window

Maximum number of tokens a language model can process simultaneously per request.

Differential Privacy

A mathematical privacy standard that bounds the influence of any single record on a computation by a quantified parameter (epsilon).

Distillation

A technique for training a smaller model to approximate the behaviour of a larger one — at significantly lower inference cost.

Edge AI

Running AI models directly on end devices instead of in central data centres or cloud APIs.

Embedding

A numeric vector representation of a text, image or data object in a high-dimensional space, where semantically similar objects sit close together.

Evaluation (Eval)

Systematic, reproducible measurement of an AI system's quality against defined criteria.

Federated Learning

A training approach in which a shared model is trained across multiple decentralised data sources without the training data leaving those sources.

Fine-Tuning

A technique for retraining the weights of a pre-trained foundation model on a domain-specific dataset.

Foundation Model

A large, generally pre-trained AI model that serves as the basis for a variety of downstream applications.

Guardrails

Mechanisms before, during or after model inference that filter, restrict or escalate undesired inputs or outputs.

Hallucination

Output of a language model that is plausibly worded but factually wrong or not supported by the sources.

Harness

A structured software layer around an AI model that orchestrates tool calls, eval routines, guardrails and output processing.

Inference Cost / TCO

Ongoing cost of model use per request; TCO extends this to development, eval, hosting, monitoring and compliance over the lifecycle.

Mixture of Experts (MoE)

An architectural pattern in which a model consists of several specialised subnetworks, only a small selection of which is activated per token.

Model Card

A structured document describing the training data, intended use, limitations, bias patterns and licence of an AI model.

Model Governance

Processes, roles and documentation that steer the lifecycle of an AI model in a company.

Multimodal

An AI model's capability to process multiple input and output modalities — typically text, image, audio and video.

On-Premises AI

Operating AI models and infrastructure in your own or a dedicated environment instead of through the model providers' API services.

Open Weights vs. Open Source

Open weights denotes publication of the model parameters; open source additionally requires training code, data specification and an open licence.

Prompt Injection

An attack technique in which inputs are crafted so that the model ignores or overrides its original system instructions.

RAG (Retrieval-Augmented Generation)

An architectural pattern in which a language model retrieves relevant documents from a knowledge base and integrates them as context in the prompt.

Reasoning Model

A class of language models that produce a longer internal chain of thought before answering and outperform classical LLMs on multi-step tasks.

Sovereign AI

AI infrastructure under national or European control — across data, operations, models and training data.

Synthetic Data

Artificially generated data that reproduces statistical or structural properties of real data.

Vector Database

A database optimised for storage and search of high-dimensional vectors (embeddings).