AI Glossary
Common AI terms explained
Evals are tests that measure how well an AI model performs on specific tasks, helping teams assess accuracy, tone, safety, and overall quality.
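A minimal eval harness can make this concrete. The sketch below scores a stand-in model function against labeled test cases; `fake_model`, the prompts, and the scoring are all illustrative, not a real eval framework.

```python
# Minimal eval harness sketch. `fake_model` is a hypothetical stand-in
# for a real model API call.

def fake_model(prompt: str) -> str:
    # A real eval would call an actual model here.
    answers = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}
    return answers.get(prompt, "I don't know")

def run_eval(model, cases):
    """Return the fraction of cases where the model's answer matches."""
    passed = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return passed / len(cases)

cases = [
    ("2 + 2 = ?", "4"),
    ("Capital of France?", "Paris"),
    ("Capital of Japan?", "Tokyo"),  # fake_model will miss this one
]
print(f"accuracy: {run_eval(fake_model, cases):.2f}")  # accuracy: 0.67
```

Real evals replace exact-match scoring with graders for tone, safety, or factuality, but the loop is the same: run cases, compare outputs, report a score.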
Fine-tuning is a post-training method where a pre-trained model is further trained on a smaller, specific dataset to specialize it for a particular task or domain.
A hallucination is when an AI model produces an answer that sounds correct but is actually false or made up. It’s a result of the model predicting patterns, not checking facts.
Inference is when an AI model takes in new input and generates a response based on what it has already learned. It’s the moment the model is actually being used, like when ChatGPT answers your question.
A large language model (LLM) is an AI system trained on vast amounts of text and designed to understand and generate human-like language.
An AI model is a computer program loosely inspired by how the human brain processes information. You give it some input (e.g., a prompt), it does some processing, and it generates a response.
Post-training refers to everything that happens after a model has been initially trained, to make it more helpful, accurate, or aligned with human expectations.
Prompt engineering is the practice of crafting prompts (the questions, instructions, or other input you give a model) in a way that helps AI models give better, more useful responses.
RAG stands for Retrieval-Augmented Generation. It’s a technique that lets AI models pull in outside information at the time you ask a question, kind of like letting the model take an open-book test instead of relying purely on memory.
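The "open-book" idea can be sketched in a few lines: retrieve the most relevant documents, then prepend them to the question. The document set and the word-overlap retriever below are toy assumptions; real RAG systems use embedding-based search.

```python
# Toy RAG sketch: retrieve relevant documents, then build an augmented prompt.

docs = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
    "The Great Wall of China is thousands of kilometers long.",
]

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a toy retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Combine retrieved context with the question into one prompt."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

print(build_prompt("When was the Eiffel Tower completed?", docs))
```

The model then answers from the supplied context instead of relying purely on what it memorized during training.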
RLHF stands for Reinforcement Learning from Human Feedback. It’s a post-training technique used to teach AI models how to respond in ways that better align with human values, preferences, and expectations.
Supervised learning is when an AI model is trained on labeled data: examples that already come with the correct answers.
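A one-nearest-neighbor classifier is about the simplest supervised learner there is: it memorizes labeled examples and labels new inputs by proximity. The data below (a single feature with made-up labels) is purely illustrative.

```python
# Supervised learning sketch: a 1-nearest-neighbor classifier learns from
# labeled (feature, label) pairs and predicts labels for new inputs.

training_data = [(1.0, "short"), (2.0, "short"), (8.0, "long"), (9.0, "long")]

def predict(x: float) -> str:
    """Label a new input with the label of its closest training example."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(1.5))  # → short
print(predict(7.0))  # → long
```

The "supervision" is the labels: the model never has to discover what the categories are, only how to map new inputs onto them.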
A token is a small chunk of text that an AI model processes, usually a word, part of a word, or even punctuation.
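A toy tokenizer shows why tokens don't line up with words. Real LLM tokenizers use learned subword schemes like byte-pair encoding; the regex below is a simplified stand-in that just separates words from punctuation.

```python
import re

# Toy tokenizer sketch: split text into word chunks and punctuation marks.
# Real tokenizers (e.g. BPE) learn subword units from data instead.

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Tokens aren't always whole words!"))
# ['Tokens', 'aren', "'", 't', 'always', 'whole', 'words', '!']
```

Note how a single word like "aren't" becomes three tokens; models see and predict text one such chunk at a time.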
Training is the process where an AI model learns by analyzing massive amounts of data to recognize patterns, understand language, and make predictions.
A transformer is a type of AI model architecture that made today’s powerful language models (like the ones behind ChatGPT and Claude) possible. It was introduced by Google researchers in 2017.
Unsupervised learning is when an AI model is trained on data without any labels or predefined answers. It learns by finding patterns and structure on its own.
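A tiny k-means run illustrates "finding structure on its own": given unlabeled numbers, it groups them into clusters without being told any correct answers. The one-dimensional data and starting centers below are illustrative assumptions.

```python
# Unsupervised learning sketch: 1-D k-means groups unlabeled numbers into
# clusters by alternating two steps: assign each point to its nearest
# center, then move each center to the mean of its assigned points.

def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) for ps in clusters.values() if ps]
    return sorted(centers)

data = [1.0, 1.5, 2.0, 10.0, 11.0, 12.0]
print(kmeans_1d(data, centers=[0.0, 5.0]))  # → [1.5, 11.0]
```

No labels were provided; the two groups (values near 1.5 and values near 11) emerge from the data itself.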