
AI Fundamentals

These are the foundational terms you'll encounter when working with or around large language models and generative AI. If the rest of the glossary sounds like another language, start here.

| Term | What it is |
| --- | --- |
| Large Language Model (LLM) | AI model trained on massive text datasets to predict and generate human-like text |
| Transformer | The neural network architecture powering virtually all modern LLMs |
| Token | The fundamental unit of text an LLM processes (not the same as a word) |
| Context Window | The maximum number of tokens an LLM can process in a single interaction |
| Inference | Running a trained model to generate output (as opposed to training it) |
| Embedding | A numerical representation of text as a vector that captures semantic meaning |
| Hallucination | When an LLM generates plausible-sounding but factually incorrect content |
| Fine-Tuning | Further training a pre-trained LLM on specialized data for task-specific improvement |
| Prompt Engineering | Crafting inputs to an LLM to produce better, more reliable outputs |
| System Prompt | Instructions, usually hidden from the user, that define an LLM's behavior for an entire conversation |
| Controlled Natural Language | A restricted subset of a natural language designed to reduce ambiguity |
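To make the embedding entry concrete: an embedding maps text to a vector, and "captures semantic meaning" means that similar texts end up pointing in similar directions, typically measured by cosine similarity. Here is a minimal sketch using tiny made-up 4-dimensional vectors (real embedding models produce hundreds or thousands of dimensions; the words and values below are illustrative, not from any actual model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (hypothetical values, for illustration only).
emb = {
    "cat": [0.9, 0.8, 0.1, 0.0],
    "dog": [0.8, 0.9, 0.2, 0.0],
    "car": [0.1, 0.0, 0.9, 0.8],
}

# Related words score higher than unrelated ones.
print(cosine_similarity(emb["cat"], emb["dog"]))  # high (near 1.0)
print(cosine_similarity(emb["cat"], emb["car"]))  # much lower
```

This directional comparison is what powers semantic search and retrieval: a query is embedded, then compared against a store of document embeddings to find the closest matches.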