## AI Fundamentals
These are the foundational terms you'll encounter when working with or around large language models and generative AI. If the rest of the glossary sounds like another language, start here.
| Term | What it is |
|---|---|
| Large Language Model (LLM) | AI model trained on massive text datasets to predict and generate human-like text |
| Transformer | The neural network architecture powering virtually all modern LLMs |
| Token | The basic unit of text an LLM processes, typically a word fragment rather than a whole word |
| Context Window | The maximum number of tokens an LLM can handle at once, covering both input and output |
| Inference | Running a trained model to generate output (as opposed to training it) |
| Embedding | A numerical representation of text as a vector that captures semantic meaning |
| Hallucination | When an LLM generates plausible-sounding but factually incorrect content |
| Fine-Tuning | Further training a pre-trained LLM on specialized data for task-specific improvement |
| Prompt Engineering | Crafting inputs to an LLM to produce better, more reliable outputs |
| System Prompt | Instructions, typically invisible to the end user, that set an LLM's behavior for an entire conversation |
| Controlled Natural Language | A restricted subset of a natural language designed to reduce ambiguity |
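To make "embedding" concrete, here is a minimal sketch of the idea that vectors capture semantic meaning. The three-dimensional vectors below are invented for illustration (real models use hundreds or thousands of dimensions), but the comparison method, cosine similarity, is the standard one:

```python
import math

# Toy 3-dimensional "embeddings". These vectors are made up for
# illustration; a real model learns them from data.
EMBEDDINGS = {
    "cat":    [0.90, 0.80, 0.10],
    "kitten": [0.85, 0.82, 0.15],
    "car":    [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically similar words get similar vectors, so their cosine
# similarity is higher than that of unrelated words.
sim_related = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["kitten"])
sim_unrelated = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["car"])
assert sim_related > sim_unrelated
```

This nearest-by-cosine comparison is the same operation that powers semantic search and retrieval over real embeddings.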
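The context window limit also shows up in code: applications commonly drop the oldest messages so a conversation still fits. The sketch below is hypothetical and counts whitespace-separated words as "tokens" for simplicity; real LLMs count subword tokens with a model-specific tokenizer:

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer: one "token" per word.
    return len(text.split())

def trim_to_context_window(messages, max_tokens):
    """Keep the most recent messages whose total token count fits."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk newest-to-oldest
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break  # the next-oldest message would overflow the window
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order

history = ["hello there", "how are you today", "tell me about transformers"]
print(trim_to_context_window(history, 8))
# → ['how are you today', 'tell me about transformers']
```

Dropping whole messages from the front is the simplest strategy; production systems often summarize old turns instead so that early context is compressed rather than lost.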