Large Language Model (LLM)
A type of AI model trained on massive text datasets to predict and generate human-like text. "Large" refers to the number of parameters (adjustable weights) in the model; modern LLMs have billions to trillions of them. Examples include Claude (Anthropic), GPT-4 (OpenAI), and Gemini (Google).
The key thing to internalize: LLMs work by predicting probable next tokens. They do not "understand" in any human sense. They're autocomplete engines with a genuinely impressive amount of training data. This doesn't make them useless; it makes them predictable, if you know what drives their predictions.
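The prediction mechanic can be sketched in a few lines. This is a toy illustration, not a real model: the candidate tokens and raw scores (logits) are invented, and a real LLM computes them with billions of parameters. The shape of the final step, though, is exactly this: turn scores into probabilities, then pick a likely next token.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution summing to 1
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The cat sat on the"
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.5}
probs = softmax(logits)

# Greedy decoding: always take the single most probable token
next_token = max(probs, key=probs.get)
```

Note that "most probable" is not "true": the model would pick "mat" here because it was the likeliest continuation in its training data, not because it checked where any cat is sitting. That gap is the whole story of LLM reliability.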
Why it matters for writers: LLMs are the engines behind every AI writing assistant, chatbot, and content generation tool. Understanding the prediction mechanic changes how you interact with them and, more importantly, how much you trust their output. They're very confident. They're not always right. These are different qualities.
Related terms: Token · Context Window · Inference · Transformer