Fine-Tuning

The process of taking a pre-trained LLM and training it further on a smaller, specialized dataset to improve its performance on specific tasks. Fine-tuning adjusts the model's existing weights rather than training from scratch, which makes it faster and cheaper than full training, but "cheaper than training GPT-4" is still a bar most budgets can't limbo under.
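The core idea — continuing gradient descent from already-trained weights instead of a cold start — can be shown with a toy model. This is a sketch, not an LLM: the "model" is a two-parameter linear fit, and every number (the pretrained weights, the domain dataset, the step counts) is made up for illustration.

```python
# Toy sketch of fine-tuning: resume gradient descent from
# "pretrained" weights on a small specialized dataset.
# Model: y = w * x + b. All values are illustrative.

def loss(w, b, data):
    """Mean squared error over (x, y) pairs."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def train(w, b, data, steps, lr=0.01):
    """Plain gradient descent, starting from the given weights."""
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Pretraining" already learned the general pattern y ≈ 2x.
pretrained_w, pretrained_b = 2.0, 0.0

# The specialized domain actually follows y = 2x + 1.
domain_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Pretrained weights start close to the target (loss 1.0 here),
# whereas a from-scratch start (0, 0) begins far away (loss ≈ 27.7).
print("start (pretrained):", loss(pretrained_w, pretrained_b, domain_data))
print("start (scratch):   ", loss(0.0, 0.0, domain_data))

# Fine-tune: a modest number of steps closes the remaining gap.
ft_w, ft_b = train(pretrained_w, pretrained_b, domain_data, steps=1000)
print("after fine-tuning: ", loss(ft_w, ft_b, domain_data))
```

The pretrained starting point is already near the specialized target, so the same optimization loop has far less work to do — that is the whole economic argument for fine-tuning over full training.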

Why it matters for writers: Fine-tuning is one approach to making an LLM respect your style guide, terminology, or domain conventions. However, it's expensive, requires significant training data, and the fine-tuned model can "drift" from the base model's general capabilities. For most technical writing use cases, techniques like RAG or carefully structured prompts are more practical than fine-tuning. When an AI writing tool claims to "learn your style," ask whether it actually fine-tunes a model or just uses a really long system prompt. The answer is almost always the system prompt.
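For contrast, here is what the "really long system prompt" approach looks like in practice — no weights change at all; the style guide just rides along with every request. The message format mimics a common chat-API payload, and all names and rules here are assumptions for illustration, not any particular tool's internals.

```python
# Hypothetical sketch: enforcing a style guide via a system prompt
# instead of fine-tuning. Nothing is trained and no model is called;
# this only builds the request payload sent on every turn.

STYLE_GUIDE = """\
- Use sentence case for headings.
- Say "select", not "click on".
- Spell it "email", never "e-mail".
"""

def build_messages(user_request: str) -> list[dict]:
    """Front-load the style guide so every completion sees it."""
    return [
        {"role": "system",
         "content": "You are a technical writer. Follow this style guide:\n"
                    + STYLE_GUIDE},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Draft release notes for version 2.1.")
```

The trade-off is visible in the structure: the guide consumes context-window tokens on every call, but it can be edited instantly — no training data, no GPUs, no drift.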

Related terms: Large Language Model · Inference · System Prompt