Prompt Engineering

The practice of crafting inputs (prompts) to an LLM in ways that produce better, more reliable outputs. This ranges from simple techniques (being specific, providing examples) to structured approaches like chain-of-thought prompting (asking the model to show its reasoning step by step) and few-shot prompting (including examples of desired input-output pairs in the prompt).
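A few-shot prompt is often nothing more than a text template that repeats input-output pairs and leaves the last output blank for the model to complete. A minimal sketch (the normalization task and example pairs here are hypothetical, chosen only to show the shape):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs.

    Each pair demonstrates the desired format; the model is expected
    to continue the pattern for the final, unanswered input.
    """
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # left open for the model to complete
    return "\n".join(lines)

# Hypothetical task: normalizing messy product names to slugs
examples = [
    ("  ACME Widget v2 ", "acme-widget-v2"),
    ("Foo Bar PRO", "foo-bar-pro"),
]
prompt = build_few_shot_prompt(examples, "Baz Gadget XL")
```

The pairs do double duty: they specify the output format implicitly, which is often more reliable than describing it in words.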

It is, essentially, the art of telling a very literal genie exactly what you want. Vague wishes get creative interpretations. Precise wishes get useful results.

Why it matters for writers: Prompt engineering is the primary way most people interact with LLMs, whether they realize it or not. For technical writers specifically, understanding how to structure prompts (system prompts, context, constraints) is the difference between getting generic output and getting output that actually matches your document's voice, terminology, and audience level.
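In chat-style APIs, that separation of system prompt, constraints, and task is usually expressed as a list of role-tagged messages. A sketch assuming the common `system`/`user` message format (the voice, terminology, and task strings are invented for illustration):

```python
def build_messages(voice, terminology, audience, task):
    """Build a chat-style prompt that pins voice, terminology, and audience.

    The system message carries the standing constraints; the user
    message carries only the actual task.
    """
    system = (
        f"You are a technical writer. Write in this voice: {voice}. "
        f"Prefer this terminology: {', '.join(terminology)}. "
        f"Target audience: {audience}."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    voice="concise, active voice",
    terminology=["context window", "system prompt"],
    audience="developers new to LLMs",
    task="Explain what a context window is in two sentences.",
)
```

Keeping the constraints in the system message means they persist across every turn of a conversation, rather than having to be restated with each request.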

Related terms: System Prompt · Large Language Model · Context Window