Tool Use (Function Calling)
The ability of an LLM to output structured requests to invoke external functions rather than (or in addition to) generating text. When an LLM "uses a tool," it generates a structured output (typically JSON) specifying which function to call and with what parameters. The calling system executes the function, returns the result, and the LLM incorporates it into its response.
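That loop can be sketched in a few lines. This is a minimal illustration, not any particular vendor's API: the `get_weather` tool and the hard-coded `model_output` string stand in for a real function and a real model response.

```python
import json

# Hypothetical tool the calling system exposes to the model.
def get_weather(city: str) -> dict:
    # Stand-in for a real weather API call.
    return {"city": city, "temp_c": 18, "conditions": "partly cloudy"}

TOOLS = {"get_weather": get_weather}

# What the model emits instead of prose: a structured request naming
# the function and its arguments (hard-coded here for illustration).
model_output = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

# The calling system parses the request, dispatches it, and would then
# send the result back to the model to incorporate into its reply.
call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(json.dumps(result))
```

The key point is that the model never executes anything itself; it only describes the call, and the surrounding system does the work.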
It's the difference between asking someone "what's the weather?" and having them Google it versus make something up. Tool use is the "Google it" option.
Why it matters for writers: Tool use is the mechanism that makes agents work. It's also a documentation surface: every tool needs a description (what it does), parameter definitions (what inputs it expects), and return value documentation (what output it produces). Both humans and AI models read these descriptions, creating the unusual writing challenge of serving, in the same paragraph, two audiences that comprehend text in fundamentally different ways.
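Here is what that documentation surface can look like, sketched in the JSON-Schema style that most function-calling APIs use; the tool name and every field value are hypothetical, but each string is exactly the kind of prose a writer would own:

```python
# Illustrative tool definition; names and descriptions are hypothetical.
get_weather_tool = {
    "name": "get_weather",
    # Read by the model to decide *when* to call the tool,
    # and by maintainers to understand what it does.
    "description": (
        "Get the current weather for a city. Returns the temperature "
        "in Celsius and a short conditions string."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Oslo'.",
            },
        },
        "required": ["city"],
    },
}
```

Every string in that structure is read verbatim by both audiences at once: a vague description means the model calls the tool at the wrong times, and a precise one doubles as reference documentation for humans.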
Related terms: AI Agent · Model Context Protocol · Schema