
6 posts tagged with "Opinion"

Perspectives, hot takes, and deliberately subjective posts about technical writing, AI, and the software industry.


Your RAG Pipeline Has a Check Engine Light. You're Ignoring It.

[Cover image: Dashboard showing a GO/NO-GO decision framework with seven evaluation criteria for RAG pipeline quality assessment]
~10 min read
Ryan Goodrich
Technical Writer, AI Enthusiast, and Developer Advocate

I ran a retrieval experiment that returned perfect zeros across all 36 queries, and every automated check I'd built said "statistically significant." The decision engine considered seven criteria, passed two of them, and issued a NO-GO. The pipeline caught the problem. Not me; the pipeline.

Here's what scares me: most production RAG systems don't have a pipeline like that. They don't have decision criteria. They don't have rollback thresholds. They don't have a concept of "this retrieval result is wrong and we should know about it automatically." They ship a model, run some spot checks, and move on to the next sprint.

Your RAG pipeline has a check engine light. You just never installed it.
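The excerpt above describes a seven-criterion GO/NO-GO gate without showing one. As a minimal sketch only (the criterion names, values, and thresholds below are invented for illustration, not the post's actual metrics), such a decision engine can be a few lines of Python:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    # Hypothetical criterion names and thresholds, for illustration only.
    name: str
    value: float
    threshold: float

    def passes(self) -> bool:
        return self.value >= self.threshold

def decide(criteria: list[Criterion], required: int) -> str:
    """Issue GO only if at least `required` criteria pass."""
    passed = sum(c.passes() for c in criteria)
    return "GO" if passed >= required else "NO-GO"

# A run like the one described: seven criteria, only two pass.
criteria = [
    Criterion("recall_at_5", 0.00, 0.60),
    Criterion("mrr", 0.00, 0.30),
    Criterion("latency_ok", 1.0, 1.0),
    Criterion("no_regression", 0.0, 1.0),
    Criterion("coverage", 0.00, 0.80),
    Criterion("significance", 1.0, 1.0),
    Criterion("cost_ok", 0.0, 1.0),
]
print(decide(criteria, required=6))  # NO-GO
```

The point is not the specific thresholds; it's that the gate exists at all, runs automatically, and refuses to ship when the numbers say no.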

Five Projects, One Realization: The Document Is the Database

[Cover image: Five project icons forming a document-centric pipeline: publish, validate, embed, compress, manage — connected by structural metadata flows]
~8 min read
Ryan Goodrich
Technical Writer, AI Enthusiast, and Developer Advocate

I didn't plan a portfolio. I planned a Markdown file. Then another one. Then five projects materialized around them like ice crystals on a cold window, each shaped by the same principle I didn't recognize until project number four. Apparently I need to build the same insight multiple times before I notice I keep building it.

The insight: documents are not content delivery vehicles. They are structured knowledge systems. Almost every AI tool in production today throws away the structure and keeps only the content. That's like buying a filing cabinet, dumping all the folders on the floor, and asking someone to find last quarter's tax return by feeling the texture of the paper.

I know this because I've now built five projects that all, in their own way, try to fix that mistake.
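One hedged sketch of what "keeping the structure" can mean in practice: chunk Markdown by heading path instead of flattening it to plain text, so each chunk remembers which drawer of the filing cabinet it came from. The function and sample document below are invented for illustration, not taken from the projects described:

```python
import re

def chunk_with_headings(markdown: str) -> list[dict]:
    """Split Markdown at headings, attaching the full heading path
    to each body chunk instead of discarding document structure."""
    chunks, path = [], []
    for block in markdown.split("\n\n"):
        m = re.match(r"^(#+)\s+(.*)", block)
        if m:
            # A heading resets the path at its level.
            level = len(m.group(1))
            path = path[: level - 1] + [m.group(2)]
        elif block.strip():
            chunks.append({"path": " > ".join(path), "text": block.strip()})
    return chunks

doc = "# Taxes\n\n## Q3\n\nThe return is filed."
print(chunk_with_headings(doc))
# [{'path': 'Taxes > Q3', 'text': 'The return is filed.'}]
```

A retriever working over these chunks can filter or rank by heading path, which is exactly the signal a flattened text dump throws on the floor.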

Context Windows Are a Lie (And Haiku Protocol Is My Coping Mechanism)

[Cover image: Terminal showing a 128K context window shrinking to an effective 8K zone, with lost-in-the-middle degradation visualized as fading text]
~10 min read
Ryan Goodrich
Technical Writer, AI Enthusiast, and Developer Advocate

LLM vendors would like you to know that their latest model supports a 128,000-token context window. Some of them say 200,000. One of them, and I won't name names but their logo is a little sunset, says a million. A million tokens. That's approximately four copies of War and Peace, which is appropriate because trying to get useful work done at the far end of a million-token window is its own kind of Russian tragedy.

Here's what the marketing materials don't mention: the effective context window, the portion where the model actually pays reliable attention to what you put there, is dramatically smaller. Research from Stanford, Berkeley, and others has converged on a finding that would be funny if it weren't costing people real money: models struggle with information placed in the middle of long contexts. They're great at the beginning. They're decent at the end. The middle? The middle is where facts go to die quietly, unnoticed, like a footnote in a terms of service agreement.

This is the "Lost in the Middle" problem, and if you're building anything that retrieves information and feeds it to a language model (which, in 2026, is approximately everyone), it means the number on the tin is a fantasy. Your 128K window is functionally an 8K window with 120K tokens of expensive padding.

I know this because I ran the experiment. Accidentally. Three times.
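A common mitigation for Lost in the Middle, sketched here under one assumption (your retriever returns chunks sorted best-first): reorder the chunks so the strongest evidence sits at the edges of the prompt, where attention is most reliable, and the weakest lands in the middle. This is a generic illustration, not the post's actual code:

```python
def edge_order(chunks_by_score: list[str]) -> list[str]:
    """Reorder best-first chunks so top results alternate onto the
    front and back of the prompt, pushing weak results to the middle."""
    front, back = [], []
    for i, chunk in enumerate(chunks_by_score):
        (front if i % 2 == 0 else back).append(chunk)
    return front + back[::-1]

# "A" is the best-scoring chunk, "E" the worst.
print(edge_order(["A", "B", "C", "D", "E"]))
# ['A', 'C', 'E', 'D', 'B']
```

With five chunks, the top two ("A", "B") end up first and last, and the weakest ("E") lands dead center — the one slot where being ignored costs you the least.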

The Three Voices of Technical Research: Why My Blog Sounds Nothing Like My Paper

[Cover image: Three terminal panes side by side showing the same WAF-blocking finding in three voices: the blog (opinionated, orange tab), the guide (neutral, green tab), and the paper (impartial, blue tab). Tagline: same research, three rooms.]
~10 min read
Ryan Goodrich
Technical Writer, AI Enthusiast, and Developer Advocate

Someone recently asked me a question that I've been thinking about ever since: "Doesn't writing your blog posts with humor and sarcasm undermine your credibility as a researcher?"

It's a fair question. The blog posts on this site are... aggressively me. I compare WAF blocking to "hiring a security guard who prevents anyone matching the physical description of 'reads books' from entering the bookstore." I describe AI crawlers as looking like "a DDoS attack with a liberal arts degree." I write sentences like "I am a documentation-first developer with a research compulsion and a growing collection of Markdown files about Markdown files," and then I publish those sentences on the internet where potential collaborators can see them.

Meanwhile, the analytical paper I'm writing about the same research uses phrases like "the structural misalignment between content publication intent and infrastructure-level access enforcement." Which is the same observation as the bookstore metaphor, expressed in the register of someone who wants to be taken seriously at a conference.

Same research. Same data. Same conclusions. Radically different voices. And I'd argue that if I used only one of those voices everywhere, the whole project would be worse.

I Fact-Checked My Own Research Paper Before Writing It (You Should Too)

[Cover image: Terminal running an evidence inventory audit of 49 claims: 33 verified, 13 author analysis, 1 partial, and 2 incorrect — including the 844,000 adoption stat that collapsed to 784 directory entries and 105 in the top million.]
~11 min read
Ryan Goodrich
Technical Writer, AI Enthusiast, and Developer Advocate

Here's a workflow tip that's either going to save your credibility or confirm that I have an unhealthy relationship with spreadsheets: before you write anything that makes factual claims, build an evidence inventory first.

Not a bibliography. Not a "sources" section at the bottom of a Google Doc. An actual structured inventory where every single factual claim in your paper, blog post, report, or conference talk is cataloged, mapped to a primary source, independently verified, and assigned a status. Verified. Partially verified. Unverified. Or the one that makes your stomach drop: incorrect.

I know this sounds like the kind of advice that belongs on a poster in a university writing center, sandwiched between "cite your sources" and "plagiarism is bad." But I'm not talking about academic hygiene. I'm talking about self-defense.
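As a sketch of what such an inventory could look like in code rather than a spreadsheet (the status labels mirror the post; the claims and sources below are placeholders, not the paper's real entries):

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    VERIFIED = "verified"
    PARTIAL = "partially verified"
    UNVERIFIED = "unverified"
    INCORRECT = "incorrect"  # the stomach-drop status

@dataclass
class Claim:
    text: str
    source: str      # the primary source the claim maps to
    status: Status

def audit(claims: list[Claim]) -> Counter:
    """Tally the inventory by verification status."""
    return Counter(c.status for c in claims)

# Placeholder entries for illustration.
inventory = [
    Claim("Stat X holds at scale", "primary-report.pdf", Status.VERIFIED),
    Claim("Adoption is 844,000", "directory dump", Status.INCORRECT),
]
print(audit(inventory)[Status.INCORRECT])  # 1
```

The structure matters more than the tooling: every claim gets a row, every row gets a source and a status, and anything still UNVERIFIED or INCORRECT blocks publication until it's resolved.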

I Write the Docs Before the Code, and Yes, I Know That's Weird

[Cover image: Terminal session descending diagonally like a staircase into a rabbit hole. Level 1: cat README.md shows a project overview. Level 2: curl site.com/llms.txt fetches curated content. Level 3: HTTP 403 Forbidden from Cloudflare. Level 4: grep -r 'why' finds waf-paradox.md and trust-gap.md. Level 5: depth unknown, docs all the way down.]
~10 min read
Ryan Goodrich
Technical Writer, AI Enthusiast, and Developer Advocate

I have a confession to make. When I start a new project, any project, doesn't matter what it is, the first thing I do is open a Markdown file and start writing documentation for something that doesn't exist yet.

Not code. Not a prototype. Not even a to-do list. Documentation.

I realize this makes me sound like the kind of person who reads the terms of service before clicking "I Agree." I promise I'm not. (I absolutely am.)