Google Said No to llms.txt. Five Google Teams Didn't Get the Memo.
The timeline is where the joke lives.
April 2025. Google's John Mueller compares llms.txt to the keywords meta tag. For the uninitiated, the keywords meta tag is so discredited that invoking it in SEO circles is equivalent to recommending bloodletting at a medical conference. Mueller's message was clear: llms.txt is unnecessary, self-reported data that Google has no intention of using.
July 2025. Gary Illyes, also from Google's Search team, confirms the position at Search Central Live. No support. Won't be used. Normal SEO works fine for AI Overviews. The standard is, officially, not something Google is interested in.
December 3, 2025. An SEO professional named Lidia Infante discovers an llms.txt file on Google's own Search Central documentation. Mueller's response, posted to Bluesky: "hmmn :-/". The file was removed within hours.
So far, a clean narrative. Google said no, someone at Google accidentally deployed one, it was caught and deleted, and the official position holds. Embarrassing, but coherent.
Then I started pulling at threads.
Five Files Walk Into a Search Engine
I'm in the middle of building an evidence inventory for an analytical paper about llms.txt (the Access Paradox research that's been taking over my evenings since mid-February). I work with Claude on this project (introduced properly two posts ago), and earlier this week Claude flagged something I'd been putting off: making local copies of every source in the evidence inventory. PDFs, screenshots, the works. "Sources disappear," or words to that effect. I'd been meaning to do it anyway. So I started going through the inventory, saving things.
One of the sources was an article from Omnius titled "Google Adds LLMs.txt to Docs After Publicly Dismissing It." The article was gone. The domain acknowledged it had existed, the Wayback Machine had a record of the URL, but nobody had indexed the actual content (or the offending llms.txt file) before it vanished. An article about Google quietly adopting llms.txt had itself quietly disappeared.
So naturally I went looking for the primary evidence myself.
I checked Google.
Not Google Search. Not the property where the December incident happened. Google's developer documentation properties. The teams that write guides for Firebase, Chrome extensions, the AI SDK, Flutter, web performance. The teams whose audience is developers building things, not marketers optimizing rankings.
Here's what I found:
- ai.google.dev -- llms.txt at /api/llms.txt
- developer.chrome.com -- llms.txt at /docs/llms.txt (including Flutter documentation)
- firebase.google.com -- llms.txt at /docs/llms.txt
- google.github.io/adk-docs -- llms.txt at /llms.txt
- web.dev -- llms.txt at /articles/llms.txt (the only one not under /docs/)
Five properties. All live. All serving llms.txt files while two of Google's most visible search representatives were telling the world the standard was comparable to a dead meta tag.
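If you want to verify these yourself, the check is trivial to script. A minimal sketch with Python's standard library: the five host/path pairs are the ones listed above; the function names and the HEAD-request approach are mine, not anyone's official tooling.

```python
from urllib.parse import urljoin
from urllib.request import Request, urlopen

# The five Google properties and the llms.txt path observed on each.
PROPERTIES = [
    ("https://ai.google.dev", "/api/llms.txt"),
    ("https://developer.chrome.com", "/docs/llms.txt"),
    ("https://firebase.google.com", "/docs/llms.txt"),
    ("https://google.github.io", "/adk-docs/llms.txt"),
    ("https://web.dev", "/articles/llms.txt"),
]

def llms_txt_url(base: str, path: str) -> str:
    """Build the full URL for a property's llms.txt file."""
    return urljoin(base, path)

def status_of(url: str, timeout: int = 10):
    """HEAD-request a URL; return the HTTP status, or None if unreachable."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status
    except OSError:
        return None
```

Loop over PROPERTIES, call status_of on each composed URL, and you have a five-line liveness check you can rerun whenever you suspect a file has quietly vanished.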
I saved PDF copies of every single one, because I have been burned by disappearing evidence before and because I am the kind of person who maintains a local archive manifest in his research repository. (This is not a personality trait I recommend developing. It is, however, one I cannot stop.)
The Content Isn't Revolutionary. The Adoption Is.
Let me manage expectations. These files aren't going to win any documentation awards. They're basic sitemap-style link lists. No rich summaries, no curated context hierarchies, no sophisticated inference-time optimizations. If DocStratum were grading them, they'd pass L0 (it parses) and maybe L1 (it has structure), but they wouldn't be winning medals at L3 or L4.
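For contrast, Jeremy Howard's spec describes a richer shape: an H1 title, a blockquote summary, then H2 sections of annotated links, with an "Optional" section for links a model can skip under tight context budgets. A hypothetical example (every URL and description here is invented for illustration):

```markdown
# Example Docs

> Example Docs covers the Example SDK. The links below are the pages most
> useful to a language model, each with a one-line description.

## Getting started

- [Quickstart](https://example.com/docs/quickstart.md): Install the SDK and run a first query
- [Authentication](https://example.com/docs/auth.md): API keys and token rotation

## Optional

- [Changelog](https://example.com/docs/changelog.md): Release history
```

Google's five files do the link-list part and skip nearly everything else.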
That's not the story.
The story is that these files exist at all. Five separate Google developer documentation teams independently decided that llms.txt was worth implementing, after two of Google's most senior search voices publicly said it wasn't. Nobody made a press release. Nobody updated the corporate talking points. They just shipped it.
If you've ever worked in a large organization, you know exactly what happened here. Policy flows downhill. Implementation flows sideways. The people writing Firebase documentation are solving a different problem than the people briefing journalists about Search ranking signals, and the two groups do not consult each other before deploying a text file.
AGENTS.md
Google's Agent Development Kit (ADK) Python repository, google/adk-python, includes a file called AGENTS.md. It's essentially a context document for AI agents: Claude Code, Gemini CLI, GitHub Copilot, Cursor, and any other AI coding assistant that might need to understand the project.
At the bottom of AGENTS.md, in the Additional Resources section, there's this line:
LLM Context: llms.txt (summarized), llms-full.txt (comprehensive)
A Google repository is explicitly instructing AI agents to use llms.txt as their entry point for understanding the codebase. Not accidentally hosting one. Not passively allowing one to exist. Directing AI tools to consume it. At inference time. For context loading. Which is exactly what the llms.txt specification was designed for.
This is the thing Jeremy Howard proposed in September 2024, that Mueller compared to a dead meta tag seven months later, that Illyes said Google wouldn't use three months after that, that Mueller said "hmmn :-/" about when caught with one in December, and that Google's ADK team is now explicitly telling AI agents to read.
I don't know what the organizational chart looks like between Google's Search team and Google's ADK team. But I know what the git history looks like, and the git history says these files shipped.
The GitHub Issue That Says the Quiet Part Loud
There's also Issue #726 on google/adk-docs, titled "Update llms.txt to align with the llms.txt standard and act as a sitemap for models." The title alone frames llms.txt as a standard worth aligning with. Not a discredited meta tag. Not an unnecessary format. A standard. Worth conforming to.
This is grassroots adoption pressure from Google's own developer community, showing up in Google's own issue trackers, referencing the same specification Google's search executives publicly dismissed.
What This Actually Means
Google hasn't reversed its position on llms.txt. No announcement, no blog post, no update to Search Central. Mueller and Illyes haven't recanted. (I've written before about what happens when people get sloppy with adoption narratives, and I'm not about to do it myself.)
But the official line no longer describes what's actually happening inside Google's developer ecosystem. Five documentation teams have llms.txt files. The ADK team has an explicit inference-time directive pointing AI agents at theirs. A community issue is asking for better spec alignment. These aren't accidents; they're not residual files from a confused deployment. They're active implementations across distinct properties.
The gap between "we don't need this" and "hey AI, read our llms.txt" is the gap between policy and practice. And if you've been following this research series, that gap should feel familiar. It's the same pattern.
Your WAF blocks AI crawlers while your marketing team publishes llms.txt files. Your executives dismiss a standard while your developers implement it. Your robots.txt says one thing, your infrastructure does another. The institutions that shape how AI interacts with the web are not, it turns out, internally coherent on the subject. I'm starting to think this is the rule, not the exception.
The Evidence Inventory Shift
I'm adding this to the evidence inventory. Four new claims. And one of them shifts the nuance on a central assertion in the paper.
The paper's Section 4 includes the claim: "No major LLM provider has publicly confirmed using llms.txt at inference time." That claim is still technically accurate. Nobody's confirmed it. But the ADK's AGENTS.md is a de facto inference-time usage directive at the developer-tools level. The absence of confirmation no longer implies the absence of usage. That's a distinction the paper needs to handle carefully, and it's the kind of nuance that only shows up when you keep pulling at evidence threads after you think you're done.
I've saved local PDF copies of all five Google llms.txt files, the AGENTS.md file, the llms-full.txt, and the GitHub issue. They're archived in the paper's data directory with original URLs preserved. If any of them disappear (and given the December precedent, I'd say that's not an unreasonable concern), the evidence survives.
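Preserving the original URL alongside each local copy is the part worth systematizing. A sketch of a manifest-entry helper, assuming a flat JSON list; the claim-ID scheme and field names here are my guesses at a plausible schema, not my actual manifest format reproduced.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_source(manifest_path, claim_id, url, local_file):
    """Append one archived-source entry to a JSON manifest, creating it if needed."""
    manifest_path = Path(manifest_path)
    entries = json.loads(manifest_path.read_text()) if manifest_path.exists() else []
    entries.append({
        "claim_id": claim_id,           # hypothetical ID scheme
        "url": url,                     # original URL, preserved verbatim
        "local_file": str(local_file),  # path to the saved copy
        "archived_at": datetime.now(timezone.utc).isoformat(),
    })
    manifest_path.write_text(json.dumps(entries, indent=2))
    return entries
```

The point isn't the code; it's that every saved PDF carries its provenance with it, so a dead link three months from now is an inconvenience rather than a hole in the evidence.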
The Adoption Paradox
The paper's working title is "The llms.txt Access Paradox," but the paradoxes keep multiplying.
There's the adoption paradox: 844,000 claimed sites, really closer to 784. The inference paradox is subtler--the spec targets inference-time usage, but every crawler I've observed behaves like it's collecting training data. And then there's the institutional one, which cuts deepest. The organization most publicly opposed to the standard is also one of its most active implementers.
Same pattern, every time. What organizations say versus what actually happens. I don't think that's a coincidence. I think it's the thesis.
What Happens Next
I'm continuing to track Google's llms.txt implementations. If they disappear (the December precedent suggests this is possible), I have the archives. If they expand (the ADK issue suggests this is also possible), I'll document that too.
The analytical paper will cover this in Section 3 (adoption landscape) and inform the nuance in Section 4 (the inference gap). The blog gets the story first because the timeline matters and because sitting on primary evidence while writing a paper about evidence integrity felt like the wrong move.
If you're implementing llms.txt on your own documentation, you've probably wondered whether the standard has a future. The company that publicly called it a dead meta tag has five of them. Structurally, they're nothing to write home about. But they exist, and the ADK team is pointing AI agents at theirs on purpose.
Draw your own conclusions. I've drawn mine, and I've got the receipts saved in a folder called paper/data/sources/, organized by claim ID, because of course I do.
