Research

Claude Code's Search Stack

Why devtool visibility now depends on which retrieval regime is active.

Edwin Ong

When Claude Code recommends a tool, the obvious question is which vendor won. The more important question is how that recommendation was produced.

In the Claude Code snapshot we reviewed, search is not one capability. It is at least four different retrieval regimes: model memory, Anthropic-native web search, local web fetch, and external tools like MCP search providers or browser automation. Those regimes have different freshness, different blind spots, and different implications for vendor visibility.

This matters because AI coding agents are becoming a real distribution layer. If your product is easy for Claude Code to find in one regime but invisible in another, your recommendation share will depend less on product quality than on the path Claude happened to use.

The big finding: Claude Code does not have a search feature. It has a search stack. If you build devtools, each layer is a different distribution channel.

1. Claude Code recommendations do not come from one system

In our earlier Claude Code work, we treated recommendations as revealed preferences: what the agent picks is what ships. That is still the right frame. But underneath those picks is another layer that matters just as much: where the recommendation came from.

When Claude Code says “use this library” or “this is the best tool for your stack,” it may be drawing on:

  • model memory
  • Anthropic-native web search
  • a locally fetched page
  • an installed MCP search provider
  • a browser session with authenticated access

Those are not interchangeable. They represent different visibility surfaces, and they reward different vendors.

2. What the code shows

Claude Code's normal conversation loop sends messages, system, and tools through Anthropic's Messages API. In other words, retrieval is part of the tool layer, not a separate product surface hidden elsewhere.

In this client snapshot, the only Anthropic-native web search schema actually wired in is web_search_20250305. Anthropic's public docs now describe a newer web_search_20260209, so the client we inspected appears to lag the latest documented search tool version.
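To make that concrete, here is a minimal sketch of what a Messages API request with the native search tool enabled looks like, using Anthropic's public TypeScript SDK. The model id, the bash tool definition, and the option values are placeholders rather than what Claude Code actually sends; only the web_search_20250305 type and the general request shape come from the public docs.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function ask() {
  // One request, two retrieval regimes in the same `tools` array: a server-side,
  // Anthropic-managed search tool and a local tool the client executes itself.
  return client.messages.create({
    model: "claude-sonnet-4-5", // placeholder model id
    max_tokens: 1024,
    system: "You are a coding agent. Prefer primary sources when recommending tools.",
    tools: [
      {
        // Anthropic-native web search, the schema version wired into this snapshot.
        type: "web_search_20250305",
        name: "web_search",
        max_uses: 3, // placeholder cap
      },
      {
        // Simplified stand-in for a client-executed tool, not Claude Code's real Bash schema.
        name: "bash",
        description: "Run a shell command in the project workspace.",
        input_schema: {
          type: "object",
          properties: { command: { type: "string" } },
          required: ["command"],
        },
      },
    ],
    messages: [
      {
        role: "user",
        content: "Which HTTP client should I use in this Node project, and is it currently safe?",
      },
    ],
  });
}
```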

That is already a useful result for vendors: there is no single monolithic “Claude search.” The effective retrieval stack can vary by client snapshot, model, and tool availability.

Native web search

Native search is the cleanest version of “Claude searched the web.” It is server-side, Anthropic-managed, and returns structured search result blocks. For vendor visibility, this is the regime where classic public-web surfaces matter most: docs, tutorials, comparisons, changelogs, benchmarks, launch posts, and pricing pages.

If you only show up when native search is active, you do not have mindshare. You have discoverability.
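For readers wiring this up themselves, here is a hedged sketch of the response side when native search fires: the assistant turn carries structured blocks, not just prose. Block and field names follow Anthropic's published format for this tool version; the loose typing is only there to keep the sketch agnostic to minor SDK version differences.

```typescript
// Pull issued queries and cited URLs out of an assistant turn that used native search.
// `response` is the value returned by client.messages.create(...) in the sketch above.
// Block and field names follow Anthropic's published web_search_20250305 result format.
function logSearchEvidence(response: { content: unknown[] }): void {
  for (const block of response.content as Array<Record<string, any>>) {
    if (block.type === "server_tool_use" && block.name === "web_search") {
      console.log("query issued:", JSON.stringify(block.input));
    }
    if (block.type === "web_search_tool_result" && Array.isArray(block.content)) {
      for (const result of block.content) {
        if (result.type === "web_search_result") {
          // These URLs are the public-web surface this section is about: docs,
          // comparisons, changelogs, and pricing pages the search actually saw.
          console.log(result.title, "->", result.url);
        }
      }
    }
  }
}
```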

Local web fetch

Claude Code also ships a WebFetch tool, but it is not the same as native search. In this build, it preflights the target domain, fetches the page from the client environment, then often runs a second Claude call to summarize or extract from the page.

This is URL-driven rather than search-driven. It matters when Claude already has a page to inspect. In that regime, your information architecture matters more than your brand awareness. A crisp canonical docs page can beat a better-known product if Claude has your URL and not theirs.
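Here is a minimal sketch of that fetch-then-summarize pattern, assuming Anthropic's public TypeScript SDK. It illustrates the general shape described above, not Claude Code's actual WebFetch implementation; the truncation limit, prompt, and model id are placeholders.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Sketch of the fetch-then-summarize pattern: the URL is supplied, not searched for.
// Not Claude Code's actual WebFetch implementation. Requires Node 18+ for global fetch.
async function fetchAndExtract(url: string, question: string): Promise<string> {
  // 1. Fetch from the client environment, so local network context applies.
  const res = await fetch(url, { redirect: "follow" });
  if (!res.ok) throw new Error(`Fetch failed: ${res.status} ${res.statusText}`);
  const page = (await res.text()).slice(0, 100_000); // crude truncation, placeholder limit

  // 2. Second model call to extract an answer from the fetched page.
  const msg = await client.messages.create({
    model: "claude-3-5-haiku-latest", // placeholder: a small model for extraction
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: `Answer the question using only the page below.\n\nQuestion: ${question}\n\nPage (${url}):\n${page}`,
      },
    ],
  });

  const first = msg.content[0];
  return first && first.type === "text" ? first.text : "";
}
```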

MCP and browser access

This is where the distribution story changes. Claude Code supports a large MCP tool ecosystem, and the code explicitly recognizes third-party search-style MCP tools associated with Exa, Perplexity, Tavily, Firecrawl, and PubMed-like search. Claude also has browser automation paths through tools like mcp__claude-in-chrome__* and Playwright-style integrations.

That does not mean those systems are bundled or default. It means Claude Code knows how to work with them if they are installed and exposed.
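One way to picture that distinction is as a routing question: given a tool name, which retrieval regime is it? The sketch below leans on the mcp__&lt;server&gt;__&lt;tool&gt; naming convention visible in the snapshot; the specific server substrings for Exa, Perplexity, Tavily, and Firecrawl are assumptions about typical provider naming, not confirmed defaults shipped with Claude Code.

```typescript
// Hedged sketch: bucket exposed tool names into the regimes this post describes.
// The server substrings below are assumptions about how providers usually name
// their MCP servers, not values taken from the Claude Code snapshot.
type Regime = "native_search" | "local_fetch" | "mcp_search" | "browser" | "other";

const SEARCH_SERVER_HINTS = ["exa", "perplexity", "tavily", "firecrawl"];
const BROWSER_SERVER_HINTS = ["claude-in-chrome", "playwright"];

function classifyTool(name: string): Regime {
  if (name === "web_search") return "native_search"; // Anthropic-native, server-side
  if (name === "WebFetch") return "local_fetch";     // URL-driven, client-side
  const mcp = /^mcp__(.+?)__/.exec(name);            // mcp__<server>__<tool> convention
  if (mcp) {
    const server = mcp[1] ?? "";
    if (BROWSER_SERVER_HINTS.some((hint) => server.includes(hint))) return "browser";
    if (SEARCH_SERVER_HINTS.some((hint) => server.includes(hint))) return "mcp_search";
  }
  return "other";
}

// e.g. classifyTool("mcp__claude-in-chrome__navigate") -> "browser"
//      classifyTool("mcp__exa__web_search_exa")        -> "mcp_search"
```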

We found Exa-style MCP naming in the codebase. We did not find Parallel Web Systems by name in the same snapshot. That is the distinction vendors should care about. Exa is legible in this tool ecosystem. Parallel Web Systems is not visible in this particular client snapshot, at least not by name.

3. Why cutoff dates still matter

The easiest mistake to make is to treat every Claude recommendation as equally fresh. They are not.

Anthropic's current docs distinguish between a model's reliable knowledge cutoff and its broader training data cutoff. As of March 31, 2026, the published figures are:

  • Claude Opus 4.6: reliable cutoff May 2025, training cutoff August 2025
  • Claude Sonnet 4.6: reliable cutoff August 2025, training cutoff January 2026
  • Claude Haiku 4.5: reliable cutoff February 2025, training cutoff July 2025

That means any launch, acquisition, pricing shift, deprecation, or security incident after those dates is not something the base model can be assumed to know. If Claude gives a current answer, some retrieval path must have been active.

4. The Axios test

Techmeme's top story on March 31, 2026, was the Axios supply-chain compromise. For anyone building or buying developer infrastructure, that is exactly the kind of fact that should change recommendations in real time.

But the base model alone cannot solve that problem. If Claude Code knows Axios is compromised on March 31, 2026, that knowledge did not come from pretraining alone. It had to come from native web search, local fetch, MCP search, or browser/session access.

This is the practical vendor question. When Claude recommends a tool, is it reflecting stale mindshare or fresh evidence? If your category has fast-moving security or pricing changes, the difference is not academic. It determines whether the agent is steering users toward reality or toward last year's leaderboard.

5. Retrieval-path share is the next layer of AI distribution

Most devtools companies still think in one dimension: can the model recommend us?

The right questions are more specific:

  • Can the model recommend us from memory?
  • Can it find us via Anthropic-native web search?
  • Can it understand us from a single fetched page?
  • Are we visible inside the MCP ecosystems developers install?
  • Do we only become legible once Claude has browser or authenticated access?

Those are different moats. A company with strong brand memory but weak public docs may still win in no-retrieval settings and lose once search is active. A company with great docs but weak installed presence may win in native search and disappear when teams rely on MCP-based retrieval. A workflow-heavy product may be nearly invisible unless the agent can inspect the real browser session.

The next battleground is not just recommendation share. It is retrieval-path share.

6. What devtool vendors should do

If you care about agent-led distribution, do not treat AI visibility as one problem. Treat it as a stack.

  • For memory: build durable presence in repos, docs, examples, discussions, and long-lived technical references.
  • For native search: publish pages that are easy to retrieve and easy to compare against alternatives.
  • For fetch: make your docs legible from a single page, not only after three clicks and a login wall.
  • For MCP: make sure your tool is present where installed search ecosystems can actually expose it.
  • For browser access: assume the agent may experience your product inside a real workflow, not just from your homepage.

That is the practical implication of Claude Code's search stack. The agent is not just a gatekeeper. It is a gatekeeper with multiple information channels, each with its own bias.

7. Limits

This is a client-level teardown, not a live traffic capture. We reviewed a Claude Code source snapshot and Anthropic's current public docs. That means two things.

  • We can say what this client is wired to do, but not which retrieval path fires most often in production.
  • We can say that Exa-style MCP naming is visible in this snapshot and Parallel Web Systems is not, but not that one is globally more prevalent than the other across all Claude Code installs.

The obvious next benchmark is to hold the prompt constant and vary the retrieval regime: no retrieval, native search only, fetch only, MCP search installed, browser access available, authenticated browser access available. That would tell vendors something much more useful than “Claude likes tool X.” It would tell them where they are visible in the agent stack, and where they are missing entirely.
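A minimal sketch of that benchmark matrix, assuming a harness you would have to build yourself: runPrompt below is hypothetical, standing in for whatever drives Claude Code (or another agent) with the relevant tools enabled or disabled.

```typescript
// Sketch of the proposed benchmark: one fixed prompt, varied retrieval regime.
const REGIMES = [
  "no_retrieval",
  "native_search_only",
  "fetch_only",
  "mcp_search_installed",
  "browser_available",
  "authenticated_browser_available",
] as const;

type RegimeId = (typeof REGIMES)[number];

interface TrialResult {
  regime: RegimeId;
  recommendedTools: string[]; // which vendors were named in the answer
  citedSources: string[];     // which URLs or tools supplied the evidence
}

// Hypothetical harness: not an existing API.
declare function runPrompt(prompt: string, regime: RegimeId): Promise<TrialResult>;

async function retrievalPathShare(prompt: string): Promise<TrialResult[]> {
  const results: TrialResult[] = [];
  for (const regime of REGIMES) {
    // Hold the prompt constant; only the retrieval regime changes between trials.
    results.push(await runPrompt(prompt, regime));
  }
  return results;
}
```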

This is the vendor question we care about at Amplifying: not just whether a coding agent recommends you, but under which conditions you become visible. If that is your problem too, start with For Vendors.
