Performance Signals for AI Visibility

Slow response times and unstable layouts don't just frustrate users—they can cause AI fetch failures. We check TTFB, CLS, and INP using real-user data from Google's Chrome UX Report, grouped in a single Performance tab.

Performance as an Access Metric, Not a UX Metric

For AI visibility, performance isn't about user experience — it's about whether AI systems can access your page at all. There are two distinct effects:

The Gate — Pass or Fail

AI crawlers operate under tight time budgets. Pages that don't respond fast enough simply don't get read — your content is absent from the result, with no penalty, no warning. A site can score well on Core Web Vitals and still fail this gate.

The Race — Fastest First

When AI systems query multiple sources simultaneously, they process the fastest responses first. Only the fastest-responding servers get fully crawled in real-time competitive windows. Server load makes this dynamic — a site that passes at 9am can fail at 2pm.

Our position: We don't claim performance is a direct ranking factor for AI systems. What we know is that AI systems operate under time budgets, and faster, more stable pages are more reliably fetched and processed. The relationship is about fetch reliability, not ranking signals.

Three Metrics, Three Perspectives

TTFB — Time to First Byte

What it measures: The time between a browser requesting a page and receiving the first byte of the response from your server. Units: Milliseconds (ms).

Why it matters: Fast TTFB = Higher fetch success rate. TTFB is the most directly relevant metric for AI retrieval—it's the raw server response time that determines whether a request completes within a system's timeout window.

Grade | TTFB | AI Crawl Implication
A+ | < 200ms | Fastest-responding tier. Highest crawl reliability.
A | 200–350ms | Strong crawl likelihood. No meaningful timeout risk.
B | 350–600ms | Moderate risk. May miss crawl window under load.
C | 600–1000ms | Meaningful crawl risk. Optimize server response time.
D | 1000–2000ms | High timeout risk. Priority optimization target.
F | > 2000ms | Effectively invisible to AI crawlers.

Note: Core Web Vitals uses Google's general thresholds (good ≤ 800ms). The grades above reflect AI-specific crawl viability — a site can pass CWV and still fail the AI crawl gate.
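The grade bands above can be expressed as a small lookup. A minimal sketch in Python (the function name is illustrative, and boundary values are assigned to the slower band):

```python
def ttfb_grade(ttfb_ms: float) -> str:
    """Map a TTFB measurement in milliseconds to the AI-crawl grade bands above."""
    bands = [
        (200, "A+"),   # fastest-responding tier, highest crawl reliability
        (350, "A"),    # strong crawl likelihood
        (600, "B"),    # moderate risk under load
        (1000, "C"),   # meaningful crawl risk
        (2000, "D"),   # high timeout risk
    ]
    for upper_bound_ms, grade in bands:
        if ttfb_ms < upper_bound_ms:
            return grade
    return "F"  # > 2000ms: effectively invisible to AI crawlers
```
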

CLS — Cumulative Layout Shift

What it measures: How much page content shifts unexpectedly during loading. Units: Dimensionless score (lower is better).

Why it matters: Stable layout = Reliable content extraction. High CLS means content jumps around as elements load. This can disrupt text extraction and structured data parsing by crawlers that snapshot content during load.
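For context, Chrome scores each individual layout shift as the product of an impact fraction (how much of the viewport the unstable elements affect) and a distance fraction (how far they moved relative to the viewport's largest dimension); CLS is the largest sum of shift scores across session windows. A minimal sketch of that arithmetic:

```python
def layout_shift_score(impact_fraction: float, distance_fraction: float) -> float:
    """Score for one layout shift, per Chrome's definition:
    impact fraction (share of viewport affected by unstable elements)
    multiplied by distance fraction (move distance / largest viewport dimension)."""
    return impact_fraction * distance_fraction

def cls_for_window(shift_scores: list[float]) -> float:
    """CLS for one session window is the sum of its shift scores;
    the reported CLS is the maximum over all session windows."""
    return sum(shift_scores)
```
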

Rating | Threshold | Interpretation
Good | ≤ 0.1 | Layout is stable. Content extraction is reliable.
Needs Improvement | 0.1–0.25 | Moderate instability. May affect content extraction accuracy.
Poor | > 0.25 | Significant layout instability. High risk of extraction errors.

INP — Interaction to Next Paint

What it measures: How quickly the page responds to user interactions. Units: Milliseconds (ms).

Why it matters: Good INP = Lighter scripting load. INP is less directly relevant to AI crawlers (which don't interact with pages), but high INP often signals heavy JavaScript execution that may also delay content availability for crawlers.

Rating | Threshold | Interpretation
Good | ≤ 200ms | Page responds quickly to interactions.
Needs Improvement | 200ms–500ms | Moderate delay. May indicate heavy JS execution.
Poor | > 500ms | Significant JS overhead. Likely affecting crawl rendering too.
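The CLS and INP bands follow Google's standard three-tier thresholds, which can be sketched as a shared classifier (names are illustrative):

```python
def rate(value: float, good: float, poor: float) -> str:
    """Classify a metric value using Google's three-band thresholds."""
    if value <= good:
        return "Good"
    if value <= poor:
        return "Needs Improvement"
    return "Poor"

def rate_cls(cls_score: float) -> str:
    # CLS is a dimensionless score; good <= 0.1, poor > 0.25
    return rate(cls_score, good=0.1, poor=0.25)

def rate_inp(inp_ms: float) -> str:
    # INP is measured in milliseconds; good <= 200ms, poor > 500ms
    return rate(inp_ms, good=200, poor=500)
```
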

Real-User Data from CrUX

We use Google's Chrome User Experience Report (CrUX)—a public dataset of real performance metrics collected from Chrome users who have opted in to sharing. This means:

Real field data

Not synthetic lab tests. CrUX reflects how real users actually experience your site across diverse networks, devices, and locations.

Same source as Google

This is the same dataset that powers Google's page experience signals and Search Console Core Web Vitals reports.

URL-level and origin-level

We check both the specific page you entered and the overall domain, so you can see if issues are isolated or site-wide.

Honest about coverage

Not all URLs have CrUX data. Low-traffic pages may not meet the aggregation threshold. When data is unavailable, we say so clearly.
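The CrUX dataset is queryable through Google's public CrUX API. A hedged sketch of building URL-level and origin-level query bodies — the endpoint and metric identifiers follow the public API documentation, the TTFB key was labeled experimental at time of writing, and authentication and error handling are omitted:

```python
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def crux_query(target: str, *, origin_level: bool = False) -> dict:
    """Build the JSON body for a CrUX queryRecord request.

    URL-level queries use the "url" field; origin-level queries use "origin".
    """
    key = "origin" if origin_level else "url"
    return {
        key: target,
        "metrics": [
            "cumulative_layout_shift",
            "interaction_to_next_paint",
            "experimental_time_to_first_byte",  # TTFB key per the CrUX API docs
        ],
    }

# POST the body as JSON to CRUX_ENDPOINT with your API key, e.g.:
#   requests.post(f"{CRUX_ENDPOINT}?key={API_KEY}", json=crux_query("https://example.com/"))
```
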

How to Improve Performance

Improve TTFB

  • CDN: Serve from edge nodes closer to users
  • Caching: Cache full HTML responses where possible
  • Database optimization: Index slow queries, reduce N+1 problems
  • Upgrade hosting: Move from shared to dedicated or cloud hosting

Improve CLS

  • Reserve space for images: Always set width/height attributes
  • Font loading: Use font-display: swap with size adjustments
  • Avoid injecting content above existing content
  • Size ad slots: Reserve space for dynamic ad content

Improve INP

  • Defer non-critical JS: Use defer and async attributes
  • Code splitting: Load only what's needed for each page
  • Remove unused scripts: Audit third-party JS regularly
  • Use web workers: Move heavy computation off the main thread

Frequently Asked Questions

How is CrUX different from Lighthouse?

CrUX shows real-world performance from actual users. Lighthouse uses synthetic tests run from a single machine under controlled conditions. Both have their uses, but CrUX data better represents the diversity of real-world conditions, including varied networks, devices, and geographic locations.

Why does my URL show no performance data?

This is normal for low-traffic pages. CrUX requires a minimum threshold of real-user data to generate aggregated metrics. When URL-level data is unavailable, we check origin-level data (your whole domain). We clearly label when data is missing rather than guessing or showing synthetic scores.

Is performance a ranking factor for AI systems?

We don't claim these are direct ranking factors. What we know is that AI systems operate under time and resource budgets when fetching content. Faster, more stable pages are more reliably retrieved. The relationship is about fetch success rate, not algorithmic ranking signals.

Is this the same as Google's Core Web Vitals?

Partially. Google's Core Web Vitals are LCP, INP, and CLS. We focus on TTFB (most relevant for fetch reliability), INP, and CLS. We don't check LCP because it's less relevant to server-side fetch performance. We added TTFB specifically because it's the primary determinant of whether a request completes within an AI system's timeout window.

How quickly will improvements show up?

CrUX data typically updates monthly. Performance improvements you make may take 4–6 weeks to appear in the CrUX dataset. For immediate validation of changes, use Lighthouse or WebPageTest alongside CrUX monitoring.

Check Your Performance Signals

Enter any URL to see TTFB, CLS, and INP metrics from real user data. Get URL-level and origin-level analysis with Good/Warning/Risk status interpretation.

Check Performance Signals
Also available as a Chrome Extension

Quick audits while you browse—all core features included, always free.

Add to Chrome