Slow response times and unstable layouts don't just frustrate users—they cause AI fetch failures. We check TTFB, CLS, and INP using real-user data from Google's Chrome UX Report, grouped together in a single Performance tab.
AI systems that retrieve information in real time—like ChatGPT Search and Perplexity—operate under tight time budgets. They query multiple sources simultaneously and process the fastest responses first. A slow page doesn't just rank lower: it may time out before returning any content at all.
When a retrieval system times out waiting for your server response, your content is simply absent from the result. There's no penalty—your site just isn't included.
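This fan-out-and-drop behavior can be sketched with a small simulation. The timeout budget, source shapes, and function names below are illustrative assumptions, not documented values from any real retrieval system:

```python
import concurrent.futures
import time

# Hypothetical timeout budget; real systems don't publish their exact limits.
FETCH_TIMEOUT_S = 1.5

def fetch(source):
    """Stand-in for an HTTP fetch: sleeps for the source's simulated TTFB."""
    time.sleep(source["ttfb_s"])
    return source["name"]

def retrieve(sources, timeout_s=FETCH_TIMEOUT_S):
    """Query all sources in parallel; keep only those that answer in time."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = {pool.submit(fetch, s): s for s in sources}
        done, not_done = concurrent.futures.wait(futures, timeout=timeout_s)
        for f in done:
            results.append(f.result())
        for f in not_done:
            f.cancel()  # slow sources are simply dropped, not penalized
    return results

fast = {"name": "fast-site", "ttfb_s": 0.1}
slow = {"name": "slow-site", "ttfb_s": 2.0}
print(retrieve([fast, slow]))  # ['fast-site']
```

The slow source raises no error and receives no penalty score; it just never appears in the results, which is exactly how absence-by-timeout works.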
High CLS means content shifts around during load. Crawlers trying to extract text from a shifting layout may grab wrong sections or miss content entirely.
What it measures: TTFB (Time to First Byte) is the time between a browser requesting a page and receiving the first byte of the response from your server. Units: Milliseconds (ms).
Why it matters: Fast TTFB = Higher fetch success rate. TTFB is the most directly relevant metric for AI retrieval—it's the raw server response time that determines whether a request completes within a system's timeout window.
| Rating | Threshold | Interpretation |
|---|---|---|
| Good | ≤ 800ms | Server responds quickly. Low fetch failure risk. |
| Needs Improvement | 800ms–1800ms | Moderate risk. Some systems may time out under load. |
| Poor | > 1800ms | High fetch failure risk. Priority optimization target. |
What it measures: CLS (Cumulative Layout Shift) measures how much page content shifts unexpectedly during loading. Units: Dimensionless score (lower is better).
Why it matters: Stable layout = Reliable content extraction. High CLS means content jumps around as elements load. This can disrupt text extraction and structured data parsing by crawlers that snapshot content during load.
| Rating | Threshold | Interpretation |
|---|---|---|
| Good | ≤ 0.1 | Layout is stable. Content extraction is reliable. |
| Needs Improvement | 0.1–0.25 | Moderate instability. May affect content extraction accuracy. |
| Poor | > 0.25 | Significant layout instability. High risk of extraction errors. |
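The score itself is a product of how much of the viewport a shift affects and how far content moved, summed within session windows; CLS is the worst such window. This is a simplified sketch of that scoring model (in the browser, the values come from the Layout Instability API, not from code like this):

```python
def shift_score(impact_fraction, distance_fraction):
    """One layout shift: fraction of viewport affected x fraction it moved."""
    return impact_fraction * distance_fraction

def cls(session_windows):
    """CLS is the largest windowed sum of shift scores over the page's life."""
    return max(sum(shift_score(i, d) for i, d in window) for window in session_windows)

# Two bursts of shifts; the worse burst determines the reported CLS.
windows = [
    [(0.5, 0.25), (0.2, 0.1)],   # 0.125 + 0.02 = 0.145
    [(0.1, 0.05)],               # 0.005
]
print(round(cls(windows), 3))  # 0.145
```

A page scoring 0.145 this way would land in the "Needs Improvement" band of the table above.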
What it measures: INP (Interaction to Next Paint) measures how quickly the page responds to user interactions. Units: Milliseconds (ms).
Why it matters: Good INP = Lighter scripting load. INP is less directly relevant to AI crawlers (which don't interact with pages), but high INP often signals heavy JavaScript execution that may also delay content availability for crawlers.
| Rating | Threshold | Interpretation |
|---|---|---|
| Good | ≤ 200ms | Page responds quickly to interactions. |
| Needs Improvement | 200ms–500ms | Moderate delay. May indicate heavy JS execution. |
| Poor | > 500ms | Significant JS overhead. Likely affecting crawl rendering too. |
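The three rating tables above share one shape: a "Good" cut-off and a "Poor" cut-off with "Needs Improvement" in between. A minimal classifier capturing those thresholds might look like this (the `THRESHOLDS` table and `rate` helper are illustrative names, not part of any published API):

```python
# Thresholds taken from the rating tables above (Good / Needs Improvement / Poor).
THRESHOLDS = {
    "ttfb": (800, 1800),   # milliseconds
    "cls":  (0.1, 0.25),   # dimensionless score
    "inp":  (200, 500),    # milliseconds
}

def rate(metric, value):
    """Classify a metric value against its Good / Needs Improvement / Poor cut-offs."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "Good"
    if value <= poor:
        return "Needs Improvement"
    return "Poor"

print(rate("ttfb", 650))   # Good
print(rate("cls", 0.18))   # Needs Improvement
print(rate("inp", 900))    # Poor
```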
We use Google's Chrome User Experience Report (CrUX)—a public dataset of real performance metrics collected from Chrome users who have opted in to sharing. This means:
Not synthetic lab tests. CrUX reflects how real users actually experience your site across diverse networks, devices, and locations.
This is the same dataset that powers Google's page experience signals and Search Console Core Web Vitals reports.
We check both the specific page you entered and the overall domain, so you can see if issues are isolated or site-wide.
Not all URLs have CrUX data. Low-traffic pages may not meet the aggregation threshold. When data is unavailable, we say so clearly.
Common fixes include:

- `font-display: swap` with size adjustments
- `defer` and `async` attributes

CrUX shows real-world performance from actual users. Lighthouse runs synthetic tests from a single machine under controlled conditions. Both have their uses, but CrUX data better represents the diversity of real-world conditions, including varied networks, devices, and geographic locations.
This is normal for low-traffic pages. CrUX requires a minimum threshold of real-user data to generate aggregated metrics. When URL-level data is unavailable, we check origin-level data (your whole domain). We clearly label when data is missing rather than guessing or showing synthetic scores.
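The URL-then-origin fallback described above can be expressed as a small pure function. Here `query` stands in for a call to the CrUX API's `queryRecord` endpoint, which returns a 404 when the dataset has no entry for the requested URL or origin; the function names and return shape are assumptions for illustration:

```python
def crux_metrics(page_url, origin, query):
    """Try URL-level CrUX data first; fall back to origin-level; else say so.

    `query` is any callable that takes a request body for the CrUX
    queryRecord endpoint and returns the parsed record, or None when the
    dataset has no entry (the API responds with 404 in that case).
    """
    record = query({"url": page_url})
    if record is not None:
        return {"level": "url", "record": record}
    record = query({"origin": origin})
    if record is not None:
        return {"level": "origin", "record": record}
    # Label missing data explicitly instead of substituting synthetic scores.
    return {"level": "none", "record": None}
```

In production, `query` would POST its body to `https://chromeuxreport.googleapis.com/v1/records:queryRecord` with an API key; keeping it injectable makes the fallback logic testable without network access.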
We don't claim these are direct ranking factors. What we know is that AI systems operate under time and resource budgets when fetching content. Faster, more stable pages are more reliably retrieved. The relationship is about fetch success rate, not algorithmic ranking signals.
Partially. Google's Core Web Vitals are LCP, INP, and CLS. We focus on TTFB (most relevant for fetch reliability), INP, and CLS. We don't check LCP because it's less relevant to server-side fetch performance. We added TTFB specifically because it's the primary determinant of whether a request completes within an AI system's timeout window.
CrUX data typically updates monthly. Performance improvements you make may take 4–6 weeks to appear in the CrUX dataset. For immediate validation of changes, use Lighthouse or WebPageTest alongside CrUX monitoring.
Enter any URL to see TTFB, CLS, and INP metrics from real user data. Get URL-level and origin-level analysis with Good / Needs Improvement / Poor status interpretation.
Check Performance Signals