BeSeenByAI grades three performance metrics: TTFB, CLS, and INP. Each grade reflects the impact on AI retrieval, not just general web performance.
For a full explanation of how performance affects AI visibility, see the Performance tab guide.
## TTFB (Time to First Byte)
TTFB is the most important performance metric for AI retrieval. It measures how long the server takes to start responding after a request is made. AI systems assembling answers from multiple sources tend to process the fastest responses first — a slow TTFB does not block a crawler outright, but it makes you a less attractive source when an AI is racing to include content under a time budget.
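TTFB can be approximated from a client by timing how long a request takes until the response status line and headers arrive. The sketch below is illustrative only (it is not how BeSeenByAI measures, and it runs against a hypothetical local test server that sleeps 100 ms to simulate backend latency); a single-location lab measurement like this differs from the field data discussed later.

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def measure_ttfb_ms(host: str, port: int, path: str = "/") -> float:
    """Approximate TTFB: time from sending the request until the
    response status line and headers have been received."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    conn.getresponse().read()  # getresponse() returns once headers arrive
    ttfb = (time.perf_counter() - start) * 1000.0
    conn.close()
    return ttfb

# Hypothetical test server that delays 100 ms before responding,
# simulating slow server-side work.
class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.1)  # simulated backend latency
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # silence per-request logging
        pass

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    ms = measure_ttfb_ms("127.0.0.1", server.server_address[1])
    print(f"TTFB: {ms:.0f} ms")  # roughly 100 ms plus connection overhead
    server.shutdown()
```

Note that `getresponse()` times the arrival of headers rather than the literal first byte, which is a close proxy for most servers.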
| Grade | Range | Category | AI retrieval impact |
|---|---|---|---|
| A+ | Under 200ms | Ideal | Minimal fetch failure risk. Likely to be included in AI retrieval workflows. |
| A | 200–350ms | Competitive | Fast enough for most AI retrieval scenarios. Low risk of fetch failures. |
| B | 350–600ms | Acceptable | Within typical fetch budgets for most AI systems. |
| C | 600–1,000ms | Needs improvement | Slower responses increase the chance of being skipped by AI retrieval systems. |
| D | 1,000–2,000ms | At risk | Many AI systems have fetch budgets in this range. Fetch failures become likely. |
| F | Over 2,000ms | Poor | Most AI systems will likely fail to fetch content at response times this slow. |
Target: Aim for A or better (under 350ms). B is acceptable. C and below carry increasing risk of being deprioritized or dropped by AI retrieval.
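The thresholds in the table can be expressed as a small lookup. This is an illustrative sketch, not BeSeenByAI's actual grading code, and the handling of exact boundary values (e.g. whether precisely 350ms is an A or a B) is an assumption the table does not specify.

```python
def ttfb_grade(ttfb_ms: float) -> str:
    """Map a TTFB measurement in milliseconds to the letter grades
    in the table above. Boundary ownership is an assumption."""
    if ttfb_ms < 200:
        return "A+"   # Ideal
    if ttfb_ms < 350:
        return "A"    # Competitive
    if ttfb_ms < 600:
        return "B"    # Acceptable
    if ttfb_ms < 1000:
        return "C"    # Needs improvement
    if ttfb_ms <= 2000:
        return "D"    # At risk
    return "F"        # Poor
```

For example, `ttfb_grade(150)` returns `"A+"`, `ttfb_grade(500)` returns `"B"`, and `ttfb_grade(2500)` returns `"F"`.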
## CLS (Cumulative Layout Shift)
CLS measures how much visible page content moves unexpectedly during loading. For AI retrieval, layout instability affects how reliably content can be extracted from a page — crawlers that parse the DOM structure are sensitive to shifts that move or reorder content after initial render.
| Grade | Range | Category | AI retrieval impact |
|---|---|---|---|
| A | Under 0.1 | Good | Stable rendering — reliable content extraction. |
| C | 0.1–0.25 | Needs improvement | Layout shifts may affect how reliably content is extracted. |
| F | 0.25 or higher | Poor | Significant instability. Unreliable content extraction likely. |
Note: CLS is primarily a real-user metric sourced from the Chrome UX Report. It reflects aggregated field data from actual visitors, not a single lab measurement.
## INP (Interaction to Next Paint)
INP measures how quickly a page responds to user input. For most AI use cases (read-only crawlers), INP has limited direct impact. It becomes relevant for AI agents that interact with pages — filling forms, navigating, or triggering dynamic content.
| Grade | Range | Category | AI retrieval impact |
|---|---|---|---|
| A | Under 200ms | Good | Fast interactions support AI agents that perform tasks on your site. |
| F | 200ms or higher | Poor | High INP can indicate heavy scripting that may also affect page rendering for extractors. |
Note: INP replaced First Input Delay (FID) as a Core Web Vital in 2024.
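CLS and INP use coarser scales than TTFB. A combined sketch of both mappings (again illustrative, not BeSeenByAI's code, with boundary handling assumed):

```python
def cls_grade(cls: float) -> str:
    """Thresholds from the CLS table: A under 0.1,
    C from 0.1 to 0.25, F at 0.25 or higher."""
    if cls < 0.1:
        return "A"
    if cls < 0.25:
        return "C"
    return "F"

def inp_grade(inp_ms: float) -> str:
    """Thresholds from the INP table: A under 200ms,
    F at 200ms or higher."""
    return "A" if inp_ms < 200 else "F"
```

So `cls_grade(0.05)` returns `"A"` and `inp_grade(250)` returns `"F"`.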
## Field data vs. lab data
TTFB in BeSeenByAI reports is measured two ways:
- Field data (CrUX) — aggregated from real Chrome users visiting your page. Reflects real-world conditions including CDN caching, geographic distribution, and user device mix. CrUX data requires a minimum number of visits to be available; new or very low-traffic pages may show no field data.
- Lab data — a live measurement taken at audit time from a single location. Always available, but reflects one moment and one network path.
When CrUX data is available, BeSeenByAI uses it as the primary source. When it is not, lab data is used instead and the report notes this.
CLS and INP are CrUX-only metrics — they require real user data and are not measured in lab conditions.
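Field data like this can be inspected directly: the Chrome UX Report exposes a public `queryRecord` API. The sketch below assumes the request and response shapes documented for that API (the metric names and the `percentiles.p75` path should be confirmed against the CrUX API reference), and `api_key` is a placeholder for a real Google API key.

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(url: str) -> dict:
    """Request body asking CrUX for the two CrUX-only metrics.
    Metric names follow the public CrUX API documentation."""
    return {
        "url": url,
        "metrics": ["cumulative_layout_shift", "interaction_to_next_paint"],
    }

def p75(response: dict, metric: str):
    """Pull the 75th-percentile value for one metric out of a CrUX
    response, or None when the metric is absent (e.g. a page with
    too little traffic to appear in CrUX)."""
    try:
        return response["record"]["metrics"][metric]["percentiles"]["p75"]
    except KeyError:
        return None

def query_crux(url: str, api_key: str) -> dict:
    """Live call; requires a real API key and network access."""
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(build_crux_query(url)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

When a metric is missing from the response, `p75` returns `None` — mirroring the fallback described above, where a report switches to lab data (for TTFB) or shows no value (for CLS and INP).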