Three core diagnostics tell you whether AI crawlers can access, index, and read your content, and whether AI systems can reliably use it. No guessing, no assumptions: just real data about your AI readiness.
Run Free Audit

This tool runs diagnostics at the page and website level to identify technical barriers that can prevent AI systems from accessing your content. Each check uses authoritative data sources only: we show you real data or we show you nothing, no guesses.
| Feature | What It Checks | Why It Matters | Output |
|---|---|---|---|
| Performance | TTFB, CLS, INP from Google CrUX | Faster, more stable pages reduce fetch failures and increase retrieval likelihood | URL-level and origin-level metrics with Good/Warning/Risk status |
| Crawlability | robots.txt rules for 33 AI crawlers, HTTP Status Check (Browser + GPTBot + ClaudeBot + PerplexityBot), and noindex directives | Access and indexability are prerequisites for visibility; blocks or noindex directives prevent discovery | Allow/Block status, HTTP response-code status, and noindex status with advisory notes |
| Content Visibility | JavaScript-rendered vs raw HTML content, likely render pattern, and raw HTML structure signals | Missing HTML content means many crawlers can't capture it | Word count comparison, Content Diff impact summary, prioritized invisible-content list, Render Pattern inference, and raw HTML structure signals |
AI systems don't always see what users see. If important content appears only after JavaScript runs, many crawlers can miss it. The Web App now combines gap metrics with an impact summary and a prioritized list of content potentially invisible to AI bots.
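The gap check above can be sketched by comparing visible-text word counts between the raw HTML a crawler receives and a rendered snapshot. This is a minimal illustration, not the audit tool's implementation; the two HTML snippets are hypothetical, and in practice the rendered version would come from a headless browser while the raw version comes from a plain HTTP fetch.

```python
# Sketch: estimate content that appears only after JavaScript runs,
# by comparing visible-text word counts of raw vs rendered HTML.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.words = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.words.extend(data.split())

def word_count(html: str) -> int:
    parser = TextExtractor()
    parser.feed(html)
    return len(parser.words)

# Hypothetical page: the raw HTML ships an empty app container,
# and the rendered snapshot contains the JS-injected content.
RAW = "<html><body><h1>Pricing</h1><div id='app'></div></body></html>"
RENDERED = ("<html><body><h1>Pricing</h1><div id='app'>"
            "<p>Pro plan: 49 dollars per month with unlimited audits</p>"
            "</div></body></html>")

raw_n, rendered_n = word_count(RAW), word_count(RENDERED)
print(f"raw={raw_n} rendered={rendered_n} gap={rendered_n - raw_n}")
# The gap is the word count invisible to crawlers that skip JavaScript.
```

A real content-diff would go further than word counts (for example, listing which headings and paragraphs are missing), but the core signal is the same: text present in the rendered DOM and absent from the raw HTML.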
Crawlability is about more than robots.txt. We check 33 AI user agents, run HTTP Status Check probes for Browser + GPTBot + ClaudeBot + PerplexityBot, and detect noindex directives that can hide pages even when bots are allowed.
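A per-agent robots.txt check like the one described above can be sketched with Python's standard-library `urllib.robotparser`. The robots.txt content and URL below are hypothetical, and the three user agents are just a subset of the 33 the tool checks.

```python
# Sketch: check whether specific AI crawlers may fetch a URL, per robots.txt.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: GPTBot blocked site-wide, everyone else
# blocked only from /private/.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

AI_USER_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def crawler_access(robots_txt: str, url: str) -> dict:
    """Map each AI user agent to True (allowed) or False (blocked)."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {ua: parser.can_fetch(ua, url) for ua in AI_USER_AGENTS}

print(crawler_access(ROBOTS_TXT, "https://example.com/blog/post"))
# GPTBot hits its own Disallow: / group; the others fall through to *.
```

A full crawlability audit would also fetch the page with each bot's user-agent string to compare HTTP status codes, and parse the HTML and response headers for noindex directives, since an allowed-but-noindexed page is still invisible.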
Many AI systems fetch multiple sources under time budgets. Slow or unstable responses are more likely to fail or be skipped. In the audit tool, these signals are grouped in one Performance tab.
Data source: Google's Chrome User Experience Report (CrUX)—real-user field data at URL and origin level.
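The status labels can be sketched as simple threshold checks over CrUX field metrics. The cutoffs below follow Google's published Core Web Vitals guidance (TTFB 800/1800 ms, CLS 0.10/0.25, INP 200/500 ms); the audit tool's exact boundaries may differ, and the sample page values are hypothetical.

```python
# Sketch: map CrUX field metrics to Good/Warning/Risk labels using
# Google's published "good" and "needs improvement" thresholds.
THRESHOLDS = {
    "ttfb_ms": (800, 1800),   # Time to First Byte, milliseconds
    "cls":     (0.10, 0.25),  # Cumulative Layout Shift, unitless
    "inp_ms":  (200, 500),    # Interaction to Next Paint, milliseconds
}

def grade(metric: str, value: float) -> str:
    """Return Good, Warning, or Risk for one metric value."""
    good, warn = THRESHOLDS[metric]
    if value <= good:
        return "Good"
    if value <= warn:
        return "Warning"
    return "Risk"

# Hypothetical 75th-percentile field values for one URL.
page = {"ttfb_ms": 950, "cls": 0.05, "inp_ms": 620}
report = {metric: grade(metric, value) for metric, value in page.items()}
print(report)  # {'ttfb_ms': 'Warning', 'cls': 'Good', 'inp_ms': 'Risk'}
```

In practice these values come from the CrUX dataset at both URL and origin level, so a page with thin URL-level data can still be graded against its origin's field data.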
Learn more about performance signals →

Paste any URL into the audit tool. We extract both the full URL and the root domain for analysis.
Three checks run simultaneously: performance signals via CrUX, crawlability checks (robots.txt, HTTP Status Check, noindex), and content visibility via JS vs HTML comparison plus Content Diff prioritization.
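Running the three checks concurrently can be sketched with `asyncio.gather`. The check bodies below are stubs with hard-coded statuses; real implementations would query the CrUX API, probe robots.txt and HTTP status under several user agents, and diff rendered content against raw HTML.

```python
# Sketch: fan out the three audit checks concurrently for one URL.
import asyncio

async def check_performance(url: str) -> dict:
    await asyncio.sleep(0)  # stands in for a CrUX API call
    return {"check": "performance", "status": "Good"}

async def check_crawlability(url: str) -> dict:
    await asyncio.sleep(0)  # stands in for robots.txt + HTTP probes
    return {"check": "crawlability", "status": "Warning"}

async def check_content_visibility(url: str) -> dict:
    await asyncio.sleep(0)  # stands in for the JS-vs-HTML content diff
    return {"check": "content_visibility", "status": "Good"}

async def audit(url: str) -> list:
    # gather() preserves argument order, so results line up with the checks.
    return await asyncio.gather(
        check_performance(url),
        check_crawlability(url),
        check_content_visibility(url),
    )

results = asyncio.run(audit("https://example.com"))
for result in results:
    print(result["check"], result["status"])
```

Because the checks hit independent data sources, running them concurrently means the audit takes roughly as long as its slowest check rather than the sum of all three.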
Each check returns an interpreted status and flags specific gaps. Good/Warning/Risk labels surface the issues that need attention.
Download a PDF with all findings—performance metrics, crawlability summary, and content visibility analysis—ready to share with stakeholders.