Troubleshooting

Solutions to common issues: audit failures, unexpected crawlability results, missing performance data, monitoring alerts, and more.

Audit issues

The audit timed out or failed
Audits time out if the page takes too long to respond or the server is unreachable. Try re-running the audit. If it fails consistently, check whether the URL is accessible from outside your network, and whether any firewall, VPN, or bot protection might be blocking the audit request. Pages behind authentication or login walls cannot be audited.

The audit ran but shows no data for some tabs
Some checks require the page to be publicly accessible and return a successful response. If your page returns a 4xx or 5xx status, certain checks will be skipped or return empty results. The report will note which checks could not complete and why.
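To confirm what status your page returns before re-running the audit, a quick command-line check (the URL is a placeholder; substitute your own):

    curl -s -o /dev/null -w "%{http_code}\n" https://example.com/your-page

Anything other than a 2xx code explains why dependent checks were skipped.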

I got an error saying I’ve hit my audit limit
Free accounts are capped at 3 audits per day (IP-based, resets at midnight UTC). Paid plan quotas reset on your billing anniversary. To check your remaining quota, go to Settings → Plan in your account.


Crawlability issues

Bots are shown as blocked, but I didn’t intend to block them
The most common cause is a wildcard rule in robots.txt. Look for User-agent: * with Disallow: /, which blocks everything, including AI search agents (see the example below). Also check for rules that were added for one purpose but match more broadly than intended.
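For illustration, the blanket rule and one common way to carve out an exception for a specific AI agent. Crawlers apply the most specific matching user-agent group, and an empty Disallow permits everything for that agent:

    # Blocks all crawlers, including AI search agents:
    User-agent: *
    Disallow: /

    # Allows one named bot while the wildcard rule stays in place:
    User-agent: GPTBot
    Disallow: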

A second common cause is infrastructure-level blocking. Your robots.txt may allow a bot, but your CDN, WAF, or hosting provider may be rejecting its requests at the network layer. This shows up as a mismatch finding — robots.txt says allow, live HTTP check says blocked. See the Crawlability tab guide for specific causes and how to investigate.

The HTTP Status Check shows failures for AI bots but not Browser
This is the classic mismatch pattern. Something in your infrastructure is identifying AI bot user agents and treating them differently from regular browser traffic. Common sources: Cloudflare Bot Fight Mode (enabled by default), WAF rules targeting “AI scrapers,” or security plugins with AI bot blocklists. Run curl -I with an AI bot user agent to confirm which layer is responsible.
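For example, comparing the response to a bot user agent against a browser user agent (the user agent strings here are illustrative; each vendor publishes its exact token):

    # Request as an AI bot:
    curl -I -A "GPTBot" https://example.com/

    # Request as a regular browser:
    curl -I -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" https://example.com/

If the first returns 403 and the second returns 200, the block sits at the CDN/WAF layer, not in robots.txt.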

Crawlability results changed between audits with no changes on my end
Crawlability checks are live measurements: your infrastructure’s response to the audit request can vary. A 429 or 5xx that appears once may reflect a transient rate limit or WAF decision rather than a permanent block. Re-run the audit to confirm whether the issue is consistent.
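One way to check for intermittent behavior yourself is to repeat the request a few times and watch whether the status code varies (bot user agent illustrative):

    for i in 1 2 3 4 5; do
      curl -s -o /dev/null -w "%{http_code}\n" -A "GPTBot" https://example.com/
      sleep 2
    done

A mix of 200s and 429s points to rate limiting rather than a permanent block.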


Performance issues

No CrUX data available (only lab data is shown)
Chrome UX Report data requires a minimum number of real visits from Chrome users to be collected. New pages, very low-traffic pages, and pages not indexed by Google typically have no CrUX data. BeSeenByAI uses a live lab measurement as a fallback and notes when field data is unavailable.
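You can check directly whether Google has field data for a URL by querying the public CrUX API (YOUR_API_KEY is a placeholder for a Google API key; the API returns a 404 when no field data exists for that URL):

    curl -s "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"url": "https://example.com/your-page"}'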

My TTFB grade seems unexpectedly low
TTFB can vary significantly depending on whether the page was cached at the time of the audit. An uncached origin request can be 5–10x slower than a cached edge response. If you have a CDN, check whether it was warm at the time of the audit. Re-running shortly after your first audit often returns a faster result as caching kicks in.
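curl can measure TTFB directly; running it twice back to back makes cache effects visible, since the second request usually hits a warm CDN cache:

    # time_starttransfer approximates TTFB (seconds to first byte):
    curl -s -o /dev/null -w "TTFB: %{time_starttransfer}s\n" https://example.com/
    curl -s -o /dev/null -w "TTFB: %{time_starttransfer}s\n" https://example.com/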

CLS or INP data is missing
CLS and INP are CrUX-only metrics: they require real user data and are not measured in lab conditions. If no CrUX data is available for your page, these metrics will not appear in the report.


Content Visibility issues

The content comparison seems wrong: bots see a completely different page
If your site uses Cloudflare or another bot protection service, the bot fetch for content comparison may have received a challenge page instead of your actual content. This is a known limitation of any tool that fetches pages as an AI bot. The fix is to configure your bot protection to allow the specific user agents BeSeenByAI uses (GPTBot, ClaudeBot, PerplexityBot).
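A quick way to see whether bots are being served a challenge page is to compare what the two user agents actually receive; a challenge page is typically far smaller than the real page (user agent strings illustrative):

    curl -s -A "GPTBot" https://example.com/ | wc -c
    curl -s -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" https://example.com/ | wc -c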

A large percentage of content appears “added by JavaScript”
This is expected for sites that load content client-side (single-page apps, lazy-loaded sections, accordion content). It means bots without JavaScript execution cannot see that content. See the Content Visibility tab guide for what this means and when it matters.
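To check whether a specific piece of content exists in the initial HTML (what a non-rendering bot sees), search the raw response for text you know appears on the rendered page:

    # A count of 0 means the phrase is injected by JavaScript:
    curl -s https://example.com/ | grep -c "a phrase from your page"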


Monitoring issues

Monitoring is not sending alert emails
Check three things:
1. Your change type selections in Monitoring settings: if the category that changed is not enabled, no alert fires.
2. Your notification preferences: confirm “Changed pages detected” is toggled on.
3. Your email inbox, including spam: monitoring alerts come from no-reply@beseenby.ai.

Monitoring is running but the report looks identical to the previous run
If nothing actually changed on the page between runs, the report will match. Monitoring alerts only fire when a change is detected, not on every run. The monitoring history shows the timestamp of each run even when no changes occurred.

Monitoring is available on the Agency plan only
Automated daily monitoring is an Agency feature. On Plus and Pro, tracked URLs are available for project organization and manual audit history, but no scheduled runs execute.


Batch audit issues

Multiple URLs in my batch are failing
The most common causes are: the URLs return errors or timeouts; the pages are behind bot protection that blocks the audit; or the domain has DNS issues. Open the batch detail and check the failure reason shown for each URL. See the Batch Audits guide for what each failure code means.
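Before re-running a large batch, a minimal sketch for checking each URL’s status yourself (assumes a urls.txt file with one URL per line):

    while read -r url; do
      code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 30 "$url")
      echo "$code  $url"
    done < urls.txt

A code of 000 means curl got no HTTP response at all, which usually indicates a DNS or connection failure.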

My batch is stuck in “Running” status
Batches process URLs sequentially at roughly 30 seconds per URL. A 50-URL batch takes about 25 minutes. If the batch appears stuck beyond that window, re-run it. If the issue persists, contact support.


Reverse Prompting and Prompt Fit issues

The Reverse Prompting or Prompt Fit analysis shows an error
These analyses use an LLM to process your page content. Transient API errors can occur. Try re-running the analysis. If errors persist, check the Help menu in the platform for known issues or contact support.

I used a credit but saw the same cached result
Re-running the same analysis on the same URL returns the cached result without consuming a new credit, so a cached result means no new credit was deducted. Credits are only consumed when a new LLM run is triggered (a different URL, or after the cached result has expired).


Search and navigation

Help Center search returns no results
The search index is built when content is published. If you searched immediately after the site was updated, try again in a few seconds. If search consistently returns nothing, try a shorter or more general term; Pagefind matches on exact words and stems.


Account and billing

I can’t see a feature that should be on my plan
Refresh the page and check your plan in Settings → Plan. If your account was recently upgraded, a hard refresh (Ctrl+Shift+R / Cmd+Shift+R) may be needed to pick up the new entitlements. If the issue persists, contact support with your account email.


For issues not covered here, use the Help menu in the platform to contact support.