How Can We Help?

Guides, explanations, and troubleshooting for both free audits and the beta monitoring platform.

Getting Started with Free Audits

1. Enter a URL

Paste any public URL you want to check — your homepage, a blog post, a product page, or any publicly accessible page.

2. Wait for Results

Performance, crawlability, and content visibility checks run in parallel and usually complete in 15–30 seconds.

3. Review Findings

The report is organized into tabs: Overview, Performance, Crawlability, and Content Visibility. Each tab surfaces actionable findings.

4. Export Report

Click Export PDF to download a shareable report for developers, clients, or stakeholders.

Getting Started with the Beta Platform

1. Create an Account

Sign up at app.beseenby.ai/signup. Access is invite-only during beta — use your invite code at signup.

2. Create Your First Project

Projects group pages by site or client. Examples: "Company Website", "Client A", "Client B — E-commerce". One project per website is the recommended pattern.

3. Add Pages to Track

Add individual URLs one at a time, or bulk-import via CSV for larger sites. You can also trigger a manual check immediately.

4. Configure Monitoring

Optional but recommended. Set scheduled checks, choose frequency (daily or weekly), and configure alert thresholds for each page.

Managing Projects and Pages

Creating Projects

Projects keep your tracked pages organized. Each project represents one website or client property. Best practices: create one project per website, keep separate clients in separate projects, and use descriptive names that make reports easy to identify.

Adding Pages

Individual Entry

  1. Open your project.
  2. Click Add Page.
  3. Enter the full URL (including https://).
  4. Optionally set a page name and monitoring frequency.
  5. Click Save — an immediate audit runs automatically.

Bulk Import via CSV

  1. Open your project.
  2. Click Import Pages.
  3. Prepare a CSV file with the columns below.
  4. Upload the file and review the preview.
  5. Confirm import — audits are queued and run sequentially.
url,page_name,frequency
https://example.com,Homepage,daily
https://example.com/about,About Page,weekly
https://example.com/products,Products,weekly
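
Before uploading a large file, it can help to sanity-check it locally. The sketch below is illustrative and not part of the product: the column names and the daily/weekly frequency values mirror the example CSV above, and any stricter rules your workspace applies are assumptions.

import csv
import sys
from urllib.parse import urlparse

REQUIRED_COLUMNS = {"url", "page_name", "frequency"}   # columns from the example above
ALLOWED_FREQUENCIES = {"daily", "weekly"}              # values from the example above

def validate_import_file(path):
    """Return a list of problems found in the CSV before uploading it."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return ["missing columns: " + ", ".join(sorted(missing))]
        for line_no, row in enumerate(reader, start=2):   # start=2: header is line 1
            parsed = urlparse(row["url"].strip())
            if parsed.scheme not in ("http", "https") or not parsed.netloc:
                problems.append(f"line {line_no}: url must be a full URL including https://")
            if row["frequency"].strip() not in ALLOWED_FREQUENCIES:
                problems.append(f"line {line_no}: unexpected frequency '{row['frequency']}'")
    return problems

if __name__ == "__main__":
    for problem in validate_import_file(sys.argv[1]):
        print(problem)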

Managing Tracked Pages

From the project view you can see all tracked pages at a glance. Filter and sort options include: by status (healthy / needs attention / error), by last check date, and by monitoring status (monitored / manual-only). Bulk actions available: enable or disable monitoring, change check frequency, delete pages, and export the full list as CSV.

Setting Up Monitoring

Enabling Scheduled Checks

  1. Open the page settings from the project view.
  2. Toggle Enable Monitoring.
  3. Choose a check frequency (daily, weekly, or custom interval).
  4. Set alert thresholds if desired (or inherit project defaults).
  5. Save — the first scheduled check will run at the next window.

When monitoring is enabled, checks run automatically on schedule, results are saved to report history, alerts fire when thresholds are crossed, and no manual action is required between checks.

Check Schedules

Daily

Best for critical pages. Recommended for: homepage, main product or service pages, top-converting landing pages, and any page you're actively monitoring post-deployment.

Weekly

Standard cadence for most pages. Good for: blog posts, secondary product pages, support docs, and about or contact pages.

Custom Intervals

Set any schedule that fits your workflow — every 3 days, every two weeks, monthly, or the first Monday of the month.

Manual-Only Mode

You can add a page to a project without enabling scheduled monitoring. In manual-only mode, audits only run when you trigger them. Useful for: staging environments, pre- and post-deployment spot checks, and one-off audits where you don't need ongoing history.

Understanding Alerts

Alert Types

Crawlability Alerts

  • New robots.txt block rules detected
  • HTTP status failures (4xx / 5xx / 429)
  • noindex directive added

Performance Alerts

  • TTFB crosses a threshold band
  • CLS score increases beyond tolerance
  • INP degrades into a worse category

Content Visibility Alerts

  • Word count gap increases more than 10 percentage points
  • New high-impact invisible content detected
  • Render pattern regression identified

Authority Alerts (beta)

  • Authority score drops significantly
  • Schema validation failures introduced

Alert Delivery

Immediate Email

Sent as soon as a change is detected. Used for high-severity changes like a crawlability block or a major TTFB regression.

Daily Digest

A summary email covering all changes detected in the past 24 hours. Good for staying informed without inbox noise.

Weekly Digest

A summary of the past 7 days of changes. Useful for lower-priority pages or weekly review workflows.

Configuring Alerts

Alerts can be configured per-page from page settings, or at the project level to set defaults that apply to all pages in the project. Per-page settings always override project defaults when set.

Using Compare Mode

Compare Mode lets you view two saved reports side-by-side to see what changed between any two points in time.

Use Cases

Typical scenarios include pre- and post-deployment spot checks, verifying that an issue flagged in an earlier report has been resolved, and documenting changes over time for clients or stakeholders.

How to Compare

  1. Navigate to the report history for any tracked page.
  2. Select two reports using the checkboxes.
  3. Click Compare.
  4. Review the comparison: improved metrics are shown in green, degraded in red, unchanged in gray. New issues and resolved issues are called out explicitly.
  5. Optionally, export the comparison as a PDF for sharing or archiving.

White-Label Reports

Setting Up White-Label Branding

  1. Go to Settings → Branding.
  2. Upload your logo.
  3. Set your brand colors.
  4. Add your company name and footer text.
  5. Save — branding is applied immediately to all PDF exports and shared links.

Using White-Label

Once branding is configured, PDF exports automatically include your logo and colors, shareable report links display your branding, and the "Powered by BeSeenBy.AI" attribution is removed.

Multiple Branding Profiles (planned)

Support for multiple branding profiles — one per client — is on the roadmap. This will let you switch branding per project, useful for agencies managing multiple brands. Not yet available.

Understanding Performance Results

What does "No data available" mean for CrUX?
CrUX (Chrome UX Report) only includes URLs with sufficient real-user traffic in Chrome. When URL-level data is unavailable, we check origin-level data aggregated across the whole domain. This is clearly labeled in the report. For low-traffic pages, consider running a Lighthouse audit for synthetic performance data alongside the CrUX results.
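
If you want to reproduce this fallback yourself, Google's public CrUX API accepts either a url or an origin in the request body. The sketch below is a rough illustration of the URL-then-origin lookup described above, not our implementation; the API key placeholder and the printed metric are assumptions you would adapt.

import requests

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
API_KEY = "YOUR_CRUX_API_KEY"   # placeholder; create a key in the Google Cloud console

def fetch_crux(page_url):
    """Try URL-level CrUX data first, then fall back to origin-level data."""
    scheme, _, host = page_url.split("/", 3)[:3]
    origin = f"{scheme}//{host}"
    for body in ({"url": page_url}, {"origin": origin}):
        resp = requests.post(CRUX_ENDPOINT, params={"key": API_KEY}, json=body, timeout=30)
        if resp.status_code == 200:          # CrUX returns 404 when no record exists
            record = resp.json()["record"]
            record["level"] = "url" if "url" in body else "origin"
            return record
    return None                              # no data at either level

record = fetch_crux("https://example.com/pricing")
if record:
    ttfb = record["metrics"]["experimental_time_to_first_byte"]["percentiles"]["p75"]
    print(record["level"] + "-level p75 TTFB:", ttfb, "ms")
else:
    print("No CrUX data available; consider a Lighthouse run for synthetic data.")
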
What is the difference between URL-level and origin-level metrics?
URL-level metrics reflect the performance of the exact page you audited. Origin-level metrics are aggregated across your entire domain. Both matter: a fast homepage combined with a slow product page means AI systems may reliably access some pages while timing out on others.
Why is my TTFB marked "Poor" but my site feels fast?
TTFB measures raw server response time before any content renders in the browser. Your site may feel fast due to geographic proximity to a server, browser caching, or client-side rendering that masks a slow initial response. CrUX shows real-world data across diverse users worldwide — a p75 TTFB of 2000ms means 25% of visitors experience a response even slower than that. AI crawlers originate from data center IPs and typically cannot benefit from CDN caching, making TTFB the single most critical metric for AI visibility.
What thresholds should I aim for?
Metric | Good | Needs Improvement | Poor
TTFB | ≤ 800ms | 800–1800ms | > 1800ms
CLS | ≤ 0.1 | 0.1–0.25 | > 0.25
INP | ≤ 200ms | 200–500ms | > 500ms
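
For a quick sanity check, you can time a request from your own machine and compare it against the bands above. This is a rough, single-sample measurement from your own location rather than the field data CrUX reports, so treat it as indicative only; the URL is a placeholder.

import time
import requests

GOOD_MS, NEEDS_IMPROVEMENT_MS = 800, 1800   # thresholds from the table above

def measure_ttfb(url):
    """Rough TTFB: DNS + connect + TLS + server think time, up to the response headers."""
    start = time.perf_counter()
    with requests.get(url, stream=True, timeout=30):   # stream=True skips the body download
        elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms

ttfb = measure_ttfb("https://example.com/")
if ttfb <= GOOD_MS:
    rating = "Good"
elif ttfb <= NEEDS_IMPROVEMENT_MS:
    rating = "Needs Improvement"
else:
    rating = "Poor"
print(f"TTFB: {ttfb:.0f} ms ({rating})")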

Understanding Crawlability Results

My robots.txt allows all crawlers but some bots show Blocked — why?

Check for these common causes:

  • Global block rule — User-agent: * followed by Disallow: / blocks all bots including AI crawlers.
  • Partial match rules — a rule like User-agent: Bot can inadvertently catch GPTBot, ClaudeBot, and similar names.
  • Conflicting rules — an allow rule lower in the file may be overridden by a disallow rule that appears first.

Example robots.txt that blocks all bots:

User-agent: *
Disallow: /

We parse robots.txt according to RFC 9309 to determine the effective policy for each bot.
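
To spot-check a policy yourself, Python's standard-library robots.txt parser is a reasonable approximation, though it predates RFC 9309 and can differ from our parser on edge cases such as conflicting rules. The bot names below come from the crawlers discussed on this page; the page URL is a placeholder.

from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()                      # fetch and parse the live robots.txt

page = "https://example.com/blog/some-post"
for bot in AI_BOTS:
    verdict = "allowed" if parser.can_fetch(bot, page) else "blocked"
    print(f"{bot}: {verdict}")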

Do I need to allow all AI crawlers?
No. Selective blocking is a valid and common choice. What matters is that your policy is intentional rather than accidental. You may want to allow search-oriented crawlers (PerplexityBot, OAI-SearchBot) for real-time AI search visibility while blocking training crawlers (GPTBot, CCBot) if you have concerns about training data use. Both choices are legitimate — our report shows you the effective policy so you can make an informed decision.
What does "robots.txt is advisory" mean?
robots.txt is a published request for crawlers to honor your rules — not a technical enforcement mechanism. Compliant crawlers respect it, but it cannot technically prevent access the way server authentication can. Major AI providers publicly document that they respect robots.txt. For real access control, use server-level authentication or IP allowlisting.
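
To make the distinction concrete, here is a minimal sketch of enforcement at the application layer, assuming a Flask app and a placeholder IP allowlist. In practice this kind of rule usually lives in the reverse proxy, WAF, or CDN rather than in application code; the point is simply that, unlike robots.txt, the server actually refuses the request.

from flask import Flask, abort, request

app = Flask(__name__)

ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}   # placeholder allowlist

@app.before_request
def enforce_ip_allowlist():
    # Unlike a robots.txt rule, this check actually denies the request.
    if request.remote_addr not in ALLOWED_IPS:
        abort(403)

@app.route("/private/report")
def private_report():
    return "Only allowlisted clients ever reach this handler."
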
Which AI crawlers should I allow for maximum visibility?
The most important crawlers to allow for AI search visibility are: GPTBot, OAI-SearchBot, ClaudeBot, PerplexityBot, Google-Extended, and CCBot. We track 33 AI bots in total — see the full list on the Crawlability features page.
What if HTTP status is different from robots.txt rules?
This indicates server-level restrictions that override robots.txt permissions. For example, if robots.txt allows GPTBot but the live HTTP request returns 403 Forbidden, a firewall or anti-bot middleware is blocking the request regardless. Fixing this requires investigating your server configuration, WAF rules, or CDN settings — not your robots.txt.
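
One way to confirm this is to request the page with a crawler-style User-Agent header and compare the status code with an ordinary request. The snippet below uses abbreviated, illustrative user-agent strings rather than the crawlers' full official ones.

import requests

url = "https://example.com/"
user_agents = {
    "browser-like": "Mozilla/5.0",
    "GPTBot": "GPTBot/1.0",        # abbreviated for illustration
    "ClaudeBot": "ClaudeBot/1.0",  # abbreviated for illustration
}

for label, ua in user_agents.items():
    status = requests.get(url, headers={"User-Agent": ua}, timeout=30).status_code
    print(f"{label}: HTTP {status}")

# HTTP 200 for the browser-like request but 403 for the bots points to a WAF,
# firewall, or CDN rule rather than robots.txt.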

Understanding Content Visibility Results

What does the word count gap percentage mean?
We compare the word count in the JavaScript-rendered view (what a user sees in a browser) against the raw HTML response (what most crawlers see). The gap percentage is the proportion of visible content that requires JavaScript to appear. Example: if the JS-rendered page has 1,000 words and the raw HTML has 700, the gap is 30%.
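
The arithmetic is simple, and you can approximate the raw-HTML side of the comparison yourself. The sketch below assumes the beautifulsoup4 package for text extraction; a true rendered word count needs a headless browser (for example Playwright), so it is passed in here as a plain number.

import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4

def raw_html_word_count(url):
    """Word count of the raw HTML response: roughly what non-JS crawlers see."""
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    return len(text.split())

def word_count_gap(rendered_words, raw_words):
    """Share of rendered content that is missing from the raw HTML, as a percentage."""
    return max(0.0, (rendered_words - raw_words) / rendered_words * 100)

# The example from above: 1,000 rendered words vs 700 raw words -> 30% gap.
print(word_count_gap(1000, 700))
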
Is a 10% gap bad?
Context matters. A rough guide:
  • Under 5% — normal, low risk. Minor dynamic elements not affecting core content.
  • 5–30% — may affect some crawlers. Investigate what is in the gap. Missing navigation is less critical than missing article body text.
  • Over 30% — significant. Prioritize getting critical content into the raw HTML response to ensure it is accessible to AI crawlers that do not execute JavaScript.
How do I fix content visibility issues?
Four approaches in order of implementation effort:
  • Server-side rendering (SSR) — render full HTML on the server before sending to the client.
  • Static site generation (SSG) — pre-generate HTML at build time.
  • Pre-rendering services — serve static HTML snapshots to bots while keeping SPA behavior for users.
  • Progressive enhancement — ensure critical content is present in the base HTML; JavaScript enhances rather than creates it.

See the Content Visibility features page for a deeper technical walkthrough.

Troubleshooting

Audit timed out or returned errors
Common causes: the server is down or extremely slow (over 30 seconds to respond), the URL format is invalid, or a firewall is blocking the audit request. Try: verify the URL loads in your browser, check whether the site is reachable from an external connection, and try auditing a different page from the same domain to narrow down whether the issue is site-wide or page-specific.
Chrome extension not working on certain pages
The extension cannot analyze certain page types by browser design: chrome:// pages, extension settings pages, local files (file://), and pages with strict Content Security Policy headers that block extension scripts. For these cases, use the web app at beseenby.ai instead.
robots.txt shows old rules after an update
We fetch a fresh copy of robots.txt on every audit. However, CDN or proxy caching may delay how quickly changes propagate. If you've recently updated your robots.txt: wait 5–10 minutes, verify the changes are visible by navigating directly to yourdomain.com/robots.txt in your browser, then run the audit again.
Monitoring stopped running scheduled checks
Possible causes: monitoring was disabled for the page, the project was paused, or consecutive server errors caused the scheduler to back off. Check: navigate to page settings and confirm monitoring is still enabled, then review the last successful check date. If the issue persists, contact support.
Report history not showing recent checks
Two common explanations: first, CrUX data updates monthly, so performance metrics may not reflect changes from the last 4–6 weeks; second, manual checks run from the free tool do not automatically appear in your workspace history — only audits run from within a monitored project are saved to report history.

Key Terms

TTFB

Time to First Byte. Measures how long a server takes to respond to a request. The most critical metric for AI crawler visibility — crawlers from data center IPs cannot rely on CDN caching.

CLS

Cumulative Layout Shift. Measures visual stability — how much page content unexpectedly moves during load. A Core Web Vital.

INP

Interaction to Next Paint. Measures responsiveness to user interactions. Replaced FID as a Core Web Vital in 2024.

CrUX

Chrome UX Report. Google's dataset of real-user performance metrics collected from Chrome users in the field, aggregated at the URL and origin level.

robots.txt

A plain-text file at the root of a domain that instructs crawlers which paths they are (or are not) permitted to access. Advisory, not enforced technically.

noindex

An HTML meta tag or HTTP response header that instructs search engines and crawlers not to index a page, even if they can access it.

Origin-level vs URL-level

Origin-level data is aggregated across an entire domain. URL-level data is specific to one page. Both can be present in a single report when page-level data exists but is supplemented with domain context.

Monitored Page vs Tracked Page

A tracked page is any URL added to a project. A monitored page is a tracked page with scheduled automatic checks enabled. All monitored pages are tracked, but not all tracked pages are monitored.

Still Need Help?

Contact Support

Have a question not answered here? Reach out and we'll get back to you.

Get in Touch

Explore Features

Detailed documentation on each diagnostic check — crawlability, performance, content visibility, and authority.

View Features

Request Beta Access

Access monitoring, white-label reports, compare mode, and team features in the full platform.

Get Beta Access