Beta Platform Feature

Track AI Visibility Over Time

A one-time audit shows today's problems. Monitoring tells you when new problems appear, before they compound into months of lost visibility.

Catch Changes Before They Cost You Visibility

Technical barriers do not stay static. A robots.txt update blocks crawlers. A deployment creates content visibility gaps. A server migration tanks performance.

Without monitoring, you usually find these issues after AI systems have already been failing to access your content for weeks.

With monitoring, you get alerted when metrics shift while there is still time to fix them.

Accidental robots.txt block

A staging rule reaches production and GPTBot goes from allowed to blocked. Monitoring catches it at the next scheduled check instead of three weeks later when someone notices a traffic drop.
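The underlying check is straightforward to sketch. A minimal stand-in using Python's standard robots.txt parser (this is an illustration of the idea, not the product's actual implementation) might look like:

```python
from urllib.robotparser import RobotFileParser

def is_bot_allowed(robots_txt: str, user_agent: str, path: str) -> bool:
    # Parse the robots.txt body and ask whether this user agent
    # may fetch the given path.
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, path)

# A staging rule that blocks GPTBot site-wide:
staging_rules = "User-agent: GPTBot\nDisallow: /\n"
print(is_bot_allowed(staging_rules, "GPTBot", "/product-page"))  # False
```

Running this against a fetched robots.txt on a schedule, and alerting when the result flips, is the essence of a crawlability check.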

Performance regression

TTFB jumps from acceptable to risky after a plugin update or infrastructure change. Saved run history shows exactly when the shift happened and what grade it was before.

Content visibility gap

A template change moves key copy behind client-side rendering. Monitoring flags the new gap before it propagates to all similar pages across the site.

Monitor What Matters

Add pages individually or bulk import via CSV. Tracked pages are URLs in your project. Monitored pages are the subset you have enabled automated checks on.

Tracked Pages

Any URL you add to a project is a tracked page. Use tracked pages to organize all URLs associated with a site without running checks on all of them immediately.

Monitored Pages

Monitored pages are tracked pages with automated checks enabled. Set frequency, configure alerts, and let the system run checks on a schedule so you do not have to remember to check manually.
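The tracked-versus-monitored distinction can be sketched as a simple data model (field names here are illustrative, not the actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Page:
    url: str
    # None means tracked only; "daily", "weekly", or a custom
    # label means automated checks are enabled.
    frequency: Optional[str] = None

    @property
    def monitored(self) -> bool:
        return self.frequency is not None

pages = [
    Page("https://example.com/archive"),            # tracked only
    Page("https://example.com/pricing", "daily"),   # tracked and monitored
]
monitored = [p for p in pages if p.monitored]
print(len(pages), len(monitored))  # 2 1
```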

How It Works

1. Add Pages

  • Individual URL entry
  • CSV upload
  • Copy and paste from a spreadsheet

2. Configure Monitoring

  • Enable or disable per page
  • Set check frequency
  • Choose alert conditions

3. Track Across Projects

  • Group pages by site or client
  • Compare runs across time
  • Share reports with stakeholders

Check on Your Terms

Different pages deserve different attention. High-traffic product pages need frequent checks. Evergreen content pages need less. Configure schedules to match the risk profile of each page.

Daily

Runs once every 24 hours. Best for high-traffic pages, recently migrated sites, or any URL where a day of downtime or blocking is unacceptable.

When to use: Homepage, primary product pages, conversion-critical landing pages.

Weekly

Runs once every 7 days. Suitable for pages that change infrequently and where a few days of lag before detection is acceptable.

When to use: Blog posts, documentation, secondary landing pages, evergreen content.

Custom Interval

Set a specific cadence—every 3 days, every 2 weeks, or whatever matches your deployment and content update cycle.

When to use: When default daily or weekly cadences do not align with your release schedule.

Manual Only

No automatic checks. Run a check on demand when you want it. Useful for pages you want to validate before and after a specific change.

When to use: Pre- and post-deployment validation, one-off investigations, low-priority archived pages.
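As a rough sketch, next-check scheduling under these four modes could be expressed as follows (the frequency labels and custom-interval handling are assumptions, not the product's internals):

```python
from datetime import datetime, timedelta
from typing import Optional

INTERVALS = {
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
}

def next_check(last_check: datetime, frequency: str,
               custom_interval: Optional[timedelta] = None) -> Optional[datetime]:
    """Return when the next automated check should run, or None for manual-only."""
    if frequency == "manual":
        return None  # manual-only pages are never scheduled
    if frequency == "custom":
        if custom_interval is None:
            raise ValueError("custom frequency requires an interval")
        return last_check + custom_interval
    return last_check + INTERVALS[frequency]

last = datetime(2026, 4, 7)
print(next_check(last, "daily"))                    # 2026-04-08 00:00:00
print(next_check(last, "custom", timedelta(days=3)))  # 2026-04-10 00:00:00
print(next_check(last, "manual"))                   # None
```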

What We Monitor

Monitoring tracks four categories of changes. Each category has specific alert triggers so you only get notified when something that matters has actually shifted.

Content Changes

What we track:

  • Word count shifts between bot HTML and rendered output
  • New or widened content visibility gaps
  • Render pattern changes (server-side vs client-side)
  • Heading structure regressions
  • Key content moving behind JavaScript rendering

Alert triggers: Visibility gap exceeds threshold, significant word count drop, render pattern changes from server-side to client-side.

Subject: [BeSeenByAI Alert] Content visibility gap widened
Site: example.com/product-page

What changed:
- Content visibility gap: 8% → 32%
- Cause: Key product description now
  rendered client-side
- Last clean check: Apr 3, 2026

Recommended action:
Review recent template or CMS changes
that may have moved content into
JavaScript-only rendering.
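As a simplified illustration of how a gap percentage like the one in the alert above could be derived, assume the gap is the share of rendered words missing from the bot-visible HTML (the product's actual scoring may differ):

```python
def visibility_gap(bot_word_count: int, rendered_word_count: int) -> float:
    # Percentage of rendered content that a non-rendering bot never sees.
    if rendered_word_count == 0:
        return 0.0
    missing = max(rendered_word_count - bot_word_count, 0)
    return round(100 * missing / rendered_word_count, 1)

# 1,000 words after rendering, but only 680 in the raw HTML:
print(visibility_gap(680, 1000))  # 32.0
```

An alert threshold then becomes a single comparison: fire when the computed gap exceeds the configured percentage.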

Crawlability Changes

What we track:

  • New robots.txt disallow rules affecting AI bots
  • HTTP status changes (200 to 4xx or 5xx)
  • noindex directives added to monitored pages
  • Anti-bot blocking behavior detected
  • Bot-specific block patterns across 33 tracked crawlers

Alert triggers: Any bot transitions from allowed to blocked, HTTP status failure, noindex added.

Subject: [BeSeenByAI Alert] GPTBot now blocked
Site: example.com

What changed:
- GPTBot: Allowed → Blocked
- Cause: robots.txt updated
- Last successful check: Apr 5, 2026

Recommended action:
Review robots.txt and verify the
block is intentional.

Performance Changes

What we track:

  • TTFB grade changes (CrUX field data)
  • CLS instability increases
  • INP spikes above threshold
  • Grade boundary crossings (e.g. B to C)

Alert triggers: TTFB grade drops one or more levels, CLS moves from stable to unstable, INP crosses poor threshold.

Note: CrUX data reflects the previous 28-day field data window. Performance alerts may lag 1–4 weeks behind the actual regression. Lab data (measured at check time) is more immediate but less representative of real user conditions.
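For illustration, a grading function consistent with the sample alert in this section might look like the following. The threshold values are assumptions for the sketch, not documented grade boundaries:

```python
def ttfb_grade(ttfb_ms: int) -> str:
    # Hypothetical boundaries: under 800ms reads as Good,
    # under 1800ms as Warning, anything slower as Poor.
    if ttfb_ms < 800:
        return "Good"
    if ttfb_ms < 1800:
        return "Warning"
    return "Poor"

# Matches the sample alert: 720ms was Good, 1450ms is Warning.
print(ttfb_grade(720), ttfb_grade(1450))
```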

Subject: [BeSeenByAI Alert] TTFB degraded
Site: example.com/landing-page

What changed:
- TTFB: Good (720ms) → Warning (1450ms)
- Grade: B → C
- Last clean check: Apr 1, 2026

Recommended action:
Investigate server response time changes,
CDN configuration, or recent infrastructure
updates.

Authority Changes (Beta)

What we track:

  • JSON-LD schema validation failures
  • Page type classification changes
  • Authority score grade shifts
  • Structured data removal or malformation
  • Entity signal regressions

Alert triggers: Authority grade drops, schema validation errors introduced, page type reclassified.

Subject: [BeSeenByAI Alert] Authority score dropped
Site: example.com/article-page

What changed:
- Authority score: A → C
- Cause: JSON-LD schema removed
- Last clean check: Mar 28, 2026

Recommended action:
Check for template changes that may
have removed structured data markup
from article pages.

Track Every Check

Every scheduled or manual check creates a saved run record. Run history gives you a timestamped audit trail you can use for verification, investigation, and documentation.

What Gets Saved Per Check

  • Full report snapshot at the time of the check
  • Metric deltas compared to the previous run
  • Improved, degraded, or stable indicators per category
  • Timestamp, schedule type (automated vs manual), and trigger
  • Bot policy status for all 33 tracked crawlers
  • Content visibility score and gap percentage
  • TTFB grade and raw value

Run records are retained for the duration of your plan period. Beta access includes full history.

Using Run History

Compare two time points. Select any two runs and open Compare Mode to see exactly what changed between them.

Track fix verification. After deploying a fix, run a manual check and confirm the issue is resolved against the last failing run.

Audit trail for clients. Show clients a documented record of when issues appeared, when they were fixed, and what the before and after states were.

Trend analysis. Identify pages that are gradually degrading versus pages that had a sudden incident, and prioritize your response accordingly.

Side-by-Side Diagnostics

Compare Mode lets you select two saved runs for the same page and view them side by side. Every metric, every bot policy, and every score is shown in parallel, so differences are immediately visible.

Use Cases

  • Before and after deployment. Run a manual check before pushing a change, deploy, run another check, compare the two.
  • Fix verification. Confirm that a fix actually resolved the reported issue, not just that the alert stopped firing.
  • Monthly health checks. Compare this month's baseline to last month's baseline to surface slow-moving regressions.
  • Client reporting. Show clients a documented before-and-after for any remediation work you completed.

How to Use Compare Mode

  1. Open the run history for a monitored page.
  2. Select the first run (the baseline or before state).
  3. Select the second run (the current or after state).
  4. Click Compare to open the side-by-side view.
  5. Review each category for changed indicators, new blocks, and score shifts.
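Conceptually, the comparison reduces to diffing two flat snapshots of metrics. A minimal sketch (field names illustrative):

```python
def compare_runs(before: dict, after: dict) -> dict:
    # Return {metric: (before_value, after_value)} for every
    # metric that differs between the two runs.
    changed = {}
    for key in before.keys() | after.keys():
        if before.get(key) != after.get(key):
            changed[key] = (before.get(key), after.get(key))
    return changed

baseline = {"GPTBot": "Allowed", "ttfb_grade": "B", "gap_pct": 8}
current  = {"GPTBot": "Blocked", "ttfb_grade": "B", "gap_pct": 8}
print(compare_runs(baseline, current))  # {'GPTBot': ('Allowed', 'Blocked')}
```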

Get Notified When It Matters

Configure monitoring alerts so you hear about the right problems on the right pages without noise from pages you do not care about.

Alert Configuration Options

  • Per-page alert toggles. Enable alerts only on the pages that warrant immediate attention.
  • Alert type filtering. Choose which change families trigger alerts: crawlability, content, performance, authority, or any combination.
  • Immediate vs digest. Critical crawlability changes (bot blocked, site down) can be sent immediately. Lower-severity changes can be batched into a weekly digest.
  • Delivery channel. Email delivery via your account email address. Additional channels planned.
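The immediate-versus-digest split amounts to a simple routing rule. A sketch, where which change families count as critical is an assumption based on the examples given:

```python
# Assumed: crawlability failures are the only immediately-delivered family.
IMMEDIATE_FAMILIES = {"crawlability"}

def delivery_mode(change_family: str) -> str:
    # Critical families go out at once; everything else is batched.
    return "immediate" if change_family in IMMEDIATE_FAMILIES else "weekly_digest"

print(delivery_mode("crawlability"), delivery_mode("performance"))
```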

Example Alert Email

From: no-reply@beseenby.ai
Subject: [BeSeenByAI Alert] 2 changes detected
         on example.com

Monitoring run completed: Apr 7, 2026 00:14 UTC
Schedule: Daily

Changes detected:

1. CRAWLABILITY — GPTBot now blocked
   Page: example.com/product-page
   Before: Allowed | After: Blocked
   Cause: robots.txt rule added

2. PERFORMANCE — TTFB degraded
   Page: example.com/landing-page
   Before: B (820ms) | After: C (1380ms)

View full run report:
https://app.beseenby.ai/projects/123/runs/456

To adjust alert settings, visit your
monitoring configuration page.

Add Pages at Scale

Import dozens or hundreds of URLs at once using CSV upload or paste-from-spreadsheet. Useful for post-migration audits, client site onboarding, and setting up monitoring for large content libraries.

CSV Upload

Upload a CSV file with one URL per row. Optional columns for page name and notes. Download a template from the import dialog to get started.

  • One URL per row
  • Optional name and notes columns
  • Duplicate URLs are skipped automatically

Copy and Paste

Paste a list of URLs directly from a spreadsheet, Google Sheet, or text file. One URL per line. The importer handles whitespace and normalization.

  • Paste directly into the import field
  • One URL per line
  • Works from any spreadsheet tool
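The whitespace handling and duplicate skipping the importer performs can be approximated like this (implementation details assumed):

```python
def normalize_urls(pasted: str) -> list:
    # Trim whitespace, drop blank lines, and skip duplicates
    # while preserving the original order.
    seen, result = set(), []
    for line in pasted.splitlines():
        url = line.strip()
        if url and url not in seen:
            seen.add(url)
            result.append(url)
    return result

pasted = "https://a.com/x\n  https://a.com/x  \nhttps://a.com/y\n\n"
print(normalize_urls(pasted))  # ['https://a.com/x', 'https://a.com/y']
```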

Sitemap Import (Planned)

Point the importer at a sitemap URL and it will extract all listed pages automatically. Useful for comprehensive site coverage without manual URL collection.

  • XML sitemap parsing
  • Sitemap index support
  • Filter by URL pattern before importing
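Since sitemap import is still planned, here is roughly what extracting URLs from a standard XML sitemap looks like, independent of the product (the sitemap format itself is a public standard):

```python
import xml.etree.ElementTree as ET

# Clark-notation namespace used by the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text: str) -> list:
    # Collect every <loc> value from a urlset document.
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]

sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/pricing</loc></url>
</urlset>"""
print(urls_from_sitemap(sample))
```

Sitemap index files work the same way, except the child elements point at further sitemaps rather than pages.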

Import Limits

Tracked URL limits depend on plan tier: the free tier allows up to 5 tracked URLs per project, while Plus, Pro, and Agency plans allow 50, 500, and 2,000 tracked URLs respectively. Duplicate URLs in an import are skipped automatically.

After Import

1. Review imported URLs. Confirm the list looks correct, fix any normalization issues, and remove URLs you do not want to track.

2. Select pages to monitor. Not every imported URL needs automated checks. Enable monitoring only on the pages that warrant it.

3. Run an initial batch audit. Get a baseline reading for all imported pages before the first scheduled check runs.

Who Monitoring Is For

Agencies Managing Client Sites

Scenario: You manage 10–30 client sites. One robots.txt change affects a client's homepage without anyone noticing for three weeks.

  • Add client sites as separate projects
  • Enable daily monitoring on high-priority pages
  • Get alerted when crawlability changes for any client
  • Share run reports with clients directly

Value: Proactive issue detection instead of reactive damage control.

Example: An agency monitors 200 URLs across 15 clients. A staging robots.txt rule goes live on a client's site on a Tuesday. By Wednesday morning, the alert is in the account owner's inbox before the client notices.

In-House Teams Post-Migration

Scenario: Your team completed a platform migration. You want to verify nothing regressed and catch any issues that surface in the weeks after go-live.

  • Import all migrated URLs via CSV
  • Run an initial batch audit for baseline
  • Enable daily monitoring for the first 30 days post-migration
  • Compare pre-migration baseline to each subsequent run

Value: Migration confidence without manual weekly checks.

Example: A team migrates from WordPress to a headless CMS. Monitoring catches that the new template renders all article body copy client-side, creating content visibility gaps that were not present before migration.

SEO Consultants

Scenario: You advise clients on technical SEO and GEO. You need documented evidence that issues you flagged were fixed, and evidence of ongoing health.

  • Set up monitoring during the audit engagement
  • Use Compare Mode to document before/after states
  • Share run history as deliverables
  • Enable weekly monitoring after engagement ends as a retainer component

Value: Documentation and continuity built into the workflow.

Example: A consultant recommends fixing a content visibility gap. After the client's dev team deploys the fix, a manual check plus Compare Mode produces a documented before/after report that becomes part of the project deliverable.

Founders Staying Informed

Scenario: You run a content-driven product or SaaS site. You do not have a technical team watching AI visibility, but you know it matters for growth.

  • Add 5–10 key pages to a project
  • Enable weekly monitoring
  • Set up email alerts for critical changes only
  • Review monthly trend summary

Value: Peace of mind without adding another thing to the weekly checklist.

Example: A founder's blog drives most of their inbound pipeline. Weekly monitoring flags a TTFB regression after a theme update. They forward the alert to their developer, who fixes the issue before it affects a full month of CrUX data.

Monitoring vs. Free Audits

The free audit tool and monitoring serve different needs. Use the free audit to investigate a specific page. Use monitoring to maintain ongoing visibility health across your most important URLs.

Feature                      | Free Audit | Monitoring (Beta)
-----------------------------|------------|-----------------------------
Run on demand                | Yes        | Yes (manual check)
Automated scheduled checks   | No         | Yes (daily, weekly, custom)
Saved run history            | No         | Yes
Compare Mode                 | No         | Yes
Email alerts on change       | No         | Yes
Bulk URL import              | No         | Yes (CSV, paste)
Multi-page projects          | No         | Yes
Authority checks             | No         | Yes (Beta)
Shareable reports            | No         | Yes
Account required             | No         | Yes (beta invite)

Frequently Asked Questions

How often does monitoring run checks?
You choose the frequency per page: daily, weekly, or a custom interval. You can also set specific pages to manual-only if you only want to run checks on demand before and after deployments.

What triggers an alert email?
Alerts fire when a monitored metric crosses a threshold you have configured: a bot transitions from allowed to blocked, a content visibility gap exceeds a set percentage, TTFB grade drops one level or more, or an authority score regresses. You can configure which change families trigger alerts per page.

How many pages can I monitor?
Free tier allows up to 5 tracked URLs per project with limited automated checks. Beta access includes higher URL limits depending on your plan tier. Plus, Pro, and Agency plans have progressively higher tracked URL budgets (50, 500, and 2000 respectively).

How far back does run history go?
Run history is retained for the duration of your active plan period. Beta access includes full history during the beta phase. Report retention policies will be documented in the plan terms before the public launch.

Can I share monitoring reports with clients?
Yes. Individual run reports can be shared via a shareable link. Shareable reports are entitlement-gated—available on plans that include the sharing feature. The recipient does not need a BeSeenByAI account to view a shared report.

Why might a performance alert be delayed?
Performance metrics (TTFB, CLS, INP) are sourced from the Chrome UX Report (CrUX), which reflects the previous 28-day rolling window of real-user field data. A performance regression that happened today may not surface in CrUX data for up to four weeks. Lab measurements taken at check time are more immediate but represent a single synthetic data point rather than real-user conditions.

What is the difference between tracked pages and monitored pages?
Tracked pages are any URLs you have added to a project. Monitored pages are the subset of tracked pages that have automated checking enabled. You can add many URLs to a project for organizational purposes without enabling automated checks on all of them.

Can I import pages from a sitemap?
Sitemap import is planned but not yet available. Currently you can add pages individually, via CSV upload, or by pasting a list of URLs. Sitemap import will be added in a future update.

How do I use Compare Mode?
Open the run history for a monitored page, select two runs you want to compare, and click Compare. The side-by-side view shows every metric and status from both runs in parallel, with visual indicators for anything that changed between them.

Does monitoring require a paid plan?
Monitoring is a beta platform feature available to users with beta access. Basic monitoring is included in the beta. Full scheduling, alert configuration, and history retention are tied to plan tier. The free audit tool at BeSeenByAI.com does not require an account and is always free.

Start Monitoring Your AI Visibility

Run the free audit to understand where you stand today. Set up monitoring to stay informed as things change.