A single audit shows the problems you have today. Monitoring shows you when new problems appear, before they compound unseen for months.
Technical barriers do not stay static. A robots.txt update blocks crawlers. A deployment creates content visibility gaps. A server migration tanks performance.
Without monitoring, you usually find these issues after AI systems have already been failing to access your content for weeks.
With monitoring, you get alerted when metrics shift while there is still time to fix them.
A staging rule reaches production and GPTBot goes from allowed to blocked. Monitoring catches it on the next scheduled check instead of three weeks later, when someone finally notices a traffic drop.
TTFB jumps from acceptable to risky after a plugin update or infrastructure change. Saved run history shows exactly when the shift happened and what grade it was before.
A template change moves key copy behind client-side rendering. Monitoring flags the new gap before it propagates to all similar pages across the site.
Add pages individually or bulk import via CSV. Tracked pages are URLs in your project; monitored pages are the subset with automated checks enabled.
Any URL you add to a project is a tracked page. Use tracked pages to organize all URLs associated with a site without running checks on all of them immediately.
Monitored pages are tracked pages with automated checks enabled. Set frequency, configure alerts, and let the system run checks on a schedule so you do not have to remember to check manually.
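The tracked/monitored distinction can be sketched as a minimal data model. The names below are illustrative, not the actual BeSeenByAI schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedPage:
    """A URL added to a project. Monitoring is opt-in per page."""
    url: str
    check_frequency: Optional[str] = None  # "daily", "weekly", "custom", or None

    @property
    def is_monitored(self) -> bool:
        # A monitored page is simply a tracked page with a schedule enabled.
        return self.check_frequency is not None

project = [
    TrackedPage("https://example.com/"),                  # tracked only
    TrackedPage("https://example.com/product", "daily"),  # monitored
]
monitored = [p for p in project if p.is_monitored]
```

The point of the split: you can keep a full inventory of a site in one project while spending scheduled checks only where they matter.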
Different pages deserve different attention. High-traffic product pages need frequent checks. Evergreen content pages need less. Configure schedules to match the risk profile of each page.
Daily: runs once every 24 hours. Best for high-traffic pages, recently migrated sites, or any URL where a day of downtime or blocking is unacceptable.
When to use: Homepage, primary product pages, conversion-critical landing pages.
Weekly: runs once every 7 days. Suitable for pages that change infrequently and where a few days of lag before detection is acceptable.
When to use: Blog posts, documentation, secondary landing pages, evergreen content.
Custom: set a specific cadence (every 3 days, every 2 weeks, or whatever matches your deployment and content update cycle).
When to use: When default daily or weekly cadences do not align with your release schedule.
Manual: no automatic checks. Run a check on demand when you want it. Useful for pages you want to validate before and after a specific change.
When to use: Pre- and post-deployment validation, one-off investigations, low-priority archived pages.
Monitoring tracks four categories of changes. Each category has specific alert triggers so you only get notified when something that matters has actually shifted.
Content visibility. What we track: visibility gap percentage, rendered word count, and whether key content is rendered server-side or client-side.
Alert triggers: Visibility gap exceeds threshold, significant word count drop, render pattern changes from server-side to client-side.
Subject: [BeSeenByAI Alert] Content visibility gap widened
Site: example.com/product-page
What changed:
- Content visibility gap: 8% → 32%
- Cause: Key product description now rendered client-side
- Last clean check: Apr 3, 2026
Recommended action: Review recent template or CMS changes that may have moved content into JavaScript-only rendering.
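One way to quantify a gap like the 8% → 32% jump above is to compare the words present in the raw HTML response against the words present after rendering. A rough sketch; the actual scoring is more involved:

```python
import re

def visibility_gap(raw_html_text: str, rendered_text: str) -> float:
    """Fraction of rendered words missing from the raw (unrendered) HTML."""
    tokenize = lambda s: re.findall(r"[a-z0-9']+", s.lower())
    raw = set(tokenize(raw_html_text))
    rendered = tokenize(rendered_text)
    if not rendered:
        return 0.0
    missing = sum(1 for word in rendered if word not in raw)
    return missing / len(rendered)

# Key product copy rendered only client-side shows up as a gap.
raw = "Acme Widget. Loading..."
rendered = "Acme Widget. Durable steel housing rated for outdoor use."
print(round(visibility_gap(raw, rendered), 2))  # 0.78
```

Crawlers that do not execute JavaScript only ever see the raw side of that comparison, which is why a rising gap matters.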
Crawlability. What we track: robots.txt policy for each AI bot, HTTP status codes, and noindex directives.
Alert triggers: Any bot transitions from allowed to blocked, HTTP status failure, noindex added.
Subject: [BeSeenByAI Alert] GPTBot now blocked
Site: example.com
What changed:
- GPTBot: Allowed → Blocked
- Cause: robots.txt updated
- Last successful check: Apr 5, 2026
Recommended action: Review robots.txt and verify the block is intentional.
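The allowed-to-blocked transition above is exactly the kind of check you can reproduce with Python's standard-library robots.txt parser. A minimal sketch:

```python
from urllib.robotparser import RobotFileParser

def bot_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Parse a robots.txt body and check whether user_agent may fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

before = "User-agent: *\nAllow: /\n"
# A staging rule that reached production:
after = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nAllow: /\n"

url = "https://example.com/product-page"
print(bot_allowed(before, "GPTBot", url))  # True
print(bot_allowed(after, "GPTBot", url))   # False
```

Monitoring simply runs this comparison against the live robots.txt on every scheduled check and alerts on the True-to-False flip.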
Performance. What we track: TTFB, CLS, and INP, from both CrUX field data and lab measurements.
Alert triggers: TTFB grade drops one or more levels, CLS moves from stable to unstable, INP crosses poor threshold.
Note: CrUX data reflects the previous 28-day field data window. Performance alerts may lag 1–4 weeks behind the actual regression. Lab data (measured at check time) is more immediate but less representative of real user conditions.
Subject: [BeSeenByAI Alert] TTFB degraded
Site: example.com/landing-page
What changed:
- TTFB: Good (720ms) → Warning (1450ms)
- Grade: B → C
- Last clean check: Apr 1, 2026
Recommended action: Investigate server response time changes, CDN configuration, or recent infrastructure updates.
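A grade transition like the one in the example (Good at 720ms to Warning at 1450ms, B to C) implies threshold bands. The cutoffs below are assumptions for illustration only, not BeSeenByAI's actual scale:

```python
def ttfb_grade(ttfb_ms: float) -> tuple[str, str]:
    """Map a TTFB measurement to a (label, grade) pair.

    Thresholds are hypothetical; real grading may also weight
    field data against lab data.
    """
    if ttfb_ms <= 800:
        return ("Good", "B")      # fast server response
    if ttfb_ms <= 1800:
        return ("Warning", "C")   # noticeably slow, worth investigating
    return ("Poor", "D")          # likely hurting crawl and user experience

print(ttfb_grade(720))   # ('Good', 'B')
print(ttfb_grade(1450))  # ('Warning', 'C')
```

An alert fires when consecutive runs land in different bands, which is what "grade drops one or more levels" means in the trigger list above.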
Authority. What we track: authority grade, structured data validity, and page type classification.
Alert triggers: Authority grade drops, schema validation errors introduced, page type reclassified.
Subject: [BeSeenByAI Alert] Authority score dropped
Site: example.com/article-page
What changed:
- Authority score: A → C
- Cause: JSON-LD schema removed
- Last clean check: Mar 28, 2026
Recommended action: Check for template changes that may have removed structured data markup from article pages.
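The "JSON-LD schema removed" cause above is detectable by checking whether a page still contains parseable `application/ld+json` blocks. A rough sketch using only the standard library (a production checker would use a real HTML parser):

```python
import json
import re

def jsonld_blocks(html: str) -> list:
    """Extract and parse all JSON-LD script blocks from an HTML document."""
    pattern = re.compile(
        r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    blocks = []
    for raw in pattern.findall(html):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # invalid JSON-LD: a schema validation error worth alerting on
    return blocks

before = '<script type="application/ld+json">{"@type": "Article"}</script>'
after = "<article>Same copy, markup gone</article>"
print(len(jsonld_blocks(before)), len(jsonld_blocks(after)))  # 1 0
```

A count that drops from one run to the next, or a block that stops parsing, is the signal behind "schema validation errors introduced".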
Every scheduled or manual check creates a saved run record. Run history gives you a timestamped audit trail you can use for verification, investigation, and documentation.
Run records are retained for the duration of your plan period. Beta access includes full history.
Compare two time points. Select any two runs and open Compare Mode to see exactly what changed between them.
Track fix verification. After deploying a fix, run a manual check and confirm the issue is resolved against the last failing run.
Audit trail for clients. Show clients a documented record of when issues appeared, when they were fixed, and what the before and after states were.
Trend analysis. Identify pages that are gradually degrading versus pages that had a sudden incident, and prioritize your response accordingly.
Compare Mode lets you select two saved runs for the same page and view them side by side. Every metric, bot policy, and score is shown in parallel, so differences are immediately visible.
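Conceptually, Compare Mode is a field-by-field diff of two run records. A minimal sketch with hypothetical metric names:

```python
def diff_runs(run_a: dict, run_b: dict) -> dict:
    """Return {metric: (before, after)} for every field that changed."""
    keys = run_a.keys() | run_b.keys()
    return {
        k: (run_a.get(k), run_b.get(k))
        for k in sorted(keys)
        if run_a.get(k) != run_b.get(k)
    }

apr_1 = {"gptbot": "Allowed", "ttfb_ms": 820, "ttfb_grade": "B"}
apr_7 = {"gptbot": "Blocked", "ttfb_ms": 1380, "ttfb_grade": "C"}
for metric, (before, after) in diff_runs(apr_1, apr_7).items():
    print(f"{metric}: {before} -> {after}")
```

Unchanged fields drop out of the result, which is why a side-by-side view makes a single regression jump out of dozens of metrics.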
Configure monitoring alerts so you hear about the right problems on the right pages without noise from pages you do not care about.
From: no-reply@beseenby.ai
Subject: [BeSeenByAI Alert] 2 changes detected on example.com
Monitoring run completed: Apr 7, 2026 00:14 UTC
Schedule: Daily
Changes detected:
1. CRAWLABILITY — GPTBot now blocked
Page: example.com/product-page
Before: Allowed | After: Blocked
Cause: robots.txt rule added
2. PERFORMANCE — TTFB degraded
Page: example.com/landing-page
Before: B (820ms) | After: C (1380ms)
View full run report:
https://app.beseenby.ai/projects/123/runs/456
To adjust alert settings, visit your monitoring configuration page.

Import dozens or hundreds of URLs at once using CSV upload or paste-from-spreadsheet. Useful for post-migration audits, client site onboarding, and setting up monitoring for large content libraries.
Upload a CSV file with one URL per row. Optional columns for page name and notes. Download a template from the import dialog to get started.
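A hypothetical import file following that shape (the column names here are illustrative; download the template from the import dialog for the exact header):

```csv
url,page_name,notes
https://example.com/,Homepage,Conversion-critical
https://example.com/product-page,Product page,Monitor daily
https://example.com/blog/launch-post,Launch post,Weekly is fine
```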
Paste a list of URLs directly from a spreadsheet, Google Sheet, or text file. One URL per line. The importer handles whitespace and normalization.
Point the importer at a sitemap URL and it will extract all listed pages automatically. Useful for comprehensive site coverage without manual URL collection.
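Sitemap extraction boils down to collecting the `<loc>` entries from the sitemap XML. A self-contained sketch parsing a sitemap body with the standard library (in practice the importer fetches the sitemap URL first):

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def urls_from_sitemap(xml_body: str) -> list:
    """Return every <loc> URL listed in a sitemap document."""
    root = ET.fromstring(xml_body)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/product-page</loc></url>
</urlset>"""

print(urls_from_sitemap(sitemap))
```

Sitemap index files (sitemaps of sitemaps) add one more level of the same traversal, which is why a sitemap URL alone is enough for comprehensive coverage.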
1. Review imported URLs. Confirm the list looks correct, fix any normalization issues, and remove URLs you do not want to track.
2. Select pages to monitor. Not every imported URL needs automated checks. Enable monitoring only on the pages that warrant it.
3. Run an initial batch audit. Get a baseline reading for all imported pages before the first scheduled check runs.
Scenario: You manage 10–30 client sites. One robots.txt change affects a client's homepage without anyone noticing for three weeks.
Value: Proactive issue detection instead of reactive damage control.
Example: An agency monitors 200 URLs across 15 clients. A staging robots.txt rule goes live on a client's site on a Tuesday. By Wednesday morning, the alert is in the account owner's inbox before the client notices.
Scenario: Your team completed a platform migration. You want to verify nothing regressed and catch any issues that surface in the weeks after go-live.
Value: Migration confidence without manual weekly checks.
Example: A team migrates from WordPress to a headless CMS. Monitoring catches that the new template renders all article body copy client-side, creating content visibility gaps that were not present before migration.
Scenario: You advise clients on technical SEO and GEO. You need documented evidence that issues you flagged were fixed, and evidence of ongoing health.
Value: Documentation and continuity built into the workflow.
Example: A consultant recommends fixing a content visibility gap. After the client's dev team deploys the fix, a manual check plus Compare Mode produces a documented before/after report that becomes part of the project deliverable.
Scenario: You run a content-driven product or SaaS site. You do not have a technical team watching AI visibility, but you know it matters for growth.
Value: Peace of mind without adding another thing to the weekly checklist.
Example: A founder's blog drives most of their inbound pipeline. Weekly monitoring flags a TTFB regression after a theme update. They forward the alert to their developer, who fixes the issue before it affects a full month of CrUX data.
The free audit tool and monitoring serve different needs. Use the free audit to investigate a specific page. Use monitoring to maintain ongoing visibility health across your most important URLs.
| Feature | Free Audit | Monitoring (Beta) |
|---|---|---|
| Run on demand | Yes | Yes (manual check) |
| Automated scheduled checks | No | Yes (daily, weekly, custom) |
| Saved run history | No | Yes |
| Compare Mode | No | Yes |
| Email alerts on change | No | Yes |
| Bulk URL import | No | Yes (CSV, paste) |
| Multi-page projects | No | Yes |
| Authority checks | No | Yes (Beta) |
| Shareable reports | No | Yes |
| Account required | No | Yes (beta invite) |
Run the free audit to understand where you stand today. Set up monitoring to stay informed as things change.