Drop in a list of URLs. Pick daily, weekly, or monthly. We re-check on schedule, compute the diff against the last run, and only email you when something flips from healthy to broken. No noise: no Slack pileups, no "503 then 200" flap, no daily "all good" emails.
One-time setup per list. After that, you only hear from us when something changes — and even then, only when the change crosses the threshold you set.
Paste a list or import from a completed job. Up to 75,000 URLs per schedule. Same engine that powers our one-time checker — proxy-rotated, rate-limit aware, classifies network errors separately from 4xx/5xx so you don't get false alarms.
Daily for high-stakes lists (client deliverables, payment redirects). Weekly for SEO link audits. Monthly for content hygiene. Cadence is set per schedule, so you can match it to the stakes.
Email by default. Slack and webhook on the Agency tier. We compute new breaks against the prior run and only ping you when the total change count clears the threshold, so one CDN flapping for 30 seconds and recovering doesn't spam you.
Most "monitors" ping you on every status flip — including the transient 503 that recovers a minute later. We track flip history per URL and only alert when total changes since the last run clear the threshold YOU set. Mute one-off blips, escalate when something is actually wrong.
Export your backlinks from Ahrefs / Semrush once. Drop them here, set weekly. The day a partner takes down the page referencing you, you find out — not three months later when you re-run the audit.
Crawl your docs once for outbound links, save the list, monitor weekly. When MDN moves a page or a vendor renames a product doc, you get an email — fix it before users hit the 404.
One schedule per client, monthly. Embed the diff in the deliverable. White-label PDF on the Agency tier so the report ships under your brand, not ours.
4xx, 5xx, and connection-level failures (timeouts, connection refused, SSL handshake errors). We deliberately exclude 429 (Too Many Requests) responses from the "broken" bucket: a 429 usually means the target is hardening against scraping, not that the page is actually down. The full status taxonomy is documented in the app.
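In rough terms, the bucketing works like the sketch below. It's illustrative only; the type and field names are examples for this page, not our internal schema.

```typescript
type Bucket = "ok" | "http_error" | "network_error" | "rate_limited";

interface CheckOutcome {
  status?: number;       // HTTP status code, if a response came back at all
  networkError?: string; // e.g. "timeout", "connection_refused", "ssl_handshake"
}

function classify(outcome: CheckOutcome): Bucket {
  if (outcome.networkError) return "network_error";  // request never completed
  if (outcome.status === 429) return "rate_limited"; // deliberately not "broken"
  if (outcome.status !== undefined && outcome.status >= 400) return "http_error"; // 4xx / 5xx
  return "ok";
}

// Only these buckets count toward the "broken" total that feeds the diff.
const isBroken = (bucket: Bucket): boolean =>
  bucket === "http_error" || bucket === "network_error";
```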
No. You set a per-schedule threshold (default: 1 change). We only alert when the run's total change count reaches it. Bump it to 3-5 for noisy lists where you don't want to hear about minor flaps; keep it at 1 for high-stakes lists.
Daily is the fastest cadence on any paid plan. We intentionally don't offer hourly checks: re-checking that often would make us look like an aggressive scraper to your target sites, which would hurt response quality. Daily is enough for the vast majority of use cases.
The first manual check of any URL list is free up to 300 URLs. To schedule recurring checks you need a paid plan (Starter at $9/mo includes 1 weekly schedule). Set one up from the app — if you're not subscribed yet, the form saves your draft and brings you back after checkout.
Email is on every paid plan. Slack and webhook alerts are Agency-tier (per-schedule). REST API + MCP are available so you can read past runs and diffs programmatically.
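Here's roughly what reading the latest diff over the REST API looks like. The base URL, endpoint path, and response shape in this sketch are placeholders; the API reference in the app has the real ones.

```typescript
// All names below are placeholders from this sketch, not the actual API reference.
const API_BASE = "https://api.example.com/v1";

async function latestDiff(scheduleId: string, apiKey: string) {
  const res = await fetch(`${API_BASE}/schedules/${scheduleId}/runs?limit=1`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  const [latestRun] = await res.json(); // assumes newest run first
  return latestRun?.diff;               // e.g. { newBreaks: [...], recovered: [...] }
}
```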
Free trial includes 300 URL checks. Add a schedule from the app — if you're not subscribed yet, we save your draft and resume the moment you upgrade.