AgentScan

About AgentScan

A specialized AI-agent content-readiness project for modern web teams.

AgentScan was created to solve a specific production gap: sites that rank well for humans but fail basic machine-readable expectations for autonomous agents. Our goal is to make that gap measurable and fixable.

We intentionally keep the default profile focused and operational. Instead of broad, generic audits, we run the checks most likely to determine whether an agent can discover your content and consume it correctly in real-world workflows.

Our editorial approach

We prefer explicit, testable checks over vague heuristics. If a signal cannot be verified from observable HTTP behavior, we avoid using it in the default score.

Core production checks

robots.txt quality

Ensures crawler policy exists, is parseable, and supports deterministic bot behavior.
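As a minimal sketch of what such a check involves (not AgentScan's actual implementation), a robots.txt file can be parsed with Python's standard-library `urllib.robotparser` and probed for a deterministic allow/deny answer; the function and sample policy below are illustrative assumptions.

```python
# Hedged sketch: verify a robots.txt body parses and yields a
# deterministic allow/deny decision for a given user agent and path.
import urllib.robotparser

def robots_allows(robots_txt, user_agent, path):
    """Parse robots.txt text and return whether user_agent may fetch path."""
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, path)

sample = """\
User-agent: *
Disallow: /private/
Allow: /
"""

print(robots_allows(sample, "ExampleBot", "/docs"))       # True
print(robots_allows(sample, "ExampleBot", "/private/x"))  # False
```

A production check would also fetch the file over HTTP and flag unknown or malformed directives rather than silently ignoring them.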

Sitemap discoverability

Checks canonical sitemap paths and robots references to improve content indexing reliability.
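The discovery logic can be sketched as follows; the helper name and the list of fallback paths are assumptions for illustration, not AgentScan's code.

```python
# Hedged sketch: collect sitemap URLs declared via 'Sitemap:' lines in
# robots.txt, and list the conventional fallback paths a scanner might
# probe when none are declared.
CANONICAL_SITEMAP_PATHS = ["/sitemap.xml", "/sitemap_index.xml"]  # common conventions

def sitemaps_from_robots(robots_txt):
    """Return sitemap URLs referenced by 'Sitemap:' lines."""
    urls = []
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "sitemap" and value.strip():
            urls.append(value.strip())
    return urls

sample = "User-agent: *\nAllow: /\nSitemap: https://example.com/sitemap.xml\n"
print(sitemaps_from_robots(sample))  # ['https://example.com/sitemap.xml']
```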

Link response headers

Validates machine-discovery signals via RFC 8288-style Link relations.
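To show what validating such a header entails, here is a simplified parser for RFC 8288-style Link values. It is an assumption-laden sketch: a conformant parser must also handle commas and semicolons inside quoted strings, which this version assumes do not occur.

```python
# Hedged sketch: split a Link response header into (target, params)
# pairs, e.g. '</doc.md>; rel="alternate"; type="text/markdown"'.
def parse_link_header(value):
    links = []
    for part in value.split(","):
        segments = part.split(";")
        target = segments[0].strip().strip("<>")
        params = {}
        for seg in segments[1:]:
            k, _, v = seg.partition("=")
            params[k.strip().lower()] = v.strip().strip('"')
        links.append((target, params))
    return links

header = '</doc.md>; rel="alternate"; type="text/markdown"'
print(parse_link_header(header))
```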

Markdown negotiation

Tests whether `Accept: text/markdown` returns agent-friendly markdown responses.
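Conceptually, the check sends a request with `Accept: text/markdown` and inspects the response's Content-Type. The decision step can be kept pure, as in this sketch (the fetch itself, e.g. via `urllib.request` with an Accept header, is omitted; the function name is an assumption).

```python
# Hedged sketch: decide whether a server honored `Accept: text/markdown`
# based on the response Content-Type header.
def negotiated_markdown(content_type):
    """True if the Content-Type indicates a markdown body."""
    if not content_type:
        return False
    media_type = content_type.split(";", 1)[0].strip().lower()
    return media_type in ("text/markdown", "text/x-markdown")

print(negotiated_markdown("text/markdown; charset=utf-8"))  # True
print(negotiated_markdown("text/html"))                     # False
```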

AI bot directives

Looks for explicit AI-crawler handling or safe wildcard policy coverage.
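A coverage report for this check might look like the sketch below. The crawler list is illustrative and deliberately incomplete, and the function name is an assumption.

```python
# Hedged sketch: report which AI crawlers get an explicit robots.txt
# user-agent group, and whether a wildcard group exists as fallback.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "CCBot"]  # examples only

def ai_directive_coverage(robots_txt):
    groups = set()
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "user-agent":
            groups.add(value.strip().lower())
    coverage = {bot: bot.lower() in groups for bot in AI_CRAWLERS}
    coverage["*"] = "*" in groups
    return coverage

sample = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nAllow: /\n"
print(ai_directive_coverage(sample))
```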

Content-Signal rules

Checks explicit content-use preferences for training/search/input scenarios.
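As an illustration, declarations of the form `Content-Signal: search=yes, ai-train=no` in robots.txt can be parsed into a preference map. The signal names here (search, ai-input, ai-train) follow the emerging content-signals convention; treat both the names and the parsing shape as assumptions.

```python
# Hedged sketch: parse 'Content-Signal:' lines from robots.txt into a
# map of signal name -> allowed (True for 'yes', False otherwise).
def content_signals(robots_txt):
    prefs = {}
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() != "content-signal":
            continue
        for item in value.split(","):
            name, _, setting = item.partition("=")
            if name.strip():
                prefs[name.strip().lower()] = setting.strip().lower() == "yes"
    return prefs

sample = "User-agent: *\nContent-Signal: search=yes, ai-train=no\nAllow: /\n"
print(content_signals(sample))  # {'search': True, 'ai-train': False}
```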

How teams use it

Pre-release checks

Run scans before launches or migrations to catch crawl and content regressions.

Incident triage

When agents fail to parse content correctly, use scan evidence to isolate root causes quickly.

Ongoing governance

Use recurring scans as part of technical SEO and AI-readiness hygiene.

Scope and limits

AgentScan evaluates public HTTP behavior only. It does not verify private application logic, security posture, legal compliance, or business correctness. Results should be used as engineering guidance, not legal advice.

What makes this project different

The site content and prompts are written specifically for AI-agent interoperability work. We do not use spun text, filler templates, or syndicated policy blocks as primary content.