ANSWER ENGINE OPTIMIZATION

Is your site cited by AI?

AEO Expert dispatches an agent that runs 147 checks on how discoverable your site is to ChatGPT, Claude, Perplexity, Gemini and Google AI Overviews. Not theory — real prompts, real citations.

Average scan: 47 seconds · No account required
01 — WHAT IS AEO?

SEO is for search results. AEO is for answers.

More than 1 in 4 searches now end in an AI-generated answer instead of a list of blue links. Answer Engine Optimization is the discipline of making sure LLMs cite your content instead of ignoring or misparaphrasing it.

01
Discoverability
Can an agent actually crawl, parse, and extract structure from your site?
02
Citability
Do you have chunks that stand alone, read authoritatively, and invite quoting?
03
Trust
Does your site match the E-E-A-T signals LLMs use to rank sources?
02 — 147 CHECKS

What the agent actually inspects

01 Infrastructure
48 CHECKS
robots.txt for AI crawlers
llms.txt present + valid
sitemap.xml structure
User-agent handling (GPTBot, ClaudeBot, PerplexityBot)
Server render vs client render
Core Web Vitals
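
To illustrate the crawler checks above: a robots.txt that explicitly admits the major AI crawlers might look like this. A sketch only; user-agent tokens are the ones each vendor publishes, and the sitemap URL is a placeholder.

```text
# Allow AI answer engines to crawl the site
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://example.com/sitemap.xml
```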
02 Semantics
48 CHECKS
Schema.org coverage
JSON-LD validation
FAQ/HowTo/Article markup
OpenGraph + Twitter cards
Canonical tags
Heading hierarchy
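
For reference, a minimal FAQPage JSON-LD block of the kind the semantic checks look for. The question and answer text here are illustrative, not prescribed markup.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is AEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Answer Engine Optimization is the practice of structuring content so AI answer engines can find and cite it."
    }
  }]
}
```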
03 Content
48 CHECKS
Chunk coherence
Answer-per-paragraph score
Entity density
Factual claims + sources
Dates + updates
Author + author bio
04 Authority
48 CHECKS
Backlink quality
Wikipedia citations
Cross-site mentions
Brand recognition
Review/rating schema
Author schema.org profiles
03 — LIVE REPORT

Not a 40-page PDF. A worklist.

REPORT · AEO-2026-04-22
nos.nl
72 /100 OVERALL
88 /100 TECHNICAL
61 /100 CONTENT
45 /100 AUTHORITY
CRIT
No llms.txt found
Add /llms.txt so Claude and ChatGPT know exactly what they may cite.
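As a sketch, a minimal /llms.txt following the llmstxt.org convention. Section names, paths, and descriptions below are placeholders, not a required layout.

```markdown
# Example Site

> One-line description of what the site covers.

## Docs

- [Getting started](https://example.com/docs/start): Setup guide
- [API reference](https://example.com/docs/api): Endpoint documentation

## Optional

- [Blog](https://example.com/blog): Long-form articles
```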
HIGH
FAQ markup missing on 12 pages
Without it, your FAQ content can't surface as a Featured Snippet or AI citation.
MED
GPTBot blocked in robots.txt
Line 4: `Disallow: /` for User-agent GPTBot. Intentional?
MED
Chunks average 847 tokens
LLMs cite more reliably from 200-400-token chunks with clear headings.
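How a scanner might flag oversized chunks: the sketch below splits a markdown page at headings and estimates tokens with the common ~4-characters-per-token heuristic. The heuristic and thresholds are assumptions for illustration, not the product's actual tokenizer or limits.

```python
import re

CHARS_PER_TOKEN = 4          # rough heuristic; a real scanner would use a tokenizer
TARGET_RANGE = (200, 400)    # assumed tokens-per-chunk sweet spot for citation

def chunk_by_headings(markdown: str) -> list[str]:
    """Split a markdown document into chunks at every heading line."""
    parts = re.split(r"(?m)^(?=#{1,6} )", markdown)
    return [p.strip() for p in parts if p.strip()]

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def oversized_chunks(markdown: str) -> list[tuple[str, int]]:
    """Return (heading line, estimated tokens) for chunks above the target range."""
    flagged = []
    for chunk in chunk_by_headings(markdown):
        tokens = estimate_tokens(chunk)
        if tokens > TARGET_RANGE[1]:
            flagged.append((chunk.splitlines()[0], tokens))
    return flagged
```

Running this over a page immediately shows which sections blow past the citation-friendly range and need splitting under their own headings.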
LOW
No schema.org author markup
Adding it raises your E-E-A-T score in Google AI Overviews.
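Illustrative author markup of the kind this check looks for: an Article carrying a schema.org Person as author. Names and URLs are placeholders.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe",
    "sameAs": ["https://www.linkedin.com/in/jane-doe"]
  }
}
```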
04 — ENGINES

Tested against 5 live engines

Every scan runs real prompts against real models. No scraped screenshots.

ChatGPT
MODEL
GPT-5, GPT-4.1
INDEX
OpenAI SearchGPT index
CRAWLER
GPTBot, OAI-SearchBot
Claude
MODEL
Claude Opus 4.5, Sonnet 4.5
INDEX
Claude Web Search
CRAWLER
ClaudeBot, Claude-SearchBot
Perplexity
MODEL
Sonar, Sonar Pro
INDEX
Perplexity Index
CRAWLER
PerplexityBot
Gemini
MODEL
Gemini 2.5 Pro
INDEX
Google Index + Vertex
CRAWLER
Google-Extended
AI Overviews
MODEL
Google SGE
INDEX
Google Search Index
CRAWLER
Googlebot
05 — TEAMS SCANNING

Within a week of the first scan we saw our brand appear in Perplexity answers where we weren't even mentioned before.

Sd
Sanne de Vries
Head of Growth, Kobalt

I've done SEO for 20 years. This is the first tool that actually tells me something new.

MJ
Marcus Jansen
SEO Lead, Basecamp NL

The agent found our pricing page was blocked for GPTBot. We had no idea.

LC
Lisa Chen
CTO, Northwind
06 — FAQ
Q.01 Isn't AEO just SEO with a new coat of paint?
No. SEO optimizes for 10 blue links; AEO optimizes for 1 generated answer. Ranking factors, content structure, and technical signals differ materially.
Q.02 How often should I scan?
Monthly is a good baseline. On major content updates or model releases (GPT-5, new Claude) we auto-run a delta scan.
Q.03 What exactly is llms.txt?
A convention — similar to robots.txt — that tells LLM crawlers which content may be used, what license applies, and where the canonical sources are.
Q.04 Do you really get live answers from Claude and GPT?
Yes. We query the official APIs with prompts targeted at your domain; the answers appear verbatim in the report.
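The citation check itself can be sketched simply: given an engine's answer text, look for the scanned domain among the URLs it cites. This is a minimal illustration of the idea, not the product's actual pipeline.

```python
import re
from urllib.parse import urlparse

def cited_domains(answer_text: str) -> set[str]:
    """Extract the hostnames of all URLs appearing in an answer."""
    urls = re.findall(r"https?://[^\s)\]>\"']+", answer_text)
    return {urlparse(u).hostname.removeprefix("www.") for u in urls}

def is_cited(answer_text: str, domain: str) -> bool:
    """True if the answer cites the domain (exact host or any subdomain)."""
    target = domain.removeprefix("www.")
    return any(host == target or host.endswith("." + target)
               for host in cited_domains(answer_text))
```

Run against a batch of live answers per engine, this yields the cited/not-cited signal the report aggregates per domain.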