
Site-Level Visibility Is Not Enough. You Need Page-Level Testing.
Answer Engine Optimization (AEO) is the practice of structuring individual pages so AI search engines extract and cite your content in their responses. Site-level tools tell you whether AI bots can access your domain, whether they know your brand, and how your llms.txt is configured. But they cannot tell you whether a specific page is structured well enough for an AI engine to cite it, or whether ChatGPT actually references your URL when someone asks a question your page answers.
We built two new tools to close this gap. The AEO Page Auditor scores any page for answer engine readiness. The Answer Engine Citation Tester queries four AI providers with your question and checks if they cite your URL.
These are tools 7 and 8 in our free AI visibility suite, and they are now integrated into the Radar platform alongside the existing six.
Why We Needed Page-Level Tools
Our core infrastructure tools measure site-wide signals. The AI Crawl Checker tests whether 14 bot user-agents can access your domain. The AI Citation Tracker queries four AI providers to see if they mention your brand. The robots.txt Analyzer parses your bot directives. The llms.txt Validator scores your AI discovery file. The Reddit Brand Monitor tracks community sentiment. The AI Readiness Score combines everything into a unified metric.
These tools answer the question: can AI find my business?
But they do not answer: will AI cite this specific page?
The gap became obvious when we audited client sites that scored well on site-level metrics but poorly on actual citations. A domain could have perfect crawl access (all AI bots allowed), good structured data, a solid llms.txt file, and still get zero page-level citations for their target queries. The site was visible. The pages were not citable.
The problem is structural. A page might contain the right information but format it in a way that AI engines cannot extract. Answers buried after long introductions. Data presented as prose instead of tables. No speakable schema to signal which content is suitable for voice delivery. No entity markup linking the content to a knowledge graph.
These are page-level problems that require page-level measurement.
Tool 7: AEO Page Auditor
The AEO Page Auditor takes any URL and scores it for answer engine readiness. It fetches the page, parses the HTML, analyzes the content structure, checks the schema markup, and produces a 0-100 score across six weighted categories.
The Six Scoring Categories
| Category | Points | What It Measures |
|---|---|---|
| Structured Data Quality | 25 | JSON-LD completeness, entity linking, FAQ markup, schema type diversity |
| Answer-First Structure | 20 | Whether opening paragraphs front-load direct answers under each heading (BLUF principle) |
| Data Extractability | 20 | Tables, lists, stat blocks, and other machine-readable formats AI engines pull verbatim |
| Speakable Schema | 15 | SpeakableSpecification in Article JSON-LD with CSS selectors for headline, description, and key takeaways |
| Content Freshness | 10 | Date recency, updated timestamps, current statistics and references |
| Entity Authority | 10 | Clear authorship, organizational backing, topical expertise signals |
The weights reflect what we have observed in practice and what the Princeton GEO study confirmed: structured data and answer-first formatting have the largest impact on whether AI engines select a page as a citation source.
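The weighted categories combine into the 0-100 score with a simple weighted sum. A minimal sketch of that arithmetic, using the weights from the table above (the per-category ratios for the sample page are hypothetical inputs, not output from the tool):

```python
# Maximum points per category, taken from the scoring table above.
CATEGORY_WEIGHTS = {
    "structured_data": 25,
    "answer_first": 20,
    "data_extractability": 20,
    "speakable_schema": 15,
    "content_freshness": 10,
    "entity_authority": 10,
}

def aeo_score(ratios: dict[str, float]) -> int:
    """Combine per-category ratios (0.0 to 1.0) into a 0-100 score."""
    total = sum(CATEGORY_WEIGHTS[c] * ratios.get(c, 0.0) for c in CATEGORY_WEIGHTS)
    return round(total)

# Hypothetical page: strong structured data, no speakable schema at all.
page = {
    "structured_data": 0.8,   # 20 of 25 points
    "answer_first": 0.5,      # 10 of 20
    "data_extractability": 0.6,
    "speakable_schema": 0.0,  # no SpeakableSpecification found
    "content_freshness": 1.0,
    "entity_authority": 0.5,
}
print(aeo_score(page))  # -> 57
```

Note how the missing speakable schema alone caps this page 15 points below its ceiling, which is why it surfaces as a high-priority recommendation.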
How to Read the Results
Each category returns specific findings marked as good, warning, or bad. For example, the Structured Data Quality category might report:
- (good) Article JSON-LD with complete author and publisher properties
- (good) FAQPage schema with 8 questions
- (warning) No SpeakableSpecification found
- (bad) No entity @id linking to knowledge graph
The findings are not generic advice. They are specific to the page you tested. The tool reads the actual HTML and tells you exactly what is present and what is missing.
After the category breakdown, the auditor generates AI-powered recommendations ranked by priority. High-priority items (like adding speakable schema to an article page) appear first because they have the highest impact-to-effort ratio.
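For reference, closing the "No SpeakableSpecification found" warning means adding a `speakable` property to your Article JSON-LD. A minimal illustration of that markup, built and serialized in Python; the CSS selectors here are hypothetical and should point at your own headline, summary, and key-takeaway elements:

```python
import json

# Minimal Article JSON-LD carrying a SpeakableSpecification, the markup
# the Speakable Schema category checks for. Selector names are examples.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Site-Level Visibility Is Not Enough",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".article-headline", ".article-summary", ".key-takeaways"],
    },
}

# Embed the result in a <script type="application/ld+json"> tag in <head>.
json_ld = json.dumps(article, indent=2)
print(json_ld)
```

The selectors tell voice-oriented AI surfaces which parts of the page are suitable for spoken delivery, rather than leaving the engine to guess.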
When to Use the AEO Auditor
Run it on your highest-traffic pages first. Blog posts targeting question-based queries, product pages, landing pages, and any URL you want AI engines to cite as a source. After making structural changes (adding speakable schema, reformatting to answer-first, adding BlogTable components), rerun the auditor to measure the improvement.
Tool 8: Answer Engine Citation Tester
The Citation Tester answers the most direct question in AI visibility: when someone asks this question, does the AI cite my page?
You provide a URL and a question. The tool fetches your page content, then queries ChatGPT, Perplexity, Claude, and Gemini with that question. For each provider, it analyzes:
- Direct citation: Did the AI include a link to your URL?
- Content alignment: How closely does the response match your page content?
- Competitor citations: What other URLs did the AI cite instead?
- Content gaps: What did the AI include that your page is missing?
The Four Providers
| Provider | Citation Behavior | What to Watch For |
|---|---|---|
| ChatGPT (GPT-4o-mini) | Rarely includes URLs in responses | Content alignment score matters most: high alignment means your info is being used even without explicit citation |
| Perplexity (Sonar) | Consistently provides source URLs | The most actionable provider for citation tracking. If Perplexity does not cite you, check what it cites instead. |
| Claude (Haiku) | Occasionally references sources by name | Brand mention detection rather than URL citation. Look for whether it recommends your content by description. |
| Gemini (Flash) | Varies by query type | Integrated across Google ecosystem. Content alignment here suggests potential for Google AI Overview inclusion. |
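The per-provider citation check can be sketched as a small analysis pass over each response: extract every URL the provider returned, test whether any of them belongs to your domain, and treat the rest as competitor citations. This is a simplified illustration, not the tool's actual implementation, and the sample response text is made up:

```python
import re
from urllib.parse import urlparse

# Rough URL matcher; stops at whitespace, quotes, and closing brackets.
URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def analyze_response(response: str, target_url: str) -> dict:
    """Check one provider response for a direct citation of target_url
    and collect any other URLs it cited instead."""
    cited = [u.rstrip(".,") for u in URL_RE.findall(response)]
    target_host = urlparse(target_url).netloc
    direct = any(urlparse(u).netloc == target_host for u in cited)
    competitors = sorted({u for u in cited if urlparse(u).netloc != target_host})
    return {"direct_citation": direct, "competitor_citations": competitors}

# Made-up Perplexity-style response for illustration.
response = ("AEO means structuring pages for extraction. "
            "Sources: https://example.com/aeo-guide, https://rival.io/answers")
result = analyze_response(response, "https://example.com/aeo-guide")
print(result)
```

Content alignment and content-gap analysis require comparing response text against the fetched page body, which is a semantic comparison rather than a URL match; this sketch covers only the citation half.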
The Content Alignment Gap
The most important signal from the Citation Tester is not whether you got cited. It is the gap between content alignment and citation. When an AI engine produces a response that closely matches the information on your page but does not cite your URL, you know the content is right but the page structure is not making it easy for the AI to attribute the answer to you.
This is where the AEO Page Auditor and Citation Tester work together. The Tester tells you there is a gap. The Auditor tells you what structural changes will close it. Missing speakable schema, no entity markup, answers buried after introductions: these are the specific issues that keep a page from earning citations even when its content is being used.
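As a rough intuition for how an alignment signal can exist without a citation, here is a deliberately simple word-overlap measure: what fraction of the AI response's content words also appear on your page. The real tool uses a more sophisticated semantic comparison; this sketch, with a hypothetical stopword list, only illustrates the idea:

```python
import re

# Tiny illustrative stopword list; a real comparison would be semantic.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "for", "and", "in"}

def alignment(page_text: str, ai_response: str) -> float:
    """Fraction of the AI response's content words found on the page.
    A crude stand-in for the Citation Tester's alignment score."""
    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS
    page, resp = words(page_text), words(ai_response)
    if not resp:
        return 0.0
    return len(page & resp) / len(resp)

page = "Answer engines extract structured content from well-formatted pages."
resp = "Engines extract structured content."
print(alignment(page, resp))
```

A high value here with no direct citation is exactly the attribution gap described above: the information flowed into the answer, but the credit did not.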
Competitor Citation Intelligence
The Citation Tester shows you which URLs competitors are getting cited for your target questions. This is competitive intelligence you cannot get any other way. If Perplexity cites three competitor pages instead of yours for "best practices for AI visibility optimization," you can read those pages and see exactly what they have that you do not.
Common differences we observe between cited and uncited pages:
- Cited pages have richer structured data (Article + FAQPage + Organization)
- Cited pages front-load definitions in the first 1-2 sentences
- Cited pages use tables and lists for comparison data instead of prose paragraphs
- Cited pages have explicit authorship and entity signals
These are all things the AEO Page Auditor measures.
How the Two New Tools Fit the Full Stack
Together, these eight tools form a complete measurement system from infrastructure to page-level citation:
| Layer | Tool | Question It Answers |
|---|---|---|
| Infrastructure | AI Crawl Checker | Can AI bots physically access your site? |
| Infrastructure | robots.txt Analyzer | Are your bot directives optimized? |
| Site-Level Authority | llms.txt Validator | Have you told AI who you are? |
| Site-Level Authority | AI Readiness Score | How ready is your site overall? |
| Brand Visibility | AI Citation Tracker | Do AI engines know your brand? |
| Brand Visibility | Reddit Brand Monitor | What does the community say about you? |
| Page-Level AEO | AEO Page Auditor | Is this page structured for AI citation? |
| Page-Level AEO | Answer Engine Citation Tester | Does AI actually cite this specific page? |
The progression is logical. Fix infrastructure first (crawl access, bot directives). Build site-level authority (llms.txt, structured data, readiness). Measure brand visibility (citations, community). Then optimize at the page level (AEO structure, citation verification).
Most sites we audit have decent infrastructure but weak page-level AEO. They have allowed the right bots, maybe even created an llms.txt file, but their individual pages are formatted for human readers, not AI extractors. The last two tools close that measurement gap.
Integrated Into Radar
Both tools are fully integrated into the Radar platform. When you run a Radar audit, you can now select all 12 tools. The AEO Auditor runs on the domain's homepage by default, and the Citation Tester uses an auto-generated question based on the brand and domain.
Radar's cross-tool insight engine now generates page-level patterns alongside site-level ones. For example:
- "AEO score is low while crawl access is high" means your pages are reachable but not citable. Focus on page structure.
- "High content alignment but no citations" means AI uses your information without attribution. Add speakable schema and entity markup.
- "Competitors cited for your target question" with a list of the specific competitor URLs, so you know exactly who to study.
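Insight patterns like the three above amount to simple rules over the per-tool results. A minimal sketch of that rule logic; the field names and thresholds are illustrative, not Radar's actual internals:

```python
def page_level_insights(scores: dict) -> list[str]:
    """Derive cross-tool, page-level insights from per-tool results.
    Thresholds and keys are hypothetical examples."""
    insights = []
    if scores.get("crawl_access", 0) >= 90 and scores.get("aeo", 0) < 50:
        insights.append("Pages are reachable but not citable: focus on page structure.")
    if scores.get("alignment", 0.0) >= 0.7 and not scores.get("cited", False):
        insights.append("AI uses your information without attribution: "
                        "add speakable schema and entity markup.")
    if scores.get("competitor_urls"):
        urls = ", ".join(scores["competitor_urls"])
        insights.append(f"Competitors cited for your target question: {urls}")
    return insights

sample = {
    "crawl_access": 95, "aeo": 40,
    "alignment": 0.8, "cited": False,
    "competitor_urls": ["https://rival.io/answers"],
}
for line in page_level_insights(sample):
    print(line)
```

Each rule only fires when signals from two different tools disagree, which is the point of cross-tool analysis: no single tool's output would surface these patterns on its own.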
The unified AI Visibility Score now averages all 12 tools, giving a complete picture that spans infrastructure, brand signals, competitive positioning, structured data, and hallucination detection.
v2.1: DIY Implementation for AEO and Citation Fixes
Radar v2.1 added DIY implementation features that make these tools significantly more useful. Every action item from the AEO Auditor and Citation Tester now has a Generate AI Prompt button that produces a context-rich prompt pre-filled with your audit data. Copy it into Claude, ChatGPT, or Cursor and the AI tool implements the fix.
AEO and Citation Test actions are grouped into the Content Authority Thread, one of five structured implementation threads in Radar. The thread guides you through ordered steps: add speakable schema, restructure content for answer-first extraction, add extractable data elements, then optimize for specific citation queries. Each step generates a thread-aware prompt that includes context from previous steps.
After implementing a fix, click Re-verify on the AEO Auditor or Citation Tester card to re-run just that tool in 15 seconds instead of the full 12-tool audit.
What We Learned Building These Tools
AEO Scores Are Low Across the Board
When we tested 50 pages across various industries during development, the average AEO score was 38 out of 100. The most common deficiencies:
- 92% of pages had no speakable schema at all
- 78% buried the key answer after 3 or more paragraphs of introduction
- 65% had basic or no JSON-LD (just Article type, no FAQPage, no entity linking)
- 71% presented comparison data as prose instead of tables
These are fixable structural issues, not content problems. The information is there. The formatting makes it invisible to AI extractors.
Citation Rates Depend on the Question
The same page can score 90% content alignment for one question and 20% for another. Citation testing is not a one-time check. You need to test each page against the specific questions you want to own. A product page might score well for "what does [product] do?" but poorly for "best tools for [category]" because the category comparison data is formatted as paragraphs instead of tables.
Perplexity Is the Citation Leader
Across all our testing, Perplexity (Sonar) is the most reliable citation provider. It consistently returns source URLs, making it the only provider where you can definitively measure "did they cite my page." ChatGPT, Claude, and Gemini provide content alignment data (they use your information in their responses) but rarely link directly to your URL.
This means Perplexity is currently the highest-value target for page-level citation optimization. If Perplexity cites your page, your AEO structure is working.
How to Use Both Tools Together
The most effective workflow:
1. Run the AEO Page Auditor on your target page. Get the 0-100 score and see which categories need work.
2. Run the Answer Engine Citation Tester with the specific question you want to own. See if any of the four providers cite your page.
3. Compare findings. If your AEO score is below 60, fix the structural issues first. If your AEO score is above 60 but you are not getting cited, the problem is more likely content authority or topical coverage.
4. Check competitor citations. If the Citation Tester shows competitor URLs being cited, run the AEO Auditor on those pages too. Compare their scores to yours to identify exactly what they do differently.
5. Retest after changes. After improving your page structure, rerun both tools to measure the impact.
This workflow takes 5 minutes per page and gives you a concrete, actionable picture of your page-level AI visibility.
The Complete AI Search Playbook
This is Part 10 of The AI Search Playbook. Here is how all 10 parts connect:
| Part | Title | What It Covers |
|---|---|---|
| 1 | The Traffic Shift | Google traffic is dropping. AI search is growing. The data. |
| 2 | SEO vs GEO vs AEO | Three disciplines, three skill sets, one strategy. |
| 3 | Get Cited | Tactical GEO playbook for ChatGPT, Perplexity, Claude. |
| 4 | Our Results | What changed when we optimized our own site. |
| 5 | Brand Building | How to build authority AI models trust. |
| 6 | Discoverability Stack | llms.txt, JSON-LD, ai-plugin.json, knowledge API. |
| 7 | Training Bots | Blocking training bots increased citations. |
| 8 | Radar Platform | Unified auditing with cross-tool insights. |
| 9 | AI Visibility Stack | SEO + GEO + LLM monitoring with Vector and Hive. |
| 10 (this post) | Page-Level Tools | AEO Page Auditor and Answer Engine Citation Tester. |
The playbook started with the macro trend (traffic shifting to AI), moved through strategy (GEO, AEO, brand building), into implementation (discoverability stack, bot management), unified measurement (Radar, AI visibility stack), and now completes with page-level optimization tools.
If you have been following along, the progression is clear: understand the shift, build the foundation, measure the results, then optimize individual pages. These two tools are the final piece: the page-level measurement that turns strategy into action on specific URLs.
Ready to test your pages for AI citation readiness?
- Try the AEO Page Auditor -- Score any page for answer engine readiness
- Try the Answer Engine Citation Tester -- Check if AI cites your specific URL
- Run all 12 tools in Radar -- Unified AI visibility audit in 60 seconds
- Contact us -- AI visibility strategy consulting
AEO Page Auditor and Citation Tester: Questions
Common questions about this topic, answered.
