
The Problem We Kept Running Into
Over the past two months, we published 7 parts of The AI Search Playbook. We covered everything: the traffic shift from Google to AI, the difference between SEO, GEO, and AEO, how to get cited by ChatGPT and Perplexity, how to build a brand AI search engines reference, what happened when we optimized our own site, the AI discoverability stack, and why blocking training bots actually increased citations.
Every time we audited a domain (ours or a client's), we ran 6 separate tools. Open the crawl checker, paste the URL, wait. Open the robots analyzer, paste again, wait. Open the citation tracker, enter the brand name, wait. Repeat for llms.txt, AI readiness, Reddit monitoring.
Then we'd stare at 6 separate result pages and try to connect the dots ourselves. Is the low citation rate because bots are blocked? Or because there's no structured data? Or because the llms.txt is missing? You can't tell from one tool alone.
We needed something that runs all 6 tools at once and tells us what the combination of results means. So we built it.
What Radar Does
Radar is a unified AI visibility auditing platform. You enter a domain, select which tools to run (all 6 by default), and hit one button. The tools execute in parallel as independent threads. Results stream in as each one completes.
The 6 tools:
- AI Crawl Checker: Tests 13 bot user-agents (GPTBot, Claude-Web, PerplexityBot, Googlebot, and more) for actual HTTP access, robots.txt rules, structured data, and llms.txt detection (a minimal version of this check is sketched just after this list).
- Robots.txt Analyzer: Deep-parses your robots.txt against 16 known bots, validates syntax, scores AI bot coverage, and suggests specific rule changes.
- llms.txt Validator: Checks for the emerging llms.txt standard, validates structure, sections, entity definitions, links, and use policy.
- AI Readiness Score: 5-category audit covering bot discoverability, structured data, LLM communication, content accessibility, and cross-signal consistency.
- AI Citation Tracker: Queries ChatGPT, Claude, Gemini, and Perplexity with 8 template queries to measure brand mention rate, URL citation rate, and sentiment.
- Reddit Brand Monitor: Discovers Reddit mentions via search, analyzes sentiment, and detects AI-seeded (artificially promoted) content with heuristic + GPT analysis.
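If you are curious what the access check looks like at its simplest, here is a minimal Python sketch, not Radar's production code: it covers 4 of the 13 user-agents, and real crawlers send full User-Agent strings rather than the bare names used here.

```python
from urllib import request, robotparser

# 4 of the 13 user-agents Radar tests; real crawlers send longer UA strings
AI_BOTS = ["GPTBot", "Claude-Web", "PerplexityBot", "Googlebot"]

def check_bot_access(domain: str) -> dict:
    """For each bot, check robots.txt permission and live HTTP access."""
    rp = robotparser.RobotFileParser(f"https://{domain}/robots.txt")
    rp.read()
    results = {}
    for bot in AI_BOTS:
        allowed = rp.can_fetch(bot, f"https://{domain}/")
        req = request.Request(f"https://{domain}/", headers={"User-Agent": bot})
        try:
            status = request.urlopen(req, timeout=10).status
        except Exception:
            status = None  # blocked, timed out, or unreachable
        results[bot] = {"robots_allowed": allowed, "http_status": status}
    return results
```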
Each tool produces its own score (0-100) and grade (A through F). Radar averages them into a unified AI Visibility Score.
But the scores are not the point.
Cross-Tool Insights: What Individual Tools Cannot See
The real value of running all 6 tools together is the cross-tool analysis. Radar compares results across dimensions and surfaces patterns that no individual tool can detect.
Here are the actual insights Radar generates:
| Insight | What It Means | Tools Involved |
|---|---|---|
| Blocking AI bots while citation rate is low | Your robots.txt blocks GPTBot/Claude-Web, and AI models are not mentioning you. Unblocking browsing bots could improve citations. | Crawl Check + Citation Tracker |
| Good crawlability but no llms.txt | Bots can access your site but lack structured context about your business. Adding llms.txt improves how AI describes you. | Crawl Check + llms.txt Validator |
| Reddit buzz but AI bots blocked | Community talks about you on Reddit, but AI crawlers cannot access your site. Fix bot access to convert community signal into AI citations. | Reddit Monitor + Crawl Check |
| Technically visible but not being cited | Your site is accessible to AI and has good structured data, but models are not mentioning you. This is a content authority problem, not a technical one. | Crawl Check + AI Readiness + Citation Tracker |
| Strong technical foundation | Bot access and AI readiness both score above 80. You have a solid base for AI search visibility. | Crawl Check + AI Readiness |
| llms.txt exists but needs improvement | You have an llms.txt file but it scored below 50. Structure, entity definitions, and link completeness need work. | llms.txt Validator |
These insights map directly to the strategies we covered in the playbook series. "Blocking bots while citation rate is low" is exactly what Part 7 addressed. "Good crawlability but no llms.txt" connects to the AI discoverability stack in Part 6. "Technically visible but not cited" is the content authority problem from Part 5.
Prioritized Action Items: What to Fix First
Every audit generates a list of specific action items. Each one tells you:
- What to fix: The specific issue (e.g., "Unblock 3 AI browsing bots: GPTBot, Claude-Web, PerplexityBot")
- Why it matters: The business impact (e.g., "AI browsing bots retrieve your content to include in AI-generated answers. Blocking them means your site is invisible to those platforms.")
- How to fix it: Step-by-step instructions (e.g., "Edit your robots.txt: remove Disallow rules for these user-agents, or add explicit Allow: / rules.")
- Effort level: Quick Win, Moderate, or Major Effort
- Expected impact: High, Medium, or Low
Actions are sorted by impact and effort. Quick wins with high impact surface first. You can filter by category (Crawlability, Structured Data, LLM Communication, Citations, Community) and by effort level.
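The data shape behind those filters is simple; a minimal sketch, with field names and label values that are our illustration rather than Radar's actual schema:

```python
def filter_actions(actions: list[dict],
                   category: str | None = None,
                   effort: str | None = None) -> list[dict]:
    """Apply the dashboard's category and effort filters to the action list."""
    return [a for a in actions
            if (category is None or a["category"] == category)
            and (effort is None or a["effort"] == effort)]

# Show only the crawlability quick wins:
# filter_actions(audit_actions, category="Crawlability", effort="Quick Win")
```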
This is the part that turns an audit into a to-do list. Run the scan, hand the action items to your developer or SEO team, track progress by running again in a week.
How Each Playbook Part Connects to Radar
Every strategy we taught in the 7-part series maps to a specific Radar measurement:
| Playbook Part | What It Taught | What Radar Measures |
|---|---|---|
| Part 1: Traffic Shift | Google traffic is moving to AI search | Unified AI Visibility Score shows if you are prepared |
| Part 2: SEO vs GEO | GEO is a separate discipline from SEO | AI Readiness Score measures GEO-specific signals |
| Part 3: Get Cited | How to appear in ChatGPT/Perplexity/Claude | Citation Tracker tests 4 providers with 8 query types |
| Part 4: Our Results | What changed when we optimized | Run History tracks score changes over time |
| Part 5: Brand Building | How to build authority AI models trust | Cross-tool insight: visible but not cited = authority gap |
| Part 6: Discoverability Stack | llms.txt, JSON-LD, ai-plugin.json, knowledge API | llms.txt Validator + AI Readiness Score + Crawl Check |
| Part 7: Training Bots | Blocking training bots increased citations | Robots.txt Analyzer + Citation Tracker cross-reference |
Radar is the playbook in action. Instead of reading about what to check and then checking it manually, you enter a domain and get the full picture in 60 seconds.
The AI Advisor: Strategy on Demand
Radar includes a built-in AI strategy advisor. It's a chat interface powered by our knowledge graph (the same one that drives our llms.txt and structured data).
You can ask it questions like:
- "How do I improve my AI visibility score?"
- "Why are AI bots blocked on my site?"
- "What is GEO and how does it apply to my results?"
- "How should I structure my llms.txt?"
The advisor references your specific audit data when answering. If your crawl score is 92 but your citation rate is 30%, it knows that and tailors the advice accordingly.
It uses the consultative approach we built for Vector: enough depth to demonstrate expertise, with a path to deeper engagement when the problem is complex.
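Conceptually, the grounding step is ordinary context injection. Here is a rough sketch using the OpenAI client; the model choice and audit field names are assumptions for illustration, not the advisor's actual internals:

```python
from openai import OpenAI

client = OpenAI()

def advise(question: str, audit: dict) -> str:
    # Ground the answer in the user's actual audit numbers.
    system = (
        "You are an AI visibility advisor. The user's audit results:\n"
        f"- Crawl score: {audit['crawl_score']}\n"
        f"- Citation rate: {audit['citation_rate']}%\n"
        f"- llms.txt present: {audit['has_llms_txt']}\n"
        "Tailor every recommendation to these numbers."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; illustration only
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```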
Competitor Comparison
Radar lets you run the same audit on a competitor domain. You get a side-by-side comparison table showing scores across all 6 tools, with color-coded diffs (green where you lead, red where they lead).
This is useful for agencies presenting to clients. "Here is your AI visibility score versus your top competitor. Here are the 5 things they do better, and here is the priority order to close the gap."
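Under the hood, the comparison reduces to a per-tool diff. A minimal sketch, assuming each audit collapses to a map of tool name to score:

```python
def compare(yours: dict[str, int], theirs: dict[str, int]) -> list[dict]:
    """Per-tool score diff; a positive delta means you lead (green in the UI)."""
    rows = [{"tool": tool,
             "you": yours[tool],
             "them": theirs.get(tool, 0),
             "delta": yours[tool] - theirs.get(tool, 0)}
            for tool in yours]
    return sorted(rows, key=lambda r: r["delta"])  # biggest gaps to close come first
```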
Who Radar Is Built For
In-house SEO teams: Run audits on your own domain, get action items, hand them to your dev team. Save runs and compare scores over time to track whether your changes are working.
SEO agencies: Audit client domains, export PDF reports, compare against competitors. Each action item includes "why it matters" language you can use in client presentations.
Content strategists: Understand how AI search engines see your content. Are you being cited? Is your structured data working? What is Reddit saying about your brand?
Founders building in public: Check if AI search engines can find your product. Most startups are invisible to GPT, Claude, and Perplexity without knowing it. A 60-second Radar scan tells you where you stand.
The Technical Architecture
Radar runs all 6 tools as parallel threads (we call them "Hive agents" internally, built on the same architecture as our Hive multi-agent platform). Each tool is an independent API call. Results stream into the dashboard as they complete, so you see scores filling in rather than waiting for everything to finish.
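In skeleton form, the orchestration looks like the sketch below. `run_tool` is a stand-in for one tool's API call, with simulated latency so you can watch results stream in out of order:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

TOOLS = ["crawl_check", "robots", "llms_txt", "readiness", "citations", "reddit"]

def run_tool(name: str, domain: str) -> dict:
    """Stand-in for one tool's independent API call."""
    time.sleep(random.uniform(0.5, 3.0))  # simulate variable API latency
    return {"tool": name, "score": random.randint(40, 95)}

def run_audit(domain: str):
    """Launch all six tools in parallel; yield each result the moment it finishes."""
    with ThreadPoolExecutor(max_workers=len(TOOLS)) as pool:
        futures = {pool.submit(run_tool, name, domain): name for name in TOOLS}
        for future in as_completed(futures):
            yield future.result()  # the dashboard fills in as scores arrive

for result in run_audit("example.com"):
    print(result)
```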
The brand detection system uses GPT-4o-mini to identify the company from just the domain name. Enter "stripe.com" and it auto-fills "Stripe", "payment processing platforms", and relevant Reddit keywords. No manual setup needed.
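A rough sketch of that detection step, assuming the OpenAI client with JSON-mode output; the prompt wording and response keys are our illustration:

```python
import json
from openai import OpenAI

client = OpenAI()

def detect_brand(domain: str) -> dict:
    """Infer brand context from a bare domain name via GPT-4o-mini."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[{
            "role": "user",
            "content": (f"Identify the company behind the domain '{domain}'. "
                        "Reply as JSON with keys: brand_name, industry, reddit_keywords."),
        }],
    )
    return json.loads(resp.choices[0].message.content)

# detect_brand("stripe.com")
# -> {"brand_name": "Stripe", "industry": "payment processing platforms", ...}
```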
Cross-tool insights are generated by a rule engine that compares results across all completed tools. The insight generator runs after all tools finish, looking for specific patterns (blocked bots + low citations, good crawl + missing llms.txt, etc.).
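Two of the rules from the insights table earlier, sketched as code. The result field names and thresholds are illustrative assumptions:

```python
def cross_tool_insights(results: dict) -> list[str]:
    """Each rule inspects two or more tools' results for a known pattern."""
    insights = []
    crawl = results.get("crawl_check")
    citations = results.get("citations")
    llms = results.get("llms_txt")

    if crawl and citations and crawl["blocked_ai_bots"] and citations["rate"] < 20:
        insights.append("Blocking AI bots while citation rate is low: "
                        "unblocking browsing bots could improve citations.")
    if crawl and llms and crawl["score"] >= 70 and not llms["exists"]:
        insights.append("Good crawlability but no llms.txt: bots can reach the site "
                        "but lack structured context about the business.")
    return insights
```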
Action items are generated from tool-specific results and cross-tool patterns, then sorted by an impact/effort scoring algorithm. High-impact, low-effort items (like adding a sitemap directive to robots.txt) surface before high-impact, high-effort items (like building content authority).
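The ordering can be as simple as a tuple sort key. The labels and weights below are assumptions, but the principle is the one you see in the dashboard:

```python
IMPACT = {"high": 3, "medium": 2, "low": 1}
EFFORT = {"quick_win": 1, "moderate": 2, "major": 3}

def priority_key(item: dict) -> tuple:
    # Highest impact first; within the same impact, cheapest effort first.
    return (-IMPACT[item["impact"]], EFFORT[item["effort"]])

actions = [
    {"title": "Build content authority", "impact": "high", "effort": "major"},
    {"title": "Add sitemap directive to robots.txt", "impact": "high", "effort": "quick_win"},
]
actions.sort(key=priority_key)  # the sitemap quick win now surfaces first
```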
v2.1 Update: DIY Implementation Features
The most common feedback from early users: "The audit is great, but how do I actually fix the issues?" Radar v2.1 bridges the gap from diagnosis to implementation.
AI Prompt Generator: Every action item now has a "Generate AI Prompt" button that produces a context-rich prompt pre-filled with your actual audit data. Copy it into Claude, ChatGPT, or Cursor and the AI tool has everything it needs to implement the fix. Not a generic template: your specific blocked bots, missing schema types, and citation rates are baked into the prompt.
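Mechanically, this is template-filling with audit data. A stripped-down sketch (the real prompts carry far more context):

```python
PROMPT_TEMPLATE = """You are helping fix AI visibility issues for {domain}.

Audit findings:
- Blocked AI bots: {blocked_bots}
- Missing schema types: {missing_schema}
- Current citation rate: {citation_rate}%

Task: rewrite robots.txt so the blocked bots above are allowed,
keeping all existing non-AI rules intact. Output the full file."""

def generate_fix_prompt(audit: dict) -> str:
    """Fill the template with the user's actual audit data."""
    return PROMPT_TEMPLATE.format(
        domain=audit["domain"],
        blocked_bots=", ".join(audit["blocked_bots"]),
        missing_schema=", ".join(audit["missing_schema"]),
        citation_rate=audit["citation_rate"],
    )
```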
5 Implementation Threads: Actions are grouped into structured fix threads (Crawlability, Structured Data, LLM Communication, Content Authority, Citation Building) with ordered steps. Each step generates a thread-aware prompt that includes context from previous steps.
Generators: Radar now generates starter llms.txt files and copy-pasteable JSON-LD schema markup from your audit data. No need to write these from scratch.
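The JSON-LD side emits standard schema.org markup. A minimal sketch of an Organization generator, with assumed input fields:

```python
import json

def generate_org_schema(brand: dict) -> str:
    """Emit copy-pasteable Organization JSON-LD from detected brand data."""
    schema = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": brand["brand_name"],
        "url": f"https://{brand['domain']}",
        "description": brand.get("description", ""),
        "sameAs": brand.get("profiles", []),  # social/profile URLs, if known
    }
    return json.dumps(schema, indent=2)
```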
Single-Tool Re-verify: Fix something, re-run just that tool in 15 seconds. No full re-audit needed.
Progress Tracking: Check off completed steps. Progress persists across sessions so you pick up where you left off.
Read the full breakdown in Radar v2: From Technical Audit to AI Intelligence Platform.
What Comes Next
Radar is currently in private beta. We are gathering feedback from early users to understand which features matter most before we add pricing tiers and expanded capabilities.
On the roadmap:
- Scheduled auto-scans: Weekly or monthly audits that run automatically and alert you when scores drop
- Historical trend charts: Visualize how each tool score changes over time
- Team access: Multiple people viewing the same audit results
- White-label PDF reports: Branded exports for agency-client presentations
- API access: Programmatic access for teams integrating Radar into their workflows
Get Your Access Token
Radar is free during the private beta. Enter your email at pixelmojo.io/platform and you will receive an access token automatically via email. Click the link in the email and you are in.
The individual tools remain free at pixelmojo.io/tools. Radar wraps them all together with cross-tool insights and prioritized actions.
If you have been following this playbook series, Radar is the tool that puts all 7 parts into practice. Enter your domain, see your score, get your action items.
Ready to see your AI visibility score?
- Get your Radar access token - Free during private beta
- Try individual tools - All 6 tools, free, no login required
- Contact us - Questions about AI visibility strategy
