
Radar Is the Dashboard That Runs Your Entire AI Visibility Audit
Radar by Pixelmojo is an AI visibility auditing platform that runs 12 tools in parallel on any domain. It tests crawl access, bot directives, llms.txt quality, AI readiness, brand citations, community sentiment, page-level AEO, citation verification, source influence, prompt share of voice, schema completeness, and hallucination risk. You enter a URL, click one button, and get a unified AI Visibility Score with cross-tool insights and prioritized action items in under 90 seconds.
This guide walks through every feature step by step: how to get access, how to run your first audit, how to read the results, how to use the AI Advisor, and how to track progress over time.
Step 1: Get Your Access Token
Radar is in free private beta. To get access:
- Go to pixelmojo.io/platform
- Enter your email address and click Request Access
- Check your inbox for an email from Pixelmojo with your access token
- Click the link in the email. It takes you directly to the Radar dashboard with your token validated and stored in your browser.
Your token persists across sessions. You will not need to re-enter it unless you clear your browser data or the token expires.
If you want to skip the token and try the individual tools first, all 12 are available free at pixelmojo.io/tools with no signup required.
Step 2: Enter Your Domain
The Radar dashboard has a URL input field at the top. Enter any domain you want to audit: your own site, a client site, or a competitor.
When you enter a domain and tab out of the field (or pause typing), Radar auto-detects your brand information. Enter "stripe.com" and it fills in:
- Brand name: Stripe
- Category: Payment processing platforms
- Keywords: Stripe payments, online payment processing
- Description: A technology company that builds economic infrastructure for the internet
You can edit any of these fields if the auto-detection needs adjustment. The brand name and category are used by tools that require brand context: the Citation Tracker, Reddit Brand Monitor, and Answer Engine Citation Tester.
Step 3: Select Your Tools
The sidebar shows all 12 tools with checkboxes. All are selected by default:
| Tool | What It Measures | Est. Time |
|---|---|---|
| Crawl Check | 14 bot user-agents, robots.txt rules, structured data, llms.txt | ~15s |
| Robots.txt | Deep directive parsing across 16 bots, syntax validation, policy clarity | ~10s |
| llms.txt | Structure, content sections, links, entity definitions, use policy | ~10s |
| AI Readiness | 5-category unified score: bots, structured data, LLM comms, content, cross-signal | ~20s |
| Citations | Brand mentions across ChatGPT, Perplexity, Claude, Gemini with 8 query types | ~30s |
| Reddit | Brand mentions, sentiment, LLM seeding detection, cross-subreddit patterns | ~25s |
| AEO Audit | Page-level speakable schema, answer-first structure, data extractability | ~15s |
| Citation Test | 4 AI providers queried with a specific question, checking if they cite your URL | ~45s |
| Source Influence | Top domains shaping AI narratives in your category, editorial hit list | ~40s |
| Prompt SOV | Brand share of voice in AI recommendations, competitive ranking across providers | ~45s |
| Schema Audit | JSON-LD completeness across 10 schema types, speakable detection, multi-page coverage | ~15s |
| Hallucination Check | Factual inaccuracy detection: wrong pricing, products, founding year across AI providers | ~40s |
Deselect tools if you only need to recheck specific dimensions. For example, after fixing your robots.txt, you might run just the Crawl Check and Robots.txt Analyzer to verify the change without waiting for all 12 tools.
Tools That Need Brand Context
Three tools require brand information: Citations, Reddit, and Citation Test. If you deselect all three, Radar skips the brand detection step entirely. The other nine tools, including Crawl Check, Robots.txt, llms.txt, AI Readiness, and the AEO Audit, work with just the URL.
Step 4: Run the Audit
Click Run Audit or press Ctrl/Cmd + Enter. All selected tools launch simultaneously as parallel threads.
The dashboard shows each tool's status in real time:
- Queued: Waiting to start
- Running: Active, with an elapsed timer
- Done: Completed with a score and grade
- Error: Failed (usually a timeout or unreachable domain)
- Skipped: Deselected by you
Results stream in as each tool finishes. You do not wait for all 12 tools to complete before seeing data. The fastest tools (Robots.txt, llms.txt) show results within 10 seconds. The slowest (Citation Test, Prompt SOV) take up to 45 seconds because they query four AI providers sequentially.
A thread log panel at the bottom shows real-time progress messages: "Crawl Check started", "Testing GPTBot access", "Robots.txt analysis complete: Score 85/100". This log is useful for understanding what each tool does and for debugging if a tool errors out.
Step 5: Read the Overview
Once tools complete, the Overview tab (press 1) shows:
Unified AI Visibility Score
A single 0-100 score that averages all completed tool scores, with a letter grade:
| Grade | Score Range | What It Means |
|---|---|---|
| A | 85-100 | Strong AI visibility across most dimensions. Focus on maintaining and optimizing. |
| B | 70-84 | Good foundation with specific gaps. Fix the lowest-scoring tools first. |
| C | 50-69 | Mixed results. Some dimensions are solid, others need significant work. |
| D | 30-49 | Below average. Multiple dimensions need attention. Start with infrastructure. |
| F | 0-29 | Critical issues. AI bots likely cannot access your site or find your content. |
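As a rough sketch of how the unified score and grade relate (assuming a simple mean of completed tool scores; Radar's exact weighting is not documented here):

```python
def unified_score(tool_scores):
    """Average the scores of completed tools (0-100 each).

    None entries represent skipped or errored tools and are excluded.
    Assumes a simple arithmetic mean, not Radar's exact formula.
    """
    completed = [s for s in tool_scores if s is not None]
    return round(sum(completed) / len(completed))

def grade(score):
    """Map a 0-100 score to a letter grade using the ranges in the table above."""
    if score >= 85:
        return "A"
    if score >= 70:
        return "B"
    if score >= 50:
        return "C"
    if score >= 30:
        return "D"
    return "F"
```

For example, a site scoring 80 on Crawl Check and 60 on Citations, with all other tools deselected, would land at 70, a B.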
Per-Tool Score Breakdown
Below the unified score, each tool displays its individual score (0-100) and grade with a color-coded bar. Click any tool to expand its detailed results in the Details tab.
Cross-Tool Insights
This is where Radar's value becomes clear. The insight engine compares results across all completed tools and surfaces patterns:
Conflict insights (red): Problems that only appear when comparing two tools. "AI bots are blocked in robots.txt, and your citation rate is low. Unblocking GPTBot and PerplexityBot could improve citations." Neither the Crawl Checker nor the Citation Tracker alone would flag this connection.
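Resolving a blocked-bot conflict like the one above is usually a small robots.txt change. A minimal sketch, assuming you want to allow the two bots named in the example insight site-wide (adjust paths to your own access policy):

```
# Allow OpenAI's and Perplexity's crawlers everywhere
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /
```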
Positive insights (green): Strengths confirmed by multiple signals. "Strong technical foundation: bot access and AI readiness both above 80." This tells you the infrastructure is solid and you should focus your effort on content and citations instead.
Warning insights (yellow): Gaps that represent opportunities. "Good crawlability but no llms.txt file. Adding llms.txt would improve how AI systems describe your brand."
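If the llms.txt warning applies to you, the file lives at your domain root (yourdomain.com/llms.txt). A minimal sketch of the kind of structure the llms.txt tool scores; the brand, links, and values here are placeholders, not Radar's required format:

```
# Example Co
> Example Co builds payment infrastructure for online businesses.

## About
- Founded: 2020
- Category: Payment processing platforms

## Key Pages
- [Docs](https://example.com/docs): Developer documentation
- [Pricing](https://example.com/pricing): Plans and pricing

## Use Policy
Content may be cited with attribution to example.com.
```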
Step 6: Review Action Items
Switch to the Actions tab (press 2) for a prioritized list of everything to fix. Each action item includes:
- Title: What to fix ("Add speakable schema to Article JSON-LD")
- Why it matters: Business impact ("Voice assistants and AI Overviews use speakable schema to identify which content to read aloud. Without it, your pages are skipped for spoken answers.")
- How to fix it: Step-by-step instructions ("Add a speakable property to your Article JSON-LD with SpeakableSpecification type and CSS selectors targeting your headline, description, and key takeaways elements.")
- Effort level: Quick Win, Moderate, or Major Effort
- Impact level: High, Medium, or Low
- Source tool: Which tool generated this action
Actions are sorted by priority: high-impact, low-effort items appear first. You can filter by category (Crawlability, Structured Data, LLM Communication, Citations, Community, AEO, Citation Test) and by effort level.
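The speakable example above translates to a small JSON-LD addition. A sketch, assuming an existing Article block; the CSS selectors are placeholders you would point at your own headline, summary, and takeaways elements:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your Article Title",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".article-headline", ".article-summary", ".key-takeaways"]
  }
}
```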
Using Action Items as a To-Do List
The action items are designed to be handed directly to your development team or SEO team. Each one is self-contained with enough context that someone unfamiliar with AI visibility can understand what to do and why.
For agencies, the action items become the deliverable. Run a Radar audit on a client domain, export the action items, and you have a prioritized improvement plan backed by data from 12 diagnostic tools.
Step 7: Explore Tool Details
The Details tab (press 3) shows the full results from each tool. Click any tool in the sidebar to expand its results. The detail view varies by tool:
Crawl Check: Per-bot access status (allowed, blocked, partial) for all 14 user-agents, robots.txt rules, structured data extraction, llms.txt detection.
Robots.txt: Full directive parsing with line numbers, per-bot rule matching across 16 bots, syntax errors, suggested snippet for missing rules.
llms.txt: Section-by-section scoring: structure, content, links, entity definitions, use policy, completeness.
AI Readiness: 5-category breakdown (bot discoverability, structured data, LLM communication, content accessibility, cross-signal readiness) with individual findings.
Citations: Per-provider results (ChatGPT, Perplexity, Claude, Gemini) showing brand recognition, competitive visibility, URL citations, and sentiment.
Reddit: Discovered mentions with sentiment analysis, seeding risk assessment (heuristic + GPT), cross-subreddit patterns, and overall community signal.
AEO Audit: 6-category page-level scoring (speakable schema, answer-first structure, structured data quality, data extractability, content freshness, entity authority) with per-finding status.
Citation Test: Per-provider citation check, content alignment scores, competitor URLs cited, content gap analysis.
Step 8: Ask the AI Advisor
The sidebar includes an AI Advisor chat interface. It is powered by the Pixelmojo knowledge graph (the same engine behind our llms.txt and structured data) and references your specific audit data when answering.
Questions you can ask:
- "What should I fix first?" (answers based on your prioritized action items)
- "Why is my citation score so low?" (cross-references crawl access, structured data, and llms.txt)
- "How do I add speakable schema?" (technical guidance tailored to your current schema state)
- "What is GEO?" (educational answer from the knowledge graph)
- "How does my site compare to industry averages?" (contextualizes your scores)
The Advisor uses the consultative approach from Vector, our AI sales agent. It provides enough depth to be useful, with a path to deeper engagement when the problem requires hands-on help.
Step 9: Save and Compare Over Time
After completing an audit, press S or click the Save button. The run is stored locally in your browser with:
- Domain and timestamp
- Unified score and grade
- Per-tool scores and grades
- Full tool data for later review
History Tab
Switch to the History tab (press 4) to see all saved runs. Each entry shows the domain, date, unified score, and a visual comparison of tool scores. Click any saved run to compare it against your current results.
This is how you track progress. Run a Radar audit today, make improvements over the next week (fix robots.txt, add llms.txt, restructure content), then run again and compare. The history view shows exactly which tool scores improved and by how much.
Comparing Against a Baseline
Select a saved run from history and Radar overlays it on the current results. Score differences are color-coded: green where you improved, red where you regressed. This makes it immediately obvious whether your changes had the intended effect.
Step 10: Compare Against Competitors
Switch to the Competitor tab (press 5), enter a competitor domain, and click Run. Radar executes the same 12 tools on the competitor domain. Once complete, you get a side-by-side comparison table:
| Tool | Your Score | Competitor Score | Difference |
|---|---|---|---|
| Crawl Check | 92 (A) | 85 (A) | +7 (you lead) |
| Citations | 30 (D) | 65 (C) | -35 (they lead) |
| llms.txt | 78 (B) | 0 (F) | +78 (you lead) |
| AEO Audit | 45 (D) | 72 (B) | -27 (they lead) |
(Example data for illustration. Your actual scores will differ.)
The comparison tells you where your competitive advantages are and where you need to catch up. If a competitor scores 72 on AEO while you score 45, you can run the AEO Page Auditor on their specific pages to see what structural patterns they use that you do not.
For agencies, the competitor comparison is the presentation. "Here is where you stand. Here is where your competitor stands. Here are the 5 things to prioritize."
Keyboard Shortcuts Reference
| Shortcut | Action |
|---|---|
| Ctrl/Cmd + Enter | Run the audit |
| Ctrl/Cmd + P | Export PDF report |
| S | Save the current run |
| 1 | Switch to Overview tab |
| 2 | Switch to Actions tab |
| 3 | Switch to Details tab |
| 4 | Switch to History tab |
| 5 | Switch to Competitor tab |
| 6 | Switch to Threads tab |
Common Audit Patterns and What They Mean
After running hundreds of audits during development and beta testing, we see recurring patterns:
High Crawl, Low Citations (Most Common)
Crawl Check scores 80+ but Citation Tracker scores below 40. This means AI bots can access the site, but AI engines do not consider the brand authoritative enough to cite. The fix is content depth, not infrastructure. Focus on building topical authority through comprehensive content, structured data, and llms.txt.
Good Infrastructure, Weak Page-Level AEO
Crawl, Robots, llms.txt, and Readiness all score well, but AEO Audit scores below 50. The site is discoverable and well-configured, but individual pages are not formatted for AI extraction. Add speakable schema, reformat to answer-first, convert comparison prose to tables, and add entity markup.
Strong Everywhere Except Community
All tools score well except Reddit Brand Monitor. This is common for B2B companies with limited consumer-facing community presence. The fix depends on whether community presence matters for your category. For B2B, the impact is lower than for consumer brands.
Competitor Leads on Citations Despite Lower Infrastructure
Your crawl and readiness scores are higher, but the competitor gets more AI citations. This almost always comes down to content authority and AEO formatting. Study their pages with the AEO Auditor and Citation Tester to identify what structural patterns they use.
Step 11: Use Implementation Threads to Fix Issues
Switch to the Threads tab (press 6) to see your action items grouped into structured implementation paths. Radar organizes fixes into five threads:
- Crawlability Thread: Unblock AI bots, fix robots.txt, enable server-side rendering
- Structured Data Thread: Add missing schema types, complete fields, align cross-signals
- LLM Communication Thread: Create or improve llms.txt, add llms-full.txt
- Content Authority Thread: Add speakable schema, restructure for AEO, optimize citations
- Citation Building Thread: Fix hallucinations, build community, create definitive content, grow SOV
Each thread has ordered steps. Only threads with matching action items from your audit appear.
Generating AI Prompts
Every action item and every thread step has a Generate AI Prompt button. Click it and Radar produces a prompt pre-filled with your actual audit data: your blocked bots by name, your missing schema types, your citation rates per provider.
Copy the prompt into Claude, ChatGPT, or Cursor and the AI tool has everything it needs to implement the fix. Thread step prompts include context from previous steps, so the AI does not redo or conflict with earlier work.
Using the Generators
Two common fixes have dedicated generators:
- llms.txt Generator: For actions about creating an llms.txt file, Radar generates a starter file from your audit data (meta tags, detected schema, domain structure) with placeholder comments where manual input is needed.
- Schema Markup Generator: For actions about adding JSON-LD, Radar generates copy-pasteable markup blocks for every missing type (Organization, Article, BreadcrumbList, FAQPage, Product, WebSite, HowTo).
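As a reference for roughly what the generator's output looks like, here is a minimal Organization block; every field value is a placeholder that Radar would fill from your audit data:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://x.com/example"
  ]
}
```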
Re-verifying Fixes
After implementing a fix, click the Re-verify button on any completed tool card. This re-runs just that single tool in 15 seconds instead of the full 12-tool audit. Fix your robots.txt, re-verify with Crawl Check. Update your schema, re-verify with Schema Audit.
Tracking Progress
Check off completed steps in the Threads tab. Progress persists in your browser across sessions, so you can close the tab, come back next week, and see exactly where you left off. Each thread shows a progress bar with completion percentage.
Who Should Use Radar
In-house SEO teams: Run weekly audits on your domain. Track unified scores over time. Hand action items to your development team as a prioritized backlog.
SEO and marketing agencies: Audit client domains during onboarding. Use competitor comparison for presentations. Export action items as the deliverable. Track progress between engagements.
Content strategists: Understand which pages are optimized for AI citation and which need structural work. Use the AEO Auditor and Citation Tester on your highest-value content.
Founders and product teams: Quick 60-second check on whether AI engines can find your product. Most startups are invisible to ChatGPT, Perplexity, and Claude without knowing it.
Ready to run your first audit?
Free during private beta. Enter your email and start auditing in 60 seconds.
Audit any site without leaving your browser
Install the Radar Chrome extension, click the icon on any tab, and get an instant 6-tool AI readiness check. Free, anonymous, one audit per domain.
