
The Question Every AI Visibility Tool Avoids
AI visibility tools fall into two camps. Technical audit tools tell you whether AI can find your website. Enterprise monitoring platforms tell you what AI says about your brand. No single platform does both.
Technical audit tools check your crawl access, robots.txt rules, and structured data. They answer: can AI find my website? But they stop there. They cannot tell you how ChatGPT describes your brand, whether Perplexity cites your competitors instead of you, or if Gemini is telling customers the wrong pricing.
Enterprise monitoring platforms track brand sentiment across AI responses. They answer: what does AI say about my brand? But they cannot tell you what is technically broken. They show you the problem without the fix.
Radar v2 bridges this gap. It runs 12 tools in parallel that answer both questions and generates a prioritized roadmap of exactly what to fix, why it matters, and how to do it.
Answers: "Can AI find me?"
Answers both questions
Answers: "What does AI say?"
What Changed: Four New Intelligence Tools
The original Radar ran eight tools focused on technical infrastructure: crawl access, robots.txt rules, llms.txt validation, AI readiness scoring, brand citation tracking, Reddit monitoring, AEO page auditing, and answer engine citation testing.
These tools tell you whether AI search engines can access, understand, and cite your content. They are necessary. But they are not sufficient.
Radar v2 adds four new tools that close the intelligence gap.
Source Influence Map
When AI platforms discuss your category, they draw from specific sources. The Source Influence Map identifies which domains and URLs are shaping those AI narratives.
This is your editorial hit list. If a competitor's blog post is the primary source ChatGPT cites when answering questions about your industry, you now know exactly which content to create (or outperform) to shift the narrative.
The tool queries all four AI providers with competitive prompts, extracts cited URLs (Perplexity provides structured citations natively), and aggregates them by citation frequency and provider coverage. You see which domains are cited by multiple providers (strong consensus signals) versus which appear in only one.
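A minimal sketch of that aggregation step in TypeScript, assuming each provider query has already returned a list of cited URLs (the interfaces and function names here are illustrative, not Radar's internals):

```typescript
interface ProviderCitations {
  provider: "chatgpt" | "claude" | "perplexity" | "gemini";
  urls: string[]; // cited URLs extracted from one response
}

interface SourceInfluence {
  domain: string;
  citationCount: number;  // total citations across all prompts
  providers: Set<string>; // which providers cited this domain
}

function aggregateSources(results: ProviderCitations[]): SourceInfluence[] {
  const byDomain = new Map<string, SourceInfluence>();
  for (const { provider, urls } of results) {
    for (const url of urls) {
      const domain = new URL(url).hostname.replace(/^www\./, "");
      const entry =
        byDomain.get(domain) ?? { domain, citationCount: 0, providers: new Set<string>() };
      entry.citationCount += 1;
      entry.providers.add(provider);
      byDomain.set(domain, entry);
    }
  }
  // Rank multi-provider consensus above single-provider frequency.
  return [...byDomain.values()].sort(
    (a, b) => b.providers.size - a.providers.size || b.citationCount - a.citationCount
  );
}
```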
Example output: 1. competitor-a.com, 2. industry-blog.io, 3. techreview.com, 4. your-domain.com, 5. niche-guide.org (all cited). Your domain ranks #4; the top three sources are shaping AI narratives in your category.
Prompt Share of Voice (SOV)
Share of voice in AI recommendations is the new competitive metric. When someone asks ChatGPT "what are the best [your category] companies?" and your competitor appears at position one while you are absent, that is a measurable gap.
Radar's Prompt SOV tool runs five industry-specific prompts across ChatGPT, Claude, Perplexity, and Gemini. It extracts every brand mentioned in each response, scores them by position (first mention gets 3x weight, top-third gets 2x, rest gets 1x), and calculates your share of voice as a percentage of total weighted mentions.
You get your rank, your SOV percentage, and a competitor leaderboard showing exactly who is above you and by how much.
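The weighting logic is simple enough to sketch. Here is an illustrative TypeScript version of the position-weighted calculation described above (names are assumptions, not Radar's code):

```typescript
// First mention gets 3x weight, mentions in the top third get 2x, the rest 1x.
function positionWeight(index: number, total: number): number {
  if (index === 0) return 3;
  if (index < Math.ceil(total / 3)) return 2;
  return 1;
}

// `responses` holds the ordered brand mentions extracted from each AI answer.
function shareOfVoice(responses: string[][], brand: string): number {
  let brandScore = 0;
  let totalScore = 0;
  for (const mentions of responses) {
    mentions.forEach((name, i) => {
      const weight = positionWeight(i, mentions.length);
      totalScore += weight;
      if (name.toLowerCase() === brand.toLowerCase()) brandScore += weight;
    });
  }
  return totalScore === 0 ? 0 : (brandScore / totalScore) * 100;
}
```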
Example result: rank #3, 18% share of voice, 5 brands found.
Schema Completeness Audit
Structured data is the language AI models use to understand your content with confidence. The Schema Completeness Audit checks JSON-LD markup across 10 Schema.org types: Organization, Article, FAQ, HowTo, Speakable, BreadcrumbList, Product, LocalBusiness, WebSite, and SoftwareApplication.
Unlike the existing AI Readiness Score (which checks structured data as one of five categories), this tool does a deep, dedicated validation. It fetches your homepage plus up to four linked pages, checks each schema type against required and recommended fields, and reports completeness percentages with specific missing field names.
This is the only tool in the suite that makes zero LLM calls. It relies on pure HTML parsing, which makes it the fastest and cheapest tool to run.
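A hypothetical sketch of the per-type check: compare parsed JSON-LD against required and recommended fields, then report completeness and the specific missing field names. The Organization field lists below are an assumption based on common Schema.org guidance, not Radar's actual sets:

```typescript
const ORGANIZATION_FIELDS = {
  required: ["name", "url"],
  recommended: ["logo", "contactPoint", "sameAs", "description"],
};

function auditOrganization(jsonLd: Record<string, unknown>) {
  const all = [...ORGANIZATION_FIELDS.required, ...ORGANIZATION_FIELDS.recommended];
  const missing = all.filter((field) => !(field in jsonLd));
  const completeness = Math.round(((all.length - missing.length) / all.length) * 100);
  return { completeness, missing };
}

// auditOrganization({ "@type": "Organization", name: "Acme", url: "https://acme.com" })
// -> { completeness: 33, missing: ["logo", "contactPoint", "sameAs", "description"] }
```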
Hallucination Detection
AI models sometimes fabricate facts about brands. Wrong founding year, wrong product names, wrong pricing, wrong headquarters. These inaccuracies reach every user who asks about your business.
The Hallucination Detection tool extracts ground truth from your actual website (products, pricing, founding year, leadership, location), queries all four AI providers about your brand, and compares their claims against verified facts. Discrepancies are flagged by severity (high, medium, low) and category (product, pricing, founding, location, description).
A high-severity hallucination (ChatGPT telling customers your product costs $99/month when it is actually $49/month) is a different problem than a low-severity one (getting your founding year off by one). The severity ratings help you prioritize which corrections matter most.
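A sketch of the comparison step, assuming ground truth and AI claims have already been extracted into keyed facts. The severity-by-category mapping and the exact-match comparison below are illustrative simplifications, not Radar's actual policy:

```typescript
type Category = "product" | "pricing" | "founding" | "location" | "description";
type Severity = "high" | "medium" | "low";

// Illustrative severity policy: pricing and product errors mislead buyers
// directly, so they rank highest.
const SEVERITY_BY_CATEGORY: Record<Category, Severity> = {
  pricing: "high",
  product: "high",
  location: "medium",
  description: "medium",
  founding: "low",
};

interface Discrepancy {
  category: Category;
  claim: string; // what the AI said
  truth: string; // what the website says
  severity: Severity;
}

function findDiscrepancies(
  truth: Partial<Record<Category, string>>,
  claims: Partial<Record<Category, string>>
): Discrepancy[] {
  const out: Discrepancy[] = [];
  for (const category of Object.keys(truth) as Category[]) {
    const claim = claims[category];
    const fact = truth[category];
    // Exact-match comparison is a simplification; a real check would be fuzzier.
    if (claim && fact && claim.trim() !== fact.trim()) {
      out.push({ category, claim, truth: fact, severity: SEVERITY_BY_CATEGORY[category] });
    }
  }
  return out;
}
```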
Example findings (one high, one medium, one low severity):
- High: AI claims "Pricing starts at $149/month for the basic plan." Reality: pricing starts at $49/month (Starter plan).
- Medium: AI claims "Offers a mobile app for iOS and Android." Reality: no mobile app exists; web platform only.
- Low: AI claims "Founded in 2023." Reality: founded in 2024.
Enhanced Citation Tracker: Sentiment, Narratives, and Locale
The existing Citation Tracker already queried four AI providers to check if your brand gets mentioned. Radar v2 transforms it from a binary "cited or not" check into a full brand perception analysis.
LLM-Based Sentiment Analysis
The old sentiment detection used a regex pattern matching roughly 15 positive words and 12 negative words. It worked, but it was crude.
The new system sends all provider responses through a structured GPT-4o-mini classification that returns four data points per response:
| Data Point | What It Tells You | Example |
|---|---|---|
| Tone | How the AI frames your brand (positive, neutral, negative, or cautionary) | ChatGPT: cautionary, Claude: positive |
| Confidence | How clearly the sentiment comes through (0.0 to 1.0) | 0.85 confidence on "positive" vs 0.4 on "neutral" |
| Emotional Signals | Key phrases driving the classification | "innovative", "market leader", "trusted by enterprises" |
| Narrative Frame | How the AI positions your brand in one phrase | "emerging challenger", "industry standard", "niche specialist" |
The "cautionary" category is new and important. It captures responses where the AI acknowledges your brand but adds caveats: "X is a good option, but consider Y for larger teams" or "X offers competitive pricing, though some users report limited support." This is distinct from negative sentiment and requires a different response strategy.
All responses are batched into a single LLM call (not 16 separate calls), keeping the analysis within the 60-second serverless timeout.
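For illustration, here is roughly what a batched structured classification call could look like using the OpenAI Node SDK. The prompt wording and result shape are assumptions that mirror the table above, not Radar's actual implementation:

```typescript
import OpenAI from "openai";

interface SentimentResult {
  provider: string;
  tone: "positive" | "neutral" | "negative" | "cautionary";
  confidence: number;         // 0.0 to 1.0
  emotionalSignals: string[]; // key phrases driving the classification
  narrativeFrame: string;     // e.g. "emerging challenger"
}

const openai = new OpenAI();

// One request classifies every provider response at once, staying inside
// the 60-second serverless timeout.
async function classifyBatch(
  brand: string,
  responses: { provider: string; text: string }[]
): Promise<SentimentResult[]> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          `For each response, classify how it frames the brand "${brand}". ` +
          `Return JSON: {"results": [{"provider": string, "tone": "positive"|"neutral"|` +
          `"negative"|"cautionary", "confidence": number, "emotionalSignals": string[], ` +
          `"narrativeFrame": string}]}`,
      },
      { role: "user", content: JSON.stringify(responses) },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}").results ?? [];
}
```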
Example narrative frames across providers: "Industry standard," "Established with caveats," "Top recommendation," "One of several."
Per-Provider Narrative Summaries
Each AI provider frames your brand differently. ChatGPT might describe you as an "innovative startup," while Gemini calls you a "niche player." These narrative differences matter for brand strategy.
Radar v2 generates a 2-3 sentence narrative summary for each provider, capturing how it positions your brand, the dominant tone, and the key themes it emphasizes. This runs in parallel with the existing AI summary, adding zero latency to the audit.
Locale Targeting
AI models respond differently depending on the user's perceived location. A global brand may be described as a "market leader" in the US but unknown in APAC markets.
Radar v2 adds a locale selector with six options: Global (default), United States, United Kingdom, European Union, Asia Pacific, and Latin America. When a locale is selected, a system message prefix is added to AI queries that frames the response from that regional perspective.
This is particularly valuable for agencies managing international clients or brands expanding into new markets.
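A minimal sketch of the prefix mechanism, assuming a plain system-message injection; the region keys and wording here are illustrative:

```typescript
const LOCALE_PREFIXES: Record<string, string> = {
  global: "",
  us: "Answer as if responding to a user in the United States.",
  uk: "Answer as if responding to a user in the United Kingdom.",
  eu: "Answer as if responding to a user in the European Union.",
  apac: "Answer as if responding to a user in the Asia Pacific region.",
  latam: "Answer as if responding to a user in Latin America.",
};

// Prepend the regional framing to whatever system prompt the tool already uses.
function withLocale(systemPrompt: string, locale: keyof typeof LOCALE_PREFIXES): string {
  const prefix = LOCALE_PREFIXES[locale];
  return prefix ? `${prefix}\n\n${systemPrompt}` : systemPrompt;
}
```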
The Competitive Landscape: Audit vs Monitor vs Radar
| Capability | Free Audit Tools | Enterprise Monitors ($40k+/yr) | Radar |
|---|---|---|---|
| Technical crawl audit | Partial (single tools) | No | Yes (12 tools in parallel) |
| Bot access testing | Yes | No | Yes (13 user-agents) |
| robots.txt analysis | Yes | No | Yes (16 bots) |
| llms.txt validation | Yes | No | Yes (structure + entities) |
| Brand citation tracking | No | Yes (6-9 LLMs) | Yes (4 LLMs) |
| Sentiment/emotion analysis | No | Yes | Yes (LLM-based, 4 tones) |
| Source influence mapping | No | Partial | Yes |
| Share of voice (SOV) | No | Yes | Yes |
| Hallucination detection | No | Limited | Yes (ground truth comparison) |
| Schema completeness audit | No | No | Yes (10 types) |
| Locale/geography targeting | No | Yes | Yes (6 regions) |
| Cross-tool conflict detection | No | No | Yes (unique) |
| Prioritized action items | No | No | Yes (effort x impact) |
| Narrative summaries per LLM | No | Partial | Yes |
| Trend tracking over time | No | Yes | Yes |
| Webhook integrations | No | Yes | Yes (Slack, Notion, JSON) |
| Pricing | Free | $40,000-$100,000+/yr | Free (beta) |
The gap that no single platform addressed before Radar v2: enterprise monitors tell you what is happening to your brand in AI but cannot tell you what to technically fix. Free audit tools tell you what is technically broken but cannot tell you how AI perceives your brand.
Radar is the only platform that answers both questions and generates a ranked list of what to fix, sorted by impact and effort.
Capability coverage at a glance: Free tools: 2/8 ($0/mo). Enterprise: 4/8 ($40k+/yr). Radar: 8/8 (free during beta).
Cross-Tool Intelligence: Where the Real Value Lives
Running 12 tools in parallel is useful. But the intelligence that emerges from analyzing their results together is what makes Radar unique.
Radar v2 now generates insights from cross-tool patterns that no individual tool can surface:
| Cross-Tool Pattern | What It Means | Action |
|---|---|---|
| High SOV but low citation rate | AI recommends your brand but does not cite it in direct queries | Create brand-specific content that targets direct brand queries |
| Good AEO score but weak schema | Content is well-structured but missing JSON-LD validation signals | Add Organization, Article, and Speakable schema |
| Cited but cautionary sentiment | AI mentions you but adds warnings or suggests alternatives | Review sentiment details and address concerns in llms.txt |
| Hallucinations + no llms.txt | AI has wrong facts and no authoritative source to correct them | Create llms.txt with verified company facts immediately |
| High crawlability but zero SOV | AI can access your site but does not include you in category rankings | Build topical authority with comprehensive category content |
| Source influence gap | Competitors are cited as sources but you are not | Create definitive content that competes for source authority |
These patterns are invisible when tools run in isolation. A hallucination detection tool alone tells you AI has wrong facts. Combined with llms.txt validation, Radar tells you AI has wrong facts AND you have no mechanism to correct them. That combination is the actionable insight.
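Conceptually, each cross-tool pattern is a predicate over the combined results. A hypothetical sketch of two rules from the table above (the result structure and thresholds are assumptions):

```typescript
interface CombinedResults {
  hallucinationCount: number;
  llmsTxtFound: boolean;
  sovPercent: number;
  citationRatePercent: number;
}

interface Insight {
  finding: string;
  action: string;
}

const rules: ((r: CombinedResults) => Insight | null)[] = [
  // Hallucinations + no llms.txt: wrong facts with no authoritative correction.
  (r) =>
    r.hallucinationCount > 0 && !r.llmsTxtFound
      ? {
          finding: "AI has wrong facts and no authoritative source to correct them",
          action: "Create llms.txt with verified company facts immediately",
        }
      : null,
  // High SOV but low citation rate: recommended in category queries,
  // invisible in direct brand queries. Thresholds here are arbitrary.
  (r) =>
    r.sovPercent >= 15 && r.citationRatePercent < 15
      ? {
          finding: "High SOV but low citation rate",
          action: "Create brand-specific content that targets direct brand queries",
        }
      : null,
];

function crossToolInsights(results: CombinedResults): Insight[] {
  return rules
    .map((rule) => rule(results))
    .filter((insight): insight is Insight => insight !== null);
}
```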
Example cross-tool insights:
- Wrong pricing on ChatGPT + no llms.txt file found → AI has wrong facts and no mechanism to correct them. Creating llms.txt is the fastest fix.
- 18% SOV at rank #3 + 12% citation rate → AI recommends you in category queries but not in brand-specific queries. Build brand authority content.
- Missing Speakable schema + AEO score 78/100 → Content is well-structured but lacks the schema markup AI assistants need to cite it directly.
Architecture: Built With Thread-Based Engineering
Radar v2 was built using Thread-Based Engineering, our framework for scaling AI-assisted development. The entire v2 upgrade (4 new tools, 5 enhanced features, 30 new files, 3 database migrations) was structured as a B-Thread orchestrating 14 sub-threads across 4 phases.
Phase 1 decomposed the 5,046-line dashboard monolith into shared modules and composable components. Phase 2 built the four Tier 1 intelligence tools in parallel. Phase 3 added Tier 2 features with mixed dependencies. Phase 4 delivered strategic infrastructure (Supabase persistence, webhooks).
The tool registry pattern means new tools can be added without modifying the dashboard page. Each tool registers itself with its endpoint, display properties, body builder, and rate limits. The dashboard reads from the registry and renders dynamically.
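A hypothetical sketch of what such a registry could look like; the field names are inferred from the description above, not Radar's actual code:

```typescript
interface ToolDefinition {
  id: string;
  label: string;    // display properties for the dashboard card
  endpoint: string; // API route the dashboard calls
  buildBody: (input: { domain: string; brand: string }) => unknown;
  rateLimitPerMinute: number;
}

const toolRegistry: ToolDefinition[] = [];

// New tools register themselves; the dashboard page never changes.
export function registerTool(tool: ToolDefinition): void {
  toolRegistry.push(tool);
}

registerTool({
  id: "schema-audit",
  label: "Schema Completeness Audit",
  endpoint: "/api/tools/schema-audit",
  buildBody: ({ domain }) => ({ domain }),
  rateLimitPerMinute: 10,
});

// The dashboard iterates the registry and renders one card per tool.
export function listTools(): ToolDefinition[] {
  return [...toolRegistry];
}
```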
v2.1 Update: From Audit to DIY Implementation
Radar v2 told you what was wrong. Radar v2.1 helps you fix it.
The gap we kept hearing: "The audit is great, but I don't know how to actually implement the fixes." Users would get a prioritized action list, then stare at items like "Add JSON-LD structured data" without knowing where to start. So we built the bridge.
AI Prompt Generator
Every action item now has a "Generate AI Prompt" button. Click it and Radar produces a context-rich prompt pre-filled with your actual audit data: your blocked bots by name, your missing schema types, your citation rates per provider, your SOV percentage. Copy it into Claude, ChatGPT, or Cursor and the AI tool has everything it needs to implement the fix.
This is not a generic template. If Radar found that GPTBot and PerplexityBot are blocked on your site, the prompt says exactly that. If your Organization schema is missing logo and contactPoint fields, the prompt lists those specific fields. Your audit data becomes the implementation context.
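As an illustration, a prompt generator along these lines might assemble audit findings like so (the audit shape and wording are assumptions):

```typescript
interface AuditContext {
  domain: string;
  blockedBots: string[];                         // e.g. ["GPTBot", "PerplexityBot"]
  missingSchemaFields: Record<string, string[]>; // schema type -> missing fields
}

function generateFixPrompt(audit: AuditContext): string {
  const sections = [`You are helping fix AI visibility issues on ${audit.domain}.`];
  if (audit.blockedBots.length > 0) {
    sections.push(
      `robots.txt currently blocks these AI crawlers: ${audit.blockedBots.join(", ")}. ` +
        `Rewrite robots.txt to allow them while preserving the existing rules.`
    );
  }
  for (const [type, fields] of Object.entries(audit.missingSchemaFields)) {
    sections.push(
      `The ${type} schema is missing: ${fields.join(", ")}. Add these fields as JSON-LD.`
    );
  }
  return sections.join("\n\n");
}
```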
Implementation Threads
Actions are no longer a flat list. They are grouped into five structured threads following our Thread-Based Engineering methodology:
| Thread | What It Covers | Verify With |
|---|---|---|
| Crawlability | Unblock AI bots, fix robots.txt, enable server-side rendering | Crawl Check + Robots.txt |
| Structured Data | Add missing schema types, complete incomplete fields, align cross-signals | Schema Audit |
| LLM Communication | Create or improve llms.txt, add llms-full.txt | llms.txt Validator |
| Content Authority | Add speakable schema, restructure for AEO, add extractable data, optimize citations | AEO Auditor + Citation Test |
| Citation Building | Fix hallucinations, build community, create definitive content, grow SOV | Citations + SOV + Reddit |
Each thread has ordered steps. Each step generates a thread-aware AI prompt that includes context from previous steps, so the AI tool does not redo or conflict with earlier work. You check off steps as you go, and progress persists across sessions.
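A hypothetical shape for a thread with ordered steps and carried-forward context (Radar's real data model may differ):

```typescript
interface ThreadStep {
  title: string;
  done: boolean;           // checked off by the user; persists across sessions
  promptContext: string[]; // what this step changed, carried into later prompts
}

interface ImplementationThread {
  name: string;            // e.g. "Crawlability" or "Structured Data"
  steps: ThreadStep[];
}

// The prompt for step N includes what steps 1..N-1 already changed, so the
// AI tool does not redo or conflict with earlier work.
function promptForStep(thread: ImplementationThread, stepIndex: number): string {
  const prior = thread.steps.slice(0, stepIndex).flatMap((step) => step.promptContext);
  return [
    `Thread: ${thread.name}`,
    ...prior.map((context) => `Already done: ${context}`),
    `Now implement: ${thread.steps[stepIndex].title}`,
  ].join("\n");
}
```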
Generators: llms.txt and Schema Markup
Two actions that came up constantly: "Create an llms.txt file" and "Add JSON-LD structured data." Users had never written either one. So Radar now generates them.
The llms.txt starter generator pulls from your audit data (meta tags, detected schema, domain structure) and produces a structured file you can deploy immediately, with placeholder comments where manual input is needed.
The schema markup generator reads your Schema Audit results and produces copy-pasteable JSON-LD blocks for every missing type: Organization, Article, BreadcrumbList, FAQPage, Product, WebSite, and HowTo. Pre-filled with your domain and brand name.
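For example, a generator for a missing Organization type might look roughly like this, with placeholder comments marking where manual input would be needed:

```typescript
function generateOrganizationSchema(domain: string, brand: string): string {
  const schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    name: brand,
    url: `https://${domain}`,
    logo: `https://${domain}/logo.png`,        // placeholder: confirm the real path
    sameAs: ["TODO: add social profile URLs"], // placeholder for manual input
  };
  return `<script type="application/ld+json">\n${JSON.stringify(schema, null, 2)}\n</script>`;
}
```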
Single-Tool Re-verify
Previously, verifying a fix meant re-running the full 12-tool audit (60 seconds). Now every completed tool has a "Re-verify" button. Fix your robots.txt, then re-run just the Crawl Check in 15 seconds. Update your schema and re-verify just the Schema Audit. No full re-audit needed.
The Shift: Audit Tool to DIY Platform
Radar v2 was diagnostic: here is what is wrong and why it matters.
Radar v2.1 is operational: here is what is wrong, here is the exact prompt to fix it, here is the order to do it in, here is how to verify it worked, and here is where you left off last time.
What This Means for Your AI Visibility Strategy
If you ran a Radar audit before this update, run it again. The same domain, the same brand name. You will see four new tool cards, enhanced sentiment data on your citation results, cross-tool insights that did not exist before, and now a Threads tab with structured implementation paths.
The actionable difference:
Before Radar v2: You knew if AI could find your site and whether it mentioned your brand. Eight tools' worth of technical and citation data.
After Radar v2: You know if AI can find your site, what it says about you, how it frames your brand compared to competitors, whether it is telling customers the wrong facts, which sources are shaping the narrative, where you rank in category recommendations, and how all of these signals change over time. Plus a prioritized fix list for every issue found.
After Radar v2.1: All of the above, plus AI-ready implementation prompts for every fix, structured threads that guide you from audit to verified improvement, and generators that produce the files you need. Audit, understand, fix.
Try Radar v2.1
Run a new audit on your domain. The four intelligence tools appear alongside the originals, all running in parallel. After the audit, open the Threads tab to see your structured implementation path with AI-ready prompts for every fix.
- Run a Radar audit -- 12 tools, 60 seconds, AI-ready fixes, free during beta
- Try individual tools -- No signup required for standalone tools
- Read the user guide -- Step-by-step walkthrough of all 12 tools and the new DIY features
