
What Is the Path From Free Audit to AI Visibility Strategy?
The path is five stages long, spans roughly 14 days on average, and most users take it whether we design for it or not. Stage 1 is a free tool run on one domain. Stage 5 is either a self-served fix, a $5 audit pack, a $199 Pro Retainer, or a $4,500 AI Visibility Strategy sprint. Everything in between is signal the user is collecting about what their problem actually is and whether they can solve it alone.
This is Part 5 of The AI Visibility Stack, the capstone. The prior four parts built the case: Part 1 showed what 50 users actually did with our free tools. Part 2 told the origin story of building Radar because AI was lying about our own brand. Part 3 explained why traditional SEO tools miss the AI visibility layer entirely. Part 4 unpacked why orchestrated 12-tool audits beat serial ones. This final post maps the actual conversion path those 50 users followed, where they got stuck, and what to do at whatever stage you are in right now.
Stage 1: Why Do Marketers Start With Free AI Visibility Tools?
Users start with free tools because the paid AI visibility market is expensive, opaque, and skewed to enterprise buyers. AthenaHQ charges $295 per month. Gauge charges $99 per month. Most Fortune 1000 teams end up in an RFP process before running a single audit. A $0 tool that returns a score and a grade in 60 seconds collapses that evaluation loop to minutes.
The actual reason users land on the free tool, though, is not curiosity. It is a specific event: someone internally asked "how do we rank in ChatGPT?" or "why is Perplexity citing our competitor but not us?" and the team realized they had no way to measure it. The free tool is the first answer that does not require a budget approval.
What do users do in the first 60 seconds?
They type their own domain, hit run, and watch the tools execute in parallel. The 6 free tools (crawl check, robots.txt analyzer, llms.txt validator, AI readiness score, AEO page auditor, schema audit) run in about 20 to 25 seconds total. Users see the score land, then scan for the letter grade, then start reading the recommendations.
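The parallel fan-out described above can be sketched with asyncio. This is an illustrative model, not Radar's actual implementation: the tool names and timings are stand-ins, and the `run_tool` coroutine fakes the I/O-bound work with a sleep.

```python
import asyncio

async def run_tool(name: str, seconds: float) -> tuple[str, str]:
    # Simulate an I/O-bound check (network fetch + analysis) with a sleep.
    await asyncio.sleep(seconds)
    return name, "done"

async def run_free_audit() -> dict[str, str]:
    # Hypothetical stand-ins for the six free checks and their durations.
    tools = {
        "crawl_check": 0.02,
        "robots_txt": 0.01,
        "llms_txt": 0.01,
        "ai_readiness": 0.02,
        "aeo_auditor": 0.03,
        "schema_audit": 0.02,
    }
    # gather() runs all six checks concurrently, so total wall time is
    # roughly the slowest tool, not the sum of all of them.
    results = await asyncio.gather(*(run_tool(n, s) for n, s in tools.items()))
    return dict(results)

scores = asyncio.run(run_free_audit())
```

This is why six tools finish in 20 to 25 seconds instead of six back-to-back waits: the wall time is bounded by the slowest check.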
The first surprise almost every user hits
It is usually robots.txt. About half of the domains we audit block at least one major AI bot (GPTBot, ClaudeBot, PerplexityBot, or Google-Extended) by accident. These blocks were usually added years ago when the team wanted to block scrapers, and nobody revisited the list when AI crawlers became business-critical.
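For reference, the fix typically looks like explicit allow rules for each major AI crawler. This is a generic sketch, not any audited domain's actual file; adapt the disallowed paths to your own site.

```txt
# Explicitly allow the major AI crawlers named above
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Keep narrower disallows for genuinely private paths
User-agent: *
Disallow: /admin/
```

The failure mode is almost always a blanket `Disallow: /` rule for these user agents, added back when they were lumped in with scrapers.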
Stage 2: What Makes Users Request a Radar Token?
Users request a free Radar token when they hit a specific pattern: the free tool shows them a problem they cannot fully measure because they only get 1 run per domain. They want to audit a second domain (usually a competitor), or re-audit their own after a fix, or save the report to share internally.
Token holders get 2 runs per tool, saved audit history, shareable public report URLs at /r/[id], and an embeddable SVG badge. The token is free. The cost is an email address. The implicit promise is "come back."
Why the email is a reasonable trade
From the user's perspective, the 1-audit-per-domain rule on the free tier feels like a paywall wrapped as a product decision. For us, it is the only way to keep the free tier sustainable while the LLM-powered tools stay behind a small purchase. The token is the middle ground: you get more access, we get a signal that you are serious enough to come back.
The behavior we see after tokens are issued
Users do not just audit one more domain. They audit a set. Agencies audit their entire client list. Marketers audit every competitor mentioned in their positioning deck. A handful of users audit every page on their own site (not just the homepage) to find which pages are invisible to AI. This is the moment Radar starts looking like a workflow tool instead of a one-off scanner.
Stage 3: How Do Users Navigate the Full 12-Tool Audit?
Users reach Stage 3 when they hit the edge of what the 6 free tools can tell them. The free tools audit the technical layer: can AI bots crawl, does your llms.txt work, is your schema complete, is your page answer-engine ready. The 6 paid-tier tools audit the citation layer: is ChatGPT actually citing you, does Perplexity recommend you in your category, is Reddit chatter accurate, are AI models hallucinating facts about your brand.
The gap between "technical readiness" and "actual citations" is the gap that pushes users into the paid tier. A 70 on the technical layer with zero AI citations means your infrastructure is clean but your content is not being referenced. A 40 on the technical layer with some citations means you are getting referenced despite broken infrastructure, which is fragile and will collapse.
The cross-tool conflicts that only paid tier surfaces
Running the 12 tools together is the move that turns a scanner into an audit platform. Part 4 of this series has the full argument, but the short version: serial tool runs miss contradictions. Example: your AEO auditor says you have speakable schema, but your citation tracker shows zero voice-assistant citations. That is a signal your speakable cssSelector is targeting elements that do not exist on the page. A single-tool run would not catch that.
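The speakable mismatch described above looks like this in practice. The markup is a generic illustration: it validates against schema.org, but if the page's classes were renamed in a redesign, the selectors resolve to nothing.

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Example product page",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".article-summary", ".key-takeaways"]
  }
}
```

A schema validator passes this because the JSON-LD is well-formed; only a tool that also checks the rendered DOM for `.article-summary` and `.key-takeaways` catches the contradiction.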
| Stage | What the user gets | What it costs | Typical time spent |
|---|---|---|---|
| Free tool (Stage 1) | 6 tools, 1 audit per domain, anonymous | $0 | 5 to 10 minutes |
| Token (Stage 2) | +2 runs per tool, saved history, shareable reports | $0 (email) | 10 to 20 minutes |
| Audit pack (Stage 3) | 12 tools, 3 to 10 audits, LLM Answer Diff teaser | $5 to $40 | 30 to 60 minutes per audit |
| Pro Retainer (Stage 4) | 40 audits/month, watched pulse re-scan, full LLM Answer Diff | $199/month | Ongoing |
| Strategy Sprint (Stage 5) | Full audit + implementation + 60-day plan | $4,500 (6 weeks) | Agency-managed |
Stage 4: How Do Users Decide Between DIY and Delegation?
The decision is not about price. Users at Stage 4 have already proven they will pay. The real question they ask is: "Can my team ship this fix list in a reasonable window?" The answer depends almost entirely on what type of fix the audit surfaced.
Technical fixes: DIY wins
Robots.txt edits, llms.txt creation, schema JSON-LD additions, meta tag cleanup. Any engineer can ship these in a day. Users with these fix lists buy a $5 or $40 audit pack, close the issues internally, and re-audit to verify.
Structural fixes: delegation usually wins
Entity strategy (Wikidata pages, knowledge graph registration, author bio infrastructure), content re-architecture to match AI query intent, cross-domain canonical syndication, Reddit sentiment remediation. These involve content, legal, comms, and engineering in the same workstream. A $4,500 sprint with a 60-day action plan is almost always faster than trying to coordinate internally.
Ongoing monitoring: Pro Retainer
Users who have agency clients or who operate in competitive categories (B2B SaaS, e-commerce, financial services) want weekly score-delta alerts, not one-off audits. Pro Retainer at $199 per month is purpose-built for that. Every Friday the watched-domain pulse re-runs; if the score moves more than 5 points, you get an email.
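The 5-point alert rule above is simple enough to state as code. A minimal sketch, assuming the rule is a symmetric threshold on the score delta (the function name and signature are ours, not the product's API):

```python
def should_alert(previous: int, current: int, threshold: int = 5) -> bool:
    """Flag a watched domain when its score moves more than `threshold`
    points in either direction since the last weekly pulse."""
    return abs(current - previous) > threshold

# A 4-point dip stays quiet; a 7-point dip triggers the Friday email.
quiet = should_alert(72, 68)
noisy = should_alert(72, 65)
```

Note the threshold is on the absolute delta: a sudden 6-point gain is as worth investigating as a 6-point loss, since it may mean a competitor's infrastructure broke rather than yours improving.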
Stage 5: Which Conversion Path Are Users Taking?
The conversion split looks roughly like this, based on the 50 users who worked through the path: most pick a single audit pack or 3-pack to validate fixes. A smaller group picks Pro Retainer because they are agencies or in-house teams with client-facing reporting requirements. The smallest group buys the Strategy Sprint, but they are also the highest-value group because the implementation scope includes full entity strategy and 6 weeks of agency-delivered work.
Why audit packs are the most common first purchase
The audit pack is the lowest-risk "prove it" transaction. A $5 single audit costs less than a lunch. Users pay it to unlock the citation tracker, Reddit monitor, hallucination detection, prompt SOV, and source influence map for one scan. If the scan reveals specific content gaps, they buy the 10-pack at $40, which works out to $4 per audit (the Power Pack pricing).
Why Pro Retainer wins for agencies
Agencies billing client retainers need three things audit packs cannot provide: client labels and saved runs, a watched-domain weekly pulse for score-delta alerts, and PDF export for client-facing deliverables. Pro Retainer includes all three plus 40 audits per month. At the full allowance, 40 single audits at $5 each would run $200, so the $199 retainer edges out the per-scan math before the workflow features are even counted.
Why Strategy Sprint wins for structural work
Our 6-week AI Visibility Strategy sprint at $4,500 covers: full Radar audit plus structured data overhaul, llms.txt optimization, disambiguation strategy, competitive benchmarking, 60-day action plan. The math works when the fix list includes things like "register the brand in Wikidata," "rewrite 12 pages to match prompt-style query intent," "build a knowledge graph of 18 topic clusters." No audit pack or retainer subscription gets you delivery; the sprint does.
Where Do Users Drop Off (And Why)?
Two drop-off points dominate the path. Both are symptomatic, not structural, which means they are fixable with better product moves.
Drop-off 1: Stage 1 to Stage 2
Users run the free tool, see the score, then leave without requesting a token. The fix for this is already shipping: every tool now includes AI-generated recommendations in the results, so the score is never the only output. The user sees what to do, which is the push they need to come back and run it again after fixes.
Drop-off 2: Stage 3 to Stage 4
Users buy an audit pack, run the full 12-tool audit, see a big fix list, and go silent. The cause is usually not price. It is "I do not know if I can do this myself." The fix we are testing: shipping a per-action-item AI prompt generator. Every recommendation produces a context-rich prompt you can paste into Claude, ChatGPT, or Cursor to implement the fix. That lowers the DIY tax.
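The prompt generator we are testing can be sketched like this. The dict keys and the prompt template are illustrative stand-ins, not the audit's real output schema.

```python
def build_fix_prompt(rec: dict) -> str:
    """Turn one audit recommendation into a paste-ready prompt for an
    AI coding assistant. Field names here are hypothetical."""
    return (
        f"You are helping fix an AI-visibility issue on {rec['domain']}.\n"
        f"Finding: {rec['finding']}\n"
        f"Affected file or page: {rec['target']}\n"
        f"Produce the exact change needed, with the final file contents."
    )

prompt = build_fix_prompt({
    "domain": "example.com",
    "finding": "robots.txt blocks GPTBot site-wide",
    "target": "/robots.txt",
})
```

The design point is context density: the prompt carries the domain, the specific finding, and the target file, so the assistant does not have to re-derive the diagnosis before writing the fix.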
What Does This Mean for Where You Are Right Now?
Here is the honest answer at each stage:
- No free audit yet: the free Radar run at pixelmojo.io/platform takes about 60 seconds and costs nothing.
- Ran a free audit but did not request a token: the token is free and unlocks 2 more runs per tool plus saved history.
- Have a token and want the paid tools: the $5 single audit pack is the lowest-friction way to see whether the LLM-powered tools surface new findings for you.
- Agency or in-house team running monthly client reports: Pro Retainer at $199 per month is purpose-built for that workflow.
- Fix list includes entity strategy, knowledge graph work, or content rearchitecture: the AI Visibility Strategy sprint at $4,500 skips the "can we coordinate internally" question.
None of these stages require you to skip the earlier ones. The 50 users who moved through this path did not jump from Stage 1 to Stage 5. They walked it. The free tools exist to prove the problem. The token tiers exist to prove the fix. The Pro Retainer exists to prove the ongoing result. The Strategy Sprint exists when the problem is bigger than the tools can close alone.
Ready To Take the Next Step?
This series set out to show what happens when you treat AI visibility like a measurable discipline instead of a vibe. Fifty users walked through the full path. Most converted at Stage 3. Some never left Stage 1, and the free tier is a complete product for them. The rest are somewhere in between, and the right next move depends on what the last audit told them.
Where you are right now:
- Run a free Radar audit: 60 seconds, no signup, 6 tools, 1 run per domain
- See paid tier pricing: Audit packs from $5, Pro Retainer $199/month
- AI Visibility Strategy sprint: Full implementation, $4,500, 6 weeks
- Contact us: When the fix list is bigger than the tools can close alone
