
We Built Free Tools. Then We Watched What People Did With Them.
We launched 8 free AI visibility tools in February 2026. Then we launched Radar, the platform that runs all 12 tools in parallel. We did not run paid ads. We did not gate the tools behind email capture. We published them and waited.
Fifty users found the tools through organic search, ran full audits, and came back for more. No signup wall. No email nurture sequence. No sales call. Just tools that show you something you did not know about your website.
THE USER JOURNEY: FREE TOOL TO WAITLIST
How 50 users moved through the product-led growth funnel with zero sales involvement
1. Organic Search Discovery (zero paid acquisition): a user finds a free tool via Google.
2. Free Tool Reveals a Problem (9 tools, no signup required): AI bots blocked, no citations, wrong brand info.
3. Radar Token Request (50 token requests): the user wants the full 12-tool audit.
4. Full Audit: All 12 Tools (10+ minute average session): cross-tool insights surface hidden conflicts.
5. Return Visit to Measure Impact (repeat engagement): users implement fixes, then re-audit to compare scores.
6. Expired Token → Waitlist (demand exceeds access): users request continued access instead of leaving.
Entire funnel is self-serve. No sales calls, no email sequences, no gated content.
This post covers what those 50 users taught us about the AI visibility gap, why they stay engaged, and what their behavior reveals about a market that barely existed 12 months ago.
The Entry Point: Nobody Starts With "AI Visibility"
Most users did not arrive searching for "AI visibility tools." They arrived searching for specific problems: how to check if ChatGPT can see their website, whether their robots.txt was blocking AI crawlers, or how to get cited by Perplexity.
These are symptoms. The underlying problem is larger: traditional SEO tools do not measure AI discoverability. You can have a clean Ahrefs audit and still be completely invisible to ChatGPT, Claude, and Gemini. We covered this gap in depth in our SEO vs GEO vs AEO guide, and the free tools exist to make the gap tangible.
The pattern we see: a user discovers one free tool, runs it, gets a result that surprises them, then discovers there are 11 more tools available. That is the moment they request a Radar token to run everything at once.
What Surprises Them Most
Three findings consistently trigger the "wait, what?" reaction:
1. Their robots.txt is blocking AI search bots. Most website owners set robots.txt years ago and never revisited it. The robots.txt analyzer shows them that GPTBot, ClaudeBot, or PerplexityBot are blocked. They had no idea these bots existed, let alone that blocking them means their site is invisible to the fastest-growing search channels.
2. AI models say wrong things about their brand. The citation tracker queries ChatGPT, Claude, Perplexity, and Gemini about their brand. When users see that ChatGPT describes their pricing incorrectly or that Claude does not mention them at all, it creates urgency that no amount of marketing copy could generate.
3. Their SEO score is high but their AI readiness score is low. The AI readiness scorer evaluates five dimensions that traditional SEO ignores: crawl accessibility for AI bots, llms.txt presence, structured data coverage, citation presence, and engagement signals. A site scoring 90+ on Ahrefs can score 30 on AI readiness. The gap is the story.
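You can check the first finding yourself with Python's standard library: parse a robots.txt file and ask whether each AI crawler is allowed to fetch the site. The robots.txt content below is a hypothetical example of the pattern the analyzer flags, a site that welcomes Googlebot but blocks AI bots.

```python
from urllib import robotparser

# Hypothetical robots.txt: traditional search allowed, AI crawlers blocked.
ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot in ("Googlebot", "GPTBot", "ClaudeBot", "PerplexityBot"):
    allowed = parser.can_fetch(bot, "https://example.com/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

Note that PerplexityBot comes back as allowed here only because no rule matches it at all; sites with a `User-agent: *` / `Disallow: /` fallback block every AI crawler they have not explicitly named.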
All 12 Tools. Not One or Two. All of Them.
This was the behavior that surprised us most. We expected users to run one or two tools and leave. Instead, users consistently ran all 12 tools in Radar.
The reason is cross-tool context. Each tool gives you one data point. Running all 12 gives you a story. Your crawl check says bots can reach your site. Your robots.txt analyzer says the directives are correct. But your citation tracker says no AI platform mentions you. Why? Because your structured data is missing the entity markup that helps AI models understand what your business does. No single tool reveals this. The combination does.
| Tool Category | What It Checks | What Users Discover |
|---|---|---|
| Technical (4 tools) | Crawl access, robots.txt, llms.txt, schema validation | Whether AI search engines can physically reach and understand their content |
| Citation (3 tools) | Brand mentions, AEO page scores, answer engine presence | Whether AI platforms actually mention and cite their brand in responses |
| Intelligence (3 tools) | Sentiment analysis, share of voice, hallucination detection | How AI platforms describe and position their brand relative to competitors |
| Monitoring (2 tools) | Reddit mentions, source influence mapping | Where conversations about their brand are happening and which sources shape AI narratives |
The 10+ minute average session time reflects the depth of this discovery process. Users are not scanning a dashboard. They are reading through findings that change how they think about their online presence.
We detailed the full tool breakdown in our Radar v2 deep-dive, which covers the four new intelligence tools (sentiment analysis, share of voice, hallucination detection, source influence mapping) that turned Radar from a technical audit into a full intelligence platform.
The Return Visit Pattern
Users come back. Not because we remind them (we do not send re-engagement emails during beta). They come back because they made changes to their site and want to measure the impact.
This is the audit loop in action:
- First visit: Run all 12 tools. Discover problems. Get a baseline AI readiness score.
- Implementation: Fix robots.txt, add llms.txt, update structured data, improve content for AI citation.
- Return visit: Run Radar again. Compare scores. See what improved and what did not.
The return visit is where cross-tool insights become most valuable. On the first audit, the insights establish a baseline. On the return audit, they show movement. "Your crawl accessibility improved from 60 to 85, but your citation rate stayed flat because your llms.txt still references pages that return 404 errors." That kind of specific, cross-tool feedback does not exist in any other free tool.
Why Expired Users Do Not Leave
During private beta, Radar tokens have a usage window. When that window closes, users have two choices: find an alternative or ask for more access. Our expired users consistently choose the second option. They join the waitlist.
This tells us something important about the market. There is no real alternative that combines technical auditing with intelligence monitoring and cross-tool conflict detection. Enterprise AI monitoring platforms like Brandwatch and Profound (which start at $40,000+ per year) track brand sentiment across AI platforms but cannot tell you what is technically broken. Free SEO tools check your technical health but do not track AI citations. Radar is the only platform in the middle: full technical audit, full intelligence monitoring, prioritized actions, free during beta.
When users discover that combination, they do not want to lose it.
The Cross-Tool Insight That Changes Everything
Individual tools give you answers. Cross-tool insights give you understanding.
Here is the type of cross-tool conflict that Radar surfaces and that no individual tool can detect:
CROSS-TOOL CONFLICTS RADAR DETECTS
Two tools show passing results. The conflict between them reveals the real problem.
| Tool A Shows | Tool B Shows | Radar Insight | Recommended Action |
|---|---|---|---|
| GPTBot can access your site | ChatGPT does not mention your brand | Access is not the problem. Your pages lack the entity markup and structured data ChatGPT needs to cite you. | Add JSON-LD schema and llms.txt |
| All AI search bots allowed | Score: 35/100 | Allowing bots is necessary but not sufficient. llms.txt, schema, and speakable markup are missing. | Implement the structured data stack |
| You rank 3rd in your category | A competitor blog is the primary AI source | Your position depends on content you do not control. One competitor piece is shaping AI answers. | Create authoritative content on that topic |
These conflicts exist between tool outputs, not within them. No single tool can detect them.
These are the categories of conflicts Radar detects during real audits. We covered the technical implementation of cross-tool intelligence in our AI visibility stack architecture post.
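The recurring "add JSON-LD schema" fix usually means entity markup embedded in the page. This is a generic schema.org Organization example with placeholder values, not output from Radar:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "description": "What the business does, in one sentence an AI model can quote.",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://github.com/example-co"
  ]
}
```

The `description` and `sameAs` fields do most of the work for AI citation: one gives models a quotable definition of the business, the other ties the entity to profiles they already trust.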
What This Tells Us About the Market
The behavior of these 50 users reveals four things about the AI visibility market in 2026.
1. The Problem Is Real and Underserved
If the problem were already solved by existing tools, users would not spend 10+ minutes on a new platform. They would run a quick check and leave. The engagement depth signals that existing SEO tools leave a genuine gap that users recognize the moment they see their AI visibility score.
2. Awareness Is the Bottleneck
Most of our users did not know they had an AI visibility problem before they ran their first tool. They assumed that ranking on Google meant being visible everywhere. The free tools break that assumption in 15 seconds. This means the market is much larger than the people currently searching for "AI visibility." It includes every company that thinks their SEO strategy covers AI search.
We covered this awareness gap extensively in Part 1 of our AI Search Playbook, where our own Google traffic dropped 33% as AI search absorbed queries. The shift is happening whether companies measure it or not.
3. Cross-Tool Intelligence Is the Moat
No user mentioned "I love that you have 12 tools" as the reason they stayed. They mentioned the cross-tool insights. The moment where Radar shows you a conflict between two tools, revealing a problem neither tool could surface alone, is the moment users understand why a platform is different from a collection of tools.
This is why we built Radar on Vector and Hive's architecture rather than as a standalone application. Vector's scoring intelligence powers the insight prioritization. Hive's orchestration runs 12 tools in parallel within a 60-second serverless window. The architecture makes the insights possible.
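The fan-out pattern described above, running many tools concurrently under a hard time budget, can be sketched with asyncio. This is a hypothetical illustration of the pattern, not Hive's actual orchestration code.

```python
import asyncio

async def run_tool(name: str) -> tuple[str, str]:
    # Stand-in for one audit tool; a real tool would make network calls.
    await asyncio.sleep(0.01)
    return name, "ok"

async def run_audit(tools: list[str], budget_s: float = 60.0) -> dict[str, str]:
    """Run every tool concurrently; stop at the serverless time budget."""
    tasks = [asyncio.create_task(run_tool(t)) for t in tools]
    done, pending = await asyncio.wait(tasks, timeout=budget_s)
    for task in pending:
        task.cancel()  # tools that missed the window report nothing
    return dict(task.result() for task in done)

results = asyncio.run(run_audit([f"tool-{i}" for i in range(12)]))
print(len(results))  # 12
```

The design point is the timeout: instead of letting the slowest tool decide the audit's latency, everything that finishes inside the window is collected and the rest is cancelled, which is what makes a 12-tool audit fit a fixed serverless execution limit.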
4. Product-Led Growth Works When the Product Reveals Truth
We did not gate the free tools. We did not require email addresses. We did not run a drip campaign. Users arrived, discovered a genuine problem with their website, and decided on their own that they needed a deeper audit. Then they requested a token.
This is product-led growth at its simplest: build something that shows people a truth they did not know, then offer a way to act on it.
| Growth Metric | What We Observed |
|---|---|
| Acquisition channel | Organic search (zero paid spend) |
| Free tool to Radar conversion | 50 token requests from organic traffic |
| Engagement depth | 10+ minute average sessions, all 12 tools run |
| Retention signal | Return visits to compare audit results |
| Churn behavior | Expired users join waitlist, do not leave |
| Sales involvement | None. Users self-serve entirely. |
What This Series Covers
This is Part 1 of The AI Visibility Stack, a five-part series documenting how we built an AI visibility platform from scratch. Each post builds on the last, from user validation to founder story to product deep-dives.
Parts 4 and 5 will publish through April 2026.
Is AI making things up about your brand?
Radar scans 4 LLMs in 60 seconds, flags hallucinations, finds missing citations, and gives you a ranked fix list. The same 12 tools we used to fix our own AI visibility.
How to Start Your Own Audit
You can replicate what these 50 users did in under five minutes.
Step 1: Run the Free Tools
Go to pixelmojo.io/tools and run any of the 9 free tools. No signup, no email, no credit card. Start with the AI Crawl Checker to see which AI bots can reach your site. If your site is blocked, nothing else matters.
Step 2: Check Your AI Readiness Score
The AI Readiness Score combines five dimensions into a single 0-100 number. This is your baseline. Write it down. You will want to compare it after making changes.
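As a mental model, a composite score like this is a weighted blend of the five dimension scores. The equal weights below are an illustrative assumption, not Radar's documented weighting.

```python
def ai_readiness_score(dimensions: dict[str, float]) -> float:
    """Blend five 0-100 dimension scores into one 0-100 number.
    Equal weights are assumed for illustration only."""
    weights = {
        "crawl_accessibility": 0.2,
        "llms_txt_presence": 0.2,
        "structured_data": 0.2,
        "citation_presence": 0.2,
        "engagement_signals": 0.2,
    }
    return sum(weights[k] * dimensions[k] for k in weights)

# A site with strong technical SEO but weak AI-specific signals:
print(ai_readiness_score({
    "crawl_accessibility": 90,
    "llms_txt_presence": 0,
    "structured_data": 40,
    "citation_presence": 10,
    "engagement_signals": 30,
}))
```

This is how a site can pass a traditional SEO audit and still land in the 30s here: four of the five dimensions measure things SEO tools never look at.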
Step 3: Run the Full Radar Audit
Request a token at pixelmojo.io/platform to run all 12 tools simultaneously. The cross-tool insights will show you conflicts and opportunities that individual tools structurally cannot surface. The prioritized action items tell you exactly what to fix, sorted by impact and effort.
Step 4: Implement and Re-audit
Make the changes Radar recommends. Then run the audit again. Compare scores. See what moved. This is the audit loop that keeps our 50 beta users coming back.
Ready to see what AI search engines see when they look at your website?
- Try the free tools: 9 AI visibility checks, no signup required
- Get Radar access: All 12 tools in parallel with cross-tool insights
- Read the full GEO playbook: The strategy behind the tools
- Contact us: Discuss an AI visibility strategy for your team
AI Visibility Tools: Questions Users Ask
Common questions about this topic, answered.
