
What the Data Says
The Radar Brand Index tracks 50 recognizable brands across SaaS, E-commerce, Fintech, Healthcare, and Media. Each brand is audited weekly with Radar's 12-tool methodology, and the full results are public and free to browse.
The headline finding is uncomfortable. 56% of audited brands scored D or F. The average score across the 47 brands that returned a meaningful audit is 54.6 out of 100. The median is 55. Three additional brands returned no useful data at all because of paywalls or anti-bot defenses, and we mark those Audit Blocked instead of forcing a zero score.
Some of the brands in this lower half are names you would expect to be on top of AI search. Patagonia is one of the most beloved content brands of the last decade. Substack hosts thousands of writers. Stratechery is the gold standard for technology newsletters. The New York Times is The New York Times. None of them broke a C grade in our index.
This piece walks through the findings. It is not a takedown of the brands that scored low. It is a snapshot of what AI search engines actually see when they try to read these sites in May 2026. Most teams have not adapted to the technical demands of AI ingestion. The few that have show what is possible, often without the budget or recognition that would suggest they should be ahead.
The full index is browsable at /labs/brand-index. This post explains what the index reveals.
How the Index Works
The index is built on Radar's 12-tool audit pipeline. Each brand is scored weekly using six content-surface tools that combine into a unified AI Readiness Score: AI bot crawlability, robots.txt configuration, llms.txt implementation, AI readiness composite, schema markup quality, and answer engine optimization signals. The full scoring formula is published at /platform/methodology. Weights, dimensions, and live-query mechanics are all transparent.
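The composite described above can be sketched as a weighted average of the six tool scores. The weights below are hypothetical, chosen only to illustrate the shape of the calculation; the real formula is the one published at /platform/methodology.

```python
# Illustrative sketch of a unified AI Readiness Score as a weighted
# average of six content-surface tool scores (each 0-100). The tool
# names come from this post; the weights are assumptions, not the
# published formula.
TOOL_WEIGHTS = {
    "crawlability": 0.25,   # AI bot crawlability
    "robots_txt": 0.15,     # robots.txt configuration
    "llms_txt": 0.10,       # llms.txt implementation
    "ai_readiness": 0.20,   # AI readiness composite
    "schema": 0.15,         # schema markup quality
    "aeo_signals": 0.15,    # answer engine optimization signals
}

def unified_score(tool_scores: dict[str, float]) -> float:
    """Weighted average of per-tool scores, rounded to one decimal."""
    total = sum(TOOL_WEIGHTS[t] * tool_scores[t] for t in TOOL_WEIGHTS)
    return round(total, 1)

print(unified_score({
    "crawlability": 90, "robots_txt": 100, "llms_txt": 50,
    "ai_readiness": 85, "schema": 80, "aeo_signals": 80,
}))
```

The point of the weighted form is that a single weak surface (say, a missing llms.txt) lowers the composite without zeroing it out.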
The catalog is curated, not algorithmic. Ten brands per category were chosen for global recognition, public AI/SEO presence, and a mix of leaders and challengers within each vertical. The list is not a ranking of importance. It is a sample of names readers will know.
When a brand returns no useful audit data because of a paywall, a login wall, or aggressive anti-bot defenses, we mark it Audit Blocked rather than scoring zero. This is a real and important signal. A score of zero implies the content is bad. Audit Blocked signals the content is structurally inaccessible to AI, regardless of quality. Three brands in the index (Chime, Quartz, The Information) are in this state. We will return to them later.
The Top of the Index
Six brands earned an A grade. Three are SaaS companies; one is healthcare, one is fintech, and one is consumer e-commerce. The pattern is striking, and the lesson is structural.
| Rank | Brand | Category | Score | Grade |
|---|---|---|---|---|
| 1 | BetterUp | Healthcare | 88/100 | A |
| 1 | Stripe | Fintech | 88/100 | A |
| 3 | Webflow | SaaS | 87/100 | A |
| 3 | Casper | E-commerce | 87/100 | A |
| 5 | Calendly | SaaS | 86/100 | A |
| 6 | Asana | SaaS | 85/100 | A |
Stripe and BetterUp tie at 88. Stripe is unsurprising. The company has invested heavily in technical SEO and developer documentation for years, and most of the AI visibility infrastructure (clean robots.txt, well-formed schema, rich llms.txt-equivalent documentation) was already in place before AI search was a category. BetterUp is more interesting. The coaching platform has built a content library that is technically accessible by AI in ways that most healthcare brands have not.
Webflow and Casper share third at 87. Webflow is again expected. The platform is built by people who understand machine-readable structure. Casper is the surprise. A mattress DTC company outscoring most healthcare and media brands is not what category-bias predicts. Casper has historically invested in long-form content marketing (the now-defunct Van Winkle's, sleep journalism), and the technical residue of that investment shows.
Calendly and Asana close out the A grades at 86 and 85 respectively. Both are pure-play SaaS companies, both have engineering-led product teams, both have well-structured docs and marketing sites.
The pattern: technical investment in machine readability beats brand recognition. Companies that built their websites for developers and search engines are the same companies that AI search can read.
The Surprising Bottom
The bottom of the index is the part of the data most people will find uncomfortable. These are not unknown brands or failing companies. They are recognized names that struggle with the specific technical demands of AI search.
Patagonia scored 35 out of 100, an F grade. Patagonia is one of the most respected content brands in retail. Their long-form journalism, environmental reporting, and product storytelling won industry awards for years. The score reflects technical infrastructure, not creative quality: AI crawlers struggle with the JavaScript-heavy site structure, and the schema markup is incomplete. The story is the same for Warby Parker (32, F), another DTC darling whose creative voice does not translate into AI-readable signals.
Substack scored 42, a D. The platform that hosts thousands of independent writers has not optimized its own brand site for AI visibility. The publication infrastructure for Substack writers is one thing. Substack's own brand site is another, and it shows an irony that will not be lost on any of their writers.
Stratechery scored 39, an F. Ben Thompson's newsletter is one of the most-cited tech publications in the industry, frequently sourced by AI assistants when answering questions about strategy and technology. The brand site itself ranks lower than its content suggests because of paywall mechanics that block much of the writing from being crawled directly.
The New York Times scored 41, also a D. The flagship of American journalism. Same pattern: paywalls, JavaScript rendering, and a schema implementation that has not kept up with AI search expectations.
The most striking result is in healthcare. Hims and Hers, the DTC men's and women's health brands, both scored 4 out of 100. Both companies are publicly traded and have significant brand recognition. The score reflects an aggressive sign-up wall, JavaScript-only rendering of all clinical content, and a robots.txt configuration that blocks most non-browser user agents. The site looks healthy to a human visitor. To an AI crawler, it returns essentially nothing.
This is not a moral judgment of these brands. The decisions that produced these scores (paywalls, sign-up walls, anti-bot defenses) were rational responses to other business problems (content theft, scraping, conversion optimization). They are also the decisions that make a brand invisible to AI search.
Healthcare Is the Worst Category
The category averages tell a story.
| Category | Avg Score | Best Brand | Worst Brand |
|---|---|---|---|
| SaaS | 66.9 | Webflow (87) | Loom (lowest among SaaS) |
| E-commerce | 61.2 | Casper (87) | Warby Parker (32) |
| Fintech | 57.8 | Stripe (88) | Revolut (28) |
| Media | 47.3 | Wired (highest scored) | Stratechery (39) |
| Healthcare | 38.7 | BetterUp (88) | Hims and Hers (4) |
SaaS leads at 66.9 average. Engineering-led product teams build technically readable sites by default. The same disciplines that produced clean APIs and developer documentation now produce clean schema markup and robots.txt configurations.
E-commerce comes in second at 61.2. DTC brands have spent the last decade investing in performance marketing and SEO, and the residue helps with AI visibility too. Casper, Allbirds, and Glossier all scored above the index average.
Fintech sits at 57.8, helped enormously by Stripe at 88 and dragged down by Revolut at 28. The variance is large within the category. Modern banking interfaces tend to be JavaScript-heavy and gated, both of which work against AI crawlers.
Media at 47.3 is the surprise. The category that should be most content-rich is, on average, second worst. The reason is paywalls. Most major media brands have spent the last five years tightening paywalls in response to ad revenue collapse. The strategy was rational for direct revenue but creates a structural barrier to AI citation. AI tools that cannot read the article cannot cite the article.
Healthcare at 38.7 is the worst category by a wide margin. The 8.6-point gap between Healthcare and Media is nearly as large as the entire 9.1-point spread from SaaS down to Fintech. Healthcare DTC brands gate clinical content behind sign-ups, rely heavily on JavaScript for product pages, and frequently use anti-bot defenses to prevent automated abuse. Each decision is defensible in isolation. The combined effect is a category that AI search engines struggle to read at all.
Three Brands Are Completely Invisible to AI
Three brands in the index returned no meaningful audit data. Chime in fintech, Quartz and The Information in media. We mark them Audit Blocked rather than scoring them zero, because zero implies bad content. These brands have content. AI crawlers just cannot read it.
The Information is the clearest case. The publication is a paid subscription with a hard paywall. AI crawlers landing on a story page get a login prompt and nothing else. The brand is technically invisible to AI search by design. Subscribers see the writing. AI assistants do not.
Quartz has a similar profile, though softer. JavaScript-heavy rendering combined with paywall mechanics means that AI crawlers see significantly less than human visitors. The brand has high awareness in business journalism circles. Its AI search footprint is near zero.
Chime is the most surprising of the three. The challenger bank has tens of millions of users and a substantial public presence. The audit failure traces to anti-bot defenses, likely Cloudflare-level rules that block non-browser user agents from accessing content pages. The decision is reasonable from a security standpoint. The consequence is that AI assistants cannot reach the content that would let them describe Chime accurately to a user asking about challenger banks.
These are not failures of the audit. They are findings. The Audit Blocked treatment in the public index page explicitly explains that these brands are functionally invisible to AI for structural reasons. The index distinguishes structural invisibility from poor optimization, and that distinction is important. A brand that scores 30 has a path to 70 with technical work. A brand that is Audit Blocked has to make a strategic decision about whether to expose content to AI crawlers at all.
What This Means for Your Brand
Three takeaways from the data, in order of importance.
One. Brand recognition does not predict AI readiness. The top of the index is dominated by SaaS companies and a handful of e-commerce challengers. The bottom is full of names that have spent years building consumer trust through content. The technical work that makes a brand readable to AI search engines is different from the creative and editorial work that makes a brand resonate with humans. A team can be world-class at one and below average at the other. Most are.
Two. The structural decisions that drive low scores are usually rational responses to other problems. Paywalls protect revenue. Sign-up walls protect conversion data. Anti-bot defenses protect against scraping abuse. JavaScript rendering enables modern UX. Each decision has a defensible business case. The combined effect is invisibility to AI search. This is a strategic choice now, not an oversight. Brands have to decide which audience matters more: human visitors who can navigate paywalls and JavaScript, or AI crawlers that cannot.
Three. The path from low to high score is shorter than people think. The technical fixes for AI visibility (allowing AI bots in robots.txt, adding llms.txt, implementing schema markup, server-rendering critical pages) are well documented. None of them are research projects. A brand at 30 has a path to 70 in months, not years, if the team treats AI visibility as a technical priority. The brands at the top of the index do not have proprietary technology. They have technical hygiene that the bottom of the index lacks.
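The first fix on that list, allowing AI bots in robots.txt, can be verified with nothing but the Python standard library. The robots.txt below is a hypothetical example, and "GPTBot" is OpenAI's published crawler token; swap in whichever bot matters to you.

```python
# Check whether a robots.txt policy admits a given AI crawler, using
# the stdlib robots.txt parser. The policy string here is an example,
# not any real brand's configuration.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# GPTBot has its own record allowing everything, so even /private/ is open
# to it; other bots fall through to the "*" record and are blocked there.
print(rp.can_fetch("GPTBot", "https://example.com/private/page"))
print(rp.can_fetch("SomeOtherBot", "https://example.com/private/page"))
```

Running the same check against a live site (via `RobotFileParser.set_url` and `read`) is a quick way to confirm whether an AI crawler can reach the pages you care about.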
The Radar Brand Index will keep running weekly. Brands that improve their technical readability will move up. Brands that double down on paywalls and anti-bot defenses will stay where they are. The index will be the public record of which choice each brand made.
Where to Read More
The full index is at /labs/brand-index. Methodology is at /platform/methodology. The broader 82-domain benchmark behind the State of AI Visibility 2026 report is at /labs/state-of-ai-visibility-2026.
Radar Brand Index: Questions Readers Ask
Common questions about this topic, answered.
