How B2B SaaS companies lose revenue in AI search
Pipeline contribution × AI-traffic share × (1 − citation rate) equals the share of pipeline now leaking through ChatGPT, Claude, and Perplexity. Here's the math, with the numbers operators most often miss.
Where does B2B SaaS pipeline actually leak in 2026?
Three places that didn't exist five years ago.
The first — prospects asking ChatGPT or Claude before they ask Google. The discovery query that used to be "best customer-success software for 50-person SaaS" on Google is now the same query inside an AI assistant. The assistant returns three named tools with one-line descriptions. The buyer clicks through to the named tools and skips the long tail of options that would have appeared on Google.
The second — evaluation queries inside an LLM. Once a buyer has a shortlist, they ask the model "how does Brand A compare to Brand B for [specific use case]?" The answer the model returns is built from training data plus retrieval. If your brand isn't in either, you don't show up in the comparison even when the buyer named you.
The third — integration and capability queries. Buyers ask the assistants "does Brand A support [specific integration]?" If the answer is wrong — and it's frequently wrong because the engine extracted from outdated content — the buyer disqualifies your brand on a false premise. You don't get a chance to correct the record because you don't see the conversation.
Each leak compounds. A brand that loses on discovery loses on evaluation, because the brand wasn't on the shortlist. A brand that loses on evaluation loses on integration, because the buyer didn't reach the capability question. The leaks stack.
What's the actual revenue math?
Three variables, multiplied.
Variable 1 — pipeline contribution from search. What percent of your pipeline is sourced through search-shaped intent? For most B2B SaaS companies past PMF, the number sits between 30% and 60%. Search-shaped intent includes Google organic, Google paid, content-led inbound, and now AI-search. If your pipeline is mostly outbound or mostly product-led, the search dependency is lower; if you're a content-led inbound machine, the dependency is higher.
Variable 2 — AI-traffic share of search-shaped intent. What percent of search-shaped queries in your category now route through an AI assistant rather than Google? This number is volatile and rising. Across the categories we audit at Doxia Axis through Q1 2026, the AI-share sits between 8% and 22%, depending on vertical. B2B SaaS verticals where the buyer is technical (DevTools, infra, observability) cluster at the top of that range. Verticals where the buyer is non-technical (HR tech, finance ops, sales enablement) cluster at the bottom.
Variable 3 — citation rate. When the AI assistant answers a category query, what's the probability your brand is cited in the answer? This is the lever the audit is built to measure and the engagement is built to move. Brands with no schema, no llms.txt, and no third-party citation density typically score near 0% on this variable. Brands with the canonical schema set deployed plus consistent third-party density score 30% to 60% in their category answers.
The product — pipeline contribution × AI-traffic share × (1 − citation rate) — is the share of pipeline currently leaking through the AI surface.
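As a minimal Python sketch (the function and argument names are ours for illustration, not anything Doxia Axis ships):

```python
def pipeline_leak(pipeline_contribution: float,
                  ai_share: float,
                  citation_rate: float) -> float:
    """Share of total pipeline currently leaking through the AI surface."""
    # pipeline_contribution: fraction of pipeline sourced via search-shaped intent
    # ai_share: fraction of that search intent now routed through AI assistants
    # citation_rate: probability your brand is cited in a category answer
    return pipeline_contribution * ai_share * (1 - citation_rate)
```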
So what does the math produce?
For a representative B2B SaaS at $10M ARR with 45% pipeline from search, 15% AI-share inside that, and a 5% citation rate (typical for a brand that hasn't done GEO work):
- 45% × 15% × (1 − 5%) ≈ 6.4% of pipeline currently leaking through the AI surface
At a 25% pipeline-to-revenue conversion rate, that's roughly 1.6% of revenue at risk per year, compounding as AI-share rises. For a $10M ARR brand, that's $160K annualized today; for a $30M ARR brand, $480K. The numbers move with AI-share growth — most categories are seeing 1.5x to 2.5x AI-share growth year-over-year through 2026.
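A rough projection of how that leak compounds, reusing the pipeline_leak sketch above and assuming the 1.5x to 2.5x year-over-year AI-share range with the other variables held flat. Illustrative, not a forecast:

```python
ai_share_today = 0.15

for year in (1, 2, 3):
    # AI-share can't exceed 100% of search-shaped intent.
    share_low = min(ai_share_today * 1.5 ** year, 1.0)
    share_high = min(ai_share_today * 2.5 ** year, 1.0)
    leak_low = pipeline_leak(0.45, share_low, 0.05)
    leak_high = pipeline_leak(0.45, share_high, 0.05)
    print(f"Year {year}: leak between {leak_low:.1%} and {leak_high:.1%} of pipeline")
```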
Closing the citation rate from 5% to 35% (the realistic 90-day target after a Doxia Axis sprint) reduces the leak to roughly 4.4% of pipeline. That's a roughly 30% relative recovery of the leaking portion. At the $10M ARR brand, that's roughly $50K annualized recovered at today's AI-share, plus the compounding benefit as AI-share continues to grow.
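The representative figures above, run through the same sketch (all inputs are the illustrative numbers from this section, not measured values):

```python
arr = 10_000_000            # $10M ARR brand
pipeline_to_revenue = 0.25  # pipeline-to-revenue conversion rate

leak_now = pipeline_leak(0.45, 0.15, 0.05)    # ~6.4% of pipeline
leak_after = pipeline_leak(0.45, 0.15, 0.35)  # ~4.4% of pipeline after a sprint

revenue_at_risk = leak_now * pipeline_to_revenue * arr              # ~$160K / yr
recovered = (leak_now - leak_after) * pipeline_to_revenue * arr     # ~$51K / yr at today's AI-share
```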
The numbers are illustrative. The shape is real. Operators who haven't run the math are surprised by the magnitude; operators who have run it are usually understating their AI-share variable because they're using last year's measurement.
Where do the citation-rate losses actually happen?
Five named places, in order of magnitude.
Loss 1 — your brand isn't cited at all. The engine answers the category query without naming your brand. This is the biggest loss for most B2B SaaS, because the category list the engine returns is short — three to five named brands per query. If you're not in those three to five, the buyer never sees you.
Loss 2 — your brand is cited but the description is wrong. The engine names your brand but describes it as serving a different ICP, with outdated pricing, or with a missing capability you actually have. The buyer reads the description and disqualifies. You don't see the disqualification because the conversation happened inside the assistant.
Loss 3 — your brand is cited but ranked third or fourth. The engine names you but Brand A and Brand B get top billing. In conversational interfaces, top-billed brands get clicked through at 3-to-5x the rate of lower-ranked brands. Citation alone doesn't translate to clicks if your position is bottom of the list.
Loss 4 — your brand is cited but a competitor's review is quoted. The engine quotes a G2 review for Competitor A inside an answer that mentions your brand. The quoted review carries social proof; your brand is named without a parallel quote because you don't have G2 reviews exposed via Review schema with attribution. The asymmetric quoting tilts the buyer toward the competitor.
Loss 5 — your brand is cited but the integration claim is wrong. Buyer asks "does Brand A integrate with [tool X]?" The engine answers "unclear" or "not natively" even when you do support the integration, because your integrations page isn't structured in a way the engine can extract. The buyer takes the "unclear" answer at face value.
Each loss has a specific schema or content fix. The full canon lives at what schema matters for AI visibility. The diagnostic that quantifies each loss for your specific brand lives at /audit.
What does the operator do with this read?
Three actions in sequence.
Action 1 — measure your AI-share variable. Run the same 20 to 30 category queries across ChatGPT, Claude, Perplexity, Gemini, Copilot, and Grok. Record what gets cited verbatim. Whichever brand appears most often is your category leader on AI-search. Whatever rate your brand appears at is your current citation rate. The exercise takes one operator-afternoon.
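A minimal sketch of the tallying half of that exercise, assuming you record which brands each assistant cites per query. The record structure, queries, and brand names are placeholders, not a prescribed format:

```python
from collections import Counter

# One record per (query, engine) run, listing the brands cited verbatim in the answer.
# The entries below are placeholders; substitute your own recorded runs.
runs = [
    {"query": "best customer-success software for 50-person SaaS",
     "engine": "ChatGPT", "cited": ["BrandA", "BrandB", "YourBrand"]},
    {"query": "best customer-success software for 50-person SaaS",
     "engine": "Perplexity", "cited": ["BrandA", "BrandC"]},
    # ... 20 to 30 queries x 6 engines
]

appearances = Counter(brand for run in runs for brand in set(run["cited"]))
total_runs = len(runs)

leader, leader_count = appearances.most_common(1)[0]
your_citation_rate = appearances["YourBrand"] / total_runs

print(f"Category leader on AI-search: {leader} ({leader_count}/{total_runs} runs)")
print(f"Your current citation rate: {your_citation_rate:.0%}")
```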
Action 2 — calibrate your pipeline-contribution variable. Pull last quarter's closed-won analysis. Filter to deals where the original source attribution is search, content, or organic-inbound. That percentage of pipeline is the variable you're modeling against. If it's above 30%, AI-search exposure matters now; if it's below 15%, it matters less, but it's the variable that grows fastest.
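A sketch of that calibration, assuming a closed-won export with original-source and amount columns. The filename, column names, and source labels are assumptions about your CRM export, not a standard:

```python
import pandas as pd

deals = pd.read_csv("closed_won_last_quarter.csv")  # hypothetical CRM export

# Sources that count as search-shaped intent for this model.
search_shaped = {"organic_search", "paid_search", "content", "organic_inbound"}

search_pipeline = deals.loc[deals["original_source"].isin(search_shaped), "amount"].sum()
total_pipeline = deals["amount"].sum()

pipeline_contribution = search_pipeline / total_pipeline
print(f"Pipeline contribution from search-shaped intent: {pipeline_contribution:.0%}")
```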
Action 3 — request the audit. /audit gives you a 14-page dossier with the three variables measured against your specific brand, the citation gaps named, and the revenue at risk quantified. Five business days. The dossier is the artifact you forward to your CFO.
A worked example
A mid-market B2B SaaS at $15M ARR in the developer-tools vertical:
- Pipeline contribution from search: 52% (high — content-led inbound is the dominant channel)
- AI-share inside that: 19% (high — DevTools buyers are technical, AI-share rises faster)
- Current citation rate: 8% (low — the brand has Org schema only, no FAQPage, no Article schema with citations)
Math: 52% × 19% × (1 − 8%) ≈ 9.1% of pipeline leaking through AI. At 28% pipeline-to-revenue conversion, that's $382K annualized leaking today. Closing the citation rate to 40% via a Doxia Axis sprint recovers roughly $130K annualized at today's AI-share.
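The same arithmetic, reusing the pipeline_leak sketch from earlier with the figures listed above:

```python
arr = 15_000_000            # $15M ARR DevTools brand
pipeline_to_revenue = 0.28

leak_now = pipeline_leak(0.52, 0.19, 0.08)    # ~9.1% of pipeline
leak_after = pipeline_leak(0.52, 0.19, 0.40)  # ~5.9% of pipeline after the sprint

leaking_today = leak_now * pipeline_to_revenue * arr               # ~$382K / yr
recovered = (leak_now - leak_after) * pipeline_to_revenue * arr    # ~$133K / yr at today's AI-share
```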
The number is the number. The dossier breaks it into specific findings. The sprint ships against the highest-impact ones first.
Where to go from here
- What is GEO, exactly? /answers/what-is-geo.
- What does the audit produce? /answers/what-is-an-ai-visibility-audit.
- The window matters: the AI indexing window 2026–2027.
- Or just request the audit: /audit. The math is yours; the dossier is free.