DOXIA AXIS
Revenue Modeling · Pages 06–07 · 29 Apr 2026 · 9 min read

Sample Revenue Gap Analysis

Translating AI-visibility deficits into a twelve-month revenue-at-risk number. The trajectory and revenue-quantified-findings sections of the dossier, with the math the audit uses to tag every finding in dollars before sequencing the fix.

Why does an AI visibility audit show dollars at all?

Because "your schema coverage is at 17%" is a metric. "Closing your schema gap is worth $211K in attributable ARR over twelve months" is a decision.

The audit's job is to produce decisions, not metrics. So every finding gets tagged with a dollar number before the dossier ships. The numbers are not precise. They are sequenced. The point of revenue tagging is to tell the operator which fix to ship first, which to ship second, and which to skip if the budget is tight.

What follows is page 14 (the trajectory chart) and page 19 (the revenue-quantified findings table) from a real Tier 0 audit, anonymized.

Page 14 — the 90-day trajectory

The chart at the top of page 14 looks like this when reduced to numbers:

| Day | Citations / month | Milestone |
|---|---|---|
| 0 | 0 | Sprint 1 kickoff |
| 14 | 4 | Sprint 1 ships |
| 30 | 11 | Sprint 2 ships |
| 45 | 18 | (mid-Sprint 3) |
| 60 | 27 | Sprint 3 ships |
| 75 | 36 | (compounding) |
| 90 | 44 | Target hit |

The audited firm starts at zero monthly AI-engine citations. The model projects 44 citations per month at day 90. Confidence band: ±18% at day 90. The chart is not a forecast — it's a model output, conditional on the engagement shipping the recommended sprints on the recommended cadence.

Three things about the curve.

The kink at day 30 is not aesthetic. It's the moment Sprint 1's robots.txt unblock plus the schema deployments hit re-crawl windows for the four engines that were previously blocked. GPTBot re-crawls on a 7-to-14-day cadence after an unblock; ClaudeBot is similar. By day 30, both engines have indexed the unblocked content, the schema gives them anchor points, and citations start surfacing in the wild. The kink is the unblock fix compounding.

The slope steepens between day 60 and day 90. That's Sprint 3's authority sweep landing. Fourteen citation-shaped backlinks plus the Person and Product schema layers add ranked entity signals. The engines weight new authority signals at the next training-data refresh window. The slope reflects the engines pulling the firm into more category answers as the entity recognition compounds.

The 44-citation target is the median outcome. The ±18% band is roughly one standard deviation; at the 5th and 95th percentiles it widens to around 32 and 56 citations per month. The model is conservative on purpose. Operators get burned by optimistic forecasts, so the dossier underwrites with a stated band rather than a single number.
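The percentile arithmetic can be sketched in a few lines. This assumes the ±18% band is one standard deviation of a normal outcome distribution; the dossier does not state the distribution, so treat the exact bounds as illustrative:

```python
from statistics import NormalDist

median = 44            # day-90 median citations/month from the trajectory model
sigma = 0.18 * median  # reading the ±18% band as one standard deviation (assumption)

z = NormalDist().inv_cdf(0.95)  # ≈ 1.645 for the 5th/95th percentiles
p5 = median - z * sigma
p95 = median + z * sigma
print(round(p5), round(p95))    # roughly 31 and 57, in line with the stated range
```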

What does 44 citations per month actually mean?

In commercial terms — for a service-business operator with measurable inquiry-to-engagement rates — the citations translate roughly:

  • 44 citations per month at average click-through and inquiry conversion typical for the vertical
  • becomes ~9 to 14 qualified inbound inquiries per month sourced specifically from AI engine surfaces (not the same prospects who would have found the firm via Google)
  • becomes ~3 to 5 new engagements per month at typical service-firm close rates
  • multiplied by the average engagement value the firm reported during intake, which closes out the revenue picture
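The funnel above can be sketched with mid-band placeholder rates. The inquiry rates, close rate, and engagement value below are hypothetical stand-ins, since the dossier sources the real figures from the firm's intake:

```python
citations_per_month = 44

# All rates below are hypothetical placeholders, not the firm's intake numbers.
inquiries_per_citation = (0.20, 0.32)  # click-through × inquiry conversion, low/high
close_rate = 0.35                      # assumed service-firm close rate
avg_engagement_value = 12_000          # assumed average engagement value, dollars

inquiries = [citations_per_month * r for r in inquiries_per_citation]
engagements = [q * close_rate for q in inquiries]
annual_revenue = [e * 12 * avg_engagement_value for e in engagements]

print([round(q) for q in inquiries])    # ~9 to 14 qualified inquiries/month
print([round(e) for e in engagements])  # ~3 to 5 new engagements/month
```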

The translation isn't included on page 14 (it's on page 19). But the trajectory chart is what makes the translation legible. Operators read the curve and ask "so what does that mean for our pipeline?" — which is the question the next page answers.

Page 19 — six findings, $657K

| ID | Finding | Impact (12-mo ARR) | Confidence | Owner |
|---|---|---|---|---|
| .01 | Lift robots.txt block on GPTBot + Claude-Web | $148K | HIGH | Ops |
| .02 | Publish llms.txt with content hierarchy | $62K | HIGH | Dev |
| .03 | Close Organization + FAQPage schema gap | $211K | HIGH | Content |
| .04 | Create Product + Person schema (currently 0%) | $94K | MED | Content |
| .05 | Rewrite 12 underperforming articles for AI excerpt depth | $88K | MED | Content |
| .06 | Category authority sweep · 14 citation backlinks | $54K | LOW | PR |
| | Total · 12-month attributable | $657K | | |

Six rows. Each one a finding. Each finding tagged with three things: estimated ARR impact, confidence band, and the named owner who has to ship it.

How does each row get its dollar tag?

This is the part most audits skip. Here is exactly how the math runs in this dossier.
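Every row below runs the same shape of calculation: a modeled citation lift, annualized, times a per-citation dollar value, times any conversion multiplier. A minimal sketch of that shape, with hypothetical inputs (in the real dossier, the per-citation value is derived from the firm's intake close rate and engagement value):

```python
def finding_arr(incremental_citations_per_month: float,
                value_per_citation_month: float,
                conversion_multiplier: float = 1.0) -> float:
    """Annualize a finding's modeled citation lift into 12-month ARR.

    value_per_citation_month is the dollar value one citation per month
    generates through the funnel over a year (hypothetical input here).
    """
    return (incremental_citations_per_month
            * 12
            * value_per_citation_month
            * conversion_multiplier)

# Illustrative only: a mid-band lift of 1.75 citations/month, at an assumed
# $8K per citation-month, with a 1.3x entity-anchor multiplier.
print(round(finding_arr(1.75, 8_000, 1.3)))  # prints 218400
```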

.01 — Lift robots.txt block — $148K

The firm's category competitors (per page 5) capture an average of 6.8 citations per ten category answers across the engine roster. The audited firm captures 0. The unblock alone, with no further work, surfaces an estimated 4 to 7 citations within thirty days. At the firm's reported close rate and engagement value, those citations translate to ~3 net-new engagements per quarter. Annualized: $148K. Confidence is HIGH because the unblock is mechanical and the citation-attribution model is most reliable for the first 4 to 7 citations.
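As a back-of-envelope check on this row: the engagement value is not disclosed in the sample, so the figure below is back-solved from the published $148K, not taken from intake:

```python
engagements_per_quarter = 3                   # net-new engagements from the unblock
annual_engagements = engagements_per_quarter * 4
implied_value = 148_000 / annual_engagements  # back-solved, not a disclosed figure
print(annual_engagements, round(implied_value))  # 12 engagements at ~$12,333 each
```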

.02 — Publish llms.txt — $62K

Empirical category benchmark: sites with a well-formed llms.txt receive 18 to 24% more citation density per page on engines that read llms.txt (currently a subset, but a growing one). The lift is smaller than .01 because llms.txt only affects the engines that read it and only for the pages it ranks. Modeled across the firm's content footprint: 0.4 to 0.6 net citations per month incremental, at the firm's reported close rate, equals $62K annualized. Confidence is HIGH because llms.txt deployment is mechanical and the empirical lift band is narrow.

.03 — Close Organization + FAQPage schema gap — $211K

The largest single lever in the dossier. FAQPage schema is the most-cited type in the answer engines right now. The audited firm has 8% FAQPage coverage against a category median of 52%. Closing to the median (the 90-day target, not top-decile) projects 1.4 to 2.1 net new citations per month from FAQPage extraction alone. The Organization schema fix gives the engines a stable entity anchor across all those new citations, multiplying the per-citation conversion value by approximately 1.3x. Combined: $211K annualized. Confidence is HIGH because FAQPage citation behavior is well modeled empirically across the engine roster.

.04 — Create Product + Person schema — $94K

The firm currently emits zero Product or Person schema. Going from zero to category-median (60% Product, 50% Person) projects 0.7 to 1.0 net new citations per month, with elevated conversion value because Product and Person schema correlate with purchase-intent and commercial-intent queries. $94K annualized. Confidence is MED because Product and Person schema's citation behavior has wider variance across engines than FAQPage.

.05 — Rewrite 12 underperforming articles — $88K

Twelve specific articles were flagged on page 8 of the dossier as having extraction-poor structure. Rewriting for AI excerpt depth (lead-with-thesis sentences, named-entity density, inline citation, question-shape headings) projects an extraction-rate lift from current 11% to category-median 38% on those twelve articles. Modeled citation lift: 0.6 to 0.9 net new per month. $88K annualized. Confidence is MED because article rewrite quality varies with operator commitment.

.06 — Category authority sweep — $54K

Fourteen citation-shaped backlinks targeted to specific authority surfaces (industry publications, trade associations, third-party review aggregators). Each backlink contributes a small, modeled lift to the entity's authority signal. Cumulative effect: 0.3 to 0.5 net new citations per month within the 12-month window, with longer compounding tails. $54K annualized. Confidence is LOW because backlink acquisition has high variance across operator effort and target willingness.

Why is the total $657K, not $1M?

Because the model is conservative.

If we summed the top-of-band estimates on every row, the total would be closer to $1.05M. We don't ship the top-of-band number because operators get burned by it. The dossier ships the median outcome plus a stated confidence band per row, and the total is the sum of the medians. The ± on the total at the 12-month mark is approximately $190K either side.

We also don't multiply across rows for second-order compounding effects, even though we could. If .01 unblocks the engines and .03 gives them schema to extract from, the joint lift is empirically larger than the sum of the two independent lifts. We model the independent lifts and skip the joint multiplier in the dossier headline number. Operators who want the joint-multiplier modeling can request it on page 21.

What does the operator do with page 19?

Sequence the work.

Sprint 01 (days 6 to 19) — ship .01, .02, .03. Combined attributable ARR: $421K, all HIGH confidence. The cheapest, fastest, highest-confidence work goes first.

Sprint 02 (days 20 to 33) — ship .04, .05. Combined attributable ARR: $182K, MED confidence. Slightly slower work because content rewrites take operator review.

Sprint 03 (days 34 to 47) — ship .06. Attributable ARR: $54K, LOW confidence. The longest-tail work, lowest predictability, deferred until the higher-confidence revenue is locked.

The total, sequenced this way, hits the 90-day citation target on the page 14 trajectory chart and locks $603K of the $657K under HIGH or MED confidence inside 47 days. The remaining $54K accumulates over the rest of the 12-month window.
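The sequencing arithmetic checks out in a few lines, using the row values from the page 19 table:

```python
# ARR impact and confidence per finding, from the findings table.
findings = {
    ".01": (148_000, "HIGH"), ".02": (62_000, "HIGH"), ".03": (211_000, "HIGH"),
    ".04": (94_000, "MED"),   ".05": (88_000, "MED"),  ".06": (54_000, "LOW"),
}
sprints = {"Sprint 01": [".01", ".02", ".03"],   # days 6-19
           "Sprint 02": [".04", ".05"],          # days 20-33
           "Sprint 03": [".06"]}                 # days 34-47

total = sum(arr for arr, _ in findings.values())
locked = sum(findings[f][0]                      # HIGH + MED rows ship first
             for s in ("Sprint 01", "Sprint 02") for f in sprints[s])
print(total, locked, total - locked)  # 657000 603000 54000
```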

The full sprint plan is the page-32 sample at /case-studies/sample-audit/sample-14-day-ai-sprint-plan.

What the page is not

It's not a guarantee. The dossier is explicit about this. Confidence bands are stated, the model is published in the appendix, and the operator can challenge any line by asking for the source.

It's not a marketing artifact. The numbers are the same numbers we use internally to scope the engagement. If we shipped optimistic numbers to win the work, we'd miss them on delivery, and the case-study line would not exist.

It's not the only revenue model. Service businesses, e-commerce, B2B SaaS, and publishers each get a different attribution model. The compliance SaaS in this sample uses model v3. A wedding venue in the Hudson Valley uses a different model: booking value × seasonal mix × AI-traffic share. A boutique inn in coastal South Carolina uses RevPAR × OTA-displacement × concierge-deflection. Same discipline. Different math.

Where to go from here