When should I do an AI visibility audit?
Five trigger conditions that mean now. Three conditions that mean wait. The 2026–2027 indexing window matters because foundation models train on data frozen at a cutoff: whoever gets cited inside the window owns the canonical answer for years.
Want the short version?
Now, if any of these are true:
- You crossed product-market fit in the last 6 to 18 months and have repeatable inbound
- You haven't measured citation share-of-voice across the six AI engines (or you don't know the number)
- You're in a category where ChatGPT, Claude, or Perplexity are obvious search surfaces — B2B SaaS, professional services, hospitality, e-commerce
- A direct competitor recently launched a content or schema push you can see in their site footer
- You're entering the 12-month window before a major foundation-model retraining cycle (true for most of 2026 and 2027)
Wait, if any of these are true:
- You're pre-PMF and still hunting for the wedge
- Your data substrate isn't accessible (inputs live in PDFs, on paper, in someone's head)
- You're inside an active 6+ week ship cycle and can't carve out 90 minutes of operator time
Most operators reading this fall into the now bucket. The five trigger conditions above are reasons to pull forward, not push back.
Why does timing matter for AI visibility?
Because the foundation models freeze.
Every major LLM release — GPT-5, Claude 4, Gemini 2 — has a training-data cutoff. Content indexed before the cutoff becomes part of the model's baseline knowledge. Content indexed after the cutoff is invisible until the next retraining cycle, which runs on a 12-to-18-month cadence.
What this means in practice — whoever the engines cite as a category answer in 2026 likely keeps that position through 2028 or longer. The brands ranked first in ChatGPT's category answers for "best CRM for SaaS" or "best estate planning attorney Charlotte" aren't ranked first because their content is best. They're ranked first because their content was indexed at the right moment in the prior training window.
The full mechanism lives at the AI indexing window 2026–2027. The takeaway — citation patterns calcify across model retraining cycles. Moving in the open window is meaningfully cheaper than moving after the window closes.
Trigger condition 1 — past PMF, repeatable inbound
You can name your customer profile in one sentence. You have at least 30 paying customers in roughly the same shape. The acquisition channel that produced them is repeatable enough that you'd run it again. You have measurable revenue.
If all of those are true, AI visibility is the kind of work that compounds your existing motion. The audit names the gaps, the sprint ships the fix, the citation lift compounds for the rest of the year.
If any of them aren't true, fix the substrate first. AI visibility doesn't substitute for product-market fit; it amplifies whatever motion is already working.
Trigger condition 2 — you haven't measured citation share-of-voice
If you don't know what percentage of category questions in your space cite your brand across ChatGPT, Claude, Perplexity, Gemini, Copilot, and Grok — now is the right time to measure.
The cost of measurement is small. The cost of operating without the number is invisible: you can't see the gap, you can't quantify the leak, you can't sequence the fix. Most operators we audit are surprised by their citation share once it's measured. Some are higher than expected. Many are at zero.
The full set of AI-visibility metrics with measurement instructions lives at what metrics matter for AI visibility.
Trigger condition 3 — your category is obviously AI-shaped
Some categories are more exposed to AI search than others. Three patterns:
High-exposure categories. B2B SaaS, professional services (legal, accounting, consulting, medical), hospitality (boutique hotels, wedding venues, restaurants), e-commerce, DevTools, infrastructure. Buyers in these categories increasingly start with an AI assistant before going to Google.
Medium-exposure categories. Industrial / manufacturing B2B, regulated services (financial, healthcare delivery), niche prosumer software. Buyers split between AI and traditional search depending on query specificity.
Low-exposure categories. Local services with strong word-of-mouth (HVAC, plumbing, landscaping in small markets), regulated commodity sectors. Buyers stay primarily on Google or referral channels.
If your category is high-exposure, the audit moves from "interesting" to "overdue." If your category is medium-exposure, the audit is preparatory — you'll move with the category as AI-share grows. If your category is low-exposure, the audit might surface that you're early, and we'll tell you that honestly.
Trigger condition 4 — competitive trigger
If a direct competitor recently launched something visible in their site footer — schema deployment, llms.txt, an Answers content cluster, a structured FAQ on every service page — they're probably in their own AI visibility push. Three to six months from now, the citation patterns will shift.
The asymmetric position is not catching up after they've shipped. The asymmetric position is starting now so you ship in parallel.
Trigger condition 5 — the indexing window
Most of 2026 and 2027 sits inside an active foundation-model training window. ChatGPT's next major retraining cycle is expected in the second half of 2027. Claude and Gemini are on similar cadences.
Content that gets indexed inside the window becomes part of the next-generation model's training corpus. Content that gets indexed after the window closes waits 12 to 18 months for the next training cycle.
The implication — moving now produces citation density that survives the next freeze. Moving in late 2027 produces citation density that doesn't compound until the cycle after.
What about the wait conditions?
Three patterns where the audit is the wrong move right now.
Wait condition 1 — pre-PMF
If you don't yet have repeatable inbound, AI visibility isn't the highest-leverage work. The audit is honest about this — the readiness checklist screens for it. Operators who score below 3 of 8 on that checklist are usually pre-PMF or substrate-broken. Either way, the right move is to fix the substrate first and revisit the audit in two to four weeks.
Wait condition 2 — data substrate broken
If the inputs the AI workflow needs for grounding live in PDFs, in someone's head, or in a system without an export path, the audit will diagnose that gap, but the fix requires substrate work the operator has to lead. Two weeks of operator-led data extraction is cheaper than a stalled engagement. The audit can run in parallel with the substrate work, but the implementation tier (Tier 1, 2, 3) waits.
Wait condition 3 — inside an active ship cycle
The audit takes 90 minutes of operator time over five business days. If the operator is mid-product-launch, mid-fundraise, or mid-acquisition, the 90 minutes is borrowed time and the engagement risks running through a delegate. The audit only works with operator-direct attention. If you can't carve out the time this month, queue it for next month.
What if you're not sure?
Three concrete moves.
Move 1 — score yourself on the readiness checklist. /answers/ai-readiness-checklist-post-pmf. Eight binary questions. If you score 6 to 8 yes, ship the audit this month.
Move 2 — run the citation-share manual check. Pick five queries your prospects would type into ChatGPT. Run them. Record what gets cited. If your brand is cited in 0 of 5, you have your answer.
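The tally for Move 2 can be sketched in a few lines. The queries and cited brands below are illustrative placeholders, not real results: substitute the five queries your prospects would actually type, and record by hand which brands each answer cites.

```python
# Minimal sketch of the manual citation-share check.
# `recorded` holds hand-recorded results: query -> brands cited in the answer.
# All entries below are made-up placeholders for illustration.

def citation_share(results: dict[str, list[str]], brand: str) -> float:
    """Fraction of queries whose recorded citations include `brand`."""
    hits = sum(1 for cited in results.values() if brand in cited)
    return hits / len(results)

recorded = {
    "best CRM for SaaS":               ["CompetitorA", "CompetitorB"],
    "CRM with native billing":         ["CompetitorA"],
    "CRM for a 50-person sales team":  ["YourBrand", "CompetitorB"],
    "CRM migration from spreadsheets": ["CompetitorC"],
    "CRM with usage-based pricing":    [],
}

share = citation_share(recorded, "YourBrand")
print(f"Cited in {share:.0%} of queries")  # 1 of 5 in this placeholder data
```

If the number comes back 0 of 5, that is the answer the audit would formalize.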
Move 3 — request the audit, take the call, decide. /audit intake takes 5 minutes. We screen against the trigger and wait conditions before accepting the work. If we think you should wait, we'll say so honestly.
What if you wait twelve months?
A specific comparison: across the categories we audit, the brand that ships an audit in Q2 2026 typically reaches a given citation share-of-voice 12 to 18 months sooner than the brand that ships in Q2 2027. The gap exceeds the one-year calendar difference because the late mover also waits out a foundation-model retraining cycle.
For a B2B SaaS at $10M ARR with 45% pipeline from search and 15% AI-share, the cost of a 12-month delay is roughly $80K to $200K in pipeline that leaks during the wait. The math compounds over the next training cycle.
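The arithmetic behind that range can be sketched directly. The leak-rate bounds below are assumptions chosen to show how a figure like $80K to $200K can arise from the stated inputs; they are not measured constants.

```python
# Sketch of the delay-cost arithmetic for the $10M ARR example.
# The leak-rate range (12% to 30%) is an assumed parameter for
# illustration, not a measured value.

arr = 10_000_000        # annual recurring revenue
search_pipeline = 0.45  # share of pipeline sourced from search
ai_share = 0.15         # share of that search traffic now starting in AI engines

# Pipeline exposed to AI citation gaps during the wait
exposed = arr * search_pipeline * ai_share

low_leak, high_leak = 0.12, 0.30  # assumed leak over a 12-month delay
low, high = exposed * low_leak, exposed * high_leak

print(f"Exposed pipeline: ${exposed:,.0f}")
print(f"12-month delay cost: ${low:,.0f} to ${high:,.0f}")
```

Under these assumptions the exposed pipeline is $675K and the delay cost lands between roughly $81K and $203K, consistent with the range quoted above.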
The audit is free. The operator time is 90 minutes. The compounding cost of waiting is much larger than the cost of moving.
Where to go from here
- The readiness checklist: /answers/ai-readiness-checklist-post-pmf.
- The cost of the audit: /answers/what-does-an-ai-visibility-audit-cost.
- The window mechanics: /insights/ai-indexing-window-2026-2027.
- Or just request the audit: /audit. Five business days, no charge, dossier yours either way.