DOXIA AXIS · BOOK
Sprint Plan · Pages 08–09 · 29 Apr 2026 · 10 min read

Sample 14-Day AI Sprint Plan + Risk Register

The day-1-to-day-60 deliverable schedule and the named risk register that ships with every Tier 2 sprint. Pages 32 and 37 of the dossier, rendered as long-form text — the operating plan that turns the audit into the engagement.

What is a sprint plan, exactly?

A schedule with named owners and shipped artifacts. Not a roadmap. Not a strategy doc. A schedule.

The audit's job ends with the diagnosis. The sprint plan's job is to make the prescription auditable — every fourteen-day cycle has a named deliverable that either ships or doesn't, owned by a named lane that either delivers or escalates. No vagueness. No "we'll iterate on the strategy."

What follows is page 32 (the sprint plan) and page 37 (the risk register) from a real Tier 0 audit, anonymized.

Page 32 — the 60-day cadence

Free audit, three sprints, then a measurement window. Total span: 60 days.

| Days | Phase | Artifact | Owner |
|---|---|---|---|
| D01–D05 | Free AI Visibility Audit | 14-page dossier | Operator |
| D06–D19 | Sprint 01 · Crawler + Schema | robots.txt + llms.txt + 4 schema types | Ops + Dev |
| D20–D33 | Sprint 02 · Content Depth | 12 articles rewritten for AI excerpt | Content + Operator |
| D34–D47 | Sprint 03 · Authority Sweep | 14 backlinks + Person/Product schema | PR + Operator |
| D48–D59 | Measurement + Rubric | Post-sprint delta report | Operator |

Five rows. Three are sprints. Two are bookends.

Why exactly 14 days per sprint?

Because two weeks is the longest cadence at which both operator focus and team focus stay coherent. Longer than fourteen days, attention drifts. Shorter than fourteen days, the work doesn't have time to land in production and the next sprint starts before the last one's measurable outcomes have surfaced.

The full reasoning lives in "what is a 14-day AI sprint." The condensed version: fourteen days fits one full standup-to-standup cadence, with operator review on day 7 and shipped delivery on day 14. Everything else is calibrated around that rhythm.

Inside Sprint 01 — Crawler + Schema (D06–D19)

The first sprint is the cheapest one to scope and the highest-ROI one to ship.

Day 6 — Sprint kickoff with the operator. Walk through the dossier's high-confidence findings (.01, .02, .03 from page 19). Confirm owner names. Confirm escalation paths. Confirm acceptance criteria.

Days 7 to 9 — Ops lifts the robots.txt blocks on GPTBot, Claude-Web, CCBot, and Meta-ExternalAgent. Dev publishes llms.txt at the canonical path with the content hierarchy listed in the dossier appendix. Both deployments verified by re-fetching with named user-agents and inspecting the response.
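As an illustrative sketch, the unblocked robots.txt records might read as follows (the user-agent tokens are the crawler names the paragraph above names; the paths are placeholders, not from the dossier):

```text
# robots.txt — allow the named AI crawlers explicitly
User-agent: GPTBot
Allow: /

User-agent: Claude-Web
Allow: /

User-agent: CCBot
Allow: /

User-agent: Meta-ExternalAgent
Allow: /
```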

Days 10 to 14 — Content + Dev deploy Organization, FAQPage, Article, and Breadcrumb schema across the priority pages flagged on page 11 of the dossier. Each schema block validated through the Google Rich Results Test plus a custom JSON-LD validator. Each deployment shipped behind a feature flag so it can roll back if a downstream issue surfaces.
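For illustration, a minimal FAQPage block, one of the four types named above (the question and answer text are placeholders, not content from the audit):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is a 14-day AI sprint?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A fixed two-week cycle with a named deliverable and a named owner."
    }
  }]
}
```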

Day 15 — Operator review. Walk the audited site as a logged-out visitor, verify the schema renders, verify the crawler-test script returns the expected user-agent permissions.
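The crawler-test step can be sketched with the standard library's robots.txt parser. This is a hypothetical verification helper, not the engagement's actual script; the sample robots.txt body is made up to show a failing case:

```python
from urllib.robotparser import RobotFileParser

# The four AI crawlers Sprint 01 unblocks.
AI_AGENTS = ["GPTBot", "Claude-Web", "CCBot", "Meta-ExternalAgent"]

def check_crawler_access(robots_txt: str, url: str = "https://example.com/") -> dict:
    """Parse a robots.txt body and report per-agent fetch permission for a URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in AI_AGENTS}

# Example: a robots.txt that still blocks GPTBot fails the check;
# the other agents fall through to the wildcard rule.
sample = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
access = check_crawler_access(sample)
```

Running the helper against the live site's robots.txt (fetched with each named user-agent) is the day-15 verification the review calls for.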

Days 16 to 19 — Soak. The crawlers re-index. Citations start surfacing in the wild on the 7-to-14-day re-crawl cadence. The sprint ships at end of day 19 with the schema deployments live and the crawler unblocks verified.

Acceptance criteria for Sprint 01 — robots.txt unblocks deployed, llms.txt published, four schema types deployed at category-median coverage, no production regressions. If any criterion is missed, the sprint extends by 3 days. The cycle does not start a new sprint until the prior one's acceptance criteria are met.

Inside Sprint 02 — Content Depth (D20–D33)

Sprint 02 takes longer to scope because it requires operator-side content review.

Day 20 — Sprint 02 kickoff. The dossier flagged 12 articles on page 8 as having extraction-poor structure. The list is shared with the operator. Each article gets a one-page rewrite brief: lead sentence, named-entity targets, citation list to integrate, question-shape headings to add.

Days 21 to 28 — Content rewrites the 12 articles in collaboration with the operator's subject-matter expert. Each rewrite preserves the operator's voice (the operator approves the voice anchors before any rewrite ships). Each rewrite adds 3 to 5 inline citations to verified sources. Each rewrite includes Article-with-citation-array schema, a question-shape H2 structure, and a thesis-first opening paragraph.
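A sketch of the Article-with-citation-array shape (schema.org's CreativeWork `citation` property; all values here are placeholders, not from any rewrite):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example rewritten article",
  "author": { "@type": "Person", "name": "Jane Operator" },
  "citation": [
    "https://example.com/verified-source-1",
    "https://example.com/verified-source-2",
    "https://example.com/verified-source-3"
  ]
}
```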

Days 29 to 31 — Operator review. The operator reads each rewrite end to end, marks anything off-voice or factually wrong, and returns the marked-up copy within 48 hours.

Days 32 to 33 — Final pass and ship. All 12 rewrites deployed. The Article schema with citation arrays validated. The Sprint 02 deliverable closed.

Acceptance criteria for Sprint 02 — 12 articles deployed, each with valid Article + citation schema, each approved by the operator's voice review, no production regressions.

Inside Sprint 03 — Authority Sweep (D34–D47)

The longest-tail sprint. The lowest predictability. The most operator-dependent.

Day 34 — Sprint 03 kickoff. The dossier flagged 14 candidate authority surfaces on page 24 — industry publications, trade associations, third-party review aggregators, podcast directories, Wikipedia, GitHub. Each surface gets an outreach plan with a named target, an angle, and a deadline.

Days 35 to 42 — PR runs the outreach. Operator reviews and approves any byline. The Person schema deploys for each named principal during this window, including verified sameAs links to the operator's LinkedIn, the firm's LinkedIn, the Avvo profile, and any other verified third-party surface. The Product / Service schema deploys for the firm's named offerings.

Days 43 to 46 — Citation cleanup. Each successful placement gets logged in the dossier's appendix as a backlink from an authority surface. Each Person and Product schema deployment gets validated.

Day 47 — Sprint close. The deliverable is the named list of citations placed plus the schema deployments live.

Acceptance criteria for Sprint 03 — Person and Product schema deployed at category-median coverage, 8 to 14 citation backlinks placed (range stated because outreach has variance), no production regressions. Sprint 03 acceptance is wider because authority work has lower predictability than schema work.

Days 48 to 59 — measurement and rubric

The 12-day post-sprint window is not idle. It's where the dossier's day-90 trajectory chart gets verified.

Days 48 to 56 — The measurement script runs daily. The same 30-query test set from page 5 runs against the same six engines. Citations get logged. The day-90 trajectory chart updates with actual measurements layered over the modeled curve.

Days 57 to 59 — The post-sprint delta report ships to the operator. Three things in the report:

  • Actual citations vs modeled. Where the engine-by-engine results landed against the day-30, day-45, day-60 model points.
  • Per-finding outcome. Each of the six findings on page 19, with the actual citation lift attributable to that finding.
  • Recommendation. Either close the engagement (if the gap to the page-5 target competitor closed as projected) or roll into a Tier 3 retainer (if compounding work is warranted).
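The actuals-vs-modeled comparison in the first bullet is mechanical. A sketch, with made-up model points and measurements for illustration:

```python
# Hypothetical delta-report helper: compare measured citation counts
# against the modeled trajectory at each checkpoint.
MODELED = {"day-30": 8, "day-45": 14, "day-60": 21}   # modeled counts (illustrative)
ACTUAL  = {"day-30": 9, "day-45": 12, "day-60": 22}   # measured counts (illustrative)

def delta_report(modeled: dict, actual: dict) -> dict:
    """Per-checkpoint delta: positive means actuals beat the model."""
    return {point: actual[point] - modeled[point] for point in modeled}

report = delta_report(MODELED, ACTUAL)
# report maps each checkpoint to its signed delta against the model
```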

For most operators, the recommendation is the Tier 3 retainer. The reason — most of the citation lift is happening on the day-90 slope, which is a post-sprint window. The retainer monitors the slope and ships small refinements as the engines re-train.

Page 37 — the risk register

Six rows. Six things that could move the page-19 dollar number.

| ID | Domain | Risk | Severity | Mitigation |
|---|---|---|---|---|
| .01 | Compliance | EU AI Act Annex III classification not performed | HIGH | Sprint 03 · classify all deployed GPAI use cases |
| .02 | Operational | Delivery-lead key-person dependency | MED | Engagement export + handoff playbook |
| .03 | Vendor | OpenAI / Anthropic crawler policy drift | MED | Monthly crawler-access audit · retainer cadence |
| .04 | Reputation | AI-generated content indistinguishable from human | LOW | Human-in-loop edit pass · Article schema attribution |
| .05 | Competitive | Competitor A fielding llms.txt before you | HIGH | Sprint 01 · ship llms.txt in week 1 |
| .06 | Schedule | Content rewrite depends on SME availability | MED | Operator-drafted copy, SME review only |

Each row has a domain, a one-line risk statement, a severity, and a one-line mitigation that names the sprint or cadence the mitigation lands in.

Why does the audit ship a risk register?

Because the alternative is a deliverable that pretends nothing can go wrong, which no operator believes.

Three of the six risks here are explicit about the engagement's external dependencies:

  • .03 (vendor) — OpenAI and Anthropic publish crawler policies that change. A sudden ClaudeBot rate-limit change, or GPTBot starting to honor a previously ignored robots directive, can shift citation patterns. The mitigation is a monthly audit on the retainer.

  • .05 (competitive) — If a category competitor ships llms.txt before the audited firm, the firm is one cycle behind on a now-public technique. Sprint 01 ships llms.txt in week 1 specifically to reduce this exposure.

  • .06 (schedule) — Content rewrites need subject-matter-expert review. SMEs at most service businesses are also the operators, who are also the people approving the engagement. The mitigation is operator-drafted copy with SME review only, which compresses the SME's required time per rewrite from 90 minutes to 20.

The compliance row (.01) is unusual for a US-only operator but it stays in the register because the EU AI Act applies extraterritorially when the firm offers services into the EU. The mitigation is a Sprint 03 classification pass that documents the firm's GPAI use cases against Annex III. Most US operators either confirm low-risk classification in 30 minutes or discover an exposure they did not know about.

What does the operator do with the risk register?

Three actions.

Action 1 — escalate the HIGH-severity rows. .01 and .05. Both get a named owner in the operator's organization. Both get a weekly review checkpoint.

Action 2 — accept the MED-severity rows. .02, .03, .06. These are operational risks that get monitored on the retainer cadence rather than escalated.

Action 3 — log the LOW-severity rows. .04 stays in the register but doesn't trigger active mitigation unless severity escalates.
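The three actions map mechanically onto the severity column. A sketch (risk IDs and severities come from the page-37 register; the triage rule itself is the hypothetical part):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    domain: str
    severity: str  # "HIGH" | "MED" | "LOW"

# The six rows of the page-37 register, severity only.
REGISTER = [
    Risk(".01", "Compliance", "HIGH"),
    Risk(".02", "Operational", "MED"),
    Risk(".03", "Vendor", "MED"),
    Risk(".04", "Reputation", "LOW"),
    Risk(".05", "Competitive", "HIGH"),
    Risk(".06", "Schedule", "MED"),
]

ACTIONS = {"HIGH": "escalate", "MED": "accept", "LOW": "log"}

def triage(register):
    """Group risk IDs by the action their severity dictates."""
    out = {"escalate": [], "accept": [], "log": []}
    for risk in register:
        out[ACTIONS[risk.severity]].append(risk.risk_id)
    return out

# triage(REGISTER) groups .01/.05 under escalate, .02/.03/.06 under accept, .04 under log
```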

The register reruns at the 60-day measurement window. New risks get added, old risks get updated, severities get re-rated based on what shipped. This is the discipline that keeps the engagement honest as the world changes around it.

What ships at day 60

Six things, all listed in the dossier appendix, all pinned to specific deliverable IDs:

  1. The crawler + schema layer. robots.txt, llms.txt, 4 schema types deployed at category-median coverage.
  2. The content layer. 12 articles rewritten with citation-array schema and citability-rubric structure.
  3. The authority layer. 8 to 14 citation backlinks placed, Person + Product schema deployed.
  4. The measurement. Post-sprint delta report with actuals against the modeled trajectory.
  5. The updated risk register. Re-rated severities, new risks added, mitigations adjusted.
  6. The recommendation. Close the engagement, or roll into Tier 3 retainer.

The deliverables are auditable, named, sequenced. The engagement is over when the deliverables ship — not when the operator feels good about the work.

Where to go from here