AI readiness checklist for operators past PMF
Eight binary questions that determine whether you should ship AI now or harden your data layer first. Designed for post-PMF operators who have the demand signal but not yet the operational substrate.
Are you ready for this?
Most operators asking that question already are.
The gap is rarely capability. It's almost always two specific things — data hygiene and operator commitment. So before we get to the eight questions that actually decide it, the honest framing: this isn't a test of whether your business is sophisticated enough for AI. It's a test of whether you have the substrate the AI workflow needs to ground itself in.
Eight binary questions. Answer them honestly. If six or more come back yes, ship a sprint this month. If three to five come back yes, fix the substrate first: usually one to two weeks of operator-led cleanup before any AI work matters. If zero to two come back yes, AI isn't your highest-leverage problem. Spend the budget on the underlying operations.
The checklist is what we run during the free Tier 0 audit anyway, surfaced here so you can self-screen before requesting it.
The eight questions
1. Do you know which workflow you want AI to absorb?
Yes if you can name it in one sentence. "I want AI to handle pre-booking questions at our inn." "I want AI to triage inbound sales leads by ICP fit." "I want AI to draft first-pass case-results summaries from our verdict history."
No if the answer is "I want AI to help us." Vagueness here cascades. The audit can sharpen the question, but the operator has to bring a starting point.
2. Does the workflow run more than 50 times per month?
Yes — the volume justifies the build cost. AI work amortizes at scale. Below that volume, the time the operator spends scoping the build exceeds the time the AI saves. Fifty runs per month is the rough floor for most workflows. Higher-stakes workflows (where each run is worth more than $50 of human time) can cross the threshold at 20 to 30 runs per month. A back-of-envelope version of the math is sketched after this question.
No — reconsider. Either run the workflow manually for now and revisit when volume scales, or look for a higher-volume workflow nearby.
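To make the threshold concrete, here's the sketch. Every number in it (minutes saved per run, hourly rate, build cost) is an illustrative assumption, not a quote from any real engagement; swap in your own.

```python
# Back-of-envelope payback math for question 2.
# Every number here is an illustrative assumption; swap in your own.

MINUTES_SAVED_PER_RUN = 10   # assumed human time the AI absorbs per run
HOURLY_RATE = 60             # assumed loaded cost of that human's hour, in $
BUILD_COST = 6_000           # assumed one-off cost of scoping and building

value_per_run = MINUTES_SAVED_PER_RUN / 60 * HOURLY_RATE  # $10 per run here


def payback_months(runs_per_month: int) -> float:
    """Months until cumulative savings cover the build cost."""
    return BUILD_COST / (runs_per_month * value_per_run)


for runs in (20, 50, 200):
    print(f"{runs:>3} runs/month -> payback in {payback_months(runs):.1f} months")
# 20 runs/month -> payback in 30.0 months
# 50 runs/month -> payback in 12.0 months
# 200 runs/month -> payback in 3.0 months
```

At these assumed numbers, 50 runs a month pays the build back inside a year, while 20 runs a month takes two and a half years. Raise the value per run to $50 and 25 runs a month pays back in under five months, which is the logic behind the lower floor for higher-stakes workflows.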
3. Is the workflow's input data already collected somewhere accessible?
Yes if the inputs the AI needs to operate live in a system you can query. CRM. PMS. Support inbox. Document management system. Structured database. Even a Google Sheet counts.
No if the inputs live in PDFs scattered across email, in someone's head, or in a system without an export path. Fix this first. Two weeks of operator-led data extraction work is cheaper than a stalled engagement.
4. Is there a written-down version of the workflow's correct outputs?
Yes if you have policies, FAQ documents, sample responses, decision rubrics, or any artifact that documents what good looks like. The AI is grounded against this corpus.
No if good outputs live only in the heads of senior staff. The workflow itself can be the trigger to document them — most engagements include a corpus-extraction phase on days 2 and 3 of the sprint — but the operator must commit to that phase rather than expecting the AI to invent the standard.
5. Will an operator be available for the sprint kickoff, the day-7 checkpoint, and the day-14 walkthrough?
Yes if a named operator can carve out three one-hour windows in a fixed two-week period. The sprint structure depends on these checkpoints.
No if the operator wants to delegate to a project manager who delegates to a junior staffer. The shape of the engagement requires direct operator access. To the data. To the decisions. To the post-deployment iteration. Engagements that try to run through a delegate stall at the first ambiguity.
6. Is there a clear human owner for the workflow post-launch?
Yes if a specific person is accountable for monitoring the AI workflow once it ships, escalating edge cases, and approving iterations. The AI does not run unattended.
No if the answer is "the team" or "we will figure it out." Workflows without a human owner degrade. The audit surfaces this honestly and either insists on naming the owner before the sprint starts or recommends a scope change.
7. Does the workflow have bounded failure consequences?
Yes if the worst case is a recoverable error. A bad email reply. A misclassified lead. A wrong recommendation that a human can override. Bounded failure modes let the AI run with appropriate confidence thresholds.
No if the failure case is unbounded. Irreversible client communications. Regulated decisions. Financial transactions without authority gates. Workflows in this category need an internal tool with a human-in-the-loop authority gate, not a free-running agent. The shape changes. The readiness does not.
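A minimal sketch of that distinction in code. The Draft shape, the 0.85 floor, and the idea of a model self-score are all illustrative assumptions, not a real implementation; the point is that the reversibility check sits outside the model's confidence entirely.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # assumed threshold; tuned per workflow in practice


@dataclass
class Draft:
    text: str
    confidence: float  # hypothetical model self-score in [0, 1]


def route(draft: Draft, reversible: bool) -> str:
    """Authority gate: act autonomously only when failure is bounded AND
    the model is confident. Everything else queues for a human."""
    if reversible and draft.confidence >= CONFIDENCE_FLOOR:
        return "auto-send"
    return "human review"


print(route(Draft("Re: your booking question...", 0.93), reversible=True))   # auto-send
print(route(Draft("Initiate the $40,000 wire...", 0.99), reversible=False))  # human review
```

Note that high confidence never overrides the reversibility check. An irreversible action goes to a human even at 0.99; that is what makes it an authority gate rather than a confidence filter.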
8. Is the operator willing to ship a working-but-imperfect version on day 14 and iterate?
Yes if the operator understands that the day-14 deliverable is a working baseline, not a finished system. The first month post-launch is iteration — every edge case the workflow surfaces becomes a refinement to the next version.
No if the operator wants the system to be perfect on day 14 with no further work. AI workflows are never finished. They are tuned. Engagements with operators who refuse iteration plateau at the day-14 quality bar and degrade as the world changes around the workflow.
So what's your score?
- 6 to 8 yes — ship a sprint this month. The substrate is ready. The operator is committed. The workflow is well-shaped. Go to Tier 2 directly, or through the free audit if you want diagnosis first.
- 3 to 5 yes — fix the substrate first. The most common gap is question 3 (data accessibility) or question 4 (corpus existence). One to two weeks of operator-led cleanup, then revisit.
- 0 to 2 yes — AI isn't your highest-leverage problem right now. Spend the budget on the underlying operations. Sales process. Content infrastructure. Hiring. Revisit AI when the substrate matures.
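If you want the banding as one glanceable function, here it is. A trivial sketch; the boundaries are exactly the ones in the list above.

```python
def triage(answers: list[bool]) -> str:
    """Map the eight yes/no answers to the recommendation bands above."""
    assert len(answers) == 8, "one answer per question"
    score = sum(answers)  # count of yes answers
    if score >= 6:
        return "ship a sprint this month"
    if score >= 3:
        return "fix the substrate first, then revisit"
    return "AI isn't your highest-leverage problem yet"
```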
The honest read — most operators score 5 to 7. The substrate gaps are usually surmountable in the diagnostic phase of the audit, not blockers to a sprint. The operators who score below 3 are usually pre-PMF teams who arrived at the page from the wrong direction. We redirect them honestly rather than scope work that won't land.
Want to move the score?
Three operator moves that reliably lift a score from 5 to 7+:
Spend a focused day documenting the workflow. Sit with the senior staffer who handles it today. Watch them do it. Write down the actual decision logic. The artifact becomes the corpus the AI grounds against. Most of the time, this single move fixes questions 3 and 4 simultaneously.
Name the owner explicitly. Question 6 fails most often because no one has been forced to commit. Force the commitment. The owner doesn't need to be the operator, but the operator does need to assign one and the assignee needs to accept.
Reduce scope to a single workflow. Operators who score low usually score low because they want too much. "AI for our sales operations" is the wrong scope. "AI that drafts first-pass discovery briefs from inbound forms" is the right scope. The narrower the workflow, the higher the readiness score.
Who is this checklist actually for?
We work with operators who score 5+ on this checklist. Operators below that line either go through the audit phase first, to bring the substrate up to readiness, or get redirected honestly toward fixing the substrate before any AI work.
The qualification isn't gatekeeping. It's the inverse — engagements that try to run with operators below readiness threshold fail predictably, and we've learned to refuse rather than disappoint. The free audit surfaces the readiness honestly. The operator decides what to do with the score.
Where to go next
- The shape decision: AI agents vs automations vs internal tools.
- The specific question: does my company actually need an AI agent.
- The sprint structure: what is a 14-day AI sprint.
- The qualification frame on the About page — who Doxia Axis works with, who we refuse, what we won't do.
- Or just request the audit: /audit. Five-business-day deliverable that runs this checklist against your actual operations and quantifies the gaps in dollars.