Does my company actually need an AI agent?
If a workflow has more than three conditional branches, runs more than fifty times per month, and consumes more than twenty dollars of human time per run — it needs an agent. Below that threshold, a deterministic automation is the better tool. Above it, an agent earns its keep.
The honest answer
Three thresholds. A workflow needs an agent if it crosses all three:
- More than three conditional branches — the workflow's path varies enough that a deterministic decision tree cannot express it cleanly.
- More than fifty runs per month — the volume justifies the build cost. Below this floor, an agent doesn't amortize.
- More than twenty dollars of human time per run — the savings exceed the foundation-model API cost plus the build investment.
Cross all three and an agent earns its keep. Miss any one and you should reach for an automation, an internal tool with a human-in-the-loop authority gate, or no system at all.
The mistake this page is built to prevent — operators reaching for "AI agent" because the term is fashionable, when their workflow would ship faster, cost less, and fail more gracefully as a deterministic automation.
Walk the decision tree
Threshold 1 — branching depth
Count the if-this-then-that decisions a human currently makes when running the workflow.
Examples below the threshold (use automation):
- Inbound webhook → CRM sync → Slack notification. Two operations. No branching.
- Calendar booking → CRM update → email confirmation. Three operations. No branching.
- File upload → S3 store → metadata write → indexer trigger. Four operations. One error-handling branch.
Examples above the threshold (consider agent):
- Inbound sales inquiry → classify by industry → look up company size → check decision-maker title → check budget signal → check urgency → route to the right rep with the right context. Six conditional branches. The path varies per inbound.
- Customer support inbound → identify intent → check account status → look up order history if relevant → search knowledge base → either respond or escalate to human → if respond, decide tone based on customer tier. Seven branches. Varies per query.
Three is the rough breakpoint where deterministic logic starts to feel forced. A workflow with three branches usually still ships clean as an automation. A workflow with four or five usually does not.
Threshold 2 — volume
How many times per month does the workflow run?
- Below 50 runs/month — don't build an agent. The build cost (engineering, prompt engineering, evaluation harness, monitoring) doesn't amortize at low volume. Either run the workflow manually for now, or build it as a deterministic automation with explicit branches and accept the brittleness.
- 50–500 runs/month — agent is in the sweet spot. The build cost amortizes. The savings are measurable. The operator can monitor edge cases without drowning.
- 500+ runs/month — agent build is correct, with extra investment in evaluation, monitoring, and cost controls. Hard caps on tool calls per session become non-optional.
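The "hard cap on tool calls per session" above can be as simple as a counter the agent loop checks before every tool invocation. A minimal sketch in Python — the class name and the limit of 25 are illustrative assumptions, not from any particular agent framework:

```python
class ToolCallBudget:
    """Hard per-session cap on tool calls, so a looping agent
    fails fast instead of burning unbounded API spend."""

    def __init__(self, max_calls: int = 25):
        self.max_calls = max_calls
        self.calls = 0

    def spend(self, tool_name: str) -> None:
        # Refuse the call once the session budget is exhausted.
        if self.calls >= self.max_calls:
            raise RuntimeError(
                f"tool-call budget exhausted ({self.max_calls}) "
                f"before calling {tool_name!r}"
            )
        self.calls += 1


budget = ToolCallBudget(max_calls=3)
budget.spend("search_kb")
budget.spend("lookup_order")
budget.spend("draft_reply")
# a fourth spend() would raise RuntimeError
```

The point of raising instead of silently truncating: at 500+ runs/month, a runaway loop is a cost incident, and you want it loud in monitoring, not quiet in the bill.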
Two cases that bend the volume threshold:
High value per run. A workflow that runs 25 times per month but each run is worth $200 of human time crosses the economic threshold even at low volume. Build as agent. The math justifies it.
High variability per run. A workflow with extreme input variability needs agent-shape inference even at low volume. Build the agent if the alternative is no automation at all.
Threshold 3 — time-per-run
What is the human time-per-run worth, in fully-loaded labor cost?
- Less than $20/run — automation is the right shape. Agent overhead (API costs, evaluation infrastructure, monitoring, edge-case handling) exceeds the savings.
- $20–$100/run — agent is in the sweet spot. The savings comfortably exceed the agent build's marginal cost.
- More than $100/run — agent is correct, but pair with an internal tool with human authority gates if the failure case is asymmetric. The economic case is strong. The failure-mode discipline still matters.
The time-per-run number includes the loaded cost. Salary. Benefits. Overhead. A senior staffer at $200K total comp is roughly $100/hour. A 30-minute discovery brief at that staffer's loaded rate is $50. The math runs against fully-loaded numbers, not nominal salary.
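Taken together, the three thresholds reduce to a small pure function. A sketch in Python — the dollar figures come straight from the thresholds above, the high-value override from Threshold 2 is folded in, and the ~2,000 working hours/year behind the loaded-rate math is a stated assumption:

```python
def loaded_hourly_rate(annual_total_comp: float, hours_per_year: int = 2000) -> float:
    """Fully-loaded hourly cost, assuming ~2,000 working hours/year.
    $200K total comp -> roughly $100/hour, as in the example above."""
    return annual_total_comp / hours_per_year


def recommended_shape(branches: int, runs_per_month: int, value_per_run: float) -> str:
    """Apply the three thresholds: more than 3 conditional branches,
    more than 50 runs/month, more than $20 of loaded human time per run."""
    crosses_branching = branches > 3
    crosses_volume = runs_per_month > 50
    crosses_value = value_per_run > 20

    # High value per run bends the volume floor: 25 runs at $200 each
    # clears the same monthly economics as 50 runs at $20.
    if crosses_branching and crosses_value and runs_per_month * value_per_run >= 50 * 20:
        crosses_volume = True

    if crosses_branching and crosses_volume and crosses_value:
        return "agent"
    return "automation or manual"


# A 30-minute discovery brief at a $200K staffer's loaded rate:
print(0.5 * loaded_hourly_rate(200_000))   # 50.0

# Boutique inn: 12 branches, 200 runs/month, $80/run effective
print(recommended_shape(12, 200, 80))      # agent
# Estate planning intake: 6 branches, 90 runs/month, $15/run
print(recommended_shape(6, 90, 15))        # automation or manual
```

The function is deliberately conservative: all three booleans must hold, and the only escape hatch is the economic override, which still requires the branching and value thresholds to clear on their own.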
When even crossing the thresholds isn't enough
Three cases where the thresholds say "agent" and the right answer is still "no agent."
1. The failure case is unbounded
If a wrong agent path produces an irreversible outcome — a contract sent to the wrong client, a financial transaction executed, a public statement issued — the agent shape is incorrect regardless of volume or time-per-run. The right shape is an internal tool that uses agent inference to draft and a human authority gate to approve. The underlying agent can stay the same. The architecture around it changes.
2. The data the agent needs to access is regulated
If the workflow touches data covered by HIPAA, PCI DSS, GDPR Article 9 special categories, the EU AI Act's high-risk classifications, or RBI FREE-AI obligations, the build path adds compliance overhead that often exceeds the agent's net economic value. Sometimes the right answer is to ship the workflow as a deterministic automation with no inference, accept the lower accuracy, and buy compliance simplicity.
3. The operator can't tolerate any wrong answer
Some operators say "any error is unacceptable" and mean it. Agents have a non-zero error rate by structural necessity. They are reasoning systems. Reasoning is fallible. If the operator can't tolerate any error, the agent isn't the right tool. Re-scope to a workflow with bounded failure, accept the error rate, or do not build.
We surface this honestly during scoping. Operators who insist on zero-error agents either accept the calibration after a short conversation, or get redirected to alternatives.
A worked example, by vertical
Estate planning law firm
Workflow — drafting first-pass intake summaries from inbound contact-form submissions.
- Branching? Yes — case type, jurisdictional fit, urgency, fee structure inquiries, conflicts check, prior-attorney check. Six branches.
- Volume? 60 to 120 inbound per month. Above the floor.
- Time-per-run? 20 minutes of paralegal time, ~$15. Below the threshold.
Decision: not an agent. Build an automation that classifies the inbound by structured form fields, routes to the right paralegal, and lets the paralegal draft the summary with a templated assist. The paralegal is faster than the agent at this volume. The agent overhead doesn't amortize.
Boutique inn
Workflow — handling pre-booking inbound questions.
- Branching? Yes — parking, breakfast, dog policy, room differences, pricing, dates, cancellation, etc. Twelve common branches.
- Volume? 200+ inbound per month. Well above the floor.
- Time-per-run? 5 minutes of innkeeper time, plus the cost of OTAs winning the booking when the innkeeper is asleep. Effective time-per-run when the OTA cost is included: $80+. Above the threshold.
Decision: agent. Specifically, a property-specific AI concierge integrated with the PMS. All three thresholds clear.
B2B SaaS — inbound sales triage
Workflow — classifying inbound leads by ICP fit before sending to a sales rep.
- Branching? Yes — industry, company size, decision-maker title, budget signal, urgency. Five branches.
- Volume? 200 inbound per month.
- Time-per-run? 30 minutes of rep time at $100/hour loaded = $50. Above the threshold.
Decision: agent. Plus an internal tool review queue for borderline classifications, because the failure case is asymmetric (over-qualifying wastes a rep hour; under-qualifying loses a real prospect). Three shapes compounding.
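The borderline review queue above is, mechanically, a confidence-threshold router: the agent's classification ships straight to a rep only when its confidence clears a bar, and everything else lands in a human queue. A hedged sketch — the 0.8 cutoff, field names, and queue labels are illustrative assumptions, not measured values:

```python
from dataclasses import dataclass


@dataclass
class LeadClassification:
    lead_id: str
    icp_fit: str       # e.g. "qualified" / "unqualified"
    confidence: float  # model-reported, 0.0 to 1.0


def route(c: LeadClassification, threshold: float = 0.8) -> str:
    """Auto-route confident calls; send borderline ones to a human
    review queue, because the failure case is asymmetric — under-
    qualifying loses a real prospect."""
    if c.confidence >= threshold:
        return f"rep_queue:{c.icp_fit}"
    return "human_review_queue"


print(route(LeadClassification("L-101", "qualified", 0.93)))  # rep_queue:qualified
print(route(LeadClassification("L-102", "qualified", 0.55)))  # human_review_queue
```

The threshold itself should be tuned against the asymmetry: if a lost prospect costs far more than a wasted rep hour, push borderline cases toward the human queue by raising the bar.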
Didn't clear the thresholds? Here's what to do
Three productive next steps:
Build the automation. Most workflows that fail the agent test still benefit from automation. The branching is shallow enough that a deterministic flow ships in days, not weeks, and recovers the operator's time at lower cost.
Document the workflow for future reconsideration. Workflows that score below the volume threshold today often cross it as the business scales. A documented workflow becomes the corpus for an agent build six months from now without re-running the discovery work.
Identify the next workflow up. Often the workflow the operator named isn't the highest-leverage one in the operation. A different workflow nearby — different shape, different volume, different value-per-run — may be a clean agent build that the original framing missed. The free audit surfaces these candidate workflows explicitly.
Where to go next
- The full shape comparison: AI agents vs automations vs internal tools.
- The readiness check: AI readiness checklist for operators past PMF.
- The 14-day sprint structure: what is a 14-day AI sprint.
- The product pillars on /services.
- Or just request the audit: /audit. Five-business-day deliverable that runs this decision tree against your actual workflows and surfaces the agent-shape candidates with revenue quantification.