How law firms appear in ChatGPT and Perplexity answers
Drawn from two shipped audits — a Charlotte estate-planning firm and a Savannah personal-injury firm. Three things stack: credentials marked up in schema, FAQPage on every practice-area page, and either fresh content or a structured case-results hub. Most firms have one of the three. The cited firms have all three.
Want the short version first?
Three things stack — credentials marked up in Person and Attorney schema, FAQPage schema on every practice-area page, and either fresh publishing cadence or a structured case-results hub the engines can extract from.
Most firms have one of the three. The cited firms have all three.
What follows is the playbook. It's drawn from two audits we actually shipped — a Board-Certified estate-planning firm in Charlotte and a 30-year personal injury firm in Savannah. Both had strong fundamentals. Neither was getting cited. The fix shape was different per firm. The methodology was the same.
Why does this matter for legal specifically?
Three reasons the legal vertical sits at the high end of the GEO leverage curve.
Prospects research lawyers in AI search now. Estate planning. Personal injury. Criminal defense. Immigration. Family law. All categories where the prospect is asking the engine "how do I find a [X] lawyer in [Y]?" before they call anyone. The AI answer becomes the shortlist. If your firm isn't on that list, you don't get the call.
The credential layer is structured already. Bar admission. Board-Certified Specialist designations. Martindale ratings. Super Lawyers selection. AV ratings. All of these have direct schema mappings — hasCredential, award, recognizedBy. Firms that mark them up get cited. Firms that mention them in flowing prose don't.
The query shape rewards FAQ schema. "Statute of limitations on wrongful death in Chatham County." "Flat fee estate planning North Carolina." "Can I sue a city for a slip-and-fall in Georgia?" All FAQ-shaped queries the engines extract from FAQPage markup at materially higher rates than from prose.
The vertical is also a YMYL ("Your Money or Your Life") category in Google's terminology. The engines weight authority signals heavier here. That cuts both ways. Harder to win without proper signals. Easier to win once the signals are deployed.
Which schemas actually move citation?
Five. In order of leverage. Skip none.
Person with hasCredential
For every named attorney. The schema looks like this:
```json
{
  "@context": "https://schema.org",
  "@type": ["Attorney", "Person"],
  "name": "Jane Doe",
  "jobTitle": "Partner",
  "hasCredential": [
    {
      "@type": "EducationalOccupationalCredential",
      "name": "Board-Certified Specialist in Estate Planning and Probate Law",
      "recognizedBy": {
        "@type": "Organization",
        "name": "North Carolina State Bar Board of Legal Specialization",
        "url": "https://www.nclawspecialists.gov/"
      }
    }
  ],
  "memberOf": { "@type": "Organization", "name": "North Carolina State Bar" }
}
```
The Charlotte audit's single highest-leverage finding? This schema didn't exist on the site. The Board-Certified credential — a designation held by roughly 1% of NC attorneys — lived only in a paragraph midway through a 2,100-word founder bio, in no structured-data block anywhere on the site.
LegalService (or Attorney directly on the firm)
The firm-level schema. Includes areaServed, serviceType (estate planning, probate, personal injury, etc.), and provider linking to the firm's Organization schema. Vital for the "best [X] lawyer in [Y]" class of queries.
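A minimal sketch of that firm-level block, wiring together the three properties named above. The firm name, URL, city, and service list are illustrative placeholders, not values from either audit:

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Doe & Roe, Attorneys at Law",
  "url": "https://www.example-firm.com/",
  "areaServed": {
    "@type": "City",
    "name": "Charlotte",
    "containedInPlace": { "@type": "State", "name": "North Carolina" }
  },
  "serviceType": ["Estate Planning", "Probate"],
  "provider": { "@type": "Organization", "name": "Doe & Roe, Attorneys at Law" }
}
```

Deployed as a `<script type="application/ld+json">` block on the homepage, with `provider` pointing at the same Organization used in the attorney-level Person schema.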
FAQPage on every practice-area page
The single most-cited schema type in the AI answer engines for legal queries. Every practice-area page should have 5 to 10 question-and-answer pairs marked up as FAQPage. The questions should match the literal queries prospects type. Like this:
- "How long do I have to file a personal injury claim in Georgia?" (statute of limitations)
- "Do I need a will if I have a trust?" (estate planning)
- "What is the average settlement for a motorcycle accident?" (PI)
- "What does a probate attorney charge in North Carolina?" (fees)
Verbatim question shape matters. The engines match user intent against the question text directly. The Savannah audit found zero FAQPage schema across 24 practice-area pages. Highest-volume gap on the site.
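The FAQPage block itself is simple — a minimal two-question sketch, with the answer text abbreviated and illustrative (real deployments should carry the firm's actual, attorney-reviewed answers):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long do I have to file a personal injury claim in Georgia?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Generally two years from the date of injury, with exceptions for minors and claims against government entities."
      }
    },
    {
      "@type": "Question",
      "name": "Do I need a will if I have a trust?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Usually yes — a pour-over will catches assets that were never retitled into the trust."
      }
    }
  ]
}
```

Note the `name` field carries the verbatim question shape — that string is what the engines match against the user's query.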
Review and AggregateRating
For firms with real review volume — Avvo, Google, Birdeye, BBB, platform-specific aggregators. Deployment requires the actual review count and average rating. Don't fabricate. The engines validate against indexed sources.
The Charlotte firm had 179 real reviews across two platforms (Avvo 10.0 / 50 reviews, Birdeye 4.9 / 129 reviews). Zero Review/AggregateRating schema. The Savannah firm had 80+ reviews at 4.9 stars. Same gap. Both were leaving the star-rating rich result on the table — and the underlying social-proof signal that makes the engines trust the firm enough to cite at all.
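Using the Charlotte firm's Birdeye numbers from above as the example values (4.9 average across 129 reviews), the missing block is small — the firm name is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Attorney",
  "name": "Doe & Roe, Attorneys at Law",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.9",
    "reviewCount": "129",
    "bestRating": "5"
  }
}
```

The `ratingValue` and `reviewCount` must match what the review platforms actually show, per the don't-fabricate rule above.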
Article or Service for case-results pages
A case-results hub structured as Article schema (or Service with case-history sub-pages) gives the engines specific outcomes to quote. "Largest motorcycle accident verdict in Chatham County"-class queries find a cited authority when the firm has a structured case-results page. Otherwise the citation goes to Martindale, Justia, or a competitor.
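One entry in such a hub, marked up as Article, might be sketched like this — the headline, date, and dollar figure are invented placeholders, not results from either firm:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "$1.2M Settlement — Motorcycle Accident, Chatham County, GA",
  "datePublished": "2025-06-01",
  "author": { "@type": "Organization", "name": "Doe & Roe, Attorneys at Law" },
  "about": { "@type": "Thing", "name": "Motorcycle accident settlement" }
}
```

The `headline` does the work: it packs the outcome, case type, and jurisdiction into one extractable string.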
What about content shape?
Schema gets the engines to extract. Content shape determines what they extract. Three rules from the audits:
Open every practice-area page with a definitional thesis sentence. "A statutory will in North Carolina is a will that follows the form prescribed by NC General Statute 31-3.3, allowing for self-proving execution without notarization." That's one quote-shaped sentence. "At our firm, we believe everyone deserves a thoughtful estate plan tailored to their unique situation" — that's not.
Match question shape to query shape. If prospects search "can I sue a city for a slip-and-fall in Savannah," your H2 should literally read "Can I sue a city for a slip-and-fall in Savannah, GA?" Not "Premises Liability for Government Property." The engine matches the question, not the topic.
Cite primary sources inline. Statute citations linking to the actual statute on the state legislature's site. Court opinions linking to the published opinion. Bar Association resources linking to the actual page. The engines weight provenance. Provenance is hyperlinks.
What about the five-year publishing-silence problem?
This was the second-biggest finding in the Savannah audit.
A content layer that stopped in late 2020. Eight COVID-era blog posts. Nothing in the five years since. For a YMYL vertical, a five-year silence reads to Google as "low firm-activity signal" — regardless of how active the practice is in real life. The engines weight freshness as a proxy for whether the firm still exists, still practices, still has authority.
Fix? A quarterly publishing cadence is the minimum sufficient signal. Doesn't have to be heavy. A 600 to 800-word piece per quarter that answers a specific practice-area question, properly schema-tagged, properly linked from the practice-area page, beats a 5,000-word annual piece by a wide margin on the freshness signal.
What about the intake play?
Both audits surfaced the same gap, in different shapes.
Personal injury inquiries cluster 6pm to 11pm weekdays and all-hours weekends — the post-ER-discharge and immediate-post-incident windows. Most firms run a business-hours contact form. The form sits until Monday morning. Meanwhile, competitors with 24/7 chat are answering the same questions and capturing the booking.
The fix? A structured intake agent. Qualifying questions. Case-type triage. After-hours-specific routing. Not legal advice. Explicitly not malpractice-adjacent. Operates as a pre-engagement triage tool the firm reviews before the first attorney call. We build these as part of Tier 2 sprint engagements.
The intake agent doesn't directly affect citation. It converts citation into pipeline. A firm cited in Perplexity for "motorcycle accident lawyer Savannah" still loses if the prospect lands on a contact form at 11pm and the form sits until Monday. AI visibility plus AI intake compound. Either alone underperforms.
What does a 14-day legal-vertical sprint actually look like?
Tier 2. 14-day sprint. Day-by-day:
- Day 1. Kickoff call. Confirm the schema deployment list. Confirm the practice-area pages in scope.
- Day 2–4. Schema deployment — Attorney, LegalService, Person with hasCredential, Review/AggregateRating, Offer for any published-fee pages.
- Day 5–8. FAQPage deployment across the practice-area pages. Typically 5 to 10 Q&As per page, written from the firm's actual answers to common client questions.
- Day 7. Mid-engagement checkpoint. Re-test the firm against the AI engines. Verify the schema is rendering and being picked up.
- Day 9–11. Case-results hub if applicable, or quarterly publishing-cadence kickoff content.
- Day 12–13. Intake agent integration if scoped. After-hours routing, qualifying questions, attorney handoff with full context.
- Day 14. Delivery walkthrough. Re-test against the AI engines. Compare cited authorities pre vs post.
Ship-date guarantee — live in 14 business days or daily credits apply. The deliverable is real, on-site, measured.
What did the two shipped audits actually find?
Estate planning — Charlotte, NC. Board-Certified Specialist firm rebuilding citation authority after a rebrand. The single highest-leverage fix? Making the Board-Certified credential visible — to humans in the homepage H1, to engines in Person.hasCredential. Eight schema types deployed in under two weeks.
Personal injury — Savannah, GA. 30-year firm with strong technical chassis but a five-year content silence. The fix? FAQPage schema across 24 practice pages. A case-results hub with LegalService markup. Review / AggregateRating schema. A 24/7 structured intake agent.
Same methodology. Different diagnoses. The engines reward different things in different verticals. The audit is the only reliable way to find out which gap costs the firm the most.
Where to go next
- See the audit methodology: what is an AI visibility audit.
- See the schema set in detail: what schema matters for AI visibility.
- See FAQPage schema specifically: what is FAQPage schema.
- Read the Charlotte and Savannah cases: estate planning · personal injury.
- Or just request the audit: /audit. 5-business-day deliverable. Revenue-quantified findings. The legal-vertical schema set named with deployment specs.