The SEO Audit Delivery Framework for Agencies
What separates the audits clients execute from the ones they file
The SEO audit that wins client retention is not the thickest. It's the one a 3-person engineering team can execute next sprint — and does.
After a decade of running audits and watching what happens after delivery, the pattern is clear: audits that generate 80-page PDFs with 200 findings have a 10-20% execution rate. Audits that deliver 10-15 prioritized findings with explicit implementation notes, owner assignments, and expected uplifts have 70%+ execution rates. The delta in client results is an order of magnitude. The delta in retention is close to the same.
This pillar covers the operational reality of delivering audits that change outcomes. Scoping, structure, priority-with-dollars, and the handoff. It's as much project management as it is SEO.
The PDF is the handoff, not the audit
The most important reframe: the PDF is not the audit. The audit is whatever changes on the client's site in the 30, 60, and 90 days after delivery. The PDF is a handoff document.
This changes everything about how you write it.
- You're writing for the engineer who will pick up the ticket in sprint 2, not for the CMO who signed the contract.
- The recommendations need to be executable without you. Every finding needs enough implementation context that a mid-level engineer can ticket it.
- Priority has to be defensible to the business, not just to you. The CMO needs to be able to justify why finding #3 happens this quarter and finding #11 doesn't.
- The audit succeeds or fails on what the client does, not on what you wrote.
Agencies that internalize this reframe deliver different audits. They cut findings aggressively, include implementation notes, attach dollar estimates, and accept that covering 80% of the issues and getting them fully executed beats covering 100% and getting 15% executed, every time.
Scoping: what you audit vs what you don't
Every audit needs a scope document agreed before kickoff. Without one, you'll spend week 3 arguing about whether the Shopify plugin review was in scope.
Minimum scope contents:
- URLs in scope: explicitly listed domains and subdomains. "crawlsense.com and *.crawlsense.com except dev.crawlsense.com" is clear; "the main website" is not (see the scope-check sketch after this list).
- URL depth: if the site has 500k URLs, are you auditing all of them, a sample, or specific templates? Sampling is usually fine — audit 3 examples per template.
- Time horizon: the audit covers the site as of date X. Changes after date X are not in scope.
- Deliverable format: PDF? Slides? Notion page? Clickable tickets? Each has implications for how long it takes and how executable it is.
- Out of scope: explicitly listed. "Content strategy, copywriting, link building, paid search, and conversion rate optimization are not in scope for this audit." Write it down.
- Implementation support: is the agency going to help execute, or just hand off? Huge pricing implication — execution is 3-5x the audit itself.
- Revision rounds: how many rounds of client feedback does one audit include? Two is standard; unlimited is a trap.
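One way to keep the "URLs in scope" line unambiguous is to encode it as a rule you can run against any URL list before crawling. A minimal Python sketch, using the placeholder hostnames from the example above rather than a real engagement:

```python
from urllib.parse import urlparse

# Example scope: crawlsense.com and *.crawlsense.com, except dev.crawlsense.com.
# Hostnames are illustrative placeholders, not a real client.
INCLUDE_SUFFIXES = ("crawlsense.com",)
EXCLUDE_HOSTS = {"dev.crawlsense.com"}

def in_scope(url: str) -> bool:
    """Return True if the URL's host falls inside the agreed audit scope."""
    host = urlparse(url).hostname or ""
    if host in EXCLUDE_HOSTS:
        return False
    return any(host == s or host.endswith("." + s) for s in INCLUDE_SUFFIXES)

assert in_scope("https://www.crawlsense.com/category/shoes")
assert not in_scope("https://dev.crawlsense.com/staging-page")
assert not in_scope("https://partner-site.example/")
```

Running every URL source (crawl exports, GSC, logs) through the same check keeps the scope argument from resurfacing in week 3.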
Scope creep is the single biggest margin killer in audits. A 40-hour audit that expands to 80 hours during delivery destroys the margin. Either enforce the scope or raise the fee; don't eat it.
The five-layer audit framework
The actual audit content follows the layered framework:
- Crawlability — can search engines reach the site's content?
- Indexability — should what's reachable be indexed?
- Rendering — does Googlebot see what users see?
- Performance — does the site pass Core Web Vitals in the field?
- Structure — does ranking signal flow where it should (internal linking, IA, URL hierarchy)?
These match the technical audit framework in the technical SEO pillar, because the technical layer is what a standard audit covers. Content, E-E-A-T, and link building are separate audit types with their own frameworks.
For each layer, the audit produces:
- State: what is the current state of this layer (metric values, coverage data, identified issues)?
- Target: what does the healthy state look like for this site's context?
- Gap: which findings make up the gap between state and target?
- Priority: how much does closing this gap matter vs other gaps?
If a finding can't answer all four questions, it's not audit-worthy. Cut it.
Executive summary that survives the sales-to-CEO pass
Every audit needs a one-page executive summary that the CEO reads and passes to the VP of marketing. It has three parts:
1. The headline (1-2 sentences): what's the one thing we found that matters most? "The site is leaking 40% of its organic traffic potential to faceted URL crawl waste. Fixing this is the single highest-leverage action we recommend." This is the quote that ends up in the Monday leadership meeting.
2. The priorities (3-5 bullets): the top items, each with:
- What it is (one line)
- Estimated impact (dollar value or traffic percentage)
- Estimated effort (engineering days or dollars)
- Timeline to see results
3. The ask (explicit): what decision does the CEO need to make? "Approve ~40 engineering hours to execute the top 3 findings. Expected uplift: 20-30% organic traffic over 6 months."
Nobody reads a 60-page document. Everyone reads a one-pager. The exec summary is 10% of the audit and 90% of whether it gets executed.
Priorities with dollar values, not severity tiers
"High / Medium / Low" priority labels are the #1 reason audits don't get executed. They're lazy and they give decision-makers no basis for choosing.
Replace them with impact-and-effort estimates in concrete units:
| Finding | Impact (annual revenue) | Effort (eng days) | Priority |
|---|---|---|---|
| Noindex on product templates | €120k | 0.5 | P0 — this week |
| Canonical consolidation of color variants | €80k | 3 | P1 — this sprint |
| PLP content depth (top 20 categories) | €60k | 15 | P1 — next 2 sprints |
| INP optimization on filter UI | €30k | 8 | P2 — next quarter |
| Image SEO | €8k | 5 | P3 — when convenient |
The dollar estimates are defensible only when they're grounded. How to ground them:
- Impact: the target keyword's monthly search volume × (the CTR you expect at the target position minus your current CTR) × revenue per click, × 12 for the annual figure (see the worked sketch after this list).
- Effort: talk to engineering. If you can't, ballpark and mark it as an estimate.
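Here is the impact math as a worked sketch. Every input is an illustrative assumption, not a benchmark; the CTR figures in particular should come from the client's own GSC data and the revenue per click from their analytics:

```python
# Back-of-envelope annual impact estimate for one finding.
# All inputs are illustrative assumptions; replace with the client's own data.
monthly_search_volume = 12_000      # target keyword cluster, searches/month
current_ctr = 0.02                  # CTR at the current average position
projected_ctr = 0.06                # expected CTR after the fix lifts position
revenue_per_click = 1.40            # € per organic click, from analytics

incremental_clicks_per_month = monthly_search_volume * (projected_ctr - current_ctr)
annual_impact = incremental_clicks_per_month * revenue_per_click * 12

print(f"~€{annual_impact:,.0f} per year")  # ~€8,064 per year for these inputs
```

Dividing the result by the effort estimate gives a rough impact-per-engineering-day figure, which is a useful sanity check on the P0-P3 ordering in the table above.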
The CEO can now tell you that #3 matters more than #1 because the €120k finding is discretionary (they can run with current PDPs), while the €60k finding is strategic (they're about to launch a new category). Your priority stack adjusts. This is a better outcome than you picking for them.
Execution plan: 30 / 60 / 90 days
The audit ends with a time-boxed execution plan, not a list. Format:
Days 0-30 (this sprint):
- Finding 1 — owner: [Eng Lead Name], acceptance criteria: [measurable], expected completion: [date].
- Finding 2 — ...
Days 30-60 (next sprint):
- Finding 4 — ...
- Finding 5 — ...
Days 60-90 (following sprint):
- Finding 7 — ...
Beyond 90 days (explicitly deferred):
- Finding 10 — why it's important, why we're deferring it, when we should revisit.
The deferral section is as important as the execution section. It tells the client "we see these issues, they matter, we made a deliberate choice to defer them." Without it, issue #10 becomes "why didn't the audit catch this?" in six months.
Acceptance criteria per finding are the clearest signal of execution readiness. "Improve crawl efficiency" is not acceptable; "Reduce Googlebot requests to /search URLs by 90% as measured in weekly log analysis" is.
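That /search criterion is cheap to measure once you have raw access logs. A minimal sketch, assuming a standard combined log format; in a real engagement you would verify Googlebot via reverse DNS or Google's published IP ranges rather than trusting the user agent string alone:

```python
import re

# Count Googlebot requests to /search URLs in an access log (combined format).
LINE = re.compile(
    r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_search_hits(log_path: str) -> tuple[int, int]:
    """Return (Googlebot hits to /search, total Googlebot hits)."""
    search_hits = total = 0
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LINE.search(line)
            if not m or "Googlebot" not in m.group("ua"):
                continue
            total += 1
            if m.group("path").startswith("/search"):
                search_hits += 1
    return search_hits, total

# Compare week over week: the finding is "done" when /search hits drop ~90%.
```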
What to charge (and why one-size flat fees lose on audits)
Agency pricing models for audits, in increasing sophistication:
- Flat fee per audit — simplest, easiest to sell. Destroys margin on complex sites. Usually under-priced for large sites and over-priced for small ones.
- Hourly with cap — better margin protection but clients hate "we went over."
- Tiered by site size — flat fees for predetermined URL count bands. Decent default for mid-market agencies.
- Outcome-based — percentage of organic revenue uplift over N months. Hard to structure but aligns incentives perfectly. Works only when you have execution control.
- Retainer with audit as a component — audit is included in a 6-12 month retainer. Best for agencies with recurring revenue targets.
A reasonable pricing structure for a mid-market agency in 2026:
| Audit type | URL count | Days | Price range |
|---|---|---|---|
| Triage audit | Any | 1-2 | €2,000-€4,000 |
| Standard technical | <50k URLs | 5-7 | €6,000-€12,000 |
| Comprehensive | 50k-500k URLs | 10-15 | €15,000-€30,000 |
| Enterprise | >500k URLs | 20-30+ | €40,000+ |
These are market ranges, not fixed prices — adjust to your market, agency reputation, and client profile. The principle: price by value delivered (traffic-at-stake × expected uplift), not by hours spent.
Handoff: tools, tickets, and the next 90 days
The handoff is when the audit transitions from deliverable to execution — see the audit deliverable and handoff guide for the full packaging playbook. Three formats:
- PDF handoff — traditional, easy to archive, hard to execute from. Works for senior clients who'll re-format findings into their own ticket system.
- Jira/Linear ticket export — one ticket per finding, with acceptance criteria and priority pre-filled. Best for engineering-led clients. Requires upfront work from the agency but triples execution rates.
- Notion/Confluence page — live document that evolves as execution progresses. Great when the agency stays engaged post-audit. Clutter risk if nobody maintains it.
Whatever the format, include the following for each finding (see the export sketch after this list):
- Owner assignment — who on the client side owns each finding? Nobody executes unassigned work.
- Success metric — how we'll measure completion. Not "fix the canonical tags" but "canonical chain depth goes from 3 to 1 as measured in next Screaming Frog crawl."
- Pre-execution check — what should we confirm is true before starting? Catches scope creep and dependencies.
- Rollback plan — for higher-risk fixes, how do we undo if something goes wrong?
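For the ticket-export route, a portable starting point is a CSV with one row per finding and the fields above as columns, which Jira and most trackers can import. A minimal sketch; the findings and column names are illustrative and will likely need mapping in your tracker's import flow:

```python
import csv

# One row per audit finding; findings below are placeholder examples.
findings = [
    {
        "Summary": "Remove noindex from product detail templates",
        "Description": "Acceptance criteria: affected PDPs return an indexable "
                       "status in URL Inspection; indexed PDP count recovers in GSC.",
        "Priority": "P0",
        "Owner": "Client eng lead",
        "Rollback plan": "Re-deploy previous template version",
    },
    {
        "Summary": "Consolidate color-variant canonicals",
        "Description": "Acceptance criteria: variant URLs canonicalize to the "
                       "parent PDP; canonical chain depth is 1 in the next crawl.",
        "Priority": "P1",
        "Owner": "Client eng lead",
        "Rollback plan": "Revert canonical tag change",
    },
]

with open("audit_findings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(findings[0].keys()))
    writer.writeheader()
    writer.writerows(findings)
```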
After delivery, the standard agency cadence: 2-week check-in, 30-day status review, 60-day ranking review, 90-day impact analysis. Even if the client is self-executing, stay engaged. The audits that drive retention are the ones where the agency is visible during execution, not just at delivery.
Frequently asked questions
How long should a technical SEO audit take?
For a typical mid-market site (50k-500k URLs), 5-10 working days is the realistic range. Faster than 5 days cuts into quality. Longer than 10 days usually means scope creep.
Should I use a tool-generated template for my audits?
Tools are great for data collection; bad for deliverable structure. A report that's 80% Screaming Frog screenshots signals low effort. Use tools to gather data, then write the audit from scratch around the client's specific context.
How do I handle client pushback on priorities?
The dollar-value priority framework handles most pushback. If the CEO disagrees on priority, show them the impact/effort numbers and adjust. When they have information you don't (internal roadmap, upcoming launches), they're often right. Stay flexible on the order, not on the findings themselves.
What if I can't get logfile access from the client?
Document it as a limitation in the scope. Some findings you literally cannot validate without logs (crawl distribution, orphan page confirmation). Flag those as "lower-confidence findings, would be confirmed with logfile access."
How often should clients re-audit?
Once a year is standard for a full audit. After a major site change (replatform, IA overhaul, migration), re-audit within 30 days. Between audits, a monthly GSC/Core Web Vitals review catches regressions early.
Should the audit include content and link-building findings?
Separate deliverables. Trying to do everything in one audit blows the scope. If the client wants both, sell them as two audits (or three: technical, content, off-page) with distinct deliverables and priorities.
What to do next
If you deliver audits as PDFs today, pick one recent audit and try to convert it into the ticket format — one ticket per finding, with acceptance criteria. The findings that can't survive that conversion are the ones that wouldn't have been executed anyway. Cut them.
Supporting articles for this cluster: SEO client onboarding, client reporting that retains, SEO audit pricing, the 30/60/90 SEO plan. Pick the pain point that costs your agency the most in lost retention and start there.
Related articles
Migrating from Manual to Automated SEO Monitoring
Weekly manual SEO checks catch problems 3-7 days after they happen. Automated monitoring catches them in minutes. The migration from manual to automated isn't about replacing judgment — it's about catching regressions before they compound.
Running an SEO Audit in 2 Hours (The Triage Framework)
A full SEO audit takes weeks. A triage audit takes 2 hours and catches the issues that would otherwise lose another month of rankings while the full audit runs. Here's the structured framework that scales across sites and verticals.
Agency KPIs That Matter: Retention, MRR, Client LTV
Agency metrics that look good (new client count, revenue growth) can mask declining business health. The metrics that actually predict agency longevity are unglamorous: retention, client LTV, and the underlying engagement quality.