Scaling SEO Audits: Templates + Automation

The audit is half data gathering (automate this) and half judgment (don't)

Enric Ramos · 7 min read

A typical SEO audit is 50% data gathering and 50% interpretation. The data gathering is mechanical — crawl the site, export data, join with external tools, compile tables. The interpretation is judgment — which of the 147 findings matter, what's the priority, how to frame the recommendations.

Most agencies do both by hand. Automating the data half cuts audit time in half and frees agency attention for the judgment work that actually delivers value. This article covers the audit template structure, the automation patterns that work, and the parts you should refuse to automate.

The two-layer audit structure

Layer 1: data collection (automate this).

Layer 2: judgment + synthesis (don't automate this).

The template structure separates the two. Data layer produces consistent outputs; judgment layer adapts per client.

Layer 1 deliverables:

  • Full URL crawl (Screaming Frog / Sitebulb export).
  • 12-month GSC data (queries, pages, impressions, clicks).
  • 12-month GA4 data (organic traffic, conversions, page-level engagement).
  • Backlink audit (Ahrefs / Semrush).
  • CWV field data (CrUX / PageSpeed Insights).
  • Log file sample (if available).
  • Competitor analysis (top 5 competitors' rankings + content).

Layer 2 deliverables:

  • Findings prioritized by impact × effort.
  • Priorities grouped into 30/60/90-day execution plan.
  • Dollar value estimates per priority.
  • Executive summary with 3-5 key recommendations.
  • Deliverable format adapted to client's execution style (Jira tickets vs PDF vs slides).

Layer 1 can be 80% templated. Layer 2 is 100% judgment — but the judgment is faster when the data is pre-structured.

The template structure

Standard audit template I use:

Section 1: Executive summary (1 page)

  • Headline finding: the single biggest opportunity or risk.
  • Top 3-5 priorities with expected impact and effort.
  • Recommended next actions with owner assignments.
  • Project scope summary.

Section 2: Data baseline (2-3 pages)

  • Current organic traffic, conversions, revenue.
  • Baseline rankings, top queries, top pages.
  • Competitive position.

Section 3: Crawlability (2-3 pages)

  • Site structure overview.
  • Crawl stats, crawl budget indicators.
  • Indexation state (indexed, excluded, with reasons).
  • Robots.txt + sitemap status.

Section 4: Indexability (2-3 pages)

  • Indexation rate.
  • Duplicate content issues.
  • Canonical issues.
  • Thin / low-value pages.

Section 5: Content analysis (3-5 pages)

  • Content coverage gaps.
  • Cannibalization findings.
  • Freshness / pruning candidates.
  • Top-performing content patterns.

Section 6: Technical performance (2-3 pages)

  • Core Web Vitals at origin + URL level.
  • Rendering diagnostic (JS-heavy sites).
  • CDN / caching status.

Section 7: Off-page (2-3 pages)

  • Backlink profile summary.
  • Toxic / low-quality link audit.
  • Competitor gap analysis.
Section 8: Prioritized recommendations (3-5 pages)

  • 10-15 recommendations with priority, impact, effort, owner.
  • 30/60/90-day execution plan.

Section 9: Appendix

  • Full crawl data.
  • Full keyword lists.
  • Tool screenshots for verification.

Total: 20-30 pages of audit. Consistent structure across clients; custom content per client.

What to automate

1. The data pulls

Automate data extraction via APIs:

  • GSC API: queries, pages, impressions, clicks, CTR, position.
  • GA4 API: sessions, conversions, engagement.
  • Ahrefs / Semrush API: keyword positions, backlink data, competitor data.
  • PageSpeed Insights API: CWV field data.
  • Screaming Frog CLI: crawl export.

Combine into a single audit-data script that pulls everything for a given domain and outputs structured CSVs or JSON.
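The shape of that script matters less than the pattern: one entry point, one fetcher per source, one structured file per source. A minimal sketch in Python, with stub fetchers standing in for real API clients (the fetcher names and return shapes here are illustrative, not any tool's actual API):

```python
import json
import tempfile
from pathlib import Path

def collect_audit_data(domain, fetchers, out_dir):
    """Run each data fetcher for the domain and write one JSON file per source.

    `fetchers` maps a source name (e.g. "gsc", "psi") to a callable that takes
    the domain and returns JSON-serializable data. In a real run, each callable
    would wrap the GSC, GA4, Ahrefs/Semrush, or PageSpeed Insights API.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    results = {}
    for name, fetch in fetchers.items():
        data = fetch(domain)
        (out / f"{name}.json").write_text(json.dumps(data, indent=2))
        results[name] = data
    return results

# Stub fetchers so the sketch runs without credentials:
stubs = {
    "gsc": lambda d: {"domain": d, "queries": []},
    "psi": lambda d: {"cwv": {"LCP": 2.4}},
}
data = collect_audit_data("example.com", stubs, out_dir=tempfile.mkdtemp())
```

Swapping a stub for a real API client doesn't change the orchestrator, which is what makes the script reusable across clients.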

2. Standard calculations

Automate derivations:

  • Indexation rate (indexed / sitemap URLs).
  • Traffic share by URL (what share of traffic does the top 20% of URLs capture?).
  • Cannibalization detection (multiple URLs ranking for same query).
  • Orphan URLs (in crawl, not internally linked).
  • Redirect chain count and depth.

Output: a report of pre-calculated facts. Not interpretations — facts.
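Each of these derivations is a small pure function over the exported data. A sketch of three of them (input shapes are my assumption, not a fixed schema):

```python
from collections import defaultdict

def indexation_rate(indexed_urls, sitemap_urls):
    """Indexed URLs as a share of sitemap URLs."""
    return len(set(indexed_urls) & set(sitemap_urls)) / len(set(sitemap_urls))

def traffic_share(url_clicks, top_fraction=0.2):
    """Share of total clicks captured by the top `top_fraction` of URLs.

    `url_clicks` maps URL -> clicks (e.g. from the GSC export).
    """
    clicks = sorted(url_clicks.values(), reverse=True)
    top_n = max(1, int(len(clicks) * top_fraction))
    return sum(clicks[:top_n]) / sum(clicks)

def cannibalization(rows):
    """Queries where more than one URL ranks. `rows` is (query, url) pairs."""
    by_query = defaultdict(set)
    for query, url in rows:
        by_query[query].add(url)
    return {q: urls for q, urls in by_query.items() if len(urls) > 1}
```

These emit numbers, not opinions; whether a 62% indexation rate is a problem is Layer 2 work.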

3. Issue flags

Automate: given the data, flag anomalies.

  • "47 URLs return 404 but are in sitemap."
  • "23 URLs have conflicting canonical + noindex."
  • "Page X has 12,000 inbound links but no outbound links to product pages."

Flag lists, not recommendations. Human interprets what to do.
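A flagger is just a set of checks over the crawl data that emit factual strings. A sketch covering the first two examples above (the `pages` record fields are illustrative, not any crawler's export format):

```python
def flag_issues(pages, sitemap_urls):
    """Return human-readable flag strings from crawl data — facts, not advice.

    `pages` is a list of dicts with keys like url, status, noindex, canonical.
    """
    flags = []
    sitemap = set(sitemap_urls)

    dead_in_sitemap = [p for p in pages
                       if p["status"] == 404 and p["url"] in sitemap]
    if dead_in_sitemap:
        flags.append(f"{len(dead_in_sitemap)} URLs return 404 but are in the sitemap")

    conflicting = [p for p in pages
                   if p.get("noindex") and p.get("canonical") not in (None, p["url"])]
    if conflicting:
        flags.append(f"{len(conflicting)} URLs combine noindex with a canonical to another URL")

    return flags
```

Each flag is a one-line fact with a count; the human decides which flags become recommendations.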

4. Tables and charts

Pre-generate the common tables and charts:

  • Traffic trend line.
  • Top 20 queries by clicks.
  • Top 20 URLs by impressions.
  • Rankings distribution.

Insert into the template's appropriate section automatically.
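All the "top N by metric" tables share one generator. A sketch that renders a markdown table (the column names are illustrative; plug in whatever the export uses):

```python
def top_n_table(rows, columns, sort_by, n=20):
    """Render a top-N markdown table, e.g. top 20 queries by clicks.

    `rows` is a list of dicts; `columns` the keys to display;
    `sort_by` the key to rank by, descending.
    """
    ranked = sorted(rows, key=lambda r: r[sort_by], reverse=True)[:n]
    lines = ["| " + " | ".join(columns) + " |",
             "| " + " | ".join("---" for _ in columns) + " |"]
    for r in ranked:
        lines.append("| " + " | ".join(str(r[c]) for c in columns) + " |")
    return "\n".join(lines)

table = top_n_table(
    [{"query": "seo audit", "clicks": 900}, {"query": "seo tools", "clicks": 1200}],
    columns=["query", "clicks"], sort_by="clicks")
```

One function, four tables: queries by clicks, URLs by impressions, and so on, each dropped into its template section.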

What NOT to automate

1. Prioritization

A list of 147 findings can't be prioritized by rule. Some factors:

  • What's the client's strategic focus this quarter?
  • What engineering capacity do they have?
  • Which fixes require platform access they don't have?
  • What's the opportunity cost of each priority?

Judgment. Automation produces generic priorities that miss context.

2. Recommendations

A crawl finds 5,000 missing meta descriptions. Automation might recommend "add meta descriptions." A human knows: (a) most of those pages are low-priority, (b) Google rewrites ~70% anyway, (c) template-generated descriptions are often worse than Google's rewrites. Don't bother.

3. Framing

Narrative framing — why this matters, what the client should care about, how it fits strategy — is craft. Automation produces generic framing.

4. The executive summary

One-page. Critical. Must feel hand-crafted to the specific client. Don't template.

The workflow

Typical audit workflow for a mid-size client:

Day 0: scope + access.

Client intake form auto-generates template structure based on site characteristics (ecommerce vs blog, size, tech stack).
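The intake-to-template mapping can be a simple rule set. A sketch under my own assumptions about the intake fields (`type`, `js_heavy`) and variant sections — the specific section names are illustrative:

```python
def build_template_sections(site):
    """Pick audit template sections from intake-form characteristics.

    `site` keys (illustrative): type ("ecommerce"/"saas"/"publisher"),
    js_heavy (bool).
    """
    sections = ["Executive summary", "Data baseline", "Crawlability",
                "Indexability", "Content analysis", "Technical performance",
                "Off-page", "Prioritized recommendations", "Appendix"]
    if site.get("js_heavy"):
        # JS-heavy sites get a dedicated rendering section after the tech one.
        sections.insert(sections.index("Technical performance") + 1,
                        "Rendering diagnostic")
    if site.get("type") == "ecommerce":
        # Hypothetical ecommerce-only section, placed before content analysis.
        sections.insert(sections.index("Content analysis"), "Faceted navigation")
    return sections
```

The point is that the skeleton is deterministic from the intake form, so Day 1 automation knows exactly which sections to fill.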

Day 1: data automation runs.

Automated scripts pull all the data for Sections 2-7. Output: structured data + pre-filled tables.

Days 2-3: interpretation + judgment.

Senior SEO reviews the automated output. Adds commentary, prioritization, recommendations. This is the work that can't be automated — but it's faster because data is pre-structured.

Days 4-5: executive summary + delivery.

Hand-craft the exec summary. Review end-to-end. Deliver.

Total: 5 days for an audit that traditionally takes 10-15. Quality up (less data-entry error, more time for judgment), cost down.

The template lifecycle

Templates aren't set-and-forget. They need maintenance:

  • Quarterly: review templates for sections that no longer apply or new sections needed.
  • After algorithm updates: add sections for new signals (e.g., when Google announced CWV, templates needed updates).
  • Per industry: ecommerce template differs from SaaS template differs from publisher template. Maintain variants.

Most agencies treat their audit template as gospel. Good ones treat it as a living document.

Common mistakes

Over-automating the judgment. Tools that "auto-generate your audit" produce findings-list dumps without prioritization. Clients see the auto-generated pattern and discount the work.

Under-automating the data. Hand-typing Screaming Frog stats into a PDF is expensive time on mechanical work. Automate the mechanical.

Generic templates that feel generic. If the audit could be for any site with only names changed, the client feels it. Even when template-driven, each audit needs specificity.

Not updating templates as tools change. Template still references GSC's old URL Parameters tool (deprecated 2022). Signals outdated methodology.

Templates that don't match deliverable format. Template is 30-page PDF; client wants Jira tickets. Adapt the template to the deliverable — the audit handoff guide covers the packaging choices that drive execution.

Frequently asked questions

How long should an audit take?

For a mid-size site: 5-10 days of work compressed into 2-4 weeks calendar time. Larger sites: 10-20 days. Faster than 5 days on serious sites compromises quality; longer than 4 weeks loses client engagement.

Can I charge less for automated audits?

Price on value delivered (traffic at stake × expected lift), not on hours. Automation reduces your cost, but the client's ROI on the audit doesn't change — they still get the same findings and lift. Reasonable margin is fine.
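The "traffic at stake × expected lift" framing reduces to back-of-envelope arithmetic. A sketch with made-up numbers, purely to show the anchor calculation:

```python
def audit_value(monthly_sessions, conversion_rate, value_per_conversion,
                expected_lift):
    """Annual revenue at stake from the expected lift — a rough anchor for
    value-based pricing, not a forecast."""
    monthly_value = monthly_sessions * conversion_rate * value_per_conversion
    return monthly_value * expected_lift * 12

# Illustrative inputs: 50k organic sessions/mo, 2% CVR, $80/conversion, 15% lift.
value = audit_value(50_000, 0.02, 80, 0.15)  # → 144000.0
```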

What if a finding requires client-side engineering capacity they don't have?

Document it as a finding with a note: "Requires engineering resource from your team." Offer to help scope the work or find an implementation partner. Don't pretend it's not a finding; don't pretend you can execute it alone.

Should audits be standalone or part of retainers?

Standalone for first engagement (clean scope, definite deliverable). Follow-up audits integrated into retainers (quarterly technical audit as part of ongoing work). Standalone audits that exist mainly to upsell into a retainer afterward feel transactional.

How do I handle clients who want the 200-item audit?

Educate. Show them data from agencies delivering 10-15 item audits vs 200-item audits — execution rates differ 5x. Most clients accept once they see the pattern. For those who insist, deliver the 200-item list but prominently highlight the 10-15 that actually matter.

Related articles


Migrating from Manual to Automated SEO Monitoring

Weekly manual SEO checks catch problems 3-7 days after they happen. Automated monitoring catches them in minutes. The migration from manual to automated isn't about replacing judgment — it's about catching regressions before they compound.
