Running an SEO Audit in 2 Hours (The Triage Framework)

The structured approach that catches 80% of ranking-moving issues in the first two hours

Enric Ramos · 7 min read

A full SEO audit takes 5-15 days of work spread over 2-4 weeks. In that time, a ranking-moving regression can accumulate silently. A 2-hour triage audit catches the bulk of urgent issues on day one, letting you ship fixes before the full audit completes.

The triage audit is also the go-to for: onboarding new clients, quarterly pulse checks, post-major-change regression checks, and pitch-phase due diligence on prospective client sites. This article covers the structured 2-hour framework that holds up across site types.

The 2-hour framework

Seven steps, 10-30 minutes each:

  1. GSC Crawl Stats + Page Indexing (20 min).
  2. Log file sample (30 min — can run during other steps).
  3. Render spot-check on key templates (15 min).
  4. Core Web Vitals field data (15 min).
  5. Sitemap + robots.txt sanity check (10 min).
  6. Structured data spot-check (15 min).
  7. Write up findings (15 min).

Total: 2 hours including the log analysis that runs in parallel.

Step 1: GSC Crawl Stats + Page Indexing (20 minutes)

Open Google Search Console for the client's property. Three reports:

Crawl Stats (Settings → Crawling → Crawl Stats):

  • What's the average Googlebot request rate over the last 90 days?
  • Any sudden drops or spikes?
  • Response code distribution: is 5xx rate under 1%?
  • Average response time under 1 second?

Flag: rate drops 30%+ sustained, 5xx rate climbing, response time exceeding 1.5s at p75.
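The flag thresholds above can be encoded as a small checker. A sketch only: GSC doesn't export these numbers in one payload, so the parameter names here are assumptions about values you'd read off the Crawl Stats report by hand.

```python
def crawl_stat_flags(prev_rate, curr_rate, error_5xx_rate, p75_response_ms):
    """Evaluate Crawl Stats numbers against the triage thresholds.

    prev_rate / curr_rate: avg Googlebot requests per day, earlier vs
    recent window (values read manually from the Crawl Stats chart).
    """
    flags = []
    # Sustained 30%+ drop in crawl rate
    if prev_rate > 0 and (prev_rate - curr_rate) / prev_rate >= 0.30:
        flags.append("crawl rate dropped 30%+")
    # 5xx share of responses above 1%
    if error_5xx_rate > 0.01:
        flags.append("5xx rate above 1%")
    # p75 response time above 1.5 seconds
    if p75_response_ms > 1500:
        flags.append("p75 response time above 1.5s")
    return flags
```

Anything this returns goes straight into the red/yellow section of the Step 7 write-up.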

Page Indexing (→ Indexing → Page indexing):

  • How many indexed URLs? Trend over 3 months?
  • What's the size of "Crawled - currently not indexed"? If it exceeds 10% of indexed URLs, treat it as a quality warning sign.
  • What's in "Excluded" categories? Any surprises (e.g., massive "Excluded by 'noindex' tag" number on pages you didn't intend to noindex)?

Flag: indexed URL count dropping, "Crawled - not indexed" growing faster than new content publishing.

Top pages/queries impression trend:

  • Scan the top 20 pages by impressions, comparing the last 28 days to the previous 28 days.
  • Which pages lost impressions? Which gained?

Flag: unexpected drops on traffic-driving pages.
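The 28-day comparison is quick to script once you've exported the two windows (e.g. as CSVs from the Performance report). A minimal sketch, assuming you've already loaded each window into a page → impressions dict:

```python
def impression_deltas(current, previous, drop_threshold=0.25):
    """Compare page->impressions dicts for two 28-day windows.

    Returns (page, previous, current) tuples for pages whose impressions
    fell by drop_threshold or more, biggest absolute losers first.
    """
    losers = []
    for page, prev in previous.items():
        curr = current.get(page, 0)  # page may have vanished entirely
        if prev > 0 and (prev - curr) / prev >= drop_threshold:
            losers.append((page, prev, curr))
    return sorted(losers, key=lambda t: t[1] - t[2], reverse=True)
```

The 25% default threshold is an assumption; tune it to the site's normal impression volatility.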

Step 2: Log file sample (parallel — 30 min)

While working on other steps, request a 7-day log sample from the client's hosting.

Once received (might be immediate or next day):

  • Filter to verified Googlebot (reverse DNS or official IP list).
  • Bucket by URL prefix.
  • Count requests per prefix per day.

Flag patterns:

  • 30%+ of Googlebot requests to non-indexable URLs (search endpoints, faceted URLs, parameter noise).
  • URLs you expected to be crawled but aren't hit at all.
  • 5xx spikes to Googlebot specifically.

Covered in depth in log file analysis.

If log access isn't immediate, note as "pending" and work around.
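The bucketing in Step 2 reduces to a few lines once the log lines are parsed. A sketch: it assumes you've already filtered to verified Googlebot (reverse DNS resolving to .googlebot.com/.google.com, or Google's published IP ranges — that part needs network access, so it's out of scope here) and parsed each hit into a (date, path) pair.

```python
from collections import Counter

def bucket_googlebot_hits(hits, depth=1):
    """Count verified-Googlebot requests per URL prefix per day.

    hits: iterable of (date, path) pairs, pre-filtered to Googlebot.
    depth: how many path segments form the prefix bucket.
    """
    counts = Counter()
    for date, path in hits:
        # Strip query string, then bucket by the first `depth` segments
        segments = [s for s in path.split("?")[0].split("/") if s]
        prefix = "/" + "/".join(segments[:depth])
        counts[(date, prefix)] += 1
    return counts
```

Sorting the resulting Counter makes the "30%+ of requests to non-indexable URLs" pattern obvious at a glance.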

Step 3: Render spot-check on key templates (15 minutes)

Open the GSC URL Inspection tool. For each major template (homepage, category, product, article), pick one representative URL and:

  • Click "Test Live URL."
  • Check the rendered HTML: is primary content present? H1? Meta tags? Schema?
  • Check the screenshot: looks right?
  • Check the "More Info" tab for JavaScript errors or resource load failures.

Flag: major content only visible in the render pass (not in pass 1 HTML) — see JavaScript SEO. Resource load failures on critical CSS/JS.
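A quick way to pre-screen for the pass-1 problem before opening URL Inspection: fetch the raw HTML (curl, no JS) and check whether the basics are present. A crude string-level sketch, not a DOM parser, and the three checks are illustrative choices:

```python
import re

def pass1_render_check(raw_html):
    """Spot-check pass-1 (unrendered) HTML for Step 3's basics.

    A False value suggests the element only appears after JavaScript
    rendering (or is missing entirely) -- worth a Test Live URL look.
    """
    html = raw_html.lower()
    return {
        "h1": "<h1" in html,
        "meta_description": bool(
            re.search(r'<meta[^>]+name=["\']description', html)),
        "json_ld": "application/ld+json" in html,
    }
```

Any False here isn't proof of a problem (Google does render JS) but it tells you which templates deserve the live render test first.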

Step 4: Core Web Vitals field data (15 minutes)

Run PageSpeed Insights for:

  • Homepage.
  • One representative category/listing page.
  • One representative product/article page.

Check CrUX field data (the "Real-world data" section at top):

  • LCP, INP, CLS status: Good / Needs Improvement / Poor.
  • Mobile + desktop separately.

Flag: "Poor" status on any metric for key templates.

Also check GSC → Core Web Vitals report for URL-group-level status and scale of affected URLs.
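If you pull p75 field values yourself (e.g. from the CrUX API), the Good / Needs Improvement / Poor classification uses Google's published thresholds: LCP 2.5s/4.0s, INP 200ms/500ms, CLS 0.1/0.25.

```python
# Google's published Core Web Vitals thresholds:
# (upper bound of "Good", lower bound of "Poor")
THRESHOLDS = {
    "LCP": (2500, 4000),   # milliseconds
    "INP": (200, 500),     # milliseconds
    "CLS": (0.1, 0.25),    # unitless layout-shift score
}

def cwv_status(metric, p75_value):
    """Classify a p75 field value as Good / Needs Improvement / Poor."""
    good, poor = THRESHOLDS[metric]
    if p75_value <= good:
        return "Good"
    if p75_value <= poor:
        return "Needs Improvement"
    return "Poor"
```

This matches what PageSpeed Insights shows in the "Real-world data" section, so scripting it only pays off when checking many templates.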

Step 5: Sitemap + robots.txt sanity (10 minutes)

robots.txt:

  • Open https://client.com/robots.txt.
  • Check for obvious mistakes: Disallow: / on production? Blocked JS/CSS paths? Contradictory rules?
  • Verify Sitemap: references.
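The robots.txt checks can be scripted with the standard library's robots.txt parser. A sketch; the list of "critical paths" is whatever matters for this site (key templates, JS/CSS paths):

```python
from urllib import robotparser

def robots_sanity(robots_txt, critical_paths, base="https://client.com"):
    """Return the critical paths that robots.txt blocks for Googlebot.

    robots_txt: the fetched robots.txt body as a string.
    A non-empty result is a red flag (e.g. Disallow: / on production).
    """
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [p for p in critical_paths
            if not rp.can_fetch("Googlebot", base + p)]
```

On Python 3.8+, `rp.site_maps()` also returns the `Sitemap:` references, which feeds directly into the next check.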

Sitemap:

  • Open the sitemap URL.
  • Is it valid XML? Returns 200?
  • How many URLs? Matches expectations?
  • Sample 10 URLs: are they all canonical, indexable, 200-OK?

Flag: robots.txt blocking critical paths, sitemap not updating, sitemap URLs returning 404/redirect.
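Parsing the sitemap and picking the 10-URL sample is also scriptable. A sketch for a plain urlset sitemap (sitemap index files would need one more level of recursion); the status/canonical checks on the sampled URLs still happen by hand or with your crawler:

```python
import random
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Parse a urlset sitemap, returning its <loc> URLs.
    Raises xml.etree.ElementTree.ParseError if the XML is invalid --
    which is itself a triage finding.
    """
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

def sample_urls(urls, n=10, seed=0):
    """Pick up to n URLs to check by hand (canonical, indexable, 200)."""
    return random.Random(seed).sample(urls, min(n, len(urls)))
```

The fixed seed keeps the sample reproducible between the triage and the follow-up check after fixes ship.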

Step 6: Structured data spot-check (15 minutes)

Run Rich Results Test on:

  • Product page (should have Product schema).
  • Article page (should have Article schema).
  • Homepage (Organization schema).

Check validity + eligibility for rich results.

Also check GSC → Enhancements reports:

  • How many URLs valid per enhancement type?
  • Any recent spikes in "Error" or "Warning" counts?

Flag: broken schema, missing schema on expected templates, recent error spikes.
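The Rich Results Test stays the source of truth for eligibility, but a quick script can confirm which @type values each template actually emits. A crude regex-based sketch (fine for a spot-check, not a real HTML parser):

```python
import json
import re

def jsonld_types(html):
    """Extract schema.org @type values from JSON-LD blocks in a page."""
    types = set()
    pattern = r'<script[^>]+application/ld\+json[^>]*>(.*?)</script>'
    for block in re.findall(pattern, html, re.DOTALL | re.IGNORECASE):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            types.add("INVALID_JSON")  # broken schema: flag it
            continue
        items = data if isinstance(data, list) else [data]
        for item in items:
            t = item.get("@type")
            if t:
                types.add(t)
    return types
```

Run it on one URL per template and compare against the expected types (Product, Article, Organization); a missing type or INVALID_JSON goes in the write-up.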

Step 7: Write up findings (15 minutes)

One page. Structure:

Red flags (if any): issues requiring immediate fix. Usually 0-2 items.

Yellow flags: issues needing attention within the month. Usually 2-5 items.

Green: areas that look fine (briefly).

Next steps: specific actions, owners, timelines.

What the full audit will cover: specifics beyond this triage, scoped.

Deliver via email or Slack within the same day.
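If you run triage regularly, a tiny template keeps the write-up consistent. A partial sketch covering the first four sections (the "next steps" strings would carry owner and timeline; "what the full audit will cover" stays free-form):

```python
def triage_writeup(red, yellow, green, next_steps):
    """Render the one-page triage write-up as Markdown-style text."""
    def section(title, items, empty="None."):
        body = "\n".join(f"- {i}" for i in items) if items else empty
        return f"## {title}\n{body}"
    return "\n\n".join([
        section("Red flags", red, "None found."),
        section("Yellow flags", yellow),
        section("Green", green),
        section("Next steps", next_steps),
    ])
```

Feeding it the outputs of the earlier checks turns the last 15 minutes into editing rather than drafting.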

Variations for specific triage cases

Post-deploy triage

After a major site change (migration, replatform, redesign):

  • Prioritize Step 1 (index counts — did indexation hold?) and Step 2 (crawl behavior — any new patterns?).
  • Skip Steps 4-6 (CWV, sitemaps, and schema warrant their own post-deploy investigation).
  • Focus on regressions vs pre-deploy baseline.

Pitch-phase prospect audit

Auditing a prospect's site as part of sales:

  • Use public tools only (no GSC access).
  • Crawl via Screaming Frog (limited, but works).
  • Identify 5-10 concrete issues — these become the sales pitch: "here's what we'd address in engagement."
  • 30-60 minutes rather than 2 hours.

Quarterly pulse check

For existing clients:

  • Compare to last quarter's baseline.
  • Focus on Steps 1 (indexation), 4 (CWV), and 7 (write up only the deltas).
  • Often 60 minutes is sufficient if continuous monitoring runs in parallel.

What you don't do in a triage audit

Explicitly not in scope:

  • Full content audit.
  • Backlink profile analysis.
  • Competitive positioning.
  • Keyword research.
  • Full site architecture review.
  • Schema validation on every page.
  • Detailed technical implementation review.

All of those are full-audit scope. Triage catches the bleed; full audit diagnoses and treats.

Common mistakes

Extending triage into full audit accidentally. 2 hours becomes 4 becomes 8 as you go deeper into one issue. Discipline: if you find something concerning, note it for the full audit; don't dive in now.

Skipping the write-up. The value of the triage is in the delivered findings, not in the work. 15 minutes to write up clearly beats another 15 minutes of investigation.

Assuming no access means no triage. Without GSC access, triage can still surface issues via public tools (Screaming Frog, site: operators, PageSpeed Insights). Set expectations that the output is less certain.

Not flagging what you couldn't check. If log access wasn't available, say so in the write-up. "Could not assess crawl behavior; recommended for full audit."

Treating triage as one-time. Integrate into monthly or quarterly cadence. Running a triage every 90 days catches regressions before they compound.

Frequently asked questions

Can I charge for a triage audit separately from a retainer?

Yes. Typical range: €1,500-4,000 for a standalone triage audit with write-up. Often delivered as the entry offer that leads to fuller engagement.

Is 2 hours really enough?

For the triage purpose — identifying ranking-moving issues that need attention — yes. For everything else (content strategy, backlink building, cluster planning), you still need more time. Don't conflate triage with full audit.

What if I find a major issue in triage?

Document in the write-up with recommended fix. If urgent (Disallow: / in robots.txt, noindex on major template), notify client immediately (phone or Slack), not just email.

Do I need to re-run triage after fixing issues?

Monitor. Not necessarily a full re-triage, but confirm via GSC that the fix took effect (recrawl + reindex). Sometimes a full re-triage after 2-3 weeks is warranted for major fixes.

Can I automate triage?

Partially. Data gathering (GSC export, sitemap check, CWV fetch) automates cleanly. The judgment (what's a red flag for this site?) doesn't automate well. Build internal tooling for the data collection; keep the judgment work human.

Related articles


Migrating from Manual to Automated SEO Monitoring

Weekly manual SEO checks catch problems 3-7 days after they happen. Automated monitoring catches them in minutes. The migration from manual to automated isn't about replacing judgment — it's about catching regressions before they compound.

7 min read