Calculating ROI for SEO: Three Models, One You Should Use

Why share-of-voice-weighted ROI survives audit when the other two don't

Enric Ramos · 14 min read

The first time a CFO asks for SEO ROI, you have about ninety seconds before they decide whether to keep funding you. Most SEO teams answer with cost-per-organic-session — fast to calculate, easy to format — and watch the CFO frown. The number is technically correct and strategically useless. It says nothing about revenue, nothing about the counterfactual (what we would have earned without SEO), and nothing about how the moat changes if you stop investing today.

There are three serious models for calculating SEO ROI, and they answer increasingly hard questions. The cost-per-organic-session model answers "how efficient is our SEO operation?" The attributed-revenue-per-content-piece model answers "which content generates revenue?" The share-of-voice-weighted ROI model answers "what is our SEO investment actually worth to the business?" The first two are incomplete. The third is the only one that survives a serious finance audit, because it handles the counterfactual that the first two ignore.

This article walks through all three models with realistic numbers, names the failure modes, and gives you the math to defend the third in a board meeting. The goal is not to make SEO sound good. The goal is to give you a defensible number when the CFO asks.

Why the first two models fall short

Before getting to the recommended model, it's worth understanding why the alternatives fail. They fail differently, and the patterns matter.

Cost-per-organic-session is the metric SEO agencies love to quote because it's flattering: total SEO cost divided by organic sessions in a period. A site spending $20,000/month on SEO and getting 200,000 organic sessions has a cost-per-organic-session of $0.10. It sounds efficient compared to paid search at $1.50 CPC. The CFO asks one question — "what's the conversion rate of an organic session vs. a paid one?" — and the model breaks. Organic sessions and paid sessions don't convert at the same rate, don't have the same intent, and don't have the same downstream lifetime value. Comparing them on cost-per-session is comparing apples to a different fruit.

Attributed-revenue-per-content-piece is one step better. You take revenue attributed to organic, allocate it across the content pieces that earned the traffic, and divide by the cost of producing each piece. The model produces a per-page ROI you can rank by. The failure mode is that attribution is fragile in three places. GA4's attribution model choice (last-non-direct, data-driven, position-based) changes the per-page numbers significantly. Multi-touch journeys mean a single conversion gets divided across many touches, and the math depends on touch-counting rules that aren't consistent across tools. Brand-versus-non-brand isolation is hard, and most attribution models credit brand-search organic clicks even though brand demand was created by other channels.

The deeper problem with both models is that they ignore counterfactuals. They tell you what your SEO investment produced; they don't tell you what would have happened without it. SEO is a moat-building activity, and the value of a moat is what you would have lost without it, not just what you earned with it. The third model is built around this question.

Model 1: Cost-per-organic-session

Even though I just argued against using this as your primary ROI metric, it has its place — it's the right operational efficiency check, just not the right strategic ROI number. Knowing how to compute it correctly matters for sanity-checking the other models.

The formula is simple. Total SEO cost in a period, divided by organic sessions in the same period. The hard part is "total SEO cost." Most teams undercount.

A correct cost stack includes content production (in-house writer salaries fully loaded, freelance fees, editor time, designer time for content visuals), SEO tools (rank trackers, paid tools built on Search Console data, content optimization platforms, log analyzers), agency or contractor fees, allocated engineering time for technical SEO work (this is the line most teams omit), and a portion of platform costs (CDN, hosting) attributable to SEO traffic. A reasonable rule of thumb: if your published "SEO budget" is $X, true cost is usually $1.4X to $1.7X.

A worked example. A mid-sized B2B SaaS with $30,000/month in published SEO costs (3 in-house writers at $7,000 fully loaded, $5,000 in tools, $4,000 in agency support) probably has $42,000-$50,000 in true cost when you fold in 15-20 hours/month of engineering time, 10 hours/month of design time, and the platform allocation. Their organic sessions average 180,000/month. True cost-per-organic-session: $0.23 to $0.28.
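A minimal sketch of that cost stack in Python. The first three line items come from the worked example; the last three are illustrative allocations I've chosen to land at the example's $42,000 lower bound, not figures from the article:

```python
# Monthly SEO cost stack. The last three line items are illustrative
# allocations for the costs most teams omit.
cost_stack = {
    "writers_fully_loaded": 21_000,  # 3 in-house writers x $7,000
    "tools": 5_000,
    "agency_support": 4_000,
    "engineering_time": 8_000,       # ~15-20 hrs/month, fully loaded
    "design_time": 2_500,            # ~10 hrs/month
    "platform_allocation": 1_500,    # CDN/hosting share of SEO traffic
}

true_cost = sum(cost_stack.values())            # 42,000
organic_sessions = 180_000
cost_per_session = true_cost / organic_sessions
print(f"true cost: ${true_cost:,}/month, ${cost_per_session:.2f}/session")
```

The useful habit is keeping the stack as named line items rather than one budget number, so the "undercounting" conversation with finance is a diff of line items, not a debate.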

The number tells you nothing about ROI. It tells you whether your SEO operation is efficient relative to itself over time — if cost-per-session creeps up quarter over quarter without traffic growth, your operation is degrading. That's a useful internal diagnostic. It's not what the CFO is asking.

Model 2: Attributed-revenue-per-content-piece

This model is harder to compute and produces more strategically useful data than the first model. The output is a per-page (or per-cluster) revenue attribution number you can rank by, which informs investment decisions.

The formula: for each landing page that produces organic traffic, divide attributed revenue from that page's organic sessions by the fully-loaded cost of producing and maintaining the page over its lifetime.

The cost side is the easier half. Each piece of content has a discoverable cost: the writer time to produce it, the editor time to review it, any external research or data costs, and an amortized share of refresh costs over the page's life. A piece that took 12 hours to produce at a $75/hour fully-loaded rate, plus 2 hours of editing at $100/hour, has a $1,100 base cost. If it gets refreshed twice over its life at $400 per refresh, lifetime cost is $1,900.

The revenue side is the harder half, and it's where the model breaks under audit pressure. You need an attribution model decision that's both defensible and consistent. GA4's data-driven attribution is the easiest defensible choice, but it changes per-page numbers month over month as the model retrains. Last-non-direct is more stable and arguably less accurate. Position-based (40-20-40) is a reasonable middle ground for SEO specifically. See attribution models for SEO for the full comparison and recommendation.

A worked example. A 2,000-word evergreen guide cost $1,400 to produce. Over 24 months it generates 96,000 organic sessions, with a 2.1% conversion rate to a $300 LTV customer. Last-non-direct attribution credits 80% of those conversions to organic. Attributed revenue: 96,000 × 0.021 × 0.80 × $300 = $483,840. Lifetime ROI: 345x. Per-month ROI: 14.4x. The number sounds enormous, and that's the warning. Real numbers in real audits look more modest, because the attribution assumptions get questioned and trimmed.

The audit pressure on this model surfaces in three places. The CFO asks why we attribute 80% to last-non-direct organic when paid search and email touch 60% of those journeys; the number gets re-cut on multi-touch and drops by 30-50%. The CFO asks how we separate brand from non-brand organic; we re-cut and learn 60% of "organic" was brand search, which is largely demand other channels created. The CFO asks what happens to the page if we stop maintaining it; we discover most of the lifetime ROI assumes the page keeps performing without further investment, which isn't always true.

The model isn't wrong. It just produces optimistic numbers under normal assumptions and modest numbers under stress-tested assumptions, and the gap between the two is where credibility is lost.

Model 3: Share-of-voice-weighted ROI

The third model is the one I'd argue you should use for real ROI conversations. It's harder to compute, but it answers the question the first two avoid: what is your SEO investment actually worth to the business, accounting for what would have happened without it?

The conceptual move is to weight your organic value by share of voice in your relevant keyword universe, and to model the counterfactual — what your traffic and revenue would be at a competitor-average share of voice if you stopped investing.

The formula in plain language: SEO ROI equals (your current attributed revenue minus the counterfactual attributed revenue at average share of voice) divided by SEO cost. The numerator is the value of being above-average at SEO, not the value of having any SEO presence at all.

This is the formulation that survives a finance audit because it answers the right question. A company with no SEO investment would not have zero organic traffic — they'd have whatever a non-investing competitor gets, which is typically 30-50% of their current traffic for branded queries (since brand demand exists regardless) and 5-15% for non-brand. The "no SEO" counterfactual isn't zero; it's the floor. The marginal value of SEO is what's above the floor.

A worked example. The same B2B SaaS spends $42,000/month in true SEO cost and earns $400,000/month in attributed organic revenue. Their share of voice in their relevant keyword universe is 22%, against a competitor average of 8%. The counterfactual: at 8% share of voice, attributed revenue would be approximately (8/22) × $400,000 = $145,000. Marginal organic revenue from SEO investment: $400,000 - $145,000 = $255,000/month. Marginal ROI: $255,000 / $42,000 = 6.1x.

That number — 6.1x — is the one that survives audit. It's lower than Model 2's headline number (which might have produced something like 10-15x using last-non-direct attribution). It is more defensible, and the CFO can compare it directly to other channel investments at marginal-return level. A paid search channel earning $500,000 on $200,000 spend has a marginal ROI of 1.5x (after subtracting incremental brand-effect baseline). SEO at 6.1x marginal is a clear winner. SEO at 0.8x marginal would be a clear loser. The model produces actionable numbers.

The implementation is more involved than the previous two. You need a defined keyword universe (the queries you care about), a share-of-voice tool that produces a defensible number for that universe, an attribution model for converting traffic to revenue, and a competitor-average share-of-voice baseline that's updated periodically. See share-of-voice tracking for the methodology details, including how to handle the keyword-universe definition problem that breaks SoV comparisons over time.

How the share-of-voice baseline works

The baseline — the "average competitor share of voice" — is the load-bearing assumption in the model. Get it wrong and the marginal ROI number is wrong too. Two common approaches.

The simpler approach: take the average share of voice across your top 5-10 known competitors, weighted by their relevance to your keyword universe. If you have 8 competitors with SoVs of 35%, 22%, 18%, 12%, 6%, 4%, 2%, 1%, the simple average is 12.5%. Sense-check: this is the SoV you'd expect to have if you matched the average competitor's investment level.

The more rigorous approach: bin competitors by SEO investment level (you can estimate this from their content publishing cadence, technical infrastructure quality, and observable SEO team size on LinkedIn). The "minimal investment" bin is your counterfactual — what SoV would you have if you cut investment to that level? In practice, the minimal-investment bin in most B2B markets is 3-7% SoV.

The rigorous approach produces lower counterfactual numbers, which makes the marginal ROI look better — be careful not to game this. The CFO will ask which competitors you used as the baseline, and the answer needs to be defensible. Document your competitor selection in the methodology footnote that accompanies the ROI report.

A subtle but important point: the counterfactual is "what would happen if we maintained current SEO investment levels at competitor-minimum." It is not "what would happen if we deleted all our content tomorrow." Those are very different scenarios. The model assumes a stable site and infrastructure, with reduced ongoing investment. If you actually deleted all SEO content, your counterfactual would be much lower and the marginal ROI much higher — but that's not a realistic scenario for budgeting decisions.

Worked comparison: same company, three models

To make the difference concrete, here are all three models computed for the same hypothetical B2B SaaS:

True SEO cost per month: $42,000. Organic sessions per month: 180,000. Organic conversion rate: 1.8%. Average customer LTV: $300. Last-non-direct attribution to organic: 75%. Share of voice in relevant keyword universe: 22%. Competitor-average share of voice (counterfactual baseline): 8%.

Model 1: Cost-per-organic-session. $42,000 / 180,000 = $0.23 per session. Operationally efficient, strategically uninformative.

Model 2: Attributed-revenue-per-content-piece. Total attributed revenue: 180,000 × 0.018 × 0.75 × $300 = $729,000/month. ROI: $729,000 / $42,000 = 17.4x. Headline number, would not survive audit on attribution challenges.

Model 3: Share-of-voice-weighted ROI. Counterfactual revenue at 8% SoV: (8/22) × $729,000 = $265,090/month. Marginal revenue: $729,000 - $265,090 = $463,910/month. Marginal ROI: $463,910 / $42,000 = 11.0x. Defensible under audit, comparable to other channel marginal ROIs.
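The three-way comparison can be reproduced in a few lines, using the inputs listed above:

```python
# Inputs from the worked comparison.
cost, sessions = 42_000, 180_000
conv_rate, ltv, attribution = 0.018, 300, 0.75
current_sov, baseline_sov = 0.22, 0.08

model1 = cost / sessions                                       # $0.23/session
attributed_revenue = sessions * conv_rate * attribution * ltv  # $729,000
model2 = attributed_revenue / cost                             # 17.4x
counterfactual = (baseline_sov / current_sov) * attributed_revenue
model3 = (attributed_revenue - counterfactual) / cost          # 11.0x
```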

Notice that Model 3's number is lower than Model 2's, and that's the right outcome. Model 2's 17.4x assumes the entire organic revenue is attributable to SEO investment, which is not true — some fraction would happen at minimum investment from brand demand and existing content inertia. Model 3's 11.0x is the honest version.

Notice also that Model 3 is still excellent. A real ROI of 11x marginal is unambiguously good and easy to defend. The model isn't designed to make SEO look bad; it's designed to make SEO defensible.

When each model makes sense

The three models aren't either/or. They answer different questions and serve different audiences.

Model 1 (cost-per-organic-session) belongs in operational dashboards. SEO leads use it to monitor team efficiency over time. Don't include it in executive reports.

Model 2 (attributed revenue per content) belongs in editorial decisions. Use it to rank content for refresh, prune, or scale-up decisions. Include it in CMO reports as a content-portfolio view, not as a primary ROI number.

Model 3 (SoV-weighted ROI) belongs in budget conversations. Use it when the question is "should we invest more, the same, or less in SEO?" or when comparing SEO to paid search, paid social, or other marketing investments. This is the model for the CFO and the board. See reporting SEO to non-SEO stakeholders for the templates that contextualize the ROI number for each audience.

The relationship between the three models: Model 1 is your operational metric, Model 2 is your editorial metric, and Model 3 is your strategic metric. A complete SEO measurement stack uses all three for their respective purposes.

Common mistakes when modeling SEO ROI

Five recurring patterns that undermine ROI calculations:

Using gross revenue instead of contribution margin. Revenue isn't profit. A $400,000 organic revenue stream with 30% gross margin is $120,000 in contribution. Compare contribution to SEO cost, not revenue to cost. CFOs always do this re-cut; do it yourself first.

Ignoring the brand-search baseline. Branded organic searches happen because of brand awareness created elsewhere. Crediting all branded organic to SEO investment overstates SEO's contribution by 20-50%. Either separate brand from non-brand in your attribution or use Model 3, which handles the issue via the counterfactual baseline.

Annualizing too aggressively. A piece of content that produced strong traffic in months 1-6 might decay in months 7-12. Computing lifetime ROI on month-1 trajectory overstates the lifetime number. Use a multi-month moving average or a cohort-based decay model. See cohort analysis for SEO for the curve patterns to apply.

Not stress-testing attribution. Run your ROI under three attribution scenarios: best-case (last-non-direct), mid-case (data-driven), and worst-case (multi-touch with brand baseline removed). Report a range, not a point estimate. The range builds credibility; the point estimate destroys it the first time the CFO challenges the assumptions.

Forgetting maintenance cost. SEO is not a one-time investment. The page that earned $500,000 over two years required content refreshes, technical debt fixes, and ongoing optimization to keep performing. Lifetime ROI calculations that assume zero ongoing cost are wrong by 20-40%.
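The stress-test re-cut from the fourth mistake can be sketched as a scenario table. The (conversion rate, attribution share) pairs below are illustrative cuts, not recommended values:

```python
def roi_under_scenarios(sessions, ltv, seo_cost, scenarios):
    """ROI per attribution scenario; report the range, not a point."""
    return {name: sessions * conv * share * ltv / seo_cost
            for name, (conv, share) in scenarios.items()}

# (conversion rate, attribution share) per scenario -- illustrative.
scenarios = {
    "best: last-non-direct":             (0.018, 0.75),
    "mid: data-driven":                  (0.018, 0.60),
    "worst: multi-touch, brand removed": (0.018, 0.40),
}
rois = roi_under_scenarios(180_000, 300, 42_000, scenarios)
# spans roughly 9x to 17x across the three cuts
```

Reporting the full dict, rather than the best case, is what turns "17.4x" into a range the CFO can't knock down with a single question.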

Putting share-of-voice-weighted ROI into your reporting

Once you've decided to use Model 3 as the primary ROI metric, the operational question is how to compute it consistently. The answer is to define the methodology once, document it, and update the inputs monthly without changing the methodology.

The inputs that update monthly: true SEO cost (from the cost stack), organic revenue (from your attribution model), share of voice (from your SoV tool). The inputs that update quarterly: competitor-average SoV baseline, keyword universe definition, attribution model parameters. The methodology that should never change without an explicit revision: the formula itself, the cost-stack composition, the brand-vs-non-brand split rules.

The reported number should always be paired with a confidence range and a one-sentence methodology note. "11.0x marginal ROI (range 8-14x depending on attribution assumptions). Methodology: organic revenue weighted against an 8% SoV counterfactual baseline derived from competitor average." That sentence is the difference between an ROI number that gets accepted and one that gets challenged into nothing.

For the broader frame this fits into, see the SEO analytics stack pillar for how ROI sits inside the full measurement architecture. For the share-of-voice computation that feeds Model 3, share-of-voice tracking covers the calculation methodology in depth. For the attribution-model decision that affects all three models, attribution models for SEO walks through the trade-offs.

The goal of an SEO ROI model isn't to produce the biggest number. It's to produce the most defensible number. A defensible 11x beats a fragile 17x every quarter, because the fragile number eventually fails an audit and the defensible one keeps the budget intact. Pick the model that holds up. The other two have their uses, but not for the conversation that decides whether SEO gets funded next year.
