Site Architecture for SEO: Flat vs Deep, What Actually Works

An old debate based on outdated assumptions — here's what matters now

Enric Ramos · 9 min read
White paper

"Flat architecture is better for SEO" was repeated so many times between 2008 and 2016 that it hardened into dogma. The reasoning: fewer clicks from the homepage meant more PageRank reaching deep pages, meant better rankings for those pages. Therefore: flatten everything.

That reasoning was always incomplete, and it's increasingly wrong. PageRank distribution is still a factor, but it's one of many, and the flat-vs-deep framing misses what actually matters: can your internal link graph reinforce topical clusters, and can crawlers reach every URL you care about in a reasonable number of hops?

This article reframes the debate with what's actually true in 2026 and gives concrete architectural patterns for different site types.

Flat vs deep: the nuanced answer

A flat architecture has everything accessible within 2-3 clicks from the homepage:

/
├── /product-a
├── /product-b
├── /product-c
├── /blog-post-1
├── /blog-post-2

A deep architecture nests content hierarchically:

/
├── /shop/
│   ├── /shop/category-1/
│   │   ├── /shop/category-1/subcategory/
│   │   │   └── /shop/category-1/subcategory/product

The old claim: flat wins because PageRank flows more directly. A page 2 clicks deep gets more PageRank than a page 6 clicks deep.

What's actually true: click-depth does correlate with crawl priority and PageRank distribution. But:

  • The 3-6 hop gap doesn't matter much. Google isn't starving pages at 4-6 clicks deep of PageRank in any meaningful way on well-structured sites. The penalty is real but small.
  • Meaningful hierarchy helps topical signals. /shoes/running/pegasus tells Google the product is a running shoe. Schema, breadcrumbs, and internal linking all reinforce the nested meaning. That signal is often worth more than the PageRank savings from flattening.
  • URLs are sitemap entries, not graph paths. Modern Googlebot discovers URLs primarily from sitemaps and internal links, not from clicking through the hierarchy step-by-step. A product at /shop/category/subcategory/product is discovered as easily as /product when both are in the sitemap.

The current best answer: moderate depth (3-5 levels) with clean URL hierarchy and strong internal linking works better than extreme flatness or extreme depth. The exact nesting matters less than the navigability.

The "3-click rule" reconsidered

"Every page should be reachable in 3 clicks from the homepage" was a 2010-era heuristic. It's now usually overstated but contains a useful kernel.

The original reasoning: users get frustrated past 3 clicks; crawlers deprioritize deep pages.

What's true now:

  • Users do give up on deep navigation. A product 6 levels deep in a menu is practically invisible to users — and by extension, less likely to be linked internally, less likely to show engagement signals, and thus less likely to rank.
  • Crawlers don't strictly care about click depth. They care about URL discoverability and inbound link strength.

The updated rule: every URL you care about SEO-wise should be reachable in 3-4 clicks from either the homepage or a high-priority landing page (pillar, top category). Not strictly 3 — 4 is fine when the deeper level adds meaningful categorization.

For URLs you don't care about SEO-wise (user-generated junk, deep pagination), click depth doesn't matter — those shouldn't be optimized for ranking anyway.
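The updated rule is measurable: run a breadth-first search from the homepage (and any pillars you treat as roots) over your internal link graph and flag anything deeper than 4 clicks. A minimal sketch, assuming the link map comes from your own crawl export; all URLs here are hypothetical:

```python
from collections import deque

def click_depths(links, roots=("/",)):
    """BFS over an internal link graph.
    links: {url: [urls it links to]}; roots: homepage and/or pillar pages.
    Returns {url: minimum clicks from the nearest root}."""
    depth = {r: 0 for r in roots}
    queue = deque(roots)
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:  # first visit is the shortest path
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

# Hypothetical mini-crawl: the reviews page sits 5 clicks down.
crawl = {
    "/": ["/shop/", "/blog/"],
    "/shop/": ["/shop/shoes/"],
    "/shop/shoes/": ["/shop/shoes/running/"],
    "/shop/shoes/running/": ["/shop/shoes/running/pegasus"],
    "/shop/shoes/running/pegasus": ["/shop/shoes/running/pegasus/reviews"],
    "/blog/": ["/blog/post-1"],
}
depths = click_depths(crawl)
too_deep = [u for u, d in depths.items() if d > 4]
print(too_deep)  # URLs violating the 3-4 click guideline
```

Passing category hubs or pillars as extra `roots` implements the "homepage or high-priority landing page" version of the rule directly.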

URL hierarchy as a signal

The URL itself is a minor ranking signal. /shoes/running/nike-pegasus-41 versus /p/12345 — the former is better, not because Google reads the URL as primary content, but because:

  • Breadcrumbs derived from URL hierarchy are more meaningful.
  • Users sharing URLs see context ("this is about running shoes").
  • Internal linking tools can generate better anchor text from readable URLs.
  • Clusters become self-documenting from URL patterns.

URL guidelines that still hold in 2026:

  • Lowercase, consistent.
  • Hyphens as word separators, not underscores.
  • Descriptive but concise — 3-5 words typical max.
  • Stable — URL changes require redirects that accumulate as chains.
  • No parameters in canonical URLs — parameters are for filtering state; canonicals should be clean.
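The guidelines above are easy to enforce at the CMS level with a slug builder. A minimal sketch (the `max_words` cap and function name are illustrative, not a standard API):

```python
import re
import unicodedata

def slugify(text, max_words=5):
    """Build a URL slug per the guidelines above:
    lowercase, hyphen-separated, descriptive but concise."""
    # Strip accents so non-ASCII titles still yield clean ASCII slugs
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    # Keep only alphanumeric runs; everything else becomes a separator
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words[:max_words])

print(slugify("Nike Pegasus 41 Running Shoes: Review & Sizing Guide"))
# -> nike-pegasus-41-running-shoes
```

Remember the stability guideline: once a slug is published, regenerating it from an edited title means a redirect, so persist the slug rather than recomputing it.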

Silo vs mesh linking

Two philosophies for internal linking structure:

Silo (hub-and-spoke)

Each topical cluster is internally cohesive. Pillar pages link to all supporting articles in the cluster. Supporting articles link back to pillar and to a few sibling articles. Cross-cluster linking is sparse.

      Pillar A
      /  |  \
   A1   A2   A3
        ↕  (sparse cross-cluster links)
      Pillar B
      /  |  \
   B1   B2   B3

Pros: Clear topical clustering signals. Google sees "these URLs are all about X, they reinforce each other." Good for establishing topical authority in a defined subject area.

Cons: Undervalues natural cross-topic connections. A "running shoe" pillar and a "workout plan" pillar probably should cross-link where relevant; silos discourage that.

Mesh (organic)

Every article links to whichever other articles are contextually relevant, regardless of cluster membership. Clusters emerge from content gravity rather than being enforced.

Pros: Reflects how humans actually read and navigate. Natural cross-topic links strengthen both clusters.

Cons: Can dilute topical signal. If every article links to every other, Google sees weaker clustering.

The hybrid that works best

Primary structure is siloed — pillars and their supporting articles form tight clusters. But editorial cross-links between clusters are encouraged when genuinely relevant. Cross-cluster links should be 5-15% of total internal links, not 50%.

The practical implementation:

  • Supporting articles: 3-5 internal links. 1 to pillar, 1-2 to siblings in cluster, 0-2 cross-cluster if contextually right.
  • Pillar articles: 5-10 internal links to supporting articles in their cluster, plus 1-3 cross-cluster links to other pillars where relationships exist.
  • Homepage: links to all pillars, selected recent articles, key category/product pages. Homepage is the starting point for Googlebot's link graph traversal.
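The 5-15% target is checkable from a link export. A minimal sketch, assuming you already have a flat list of internal links and a URL-to-cluster mapping (both the data shapes and URLs here are hypothetical):

```python
def cross_cluster_ratio(links, cluster_of):
    """links: [(source_url, target_url)]; cluster_of: {url: cluster name}.
    Returns the share of internal links that cross cluster boundaries."""
    internal = [(s, t) for s, t in links if s in cluster_of and t in cluster_of]
    if not internal:
        return 0.0
    cross = sum(1 for s, t in internal if cluster_of[s] != cluster_of[t])
    return cross / len(internal)

# Hypothetical link export for two small clusters
cluster_of = {
    "/running/pillar": "running", "/running/a1": "running", "/running/a2": "running",
    "/strength/pillar": "strength", "/strength/b1": "strength",
}
links = [
    ("/running/a1", "/running/pillar"),
    ("/running/a1", "/running/a2"),
    ("/running/a2", "/running/pillar"),
    ("/running/pillar", "/running/a1"),
    ("/running/pillar", "/running/a2"),
    ("/running/a1", "/strength/pillar"),   # editorial cross-cluster link
    ("/strength/b1", "/strength/pillar"),
    ("/strength/pillar", "/strength/b1"),
]
ratio = cross_cluster_ratio(links, cluster_of)
print(f"{ratio:.1%} of internal links are cross-cluster")  # target band: 5-15%
```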

Breadcrumbs

Breadcrumbs appear in the SERP (below the title on most results) and on-page. They do three things:

  1. SERP display — via BreadcrumbList schema, Google shows the hierarchy as a visual cue.
  2. On-page navigation — users see where they are and jump up levels.
  3. Internal linking — breadcrumbs add reliable internal links to parent categories.

What they don't do: replace the need for in-content internal linking. Breadcrumbs are navigation scaffolding, not topical reinforcement. Relying on them for all internal linking is insufficient.

Implementation:

  • Always include BreadcrumbList JSON-LD on any page with a clear hierarchy.
  • Match the visible breadcrumbs to the schema — don't include levels in schema that aren't visible on-page.
  • Consistent across the site — same formatting, same separator, same logic for how deep levels are shown.
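Keeping schema and visible breadcrumbs in sync is easiest when both are generated from the same trail. A minimal sketch that emits BreadcrumbList JSON-LD from the visible (path, label) pairs; `https://example.com` and the helper name are placeholders:

```python
import json

def breadcrumb_jsonld(base, trail):
    """Build BreadcrumbList JSON-LD from (path, visible label) pairs,
    in the same order as the on-page breadcrumb trail."""
    return {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {
                "@type": "ListItem",
                "position": i,          # 1-based, per schema.org
                "name": name,
                "item": base + path,
            }
            for i, (path, name) in enumerate(trail, start=1)
        ],
    }

trail = [("/shoes/", "Shoes"),
         ("/shoes/running/", "Running"),
         ("/shoes/running/pegasus-41", "Pegasus 41")]
script = breadcrumb_jsonld("https://example.com", trail)
print(json.dumps(script, indent=2))  # goes in a <script type="application/ld+json">
```

Because the JSON-LD is built from the same `trail` list that renders the visible breadcrumbs, the "match the schema to what's on-page" rule holds by construction.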

Category page depth

For ecommerce and publications, category pages are the critical architecture layer. They're the landing pages for category queries ("running shoes," "technical SEO articles," "recipes with chicken") and the hubs for their subcategories.

Decisions:

How deep do categories nest?

Sites with 1,000 products: 2 levels (/shoes/running/). Sites with 10,000+: 3 levels (/shoes/running/road/). Rarely worth 4+ unless the catalog is genuinely that hierarchical.

Do subcategories have their own landing pages?

Yes — every category and subcategory deserves a URL that can rank. "Trail running shoes" is a meaningful query; don't bury it in a filter state of the "running shoes" URL.

Do categories cross-link?

Horizontally, yes — within the same sibling level (/running/, /walking/, /hiking/). Vertically, yes — every subcategory links to its parent and to its subcategories.

Faceted filters: index or not?

Covered in depth in the E-commerce pillar. Short version: selectively index facet combinations with search volume, canonical the rest, block pure UI parameters.

Architecture patterns by site type

Small blog (< 500 posts):

  • Single pillar per major topic, 5-10 supporting articles per pillar.
  • Flat URL structure: /[slug] for posts, /category/[slug] for categories.
  • Homepage + categories + pillars are the navigation; individual posts are discoverable via category + pillars.

Medium blog / publication (500 - 50,000 posts):

  • 4-8 pillars covering the taxonomy.
  • 2-level URLs: /category/[slug]. Categories get their own hubs.
  • Tags exist but aren't primary navigation. Tag archives mostly noindex unless they have real curated value.

Ecommerce (1,000 - 100,000 products):

  • Category tree 2-3 levels deep.
  • Product URLs under categories (/shoes/running/pegasus-41) or flat (/p/pegasus-41). Either works; flat is easier to migrate on catalog reorganization.
  • Filters sparingly indexed.
  • PDPs link to related products + up to parent category.

Large ecommerce (100,000+ products):

  • Category tree 3-4 levels deep.
  • Flat or nested PDPs — flat usually wins at scale because products frequently move between categories.
  • Aggressive filter index strategy — only high-volume combinations.
  • Internal linking partly algorithmic (related products, "customers also bought") but curated at the category level.

SaaS docs:

  • 1-3 levels (/docs/, /docs/getting-started/, /docs/getting-started/first-project).
  • Strong "next / previous" linking for sequential content.
  • API reference separate but cross-linked from conceptual docs.

Common architecture mistakes

Over-flattening. /about, /product-1, /blog-post-1, /category-foo all at root gets confusing as the site grows. Add one level of hierarchy (/about, /shop/product-1, /blog/post-1) — clearer for everyone.

URL changes on every CMS migration. Each migration that restructures URLs adds redirect chains. Better: pick a URL pattern that can absorb future changes (e.g., flat product slugs that don't depend on category assignment).

Orphan sections. Entire content areas that aren't linked from navigation or core pages. They get crawled rarely and rank poorly. Audit with log file analysis — what does Googlebot visit, what doesn't it?

Over-linking within clusters. Every article linking to all 20 other articles in the cluster. The signal dilutes. 3-5 internal links per article is the sweet spot.

Ignoring sitemap structure. A clean URL architecture with a messy sitemap (non-indexable URLs, outdated entries) undoes the work. See XML sitemap best practices.

Frequently asked questions

Should my URLs include dates?

For news and time-sensitive content, yes (date signals freshness). For evergreen content, no — dates in URLs create awkwardness when content is updated (the URL suggests "2019" even if last updated 2026). For blog posts that aim for evergreen, keep URLs dateless.

How many clicks should the deepest page be from the homepage?

4 clicks is the comfortable limit for most sites. 5-6 is acceptable when the content hierarchy genuinely justifies the depth. Past 6, without strong sitemap inclusion and internal linking, crawl and index reliability degrades.

Does /post/123 or /post/title-slug rank better?

Title-slug URLs typically win slightly for user engagement signals (users and other sites prefer readable URLs when sharing). The direct SEO difference is tiny — the bigger effect is the downstream improvement in anchor text and share rates.

Should I redirect old URLs to new ones after IA changes?

Yes, with 301s. But consolidate chains — if URL A already redirects to URL B, and B is now changing to C, update A's redirect to point directly to C rather than adding a third hop.
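Chain consolidation is mechanical if you keep your redirect map as data. A minimal sketch that resolves every source to its final destination (the function name and dict format are illustrative):

```python
def collapse_redirects(redirects):
    """redirects: {old_url: new_url}. Returns a map where every source
    points at its final destination, so A -> B -> C becomes A -> C."""
    def final(url):
        seen = set()
        while url in redirects:
            if url in seen:  # redirect loop: surface it rather than hang
                raise ValueError(f"redirect loop at {url}")
            seen.add(url)
            url = redirects[url]
        return url
    return {src: final(dst) for src, dst in redirects.items()}

chains = {"/old-a": "/old-b", "/old-b": "/c"}
print(collapse_redirects(chains))  # both sources now 301 straight to /c
```

Run this against the redirect map before each deploy so every 301 is a single hop regardless of how many IA changes preceded it.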

Do subdomains vs subdirectories matter for architecture?

For authority consolidation: subdirectories win. A subdomain (blog.example.com) builds its own authority separately from the main domain. Use subdomains only when operational requirements (separate CMS, separate team) force the split.

Related articles


The Complete Guide to Technical SEO Audits

Most technical SEO audits fail the same way: they generate 80-page PDFs with 200 findings, and clients execute none of them. The audits that move rankings solve for one thing: which of five layers is broken, and which single fix restores the most value.

11 min read

Core Web Vitals in 2026: What Still Matters

Core Web Vitals is a real but modest ranking signal — and the metrics keep shifting. INP replaced FID in March 2024. Here's what the current three metrics actually measure, what they don't, and where optimization actually moves the needle.

9 min read