JavaScript SEO: Rendering, Hydration, and Googlebot

Why Googlebot-compatible does not mean Googlebot-optimal

Enric Ramos · 8 min read

"Googlebot renders JavaScript now" is one of those half-truths that generates more bad decisions than outright misinformation. Yes, Googlebot has run headless Chromium on every fetched URL since 2019. No, that does not mean a client-side-rendered React app ranks equivalently to a server-rendered one.

The difference is in the two-pass indexing model, the render pass queue, and the practical gap between "technically possible to index" and "reliably indexed in a reasonable timeframe." For SEO-critical content, that gap is measurable and usually costly.

This article covers how Googlebot actually handles JavaScript in 2026, what rendering strategies work for SEO-critical content, and framework-specific advice for React/Vue/Next.js sites.

The two-pass indexing model

When Googlebot fetches a URL, indexing happens in two passes:

Pass 1: Initial HTML parse. Googlebot fetches the URL and parses the raw HTML response — before any JavaScript runs. It extracts links, basic metadata (title, meta description, canonical), and anything in the static markup. It also sees what isn't there: the placeholder div where React is going to mount.

Pass 2: Render. The URL is queued for rendering. At some later point — minutes to days — a headless Chrome instance fetches the URL, executes all the JavaScript, waits for network activity to settle, and extracts the final DOM. The content visible only after JS execution enters the index at this point.

The gap between pass 1 and pass 2 varies:

  • High-priority URLs (homepage, heavily-linked pages, recent updates): minutes to hours.
  • Medium-priority (most product, article, category URLs): hours to 1-2 days.
  • Low-priority (deep pagination, rarely-updated pages, edge cases): days to weeks, sometimes never in practice.

For SEO-critical content on low-to-medium priority URLs, client-side-only rendering can mean the content takes significantly longer to index than on a server-rendered equivalent — sometimes never reaching the index if Google deprioritizes the render pass.

What Googlebot sees in pass 1 (and what you want there)

In the first pass, Googlebot extracts from the raw HTML:

  • Page title and meta tags
  • Canonical URL
  • Hreflang annotations (if delivered in the <head> rather than via the sitemap)
  • JSON-LD structured data
  • Internal and external links (for URL discovery and link-graph building)
  • Static text content
  • <noscript> fallback content

Anything that depends on JavaScript is absent from pass 1: dynamic routing data, client-side-injected content, React component trees, AJAX-loaded elements.

Rule of thumb: the content, metadata, and links that must be indexed quickly should be in the initial HTML response. Server-side rendering, static site generation, or a hybrid pattern accomplishes this. Pure client-side rendering puts this content on the slow path.
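As a concrete sketch, here is what that looks like in a Next.js App Router page, assuming Next.js 13/14 conventions; getArticle, the "@/lib/articles" import, and the example.com URLs are hypothetical placeholders. Title, description, canonical, JSON-LD, and the body text all ship in the initial HTML response, so pass 1 sees everything.

// app/blog/[slug]/page.tsx - sketch assuming Next.js 13/14 App Router conventions.
// getArticle, "@/lib/articles", and the example.com domain are hypothetical.
import type { Metadata } from "next";
import { getArticle } from "@/lib/articles";

export async function generateMetadata(
  { params }: { params: { slug: string } }
): Promise<Metadata> {
  const article = await getArticle(params.slug);
  return {
    title: article.title,
    description: article.summary,
    alternates: { canonical: `https://www.example.com/blog/${article.slug}` },
  };
}

export default async function BlogPostPage({ params }: { params: { slug: string } }) {
  const article = await getArticle(params.slug);
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: article.title,
    datePublished: article.publishedAt,
  };
  return (
    <article>
      {/* Structured data, heading, and body text all land in the server HTML (pass 1) */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
      />
      <h1>{article.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: article.html }} />
    </article>
  );
}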

Rendering strategies compared

Four strategies, in order of SEO safety:

1. Static Site Generation (SSG) — safest

Pages are pre-rendered at build time. Each URL has a fully-rendered HTML file sitting on disk. Framework examples: Next.js getStaticProps, Gatsby, Hugo, Jekyll, Astro.

For SEO: Perfect. Pass 1 and effectively "pass 2" are the same HTML; no gap.

Trade-off: Requires a build step per content change. Scales poorly past ~100k URLs unless using on-demand ISR (below).
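A minimal SSG sketch with the Pages Router, since getStaticProps is named above; getAllPosts, getPost, and the "../../lib/posts" module are hypothetical data helpers.

// pages/blog/[slug].tsx - SSG sketch with the Next.js Pages Router.
// getAllPosts, getPost, and "../../lib/posts" are hypothetical data helpers.
import type { GetStaticPaths, GetStaticProps } from "next";
import { getAllPosts, getPost, type Post } from "../../lib/posts";

export const getStaticPaths: GetStaticPaths = async () => ({
  // Every slug gets a fully rendered HTML file at build time.
  paths: (await getAllPosts()).map((post) => ({ params: { slug: post.slug } })),
  fallback: false,
});

export const getStaticProps: GetStaticProps<{ post: Post }> = async ({ params }) => ({
  props: { post: await getPost(params!.slug as string) },
});

export default function BlogPost({ post }: { post: Post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
    </article>
  );
}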

2. Server-Side Rendering (SSR) — safe

Pages are rendered on the server per request. Next.js getServerSideProps, Nuxt SSR mode, Remix, etc. produce fully-rendered HTML on every request.

For SEO: As safe as SSG. Pass 1 has everything.

Trade-off: Higher server cost than SSG (each request costs rendering compute). Cache at the CDN edge to offset.
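A minimal SSR sketch with the Pages Router; getProduct and "../../lib/products" are hypothetical, and the Cache-Control values are illustrative rather than a recommendation.

// pages/products/[slug].tsx - SSR sketch with the Next.js Pages Router.
// getProduct and "../../lib/products" are hypothetical data helpers.
import type { GetServerSideProps } from "next";
import { getProduct, type Product } from "../../lib/products";

export const getServerSideProps: GetServerSideProps<{ product: Product }> = async ({ params, res }) => {
  // Let the CDN cache the rendered HTML to offset per-request rendering cost.
  res.setHeader("Cache-Control", "public, s-maxage=300, stale-while-revalidate=600");
  return { props: { product: await getProduct(params!.slug as string) } };
};

export default function ProductPage({ product }: { product: Product }) {
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}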

3. Incremental Static Regeneration (ISR) — safe with caveats

Pages are pre-rendered but can be regenerated on-demand or on a schedule. Next.js revalidate, Vercel's ISR model. Combines SSG's speed with SSR's freshness.

For SEO: As safe as SSG, with the caveat that cache-staleness windows must be tuned to content update frequency. Product availability, price changes, and time-sensitive data need short revalidation windows or manual invalidation.

Trade-off: Complexity in cache invalidation patterns.
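A minimal ISR sketch assuming the App Router; getProduct and "@/lib/products" are hypothetical, and the 5-minute window is illustrative.

// app/products/[slug]/page.tsx - ISR sketch assuming the Next.js App Router.
// getProduct and "@/lib/products" are hypothetical.
import { getProduct } from "@/lib/products";

// Regenerate this page at most every 5 minutes. Keep the window shorter than the
// staleness you can tolerate for price and availability data; for immediate updates,
// call revalidatePath("/products/<slug>") from a route handler or server action.
export const revalidate = 300;

export default async function ProductPage({ params }: { params: { slug: string } }) {
  const product = await getProduct(params.slug);
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.inStock ? "In stock" : "Out of stock"}</p>
    </main>
  );
}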

4. Client-Side Rendering (CSR) — risky for SEO

Pages load a minimal HTML shell and JavaScript renders all content client-side. Create-React-App, traditional SPAs.

For SEO: Works for high-priority URLs; degrades for everything else. Low-priority URLs can take days or weeks to get rendered, sometimes never. Deep routes often index poorly.

When to accept this trade-off: App routes that aren't meant to rank (dashboard, account pages, authenticated flows). Never for content pages.

Hydration and INP

"Hydration" is what happens when a server-rendered page's HTML reaches the browser, and the React/Vue/Svelte JavaScript takes over, making the page interactive. Hydration is invisible to search engines (they've already gotten their HTML) but critical for INP.

Large SPAs can have hydration costs of 500-2000ms on real user devices. During that window, clicks aren't responsive. Users perceive lag; Core Web Vitals INP scores suffer.

Modern hydration patterns to reduce the cost:

  • Partial hydration — only hydrate components that need interactivity. Static content stays as server-rendered HTML.
  • Islands architecture (Astro, Fresh, Qwik) — explicit "islands" of interactivity. Everything else is static HTML. Hydration cost drops by 70-90%.
  • Streaming SSR — send HTML to the browser in chunks as it's rendered server-side, starting with above-the-fold content. React 18+ supports this.
  • Server Components (React Server Components, Next.js 13+ app router) — move component logic to the server entirely for non-interactive components. No hydration at all for those.
  • Deferred hydration — hydrate interactive components only when they come into the viewport or receive user interaction (sketched after this list).
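The last pattern can be sketched in React with an IntersectionObserver wrapper. This is an illustration of the idea (the same trick libraries such as react-lazy-hydration use), not production code; it relies on React leaving dangerouslySetInnerHTML content untouched during hydration and has known edge cases.

"use client";
// LazyHydrate.tsx - illustration only: defer hydration of `children` until the
// wrapper scrolls into view.
import { useEffect, useRef, useState, type ReactNode } from "react";

export function LazyHydrate({ children }: { children: ReactNode }) {
  const ref = useRef<HTMLDivElement>(null);
  const [hydrated, setHydrated] = useState(false);

  useEffect(() => {
    const node = ref.current;
    if (!node) return;
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        setHydrated(true);
        observer.disconnect();
      }
    });
    observer.observe(node);
    return () => observer.disconnect();
  }, []);

  // On the server (and for crawlers) render children normally so the HTML is complete.
  // On the client, leave the existing server HTML inert until the element is visible,
  // then render the real children and attach their event handlers.
  if (typeof window !== "undefined" && !hydrated) {
    return <div ref={ref} suppressHydrationWarning dangerouslySetInnerHTML={{ __html: "" }} />;
  }
  return <div ref={ref}>{children}</div>;
}

Usage would look like <LazyHydrate><Comments /></LazyHydrate> around a below-the-fold widget, while above-the-fold interactive UI hydrates normally.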

For SEO impact, hydration matters because:

  1. Slow hydration hurts INP, which feeds into the Core Web Vitals ranking signal.
  2. Hydration bugs can cause a visible "flash of unhydrated content," where the UI shifts after hydration completes, hurting CLS.
  3. Heavy hydration blocks the main thread, delaying responses to user interactions.

Detecting rendering issues

The three tools that matter:

1. GSC URL Inspection

For any specific URL, run URL Inspection in Search Console. Click "Test Live URL." Compare:

  • Screenshot — what Google's renderer sees.
  • HTML — the rendered DOM Googlebot extracted. Critical SEO content (H1, main paragraph, metadata, structured data) should be present.

If the rendered HTML is missing your primary content, you have a rendering problem for that URL.

2. view-source vs DevTools Elements

view-source:https://yoursite.com/ shows the raw HTML your browser receives, which is a good proxy for what Googlebot sees in pass 1 (as long as the server isn't varying responses by user-agent; the cURL check below confirms that). DevTools Elements shows the post-render DOM. Critical content should be visible in view-source for reliable indexing.

3. Fetch as Googlebot via cURL

curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://yoursite.com/

Shows what the server returns to Googlebot. Any geoblocking, cloaking, or dynamic content decisions based on user-agent will surface here.
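The same check can be scripted. A rough sketch (not an official tool) that fetches the raw pass-1 HTML with a Googlebot User-Agent and looks for SEO-critical markers; it assumes Node 18+ for the global fetch.

// check-pass1.ts - rough sketch: fetch the raw HTML (what pass 1 sees, before any
// JavaScript runs) and look for SEO-critical markers.
// Run with: npx tsx check-pass1.ts https://yoursite.com/some-page
const GOOGLEBOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

async function checkPass1(url: string): Promise<void> {
  const res = await fetch(url, { headers: { "User-Agent": GOOGLEBOT_UA } });
  const html = await res.text();

  const checks: Record<string, boolean> = {
    "<title> present": /<title>[^<]+<\/title>/i.test(html),
    "<h1> present": /<h1[\s>]/i.test(html),
    "canonical present": /rel=["']canonical["']/i.test(html),
    "JSON-LD present": /application\/ld\+json/i.test(html),
  };

  console.log(`HTTP ${res.status} for ${url}`);
  for (const [name, ok] of Object.entries(checks)) {
    console.log(`${ok ? "OK  " : "MISS"} ${name}`);
  }
}

const url = process.argv[2];
if (!url) {
  console.error("usage: npx tsx check-pass1.ts <url>");
  process.exit(1);
}
checkPass1(url).catch(console.error);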

Framework-specific guidance

Next.js: Use the App Router (Next.js 13+) with Server Components by default. Opt into Client Components ("use client") only for actually-interactive UI. For ISR, tune revalidate carefully per route type. Pages router (getStaticProps / getServerSideProps) still works if you're not migrating.
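A sketch of that split; getArticle, the file paths, and the ShareButton component are hypothetical. The article body stays a Server Component with no hydration cost, and only the button ships client-side JavaScript.

// app/articles/[slug]/page.tsx - Server Component by default (no "use client"),
// so the article body ships as HTML with no hydration cost. getArticle is hypothetical.
import { getArticle } from "@/lib/articles";
import { ShareButton } from "./share-button";

export default async function ArticlePage({ params }: { params: { slug: string } }) {
  const article = await getArticle(params.slug);
  return (
    <article>
      <h1>{article.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: article.html }} />
      <ShareButton url={`https://www.example.com/articles/${article.slug}`} />
    </article>
  );
}

// app/articles/[slug]/share-button.tsx - the only interactive piece opts into
// client rendering; everything else stays server-only.
"use client";
import { useState } from "react";

export function ShareButton({ url }: { url: string }) {
  const [copied, setCopied] = useState(false);
  return (
    <button onClick={() => navigator.clipboard.writeText(url).then(() => setCopied(true))}>
      {copied ? "Copied" : "Copy link"}
    </button>
  );
}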

Nuxt (Vue): SSR mode by default. Nuxt 3+ has good server-component patterns. Avoid ssr: false for content pages.

SvelteKit: SSR or pre-render is the default; don't override to client-only for content.

Remix: SSR by default, which is a feature for SEO. A later SPA Mode exists, but don't opt into it for content pages.

Gatsby: SSG. Good for SEO; declining in popularity for other reasons. Still a viable choice for content sites that don't need per-request personalization.

Astro: Islands architecture by default, SSR mode available. Excellent for SEO-heavy content sites.

Create-React-App / Vite + React (no framework): Pure CSR. Not suitable for SEO-critical pages. Migrate to Next.js or Remix if content needs to rank.

The migration path for existing SPAs

If you have an existing CSR site that needs SEO:

Option 1: Framework migration — Move to Next.js/Nuxt/Remix. Highest lift, cleanest outcome. Typically 2-4 months for a mid-size app.

Option 2: Dynamic rendering — Serve pre-rendered HTML to search crawlers (detected by User-Agent) and the regular SPA to users. Google now documents dynamic rendering as a workaround rather than a recommended long-term solution, but in practice it works. Services like prerender.io handle this. Acceptable as a short-term bridge; not a permanent solution.
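For illustration, a rough Express-style middleware that routes crawler requests through a prerender service; the bot list, PRERENDER_ENDPOINT, and prerender.example.com are placeholders, not any specific service's real API.

// prerender-middleware.ts - rough sketch of dynamic rendering with Express.
// BOT_UA and PRERENDER_ENDPOINT are placeholders, not a real service's API.
import type { NextFunction, Request, Response } from "express";

const BOT_UA = /googlebot|bingbot|duckduckbot|baiduspider|yandex/i;
const PRERENDER_ENDPOINT =
  process.env.PRERENDER_ENDPOINT ?? "https://prerender.example.com/render";

export async function prerenderMiddleware(req: Request, res: Response, next: NextFunction) {
  const ua = req.headers["user-agent"] ?? "";
  // Skip non-GET requests, non-bot traffic, and static assets.
  if (req.method !== "GET" || !BOT_UA.test(ua) || /\.\w+$/.test(req.path)) return next();

  try {
    const target = `${req.protocol}://${req.get("host")}${req.originalUrl}`;
    const rendered = await fetch(`${PRERENDER_ENDPOINT}?url=${encodeURIComponent(target)}`);
    res.status(rendered.status).type("html").send(await rendered.text());
  } catch {
    next(); // if the prerender service fails, fall back to serving the normal SPA shell
  }
}

Mount it before the static/SPA handler (for example, app.use(prerenderMiddleware)) so bots get pre-rendered HTML and everyone else falls through to the SPA.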

Option 3: Server routes for SEO-critical pages — Keep the SPA for app routes; add server-rendered routes for content pages (/blog/*, /product/*). Half-migration, viable trade-off for many sites.

Option 4: Accept the trade-off — Some pages don't need SEO. Authenticated app routes, dashboards, workflow UIs. CSR is fine for those; don't boil the ocean.

Common mistakes

Assuming "Googlebot renders JS" means "no SSR needed." The render pass delay is real and costly for most sites.

Using <noscript> as the SEO fallback. Googlebot doesn't treat <noscript> content as primary content. It's a legacy fallback for user-agents with JS disabled (which Googlebot isn't). Don't rely on it.

Different content in pass 1 vs pass 2. If the initial HTML says "Loading..." and the rendered DOM says "Product: Nike Pegasus," Google sees "Loading..." in pass 1 and might rank on that. Ensure primary content is in pass 1.

A/B testing that changes main content server-side. Cloaking risk. User-agent detection for content differences is visible to Google's quality systems. A/B test with client-side rendering variants, not server-side content swaps.

Lazy-loading the H1. Some themes lazy-load hero elements including the H1 for performance. Then Googlebot's pass-1 HTML has no H1. Move critical heading text out of lazy-load patterns.
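A sketch of the fix in Pages Router terms; HeroCarousel and its import path are hypothetical, and in the App Router ssr: false is only allowed inside Client Components. Keep the heading in the server-rendered markup and lazy-load only the heavy, non-critical widget.

// pages/index.tsx - keep the H1 in pass-1 HTML; lazy-load only the heavy widget.
import dynamic from "next/dynamic";

// Client-only, loaded after hydration; nothing SEO-critical lives inside it.
const HeroCarousel = dynamic(() => import("../components/hero-carousel"), { ssr: false });

export default function LandingPage() {
  return (
    <header>
      {/* Present in the initial HTML, so pass 1 sees the heading */}
      <h1>Trail Running Shoes Built for Long Distances</h1>
      <HeroCarousel />
    </header>
  );
}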

Frequently asked questions

Does Google really render my JavaScript?

Yes — Googlebot renders pages with an evergreen (current) version of Chromium. But the render pass is queued, not immediate, and the queue has priority ordering. For high-traffic sites and primary pages, the delay is minimal. For low-priority URLs and deep routes, the delay can be long enough that SEO-critical content never reaches the index on the expected timeline.

If I use server-side rendering, do I still need to test with URL Inspection?

Yes. SSR misconfigurations (broken caching, cloaking bugs, rendering errors on bot user-agents) still happen. Weekly URL Inspection checks on representative pages catch regressions early.

Can I use client-side routing with SSR?

Yes. Next.js, Nuxt, and Remix all handle this — server-render the initial page, then use client-side routing for subsequent navigations. Google's crawler treats each URL as its own fetch + render, so it sees each route freshly rendered.

What about Web Components?

Googlebot has handled Shadow DOM and Web Components since ~2019. Custom Elements render correctly. Complex Shadow DOM hierarchies with closed shadow roots can have edge cases; test with URL Inspection.

Does Googlebot execute my Service Worker?

Googlebot does not use Service Workers for rendering. It fetches the URL directly. Don't rely on SW-cached responses to serve content to bots.

Related articles


The Complete Guide to Technical SEO Audits

Most technical SEO audits fail the same way: they generate 80-page PDFs with 200 findings, and clients execute none of them. The audits that move rankings solve for one thing: which of five layers is broken, and which single fix restores the most value.

11 min read

Core Web Vitals in 2026: What Still Matters

Core Web Vitals is a real but modest ranking signal — and the metrics keep shifting. INP replaced FID in March 2024. Here's what the current three metrics actually measure, what they don't, and where optimization actually moves the needle.

9 min read