Technical SEO · Glossary · Updated Apr 2026

Web Rendering Service (WRS)

Definition

The Web Rendering Service (WRS) is Google's rendering subsystem, an evergreen Chromium-based engine that executes JavaScript on pages Googlebot fetches. It powers the "second wave" of indexing: pages with JS-dependent content are first parsed, then queued for WRS to render before final indexing.

Long definition

WRS is what makes JavaScript-rendered content indexable by Google at all. From 2015 to 2019, WRS was based on a fixed Chrome 41 build, which couldn't run modern ES6+ features without transpilation or polyfills. In May 2019, Google announced that WRS would track stable Chrome ("evergreen"), closing most of the JS-feature gap between Googlebot and real browsers.

The two-wave model:

  1. First wave (parsing) — Googlebot fetches raw HTML, extracts links, and indexes whatever's in the static markup. Fast, runs at crawl speed.
  2. Second wave (rendering) — URLs flagged as JS-dependent enter the WRS queue. WRS executes the JS, captures the resulting DOM, and reindexes with the post-render content. Slower, queued, resource-bounded.
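The split can be illustrated with a toy check (the markup strings below are invented examples, not real crawler output): content injected by JavaScript simply isn't there in wave one.

```python
# What Googlebot fetches in the first wave: an empty app shell.
RAW_HTML = '<html><body><div id="app"></div></body></html>'

# What WRS produces in the second wave, after JS has run (assumed example).
RENDERED_HTML = (
    '<html><body><div id="app">'
    '<h1>Acme Widget</h1><a href="/specs">Specs</a>'
    '</div></body></html>'
)

def first_wave_visible(critical_text: str, raw_html: str) -> bool:
    """Wave one indexes only the static markup it fetched."""
    return critical_text in raw_html

print(first_wave_visible("Acme Widget", RAW_HTML))       # not in the static markup
print(first_wave_visible("Acme Widget", RENDERED_HTML))  # appears only post-render
```

If the first check is false for your main content, that content's indexing depends entirely on the second wave and its queue.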

The gap between waves can range from seconds to weeks, depending on crawl demand and queue pressure. For high-traffic sites it can be near real time; for low-priority pages it can be long enough that critical content absent from the initial HTML stays invisible for extended indexing windows.

WRS limits to know:

  • Resource budget per page — WRS won't wait forever. Long network calls, slow API endpoints, and 30-second hydrations can time out.
  • Stateless rendering — WRS doesn't persist cookies, localStorage, or sessions across renders. Auth-dependent content won't work.
  • No interaction simulation — WRS doesn't click, scroll, or hover. Content gated behind those is invisible.
  • Limited execution of certain APIs — geolocation, media autoplay, push notifications, and other prompt-heavy APIs return empty or denied.
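The first limit, the per-page resource budget, can be simulated with a toy renderer (the budget value and task delays below are illustrative assumptions, not Google's actual numbers): anything still pending when the budget expires never makes it into the rendered snapshot.

```python
import asyncio

RENDER_BUDGET_S = 0.2  # illustrative per-page budget, not a published Google limit

async def render_page(tasks):
    """Simulate a render: run page 'network calls', cut off at the budget."""
    snapshot = {}

    async def run(name, delay):
        await asyncio.sleep(delay)   # stand-in for an API call or hydration step
        snapshot[name] = True        # content that made it into the rendered DOM

    pending = [asyncio.create_task(run(n, d)) for n, d in tasks]
    try:
        await asyncio.wait_for(asyncio.gather(*pending), timeout=RENDER_BUDGET_S)
    except asyncio.TimeoutError:
        pass  # budget exhausted: still-pending tasks are abandoned
    return snapshot

tasks = [("header", 0.01), ("main-content", 0.05), ("slow-api-widget", 2.0)]
result = asyncio.run(render_page(tasks))
print(sorted(result))  # the slow widget never reaches the snapshot
```

The same shape of failure happens in production when a content component waits on a slow third-party endpoint: the page "renders", but without that component.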

When you ship JS-heavy content, the question is whether it survives WRS. The URL Inspection tool in Search Console renders with WRS and shows you the rendered HTML — compare it to your visible browser DOM. If they diverge in load-bearing ways (missing main content, missing internal links, missing structured data), fix the rendering pipeline before relying on WRS.
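That comparison can be roughly automated. The sketch below (the sample snapshots and the audit helper are invented for illustration) extracts internal links and JSON-LD blocks from two HTML strings, so you can diff what the URL Inspection rendered HTML contains against what your browser DOM shows.

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collect internal links and JSON-LD blocks from an HTML snapshot."""
    def __init__(self):
        super().__init__()
        self.links, self.jsonld = [], []
        self._in_ld = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("href", "").startswith("/"):
            self.links.append(a["href"])
        if tag == "script" and a.get("type") == "application/ld+json":
            self._in_ld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ld = False

    def handle_data(self, data):
        if self._in_ld:
            self.jsonld.append(data)

def audit(html):
    p = AuditParser()
    p.feed(html)
    return {"links": set(p.links), "jsonld_blocks": len(p.jsonld)}

# Assumed sample snapshots: 'browser' is your visible DOM,
# 'wrs' is the rendered HTML reported by URL Inspection.
browser = ('<a href="/pricing">Pricing</a><a href="/specs">Specs</a>'
           '<script type="application/ld+json">{"@type":"Product"}</script>')
wrs = '<a href="/pricing">Pricing</a>'

missing_links = audit(browser)["links"] - audit(wrs)["links"]
print(missing_links)                     # internal links WRS never saw
print(audit(wrs)["jsonld_blocks"])       # structured data missing post-render
```

Any nonempty diff on internal links or structured data is a load-bearing divergence worth fixing before relying on WRS.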

The other major search engines have partially caught up. Bing renders JavaScript with a similar but separate stack. Most AI bots (GPTBot, ClaudeBot, PerplexityBot) don't render at all; they read the raw HTML response. Server-side rendering (SSR) or pre-rendering remains the safest option for full crawler coverage.

Common misconceptions

  • "WRS = Chrome, so anything that works in my browser works for Googlebot." WRS is Chromium-based and evergreen, but headless and stateless. Cookie-gated content, interaction-gated content, and long-running network calls behave differently than they do in a real Chrome session.
  • "WRS happens immediately after crawl." It's a queue. The second wave can lag seconds to weeks. Pages that change frequently and rely on JS rendering may show stale content in the index for noticeable windows.
  • "WRS executes all JavaScript on the page." It executes as much as it can within budget. Slow or asynchronous code paths may be cut off mid-render. Test with the URL Inspection live test, which uses the same engine.
  • "WRS renders all sites the same way." Resource allocation is per-page and influenced by crawl priority. Popular pages get more rendering attention than long-tail pages, which is part of why low-traffic JS-only content struggles to index.