
Puppeteer Screenshot: Complete Guide (2026)

RendShot Team · 9 min read

If you are taking screenshots programmatically in Node.js, you are probably using Puppeteer — or about to. This guide covers the full page.screenshot() API, advanced patterns, and the five production problems that make most teams eventually look for alternatives.

Basic Page Screenshot

The page.screenshot() method is the core API. Here is a complete example showing all the commonly used options:

javascript
const puppeteer = require('puppeteer')

async function takeScreenshot() {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()

  // Set viewport before navigating
  await page.setViewport({ width: 1280, height: 720, deviceScaleFactor: 2 })

  await page.goto('https://example.com', { waitUntil: 'networkidle0' })

  const buffer = await page.screenshot({
    type: 'png',              // 'png' | 'jpeg' | 'webp'
    fullPage: false,          // true captures the entire scrollable page
    omitBackground: true,     // transparent background (PNG/WebP only)
    encoding: 'binary',       // 'binary' | 'base64'
    // quality: 80,           // 0-100, only for jpeg/webp
    // clip: { x: 0, y: 0, width: 800, height: 600 }, // crop region
    // path: './screenshot.png', // save directly to file
  })

  await browser.close()
  return buffer
}

Key parameters:

  • type — Output format. PNG is lossless and supports transparency. JPEG and WebP are smaller but lossy.
  • fullPage — When true, captures the entire scrollable content, not just the viewport. Useful for long pages but can produce very large images.
  • clip — Captures a specific rectangular region of the page. Define x, y, width, and height in CSS pixels.
  • omitBackground — Removes the default white background, producing a transparent PNG. Only works with PNG and WebP.
  • quality — Compression quality for JPEG and WebP (0-100). Ignored for PNG.
  • encoding — Set to 'base64' if you need a base64 string instead of a Buffer.

For HTML-to-image use cases (social cards, OG images), use page.setContent(html) instead of page.goto(url):

javascript
await page.setContent(html, { waitUntil: 'networkidle0' })
const png = await page.screenshot({ type: 'png', omitBackground: true })

Element Screenshots

Instead of capturing the full page, you can screenshot a specific DOM element. This is useful for generating images of individual components — a pricing card, a chart, a social preview block.

javascript
async function screenshotElement(html, selector) {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.setViewport({ width: 1200, height: 800, deviceScaleFactor: 2 })
  await page.setContent(html, { waitUntil: 'networkidle0' })

  const element = await page.$(selector)
  if (!element) throw new Error(`Element not found: ${selector}`)

  const buffer = await element.screenshot({
    type: 'png',
    omitBackground: true,
  })

  await browser.close()
  return buffer
}

// Usage: capture only the card element
const png = await screenshotElement(myHtml, '.social-card')

The element screenshot automatically crops to the element's bounding box, so border-radius, backgrounds, and other styles painted inside that box are preserved. One caveat: anything that extends beyond the bounding box — box-shadows, absolutely positioned children, overflowing content — is cut off.
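Because the crop follows the bounding box, one workaround for capturing effects that extend past it is to screenshot a padded clip region instead. A sketch — screenshotWithPadding and the 24px default are illustrative, not part of Puppeteer's API:

```javascript
// Capture an element plus `pad` pixels on each side, so effects that
// extend past the bounding box (like box-shadow) are included.
async function screenshotWithPadding(page, selector, pad = 24) {
  const element = await page.$(selector)
  if (!element) throw new Error(`Element not found: ${selector}`)
  const box = await element.boundingBox()
  if (!box) throw new Error(`Element not visible: ${selector}`)
  return page.screenshot({
    type: 'png',
    clip: {
      x: Math.max(0, box.x - pad),
      y: Math.max(0, box.y - pad),
      width: box.width + pad * 2,
      height: box.height + pad * 2,
    },
  })
}
```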

Advanced Techniques

Wait Strategies

The most common cause of broken screenshots is taking them too early — before fonts load, images render, or JavaScript finishes executing.

javascript
// Option 1: Wait for network to be idle (no requests for 500ms)
await page.goto(url, { waitUntil: 'networkidle0' })

// Option 2: Wait for DOM content only (faster, no external resources)
await page.goto(url, { waitUntil: 'domcontentloaded' })

// Option 3: Wait for a specific element to appear
await page.goto(url, { waitUntil: 'domcontentloaded' })
await page.waitForSelector('.chart-rendered', { timeout: 5000 })

// Option 4: Wait for fonts to finish loading
await page.evaluateHandle('document.fonts.ready')

networkidle0 is the safest default — it waits until there are zero in-flight network requests for 500ms. But it can be slow on pages with long-polling or WebSocket connections. In those cases, combine domcontentloaded with waitForSelector targeting an element that signals render completion.
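The last three options combine naturally into a single helper. A sketch — gotoAndSettle and the 5-second default are our own naming, and readySelector should be whatever element your page renders last:

```javascript
// Wait until the DOM is parsed, a sentinel element exists, and all
// fonts have finished loading -- in that order.
async function gotoAndSettle(page, url, readySelector, timeout = 5000) {
  await page.goto(url, { waitUntil: 'domcontentloaded' })
  await page.waitForSelector(readySelector, { timeout })
  await page.evaluateHandle('document.fonts.ready')
}
```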

Custom Viewport and Device Emulation

javascript
// High-DPI social card
await page.setViewport({ width: 1200, height: 630, deviceScaleFactor: 2 })

// Mobile screenshot
await page.setViewport({ width: 375, height: 812, deviceScaleFactor: 3, isMobile: true })

// Or use a built-in device preset
const iPhone = puppeteer.KnownDevices['iPhone 15 Pro']
await page.emulate(iPhone)

Injecting CSS and Fonts Before Screenshot

javascript
// Inject a Google Font
await page.addStyleTag({
  url: 'https://fonts.googleapis.com/css2?family=Inter:wght@400;600;700&display=swap',
})

// Inject custom CSS to override styles
await page.addStyleTag({
  content: `
    body { font-family: 'Inter', sans-serif; }
    .ad-banner { display: none !important; }
  `,
})

// Wait for fonts to load after injection
await page.evaluateHandle('document.fonts.ready')

Setting Cookies and Headers for Authenticated Pages

javascript
// Set cookies before navigating
await page.setCookie({
  name: 'session',
  value: 'abc123',
  domain: 'app.example.com',
})

// Set custom headers
await page.setExtraHTTPHeaders({
  'Authorization': 'Bearer token_here',
  'X-Custom-Header': 'value',
})

await page.goto('https://app.example.com/dashboard')

5 Production Pitfalls

The examples above all work on a developer's laptop. Production is a different story. These are the five problems that surface once you deploy Puppeteer to a server handling real traffic.

1. Memory Leaks from Unclosed Browsers

Every puppeteer.launch() spawns a Chromium process. If your code throws an error before browser.close(), that process stays alive — consuming 150-300MB of RAM until the container is killed.

Actual error in production

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

This typically appears after hours of operation when zombie Chromium processes accumulate. Your Node.js process eventually runs out of heap space and crashes.

The fix is a try/finally block around every browser lifecycle, plus a process-level safety net:

javascript
let browser
try {
  browser = await puppeteer.launch()
  const page = await browser.newPage()
  // ... screenshot logic
} finally {
  if (browser) await browser.close()
}

Even with try/finally, leaked processes can happen during ungraceful shutdowns. Production deployments need container memory limits and a process monitor that kills orphaned Chromium instances.

2. Docker Chromium Dependency Hell

Chromium requires system-level libraries that aren't present in minimal Docker images. The first time you try to run Puppeteer in Docker, you'll see something like this:

Missing shared libraries

error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory

Or: Failed to launch the browser process! /node_modules/puppeteer/.local-chromium/linux-xxx/chrome-linux/chrome: error while loading shared libraries: libatk-1.0.so.0

The standard fix is a long list of apt-get install dependencies:

dockerfile
RUN apt-get update && apt-get install -y \
  libnss3 libatk1.0-0 libatk-bridge2.0-0 libcups2 \
  libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 \
  libxrandr2 libgbm1 libasound2 libpango-1.0-0 \
  libcairo2 libxshmfence1 fonts-liberation \
  --no-install-recommends && rm -rf /var/lib/apt/lists/*

This adds ~200MB to your Docker image and introduces a maintenance burden — every Chromium version bump can change the required system libraries.

3. Concurrency Limits

Each Puppeteer page consumes 100-200MB of RAM depending on page complexity. On a server with 4GB of RAM, you can realistically handle 8-15 concurrent screenshots before hitting memory pressure.

OOM killer at scale

Running 20+ concurrent screenshots on a 4GB instance regularly triggers the Linux OOM killer: Out of memory: Killed process 1234 (chrome) total-vm:2048000kB. Your container restarts, dropping all in-flight requests.

You need a concurrency limiter — a queue that caps the number of simultaneous screenshots and rejects or queues excess requests. This is non-trivial to get right with proper backpressure and timeout handling.
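The core of such a limiter fits in a few lines of plain JavaScript. This sketch caps in-flight tasks and queues the rest; production code would also add per-job timeouts and a queue-length cap for real backpressure:

```javascript
// Returns a function that wraps async tasks so that at most `max`
// of them run at once; extra calls wait in a FIFO queue.
function createLimiter(max) {
  let active = 0
  const queue = []
  const next = () => {
    if (active >= max || queue.length === 0) return
    active++
    const { task, resolve, reject } = queue.shift()
    task()
      .then(resolve, reject)
      .finally(() => { active--; next() })
  }
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject })
      next()
    })
}

// Usage: cap screenshots at 10 in flight
// const limit = createLimiter(10)
// const png = await limit(() => takeScreenshot(url))
```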

4. Cold Start Latency

Launching a new Chromium instance takes 2-5 seconds. On serverless platforms (Lambda, Cloud Functions, Vercel Functions), this cold start happens on every invocation or after idle timeouts.

Latency breakdown

Browser launch: 2-4s. New page creation: 200-500ms. Page rendering: 500ms-2s. Screenshot capture: 100-300ms. Total: 3-7 seconds for a single screenshot — before any network latency to deliver the result.

These figures are from testing on a 2-vCPU Docker container in US-East (early 2026). Your results will vary based on hardware and page complexity.

The mitigation is browser pooling — keeping a warm Chromium instance and reusing it across requests. But pooling introduces its own complexity: stale state between pages, memory growth over time, and the need to periodically recycle the browser process.
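A minimal version of that pattern keeps one shared browser and relaunches it after N pages. A sketch — the recycle threshold of 100 is arbitrary, and a real pool would also handle crashed browsers and concurrent launch races:

```javascript
// Keep one warm browser; relaunch it after `maxUses` pages so memory
// growth and stale state cannot accumulate forever. The `launch`
// function is injected, so the pool is testable without Chromium.
function createBrowserPool(launch, maxUses = 100) {
  let browser = null
  let uses = 0
  return {
    async get() {
      if (browser && uses >= maxUses) {
        await browser.close()
        browser = null
      }
      if (!browser) {
        browser = await launch()
        uses = 0
      }
      uses++
      return browser
    },
  }
}

// Usage with Puppeteer:
// const pool = createBrowserPool(() => puppeteer.launch(), 100)
// const browser = await pool.get()
// const page = await browser.newPage() // close the page, not the browser
```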

5. Font Rendering Inconsistency

The same HTML renders differently on macOS, Linux, and Windows because each OS has different font rendering engines (CoreText, FreeType, DirectWrite) and different default fonts.

Cross-platform mismatch

A social card designed on macOS with San Francisco renders with Liberation Sans on Linux Docker containers. Letter spacing, kerning, and anti-aliasing all differ — your carefully designed 1200x630 card now has text that overflows its container.

The only reliable fix is to bundle every font your HTML uses inside the Docker image or load them via @font-face with absolute URLs. System fonts are never portable.
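Bundling the fonts into the image is a short Dockerfile addition. A sketch, assuming your fonts live in a local ./fonts directory (the paths are illustrative):

```dockerfile
# Copy project fonts into the system font directory and rebuild the
# font cache so Chromium can resolve them by family name.
COPY ./fonts /usr/share/fonts/truetype/custom/
RUN apt-get update && apt-get install -y fontconfig --no-install-recommends \
  && fc-cache -f \
  && rm -rf /var/lib/apt/lists/*
```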

When to Use an API Instead

Puppeteer is the right tool when you need full browser automation — clicking, typing, navigating multi-page flows, or executing complex JavaScript before capture. For those use cases, nothing replaces it.

But if your goal is simply HTML in, image out — social cards, OG images, invoices, reports, email headers — you're taking on a lot of infrastructure complexity for a fundamentally simple operation.

Consider an API when:

  • You need concurrent rendering (more than 5 simultaneous screenshots)
  • You're on a serverless platform where Chromium cold starts are unacceptable
  • You don't want to maintain Chromium in Docker (dependency updates, security patches)
  • You need consistent font rendering across environments
  • Your team would rather ship product than debug OOM kills at 3am

Full disclosure: the example below uses RendShot, which is our product.

Here is the same screenshot operation with the RendShot API — three lines, no Chromium, no infrastructure:

javascript
const response = await fetch('https://api.rendshot.ai/v1/image', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer rs_live_your_key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    html: '<div style="font-family: Inter; padding: 40px;">Hello World</div>',
    width: 1200,
    height: 630,
    format: 'png',
    deviceScaleFactor: 2,
  }),
})

const { url } = await response.json()
// CDN-hosted image, ready to use

The same request with the Python client:

python
import rendshot

client = rendshot.Client("rs_live_your_key")
result = client.generate(
    html='<div style="font-family: Inter; padding: 40px;">Hello World</div>',
    width=1200,
    height=630,
    format="png",
    device_scale_factor=2,
)
print(result.url)  # CDN-hosted image URL

No browser lifecycle. No memory management. No Docker dependencies. No font inconsistencies. The rendering infrastructure runs on managed servers with browser pooling, concurrency limiting, and consistent font environments already solved.

RendShot's free tier includes 100 renders per month — enough to evaluate whether it fits your workflow before committing.

Related Guide
Puppeteer Alternative for Screenshots
Side-by-side comparison of Puppeteer vs RendShot API for screenshot generation — performance, cost, and migration guide.
