Every link shared on Twitter, LinkedIn, Slack, or Discord gets a visual preview. That preview is controlled by a single og:image meta tag — and most sites either leave it blank or point it at a static fallback image that says nothing about the page content.
The result: links that look identical in every feed, competing for attention against rich, contextual previews. Data from Ahrefs and Buffer consistently shows that posts with custom OG images get 2-3x higher click-through rates than those without. LinkedIn's own engineering blog reported that articles with unique preview images receive 98% more comments than those with generic thumbnails.
This guide covers three ways to generate OG images dynamically, with working code for each, so every page on your site gets a unique social preview without manual design work.
## How Social Platforms Render Previews
When someone pastes a URL into Twitter, LinkedIn, or Slack, the platform's crawler fetches the page and looks for Open Graph meta tags in the `<head>`:
```html
<meta property="og:title" content="Dynamic OG Images: Complete Guide" />
<meta property="og:description" content="Automate social previews for every page." />
<meta property="og:image" content="https://example.com/og/dynamic-og-images.png" />
<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="630" />
<meta name="twitter:card" content="summary_large_image" />
```

The crawler downloads the image at `og:image`, resizes it, and displays it as the link preview. This happens once — the result is cached for hours to days, depending on the platform.
Key constraints:
- Image must be publicly accessible — no auth headers, no localhost
- 1200x630 pixels is the universal safe size (1.91:1 aspect ratio)
- Under 5MB (Twitter) or 8MB (Facebook) file size
- JPEG or PNG — WebP support is inconsistent across crawlers
- The crawler has a short timeout (typically 2-5 seconds) — your image must be available fast
If the `og:image` tag is missing or the URL returns an error, platforms fall back to a generic preview with just the title text. That's the preview you're competing against.
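These constraints are easy to enforce in CI before a crawler ever sees the page. A minimal sketch of such a check (the type and function names are ours; the thresholds mirror the list above):

```typescript
type OgImageCandidate = {
  width: number
  height: number
  bytes: number
  contentType: string
}

// Validate an OG image against the crawler constraints listed above.
// Returns a list of problems; an empty list means the image is safe to ship.
function checkOgImage(img: OgImageCandidate): string[] {
  const problems: string[] = []
  if (img.width !== 1200 || img.height !== 630) {
    problems.push(`expected 1200x630, got ${img.width}x${img.height}`)
  }
  const ratio = img.width / img.height
  if (Math.abs(ratio - 1.91) > 0.02) {
    problems.push(`aspect ratio ${ratio.toFixed(2)} is outside 1.91:1`)
  }
  if (img.bytes > 5 * 1024 * 1024) {
    problems.push("over Twitter's 5MB limit")
  }
  if (!['image/png', 'image/jpeg'].includes(img.contentType)) {
    problems.push(`crawler support for ${img.contentType} is inconsistent`)
  }
  return problems
}
```

Run it against the rendered image's metadata in a test or a pre-deploy step, and fail the build on a non-empty result.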
## Three Implementation Paths
There are three mainstream approaches to generating OG images dynamically. Each has real trade-offs in rendering fidelity, infrastructure cost, and developer experience.
### 1. @vercel/og + Satori (Edge-Rendered JSX)
Vercel's `@vercel/og` package uses Satori to convert a JSX subset into SVG, then renders it to PNG with a WebAssembly-based renderer. It runs on Vercel Edge Functions with sub-100ms response times.
```tsx
// app/api/og/route.tsx (Next.js App Router)
import { ImageResponse } from 'next/og'

export const runtime = 'edge'

export async function GET(req: Request) {
  const { searchParams } = new URL(req.url)
  const title = searchParams.get('title') ?? 'Default Title'

  return new ImageResponse(
    (
      <div
        style={{
          display: 'flex',
          flexDirection: 'column',
          justifyContent: 'center',
          padding: '60px',
          width: '100%',
          height: '100%',
          background: 'linear-gradient(135deg, #1a1a2e 0%, #16213e 100%)',
          color: 'white',
          fontFamily: 'Inter',
        }}
      >
        <div style={{ fontSize: 64, fontWeight: 700, lineHeight: 1.2 }}>
          {title}
        </div>
        <div style={{ fontSize: 24, marginTop: 20, opacity: 0.7 }}>
          yoursite.com
        </div>
      </div>
    ),
    { width: 1200, height: 630 }
  )
}
```

Strengths:

- Extremely fast — edge-rendered, <100ms in most regions
- No external service dependency — runs within your Vercel deployment
- Native Next.js integration with `ImageResponse`
Limitations:
- Subset of CSS — Satori supports flexbox but not CSS Grid; no `position: absolute` on nested elements, no `box-shadow`, limited `background` properties
- No real HTML rendering — it converts JSX to SVG paths, so browser-specific features (web fonts via `@font-face`, complex gradients, `backdrop-filter`) don't work
- Vercel-centric — runs on the Edge Runtime, tightly coupled to Vercel's platform
- Custom fonts require loading `.ttf` files as an `ArrayBuffer`, adding cold-start weight
For simple text-and-gradient cards, this is the fastest option. But if your OG images need full CSS — custom layouts, Tailwind classes, brand typography with multiple weights — you'll hit Satori's rendering limits quickly.
### 2. Puppeteer (Self-Hosted Headless Chrome)
Puppeteer gives you a real Chromium instance, which means full HTML/CSS support. You render your template as a normal web page and screenshot it.
```typescript
import puppeteer from 'puppeteer'

// Escape user-supplied strings before interpolating them into HTML,
// so a title containing "<" or quotes can't break (or exploit) the template.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
}

async function generateOgImage(title: string, author: string): Promise<Buffer> {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  })
  try {
    const page = await browser.newPage()
    await page.setViewport({ width: 1200, height: 630, deviceScaleFactor: 2 })

    const html = `
      <!DOCTYPE html>
      <html>
      <head>
        <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;700&display=swap"
              rel="stylesheet">
        <style>
          * { margin: 0; box-sizing: border-box; }
          body {
            width: 1200px; height: 630px;
            display: grid; place-content: center;
            padding: 60px;
            background: linear-gradient(135deg, #1a1a2e, #16213e);
            font-family: 'Inter', sans-serif;
            color: white;
          }
          h1 { font-size: 56px; line-height: 1.2; }
          .author { font-size: 22px; margin-top: 24px; opacity: 0.7; }
        </style>
      </head>
      <body>
        <div>
          <h1>${escapeHtml(title)}</h1>
          <div class="author">${escapeHtml(author)}</div>
        </div>
      </body>
      </html>
    `

    await page.setContent(html, { waitUntil: 'networkidle0' })
    const buffer = await page.screenshot({ type: 'png' })
    return Buffer.from(buffer)
  } finally {
    // Always close the browser, even if rendering throws — leaked
    // Chromium processes are the classic Puppeteer memory leak.
    await browser.close()
  }
}
```

Strengths:
- Full browser rendering — CSS Grid, Flexbox, custom fonts, gradients, shadows, backdrop-filter — everything works
- JavaScript execution — dynamic charts (Chart.js, D3) render correctly
- Complete control over the rendering pipeline
Limitations:
- Chromium is ~400MB — Docker images are large and slow to deploy
- Each page consumes 100-200MB RAM; concurrent generation requires careful resource management
- Cold start is 2-5 seconds per browser launch
- You own the infrastructure — process management, health checks, memory leak prevention, scaling
Running Puppeteer in production for OG image generation means maintaining a dedicated service with headless Chrome. Most teams end up building a tab pool, a job queue, and a caching layer — effectively reimplementing an image rendering API from scratch. Factor in the engineering time, not just the server cost.
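The tab pool is the piece teams most often underestimate. A minimal sketch of the underlying pattern, written over a generic resource type so it is not tied to Puppeteer's API (all names here are ours, not from any library):

```typescript
// A tiny fixed-size resource pool: the pattern teams build around
// pre-warmed Puppeteer pages to avoid paying launch cost per render.
class Pool<T> {
  private idle: T[]
  private waiters: ((r: T) => void)[] = []

  constructor(resources: T[]) {
    this.idle = [...resources]
  }

  // Resolve immediately if a resource is idle; otherwise queue the caller.
  acquire(): Promise<T> {
    const r = this.idle.pop()
    if (r !== undefined) return Promise.resolve(r)
    return new Promise((resolve) => this.waiters.push(resolve))
  }

  // Hand the resource to the oldest waiter, or return it to the idle set.
  release(r: T): void {
    const waiter = this.waiters.shift()
    if (waiter) waiter(r)
    else this.idle.push(r)
  }
}

// Usage sketch (assumption: pages are created once at startup):
// const pool = new Pool(await Promise.all([browser.newPage(), browser.newPage()]))
// const page = await pool.acquire()
// try { /* setContent + screenshot */ } finally { pool.release(page) }
```

A production version adds per-page health checks and recycling after N renders; the acquire/release skeleton stays the same.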
### 3. RendShot API (Full HTML/CSS, No Infrastructure)
Full disclosure: this is our product.
RendShot renders full HTML/CSS to images via a single API call. For OG images specifically, its template system lets you define the HTML once and pass in variables (title, author, category) per page — no string interpolation, no XSS concerns.
Define a template once:
Create a template in the RendShot dashboard (or via API) with variable placeholders:
```html
<!-- Template: og-blog-post -->
<html>
<head>
  <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;700&display=swap"
        rel="stylesheet">
  <style>
    * { margin: 0; box-sizing: border-box; }
    body {
      width: 1200px; height: 630px;
      display: grid; place-content: center;
      padding: 60px;
      background: linear-gradient(135deg, #1a1a2e, #16213e);
      font-family: 'Inter', sans-serif; color: white;
    }
    h1 { font-size: 56px; line-height: 1.2; max-width: 900px; }
    .meta { font-size: 22px; margin-top: 24px; opacity: 0.7; }
    .tag {
      display: inline-block; padding: 6px 16px;
      border: 1px solid rgba(255,255,255,0.2);
      border-radius: 20px; font-size: 14px; margin-top: 20px;
    }
  </style>
</head>
<body>
  <div>
    <h1>{{title}}</h1>
    <div class="meta">{{author}} · {{date}}</div>
    <span class="tag">{{category}}</span>
  </div>
</body>
</html>
```

Generate per page:
```typescript
const response = await fetch('https://api.rendshot.ai/v1/image', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer rs_live_your_key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    template_id: 'tmpl_og_blog_post',
    variables: {
      title: 'Dynamic OG Images: Complete Guide',
      author: 'RendShot Team',
      date: 'April 2026',
      category: 'Engineering',
    },
    width: 1200,
    height: 630,
    format: 'png',
  }),
})

const { url } = await response.json()
// url → https://assets.rendshot.ai/abc123.png (CDN-hosted, ready to use)
```

Or with the Python SDK:

```python
import rendshot

client = rendshot.Client("rs_live_your_key")

result = client.generate(
    template_id="tmpl_og_blog_post",
    variables={
        "title": "Dynamic OG Images: Complete Guide",
        "author": "RendShot Team",
        "date": "April 2026",
        "category": "Engineering",
    },
    width=1200,
    height=630,
    format="png",
)

print(result.url)
```

Strengths:
- Full HTML/CSS rendering — same fidelity as Puppeteer (real Chromium under the hood)
- No infrastructure — no Docker, no browser processes, no memory management
- Template system — define once, generate thousands of variants with type-safe variables
- CDN-hosted output — images are served from a global CDN, no storage to manage
Limitations:
- Requires an API key (free tier: 100 renders/month)
- No JavaScript execution — HTML must be self-contained (no client-side chart libraries)
- Average latency ~1.2 seconds per render (vs <100ms for @vercel/og)
## Comparison at a Glance
| Feature | @vercel/og | Puppeteer | RendShot API |
|---|---|---|---|
| Full HTML/CSS | No | Yes | Yes |
| CSS Grid support | No | Yes | Yes |
| Custom web fonts | Limited (.ttf only) | Yes | Yes |
| Tailwind CSS | No | Yes | Yes |
| JS execution | No | Yes | No |
| Avg. latency | <100ms | 2-5s | ~1.2s |
| Infrastructure | Vercel Edge | Docker + Chromium | None |
| Template system | No | DIY | Yes |
| CDN output | DIY | DIY | Yes |
| Cost | Free (Vercel) | Server costs | Free tier, then $19/mo |
*Latency estimates based on testing in US-East (early 2026). Pricing as of early 2026; check each provider's site for current rates.*
## Framework Integration
### Next.js (App Router)
Generate OG images at build time for static pages, or on demand for dynamic content. Use `generateMetadata` to wire the OG image URL into your page's meta tags.
```typescript
// lib/og.ts
import { kv } from '@vercel/kv' // or any KV store / database client

export async function getOgImageUrl(title: string, slug: string): Promise<string> {
  // Check the cache first so each page is only rendered once
  const cached = await kv.get(`og:${slug}`)
  if (cached) return cached as string

  const res = await fetch('https://api.rendshot.ai/v1/image', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.RENDSHOT_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      template_id: 'tmpl_og_blog_post',
      variables: { title },
      width: 1200,
      height: 630,
      format: 'png',
    }),
  })

  const { url } = await res.json()
  await kv.set(`og:${slug}`, url, { ex: 86400 * 30 }) // cache for 30 days
  return url
}
```

```typescript
// app/blog/[slug]/page.tsx
import { getOgImageUrl } from '@/lib/og'

export async function generateMetadata({ params }: { params: { slug: string } }) {
  const post = await getPostBySlug(params.slug) // your content loader
  const ogImage = await getOgImageUrl(post.title, params.slug)

  return {
    title: post.title,
    description: post.excerpt,
    openGraph: {
      title: post.title,
      description: post.excerpt,
      images: [{ url: ogImage, width: 1200, height: 630 }],
    },
    twitter: {
      card: 'summary_large_image',
      images: [ogImage],
    },
  }
}
```

### Astro
Create an API endpoint that generates and redirects to the OG image.
```typescript
// src/pages/og/[slug].png.ts
import type { APIRoute } from 'astro'
import { getEntry } from 'astro:content'

export const GET: APIRoute = async ({ params }) => {
  const post = await getEntry('blog', params.slug!)

  const res = await fetch('https://api.rendshot.ai/v1/image', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${import.meta.env.RENDSHOT_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      template_id: 'tmpl_og_blog_post',
      variables: {
        title: post?.data.title ?? 'Blog Post',
        date: post?.data.date?.toLocaleDateString('en-US', {
          month: 'long',
          year: 'numeric',
        }) ?? '',
      },
      width: 1200,
      height: 630,
      format: 'png',
    }),
  })

  const { url } = await res.json()
  return Response.redirect(url, 302)
}
```

Then reference it in your layout:

```astro
<!-- BaseLayout.astro -->
<meta property="og:image" content={`${Astro.site}og/${slug}.png`} />
```

For static blogs, generate OG images during the build rather than on each request. Use `getStaticPaths` in Astro or `generateStaticParams` in Next.js to pre-render all OG images and store the resulting CDN URLs in your content metadata.
## CI/CD: GitHub Actions
Generate OG images for all new or changed content during your deploy pipeline. This approach works with any framework.
```yaml
# .github/workflows/generate-og.yml
name: Generate OG Images

on:
  push:
    branches: [main]
    paths: ['content/blog/**']

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - name: Find changed posts
        id: changed
        run: |
          echo "files=$(git diff --name-only HEAD~1 -- content/blog/ | tr '\n' ' ')" >> $GITHUB_OUTPUT

      - name: Generate OG images
        if: steps.changed.outputs.files != ''
        run: |
          for file in ${{ steps.changed.outputs.files }}; do
            title=$(grep '^title:' "$file" | sed 's/title: "\(.*\)"/\1/')
            slug=$(basename "$file" .mdx)
            curl -s -X POST https://api.rendshot.ai/v1/image \
              -H "Authorization: Bearer ${{ secrets.RENDSHOT_API_KEY }}" \
              -H "Content-Type: application/json" \
              -d "{
                \"template_id\": \"tmpl_og_blog_post\",
                \"variables\": { \"title\": \"$title\" },
                \"width\": 1200,
                \"height\": 630,
                \"format\": \"png\"
              }" | jq -r '.url' >> og-urls.txt
            echo "$slug: $(tail -1 og-urls.txt)"
          done

      - name: Upload OG URL manifest
        uses: actions/upload-artifact@v4
        with:
          name: og-urls
          path: og-urls.txt
```

This pattern works well for content-driven sites where pages are added infrequently. The generated URLs point to CDN-hosted images that persist independently of your build artifacts.
## Template Design Tips
A well-designed OG template makes every shared link a branded touchpoint. Here are practical guidelines from rendering thousands of OG images.
**Safe zone.** Twitter crops roughly 15px from the top and bottom; LinkedIn crops more aggressively on mobile. Keep all text and key visuals within a 1100x530px centered area inside the 1200x630 canvas. Padding of 50-60px on all sides handles this naturally.

**Typography.** Use one font family and at most two weights. Set the title at 48-64px and the subtitle at 20-24px. Avoid going below 18px — text that small becomes unreadable in a feed thumbnail. Widely used sans-serifs (Inter, Geist, Roboto) load quickly and render consistently across rendering engines.

**Brand consistency.** Use your brand's background color or a subtle gradient rather than a photograph. Photos compress poorly at social-preview sizes and compete with the text for attention. A solid-color background with bold text outperforms busy imagery in A/B tests.

**Contrast.** Social feeds have light and dark modes, so test your template against both `#FFFFFF` and `#1A1A1A` surrounds. White text on a dark background tends to work well in both contexts. Avoid thin font weights; they disappear at thumbnail resolution.

**File format.** Use PNG for OG images with flat colors, text, and sharp edges; JPEG is fine for photo-heavy templates. Avoid WebP, since some social platform crawlers still don't support it. Keep the file under 500KB for fast crawler fetches.
Each social platform caches OG images independently and refreshes on different schedules. After updating an OG image, use Facebook's Sharing Debugger, Twitter's Card Validator, and LinkedIn's Post Inspector to force a cache refresh. Slack caches aggressively — you may need to wait up to 30 minutes for updates.
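Because crawlers cache by URL, a common workaround is to make the URL itself change whenever the template or its inputs change, for example by appending a short content hash as a query parameter. A sketch (the helper names are ours; FNV-1a is used only as a cheap, stable hash):

```typescript
// 32-bit FNV-1a hash: small, deterministic, good enough for cache busting
// (not for security).
function fnv1a(s: string): string {
  let h = 0x811c9dc5
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 0x01000193) >>> 0
  }
  return h.toString(16)
}

// Append a version derived from the render inputs, so updating the title
// or template produces a new URL that crawlers treat as uncached.
function versionedOgUrl(baseUrl: string, inputs: Record<string, string>): string {
  const v = fnv1a(JSON.stringify(inputs))
  const sep = baseUrl.includes('?') ? '&' : '?'
  return `${baseUrl}${sep}v=${v}`
}
```

This doesn't replace the debugger tools for already-shared links, but it guarantees fresh previews for everything shared after a change.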
## When to Use Which Approach
**Choose `@vercel/og`** if you're on Vercel, your OG images are simple text-on-gradient cards, and sub-100ms latency matters (e.g., generating on every request without caching).

**Choose Puppeteer** if you need JavaScript execution in your OG templates (interactive charts rendered to static images) and you already have container infrastructure with headroom for Chromium processes.

**Choose the RendShot API** if your OG templates use full HTML/CSS (Tailwind, CSS Grid, custom fonts, complex layouts), you want a template system to manage variants, and you don't want to operate browser infrastructure. The API approach pairs especially well with CI/CD pipelines and static site generators where OG images are generated at build time.
For most teams shipping content sites, the API approach eliminates the most operational surface area. You define an HTML template, call an endpoint with variables, and get back a CDN URL. The rendering infrastructure is someone else's problem.