
Notes from a conversation that grew from Astro's dev-api into SSR / Server Islands

Overview

While asking Claude how the "Pick Cover" button embedded in Takazudo Modular's guide articles works (a dev-environment-only feature for re-picking a YouTube video's thumbnail), the conversation expanded into Astro's architecture in general (the differences between SSR, SSG, and server islands). I'm keeping it here as-is as a note to self. The conversation happened in English, so it stays in English.

Pick Cover implementation summary

Since this is the premise of the conversation, a quick summary. Takazudo Modular's guide articles use a <YoutubeAt> MDX component, and a "Pick Cover" button appears only in the dev environment. Pressing it opens a dialog for picking a different capture image; confirming sends POST /api/dev/youtube-at-pick, and the Astro dev server rewrites the capture="..." attribute in the corresponding MDX file directly (writing to a temp file, then atomically fs.rename-ing it into place).

The implementation is a custom integration (src/astro/integrations/dev-api.ts) using Astro's APIRoute + injectRoute. The endpoint file lives outside src/pages/ (in src/astro/lib/dev-api/), so it never leaks into the production build. injectRoute is only called when command === 'dev', so astro build is a complete no-op. This isn't a Vite feature: it repurposes the APIRoute machinery that Astro layers on top of Vite as a dev-only tool. Production is output: 'static', so the POST handler couldn't run there anyway, and the MDX files don't even exist on the deploy target.

The conversation follows, almost verbatim. It was in English, so it stays in English.

The conversation

Me

our YoutubeAt has a “Pick Cover” button, and it opens a dialog where we can pick a new cover. does this get reflected in the mdx instantly? what tech are we using for this?

Claude

Yes — picking a cover writes to the MDX on disk immediately, and the dev server then hot-reloads it. Here’s the exact chain:

Client — components/mdx/youtube-at/youtube-at.tsx

The <YoutubeAt> component renders a “Pick Cover” button only in dev + WIP mode (i.e. when videoDir is set and import.meta.env.DEV is true). Clicking it opens CaptureGalleryDialog; on confirm it:

  1. Reads the article slug from location.pathname (/guides/<slug>/).
  2. POST /api/dev/youtube-at-pick/ with { articleSlug, id, capture }.
  3. Also updates local React state so the cover refreshes instantly without waiting for HMR.

Server — src/astro/lib/dev-api/youtube-at-pick.ts

  • Validates: slug [a-z0-9-]+, id [a-zA-Z0-9_-]+, capture capture-HH-MM-SS.jpg.
  • Opens src/mdx/guides/<slug>.mdx (and .en.mdx if present — skipped silently if missing).
  • Regex-matches the <YoutubeAt> block whose id="..." matches, then inserts or replaces the capture="..." attribute in-place.
  • Writes atomically: temp file → fs.rename() to final path.
  • Returns 404 if the id isn’t found, 200 with the paths on success.
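The match-and-rewrite plus atomic-write steps can be sketched as two small helpers. This is a hedged reconstruction of the behavior described above, not the project's actual code: `replaceCaptureAttr` and `writeMdxAtomic` are illustrative names, and the regex is simplified (it assumes a self-closing `<YoutubeAt ... />` tag).

```typescript
import { writeFile, rename } from "node:fs/promises";

// Find the <YoutubeAt ... /> tag with the given id, then replace or insert
// its capture="..." attribute. Returns null if the id isn't found (the
// caller would turn that into a 404).
function replaceCaptureAttr(
  source: string,
  id: string,
  capture: string,
): string | null {
  // id is pre-validated as [a-zA-Z0-9_-]+, so it's safe to interpolate.
  const block = new RegExp(`<YoutubeAt\\b[^>]*\\bid="${id}"[^>]*/>`);
  const match = source.match(block);
  if (!match) return null;

  const tag = match[0];
  const updated = /\bcapture="[^"]*"/.test(tag)
    ? tag.replace(/\bcapture="[^"]*"/, `capture="${capture}"`) // replace existing attribute
    : tag.replace(/\s*\/>$/, ` capture="${capture}" />`); // insert a new one

  return source.replace(tag, updated);
}

// Atomic write: write the full content to a temp path, then rename over
// the original, so a crash mid-write never leaves a half-written MDX file.
async function writeMdxAtomic(path: string, content: string): Promise<void> {
  const tmp = `${path}.tmp`;
  await writeFile(tmp, content, "utf8");
  await rename(tmp, path);
}
```

The rename step is what makes the write safe to interleave with Vite's file watcher: the watcher only ever sees the old file or the complete new one.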

Dev-only guard — src/astro/integrations/dev-api.ts

  • Route is only injected when command === 'dev', so it never ships to production.
  • The handler file lives under src/astro/lib/dev-api/ outside src/astro/pages/ so Astro’s router can’t accidentally pick it up in a build.
  • Defense in depth: also checks import.meta.env.DEV at runtime.

Change propagation — no custom watcher. Vite’s built-in file watcher sees the .mdx change and Astro HMR refreshes the page. Because the client also patches local state, the user sees the new thumb immediately even before HMR lands.

Caveats

  • YouTubeCaptureWip uses a similar but separate flow (same dir pattern, its own dialog).
  • The regex rewriter is per-component-id, so two YoutubeAt with the same id in one MDX would be ambiguous — ids need to be unique within a file.
  • No auth / no conflict handling — fine for local dev, not safe to expose.
  • Matches only .mdx in src/mdx/guides/, so this flow is guide-scoped (notes/highlights wouldn’t be touched by it as written).

自分

what is the backend of the api? does vite have such a feature?

Claude

The backend is Astro, not raw Vite. Here’s what’s actually happening:

It’s Astro’s own API route system

Astro supports file-based API routes (like Next.js): any .ts file under src/pages/ that exports GET/POST/etc. functions becomes an HTTP endpoint. Those exports are typed as APIRoute from astro, and Astro handles the request/response plumbing (request.json(), new Response(...), status codes) — the Web fetch API shape.

Normally you’d just drop the file in src/pages/api/dev/youtube-at-pick.ts and be done. This project specifically avoids that, for a build-time reason explained next.
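For the contract itself, here is a minimal hedged sketch of what such an endpoint module looks like, written against the plain Web Request/Response shape (in a real Astro project the export would be typed as `APIRoute` from `astro`). The validation patterns echo the ones this project uses; the body is illustrative.

```typescript
// Sketch of a file-based API route (e.g. src/pages/api/dev/youtube-at-pick.ts).
// Astro invokes the exported POST with a context object whose `request`
// is a standard Web Request; the handler returns a standard Response.
const SLUG = /^[a-z0-9-]+$/;
const CAPTURE = /^capture-\d{2}-\d{2}-\d{2}\.jpg$/;

async function POST({ request }: { request: Request }): Promise<Response> {
  // Defense in depth in the real handler: also bail unless import.meta.env.DEV.
  const { articleSlug, capture } = await request.json();
  if (
    typeof articleSlug !== "string" || !SLUG.test(articleSlug) ||
    typeof capture !== "string" || !CAPTURE.test(capture)
  ) {
    return new Response(JSON.stringify({ error: "invalid input" }), { status: 400 });
  }
  // ...rewrite the MDX file on disk here...
  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```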

Why injectRoute + a custom integration

Astro’s default output is output: 'static' — everything is prerendered. A POST handler can’t be prerendered, so it must opt out with export const prerender = false. But then astro build refuses to run without an SSR adapter (Netlify, Node, etc.), failing with [NoAdapterInstalled]. The team didn’t want to install an adapter just to support a dev-only button.

The workaround, in src/astro/integrations/dev-api.ts:

  1. Custom Astro integration (devApiIntegration) hooks into astro:config:setup.
  2. It checks command — Astro passes 'dev' or 'build'.
  3. Only when command === 'dev' does it call injectRoute({ pattern: '/api/dev/youtube-at-pick', entrypoint: ... }).
  4. The entrypoint file is deliberately placed outside src/astro/pages/ (in src/astro/lib/dev-api/) so Astro’s file-based router can’t pick it up on its own. It only becomes a route because injectRoute adds it, and only in dev.

In astro build, the integration is a no-op → the file is never imported → prerender = false never triggers the adapter check → static build works cleanly.

So where does Vite fit?

Astro’s dev server is built on Vite — Vite handles module loading, HMR, and the HTTP server. But the API route concept (export const POST: APIRoute) is Astro’s abstraction on top of Vite, not a Vite feature. Vite by itself only ships middleware hooks (configureServer(server) { server.middlewares.use(...) }) — you’d have to wire req/res yourself. Astro’s APIRoute handler is the Web Request/Response shape wrapped over that.
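For comparison, the raw-Vite version of the same idea would look roughly like this: a plugin using the real `configureServer` hook, with hand-wired Connect-style middleware. This is a sketch of the alternative, not what the project does, and `server` is typed loosely to keep it dependency-free.

```typescript
// Hypothetical Vite plugin registering a dev-only endpoint as middleware.
// `configureServer` is a real Vite plugin hook; everything inside it is
// raw Node req/res with no Request/Response wrapper.
function devApiPlugin() {
  return {
    name: "dev-api-middleware",
    configureServer(server: { middlewares: { use: Function } }) {
      server.middlewares.use("/api/dev/youtube-at-pick", (req: any, res: any) => {
        if (req.method !== "POST") {
          res.statusCode = 405;
          res.end();
          return;
        }
        // Body parsing is manual at this level.
        let body = "";
        req.on("data", (chunk: unknown) => { body += chunk; });
        req.on("end", () => {
          // ...JSON.parse(body), rewrite the MDX file, respond...
          res.setHeader("content-type", "application/json");
          res.end(JSON.stringify({ ok: true }));
        });
      });
    },
  };
}
```

Functionally equivalent to the injectRoute approach, but every piece of request plumbing that Astro's APIRoute provides for free has to be written by hand.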

Summary

| Layer | Role here |
| --- | --- |
| Vite | Dev server, file watcher, HMR (reloads the page after the MDX is rewritten) |
| Astro | APIRoute contract, injectRoute, integration lifecycle (astro:config:setup, command) |
| Custom integration | Conditional injectRoute only when command === 'dev', so prod builds don’t need an SSR adapter |
| Node fs | Atomic MDX rewrite (writeFile to temp → rename) |

The double-guard (command !== 'dev' in the integration + import.meta.env.DEV in the handler) is specifically so a future refactor can’t accidentally expose this file-writing endpoint in production.

Me

is that astro API designed for this kind of while-in-dev purpose? it sounds like it would never work on the deployed site without using Cloudflare Workers or some other backend system.

Claude

You’ve put your finger on exactly the right tension. The short answer:

Astro API routes are general-purpose, but this project is repurposing them as a dev-only tool. You’re right that this exact handler could never run on a static deploy — that’s the whole point of the workaround.

What Astro API routes are actually for

Astro’s APIRoute is a general backend feature, designed for three production scenarios:

  1. Prerendered endpoints — GET handlers that run at astro build and output static JSON files. Works on any static host. (e.g., /api/products.json baked at build time.)
  2. SSR endpoints — any handler with prerender = false, runs at request time, requires an adapter: @astrojs/node, @astrojs/netlify, @astrojs/cloudflare, @astrojs/vercel, etc. The adapter compiles the handler into whatever runtime the host supports (Node server, Netlify Function, Cloudflare Worker, Vercel Function, Deno Deploy).
  3. Hybrid — mix of the above.

So yes, a POST handler in production is always “SSR adapter + some backend.” There is no fourth mode.

Why this specific handler could never run in prod anyway

Even if the team installed @astrojs/netlify and let this POST deploy as a Netlify Function, it still wouldn’t work — for reasons that go beyond Astro:

  • The MDX source files don’t exist on the server. Netlify Functions only see the built dist/ output. src/mdx/guides/*.mdx is left behind on the build machine.
  • Serverless/edge filesystems are read-only or ephemeral. Cloudflare Workers have no filesystem at all. Netlify Functions have /tmp but it vanishes after the invocation and is per-container. fs.writeFile('src/...') is a no-op everywhere.
  • Even if you could write, there’s nothing to write to — you’d have to commit back to git via the GitHub API, go through a headless CMS, trigger a rebuild, etc. That’s a completely different architecture (a git-backed CMS like Decap/Netlify CMS, TinaCMS, etc.).

So what’s the pattern here, really?

It’s a local authoring tool that piggybacks on the dev server. The author is the only user, the “backend” is their own laptop, and the “database” is the MDX files in their working tree. The handler uses Astro’s APIRoute contract because:

  • It’s a clean Web Request/Response abstraction (nicer than raw Node req/res).
  • Astro already runs it in the same Vite process, so no separate server.
  • Integrations + injectRoute give a clean way to make it structurally impossible to leak into production (the file isn’t even imported during astro build).

A more “idiomatic Vite” alternative would be a Vite plugin with configureServer(server) { server.middlewares.use('/api/dev/...', handler) } — that’s what Vite’s plugin API is actually designed for (dev-only middleware). Functionally equivalent, slightly lower-level. The team chose Astro’s layer because this is an Astro project and the APIRoute ergonomics are nicer.

The general pattern name

In content-heavy static-site projects, this is often called a “dev-time CMS” or “local authoring endpoint”. You’ll see it in:

  • Astro/Next.js blogs with “edit this page” buttons that rewrite frontmatter in dev.
  • Storybook-style tools that save component states back to disk.
  • Tina CMS in its local-only mode.
  • Zola/Hugo projects with companion Go tools that edit TOML frontmatter.

They all share the same trick: the server is the author’s own dev machine, so you can freely fs.writeFile without any of the constraints of a deployed runtime. The deploy target stays 100% static.

Implication for your mental model

If you ever wanted something like the Pick Cover button to work on a deployed site (say, so a non-developer could edit cover images through a live URL), you’d need a totally different architecture:

  • SSR adapter (Node / Cloudflare / Netlify).
  • A persistent store — not the filesystem. Options:
    • Git-backed: POST commits to GitHub via the API, rebuild triggers on push. (TinaCMS, Decap.)
    • Runtime store: Cloudflare R2 / KV / D1, Netlify Blobs, a real database. But then MDX is no longer the source of truth — it’s just what gets rendered from the store at build time.
  • Authentication, because now the internet can hit it.
  • Preview/rollback, because you lose git’s safety net if you skip the commit path.

The current setup deliberately avoids all of that by saying: “this button only exists on Takazudo’s laptop, and the git commit afterward is the publish step.” It’s the right call for a solo-authored content site.

Me

I can understand returning static data easily. but what is Astro SSR? does it trigger a build? if so, it sounds like astro triggers the whole site build. Or do we need a one-page build feature? is that possible with astro?

Claude

Great question — and the confusion is common because “SSR” and “build” are two different things that get conflated.

SSR ≠ build. SSR = per-request rendering.

Here’s the clean mental model:

| Mode | When HTML is generated | Server needed at runtime? | Example |
| --- | --- | --- | --- |
| SSG / static | Once, at astro build | No — just a CDN serving files | takazudomodular.com today |
| SSR | On every request, at request time | Yes — an always-on function/server | Twitter timeline, dashboard |
| ISR / on-demand | First request builds, subsequent requests reuse cached HTML until invalidated | Yes, same as SSR | Next.js blogs, Vercel-hosted sites |

SSR does not trigger a build. The build still happens once, ahead of time — but what gets built is a server bundle (a JS module exporting route handlers), not static HTML files. Deploy that bundle to Node / Cloudflare Workers / Netlify Functions / Vercel Functions, and it sits there. When a request arrives at /guides/oxi-one-mkii-guide-ep9/, the handler runs right then, reads MDX (or DB, or whatever), renders HTML, returns it. Next request repeats. No rebuild.

So SSR is really “the page is a function, invoked per request.” It’s the same thing Express/Rails/Django do — Astro just gives it a nicer API and uses the framework to know how to bundle it.

What Astro actually gives you

Astro has a per-page toggle:

```ts
// src/pages/some-page.astro (one or the other, per page)
export const prerender = true;  // → static HTML at build time (SSG)
export const prerender = false; // → function, invoked per request (SSR)
```

Combined with output:

  • output: 'static' — all pages prerendered. No SSR allowed (this is the zmod project today).
  • output: 'server' — all pages SSR by default. Individual pages can opt into prerendering with prerender = true.
  • output: 'static' + per-page prerender = false (what Astro used to call “hybrid”, now just the default) — mix. Most pages are static, a handful are SSR. But as soon as one page is SSR, you need an adapter and an always-on runtime.

The moment you have any SSR page, every request to that page goes through the function at runtime. Not a build.
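As a concrete anchor, the "mostly static, some SSR" setup is just a config with an adapter installed. This is a hedged sketch, not this project's config (zmod stays `output: 'static'` with no adapter); `@astrojs/node` is one possible adapter among several.

```typescript
// astro.config.ts — sketch of the hybrid posture described above
import { defineConfig } from "astro/config";
import node from "@astrojs/node";

export default defineConfig({
  // Default: every page is prerendered to static HTML.
  output: "static",
  // Required the moment ANY page opts out with `export const prerender = false`;
  // only the non-prerendered pages actually run through it.
  adapter: node({ mode: "standalone" }),
});
```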

“Can Astro rebuild just one page?” — ISR territory

No, Astro core does not have a “rebuild one page” feature. What you’re describing is Incremental Static Regeneration (ISR), which Next.js popularized.

ISR works like this:

  1. Page was built statically at astro build, cached somewhere (CDN, function cache).
  2. Request comes in — CDN serves the cached HTML instantly.
  3. If the cache is older than N seconds (or has been manually invalidated), the runtime regenerates that single page in the background, swaps in the new version, and serves the new version to the next request.

ISR is a property of the hosting platform’s adapter, not Astro itself:

  • Vercel adapter (@astrojs/vercel) — supports ISR via isr: { expiration: 60 } in the adapter config. You can also mark pages as on-demand. This is the closest thing to what you’re imagining.
  • Netlify adapter (@astrojs/netlify) — supports on-demand builders (now called “persistent functions”) with stale-while-revalidate semantics.
  • Cloudflare adapter — no ISR in the traditional sense. You’d do caching via Cache API / KV manually.
  • Node adapter — no ISR. You’d implement cache yourself.

So: “rebuild one page when content changes” is possible, but only on platforms whose adapter layers ISR on top of Astro. And even then, the “rebuild” is a single page handler re-running, not a whole-site astro build.

Why this matters for the “Pick Cover” scenario

Let’s trace what would have to be true for that button to work on the deployed site:

| Option | What happens | Fits Astro how? |
| --- | --- | --- |
| A. Full static + git-backed CMS | Button commits to GitHub → Netlify build hook → full astro build (~3–4 min per your CLAUDE.md) → CDN updates | Current architecture. Works, but the edit isn’t instant. |
| B. SSR the affected pages | Mark /guides/[slug] as prerender = false, store covers in a DB/KV, read on every request | Every guide request now runs a function. Slower, more infra, but edits are instant. |
| C. SSR + ISR on Vercel/Netlify | Guide pages are static-by-default, cached at the edge; the POST handler invalidates just that slug; next request regenerates that one page | Needs a platform adapter that supports on-demand invalidation. Astro core doesn’t give you this directly. |
| D. Client-side fetch of dynamic data | MDX ships static, but the cover image URL is fetched from a separate /api/covers/&lt;id&gt; endpoint on page load | Keeps most of the site static. Only the API endpoint is SSR. Your current “static + a few Netlify Functions” posture is already this shape (search uses MiniSearch + a function). |
Option D is actually what most content-heavy Astro sites do when they need a little dynamic behavior: keep SSG for the page shell, sprinkle a handful of SSR endpoints for the small pieces that need to be live.

So what IS SSR good for?

Concrete examples where SSR pays off:

  • Auth-gated pages — /account/orders must see the logged-in user; can’t be prebuilt.
  • Personalization — A/B tests, geo-based content, user-specific recommendations.
  • Long-tail dynamic routes — e-commerce sites with millions of products where prebuilding all of them would take hours. Build the top 1000, SSR the rest.
  • Search result pages — /search?q=foo, infinite combinations, can’t prerender.
  • Preview mode — draft content from a CMS, rendered on-demand for editors.

For a shop catalog like zmod, with ~hundreds of products and guide articles that change a few times a week, SSG + periodic rebuild is the right answer. SSR would be all downside — slower, more infra, no real benefit.

TL;DR

  • SSR = function invoked per request. Does not trigger a build.
  • Build happens once, ahead of time, producing either static HTML (SSG) or a server bundle (SSR). Usually both.
  • “Rebuild one page” is ISR, which is an adapter/platform feature (Vercel, Netlify persistent functions), not core Astro.
  • zmod stays 100% static in production precisely because none of this is worth the complexity for a content site — the “Pick Cover” trick sidesteps it entirely by pushing all the mutation into dev mode.

Me

ah sorry, my bad. I mixed up SSR and SSG. so Astro has SSR. but as I understand it, Astro’s build stands on a temporarily generated DB-like thing… built from collection frontmatter of mdx, or JSON, that kind of way. when we use Astro’s SSR, what happens to that kind of data? It looks like we need a full website build even if it is cached on Astro’s side.

Claude

Your intuition is exactly right, and you’ve spotted the single most important subtlety about Astro SSR: SSR does not give you a live content layer. Let me unpack it.

What the content layer actually is

In Astro 5+ (and zmod runs “Astro 6” per CLAUDE.md), content collections are powered by the Content Layer API. When you run astro dev or astro build, Astro:

  1. Walks your configured sources (glob loaders for src/mdx/**/*.mdx, custom loaders for APIs/DBs, etc.).
  2. Parses each MDX/JSON file — extracts frontmatter, validates against the Zod schema, compiles the body.
  3. Writes the result into a data store — a serialized cache at .astro/data-store.json (plus assets).
  4. Exposes it via astro:contentgetCollection('guides'), getEntry(...), etc.

This “database-like” thing you’re picturing is real. It’s an in-memory indexed store, fed by a build-time process. Queries against it are just object lookups — no filesystem I/O at request time.
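The build-time indexing described above is configured with `defineCollection` + a loader + a Zod schema. A minimal hedged sketch, with illustrative field names rather than this project's actual schema:

```typescript
// src/content.config.ts — Content Layer setup (Astro 5+ shape, sketch)
import { defineCollection, z } from "astro:content";
import { glob } from "astro/loaders";

const guides = defineCollection({
  // 1. Source: walk the MDX files under src/mdx/guides.
  loader: glob({ pattern: "**/*.mdx", base: "./src/mdx/guides" }),
  // 2. Validate each file's frontmatter against a Zod schema at build time.
  schema: z.object({
    title: z.string(),
    publishedAt: z.coerce.date(),
  }),
});

// 3.–4. Astro serializes the parsed entries into the .astro/ data store and
// exposes them via getCollection('guides') / getEntry('guides', slug).
export const collections = { guides };
```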

What happens to it in SSR mode

Here’s the key fact: the content layer still runs at astro build time. SSR does not change that. What differs is what the build produces:

| Mode | What build produces | Where content data lives |
| --- | --- | --- |
| SSG (output: 'static') | Static HTML for every page | Consumed during build, then discarded — HTML is the artifact |
| SSR (output: 'server') | A JS server bundle exporting route handlers | Serialized into the server bundle itself and shipped to the runtime |

When your SSR page handler calls getCollection('guides') at request time, it’s not reading from src/mdx/guides/*.mdx — those files don’t exist on the deployed function. It’s reading from the snapshot that got baked into the server bundle at build time. In-memory object lookup. Fast. Static.

So “SSR a guide page” gives you:

  • Dynamic params, cookies, headers, auth, query strings, DB reads — yes, these are per-request.
  • Fresh content from the MDX file — no, still a build-time snapshot.

You were right: updating an MDX file in production requires a rebuild, even in SSR mode. SSR buys you live code, not live content.

Why dev mode feels different

In astro dev, the content layer has a file watcher. Edit an MDX file → Astro incrementally re-runs the loader for that one entry → updates the in-memory store → Vite HMR pushes the new data to the browser. That’s why the Pick Cover button “just works” — the file write triggers the watcher, which re-indexes, which HMRs.

None of that machinery exists in production. The watcher is a dev-time Vite plugin; the in-memory store in prod is frozen at build time.

“Do we need a full site build for one MDX change?”

In Astro core: yes. astro build is all-or-nothing. There is no astro build --only src/mdx/guides/foo.mdx. It re-runs the content layer for the whole site, then renders every page (in SSG) or bundles the server (in SSR).

Astro does have some build-time caching — the content layer uses the data store as a cache, so entries whose source file didn’t change can be skipped on the parse step. But the subsequent bundling + rendering steps still touch the whole site. On your project, the 3–4 min build is realistic.

If you want per-page invalidation in production, you’re in ISR / on-demand-revalidation territory, and that’s a platform concern, not an Astro concern:

  • Vercel adapter: set isr: { expiration: 60 } or use on-demand revalidation webhooks — tells Vercel to regenerate just /guides/foo/ on the next request after invalidation.
  • Netlify adapter: on-demand builders / persistent functions — same idea, different API.
  • Cloudflare / Node adapters: no native ISR. You’d roll your own cache with KV/Cache API.

Even then — and this is the subtlety — those platforms are re-running your SSR handler, which reads from the bundled content snapshot from the last full build. They don’t re-execute the content layer per page. So ISR regenerates HTML, but the source data it reads is still from the last astro build.

To actually get “change MDX → page updates, no full rebuild,” you need to stop using the content layer as your source of truth and instead pull content from somewhere live:

```ts
// Dynamic guide route, SSR mode
export const prerender = false;

export async function GET({ params }) {
  const mdx = await fetch(`https://cms.example.com/api/guides/${params.slug}`);
  // ...render on the fly
}
```

That’s the headless-CMS pattern. The content layer is bypassed entirely; getCollection() isn’t involved. It’s a different architecture, with different trade-offs (DB latency per request, no Zod validation at edit time, no git history on content, etc.).

The summary you’re probably reaching for

  • Astro’s content collections are a build-time indexer. The result is a data store.
  • SSG bakes that store into HTML. SSR bakes it into the server bundle. Either way, the store is a snapshot.
  • SSR makes your route handlers dynamic. It does not make your content collections dynamic.
  • “Change MDX → see it live in prod” → no Astro-native solution. Options: rebuild, ISR at the adapter level, or move content out of collections into a live source.
  • zmod’s Pick Cover trick works precisely because it only exists in dev, where the watcher + HMR chain actually connects filesystem changes back to the UI.

This is why dev-only tools that rewrite source files are such a sweet spot for content-heavy Astro sites: you get all the SSG speed + CDN simplicity in prod, and you get live editing feel in dev, without ever needing to justify an SSR runtime.

Me

all right, so Astro SSR uses build-time snapshot data… but if so, what is that SSR for? can it use a Workers or Netlify Functions result in a page — inside the astro template?

Claude

Yes — that is exactly what SSR is for. You’ve just reached the real answer on your own. Let me sharpen the mental model with concrete examples.

The one-sentence version

SSR makes the .astro page’s frontmatter code run per-request, which means you can await fetch(...) at request time — from a Worker, a Netlify Function, a database, a third-party API, whatever — and render the response directly into the template.

The minimal example

```astro
---
// src/pages/inventory.astro
export const prerender = false; // SSR this page

// This runs on every request, on the server (Worker / Function / Node)
const res = await fetch('https://api.example.com/stock');
const stock = await res.json();
---

<Layout>
  <h1>現在の在庫</h1>
  <ul>
    {stock.items.map((item) => <li>{item.name}: {item.count}</li>)}
  </ul>
</Layout>
```

At request time, the server runs the --- block, the fetch hits a live API, and the HTML is rendered with fresh data. No build involved. Deploy that to Cloudflare via @astrojs/cloudflare and the whole Astro app is a Worker — the .astro page runs inside the Worker runtime on every request.

The killer pattern: baked + live in the same page

This is where SSR actually earns its keep. You can mix content-layer data (static, baked at build) with live API data (fetched per request) in a single template:

```astro
---
// src/pages/products/[slug].astro
import { getEntry } from 'astro:content';
export const prerender = false;

const { slug } = Astro.params;

// Static: from the build-time snapshot (your product master data, brand info, specs)
const product = await getEntry('products', slug);

// Live: from a runtime API (stock, price-in-JPY-today, reservation count)
const [stockRes, priceRes] = await Promise.all([
  fetch(`https://inventory.example.com/stock/${slug}`),
  fetch(`https://rates.example.com/jpy/${product.data.priceUSD}`),
]);
const stock = await stockRes.json();
const price = await priceRes.json();
---

<Layout title={product.data.title}>
  <h1>{product.data.title}</h1>
  <p>{product.data.description}</p> {/* static from MDX */}
  <p>本日の価格: ¥{price.jpy.toLocaleString()}</p> {/* live */}
  <p>在庫: {stock.count > 0 ? `${stock.count}` : '売り切れ'}</p> {/* live */}

  <product.Content /> {/* rendered MDX body — still baked */}
</Layout>
```

This is the thing SSR enables that SSG can’t do. The page has both halves: stable content from MDX that doesn’t change between deploys, and volatile data that would be stale if baked. One HTML response, rendered per request, combining both.

Real examples where this pays off

| Use case | Static (content layer) | Live (SSR fetch) |
| --- | --- | --- |
| Product page | Spec, description, photos, brand | Stock count, live FX rate, “N people viewing” |
| Blog post | Article body, frontmatter | Comment count, view count, related-posts from ML service |
| Event page | Event description, venue, date | “Spots remaining”, waitlist status |
| Dashboard | Page chrome, nav, static help text | User-specific data from DB |
| Guide page | Guide content from MDX | Per-user progress, “you’ve completed N/9 episodes” |

The template is the composition layer. That’s SSR’s job — not to make content live (the content layer is still frozen), but to let you mix live request-time data into a page that also uses baked data, inside one rendering pass.

The alternative: keep the page static, hydrate an island

If the live data isn’t needed for SEO / first-paint, you have a second option — keep the page 100% static and put a client-side island on just the volatile piece:

```astro
---
// src/pages/products/[slug].astro  (prerendered, no SSR)
import { getEntry } from 'astro:content';
import StockBadge from '../../components/stock-badge.tsx';

const { slug } = Astro.params;
const product = await getEntry('products', slug);
---

<Layout>
  <h1>{product.data.title}</h1>
  <StockBadge client:load slug={slug} /> {/* React/Preact island, fetches on mount */}
</Layout>
```

```tsx
// src/components/stock-badge.tsx
import { useEffect, useState } from 'react';

export default function StockBadge({ slug }: { slug: string }) {
  const [stock, setStock] = useState<number | null>(null);
  useEffect(() => {
    fetch(`/api/stock/${slug}`)
      .then((r) => r.json())
      .then((d) => setStock(d.count));
  }, [slug]);
  return <p>在庫: {stock ?? "..."}</p>;
}
```

Now the whole page is static HTML on the CDN. Only the stock badge hits the network — from the browser, after hydration. /api/stock/:slug can be a Netlify Function / Cloudflare Worker / whatever.

This is actually how zmod is built today. Static Astro + a handful of Netlify Functions (search, etc.) + client-side fetches from islands when needed. No SSR at all. The dev-api Pick Cover thing is a separate hack that only exists in dev.

When to pick which

| Situation | Answer |
| --- | --- |
| Data is static-enough to bake | SSG. Don’t overthink it. |
| Data is dynamic but only matters for logged-in users, doesn’t affect SEO | Static page + client-side fetch from an island. Cheapest, fastest CDN serve, no SSR infra. |
| Data is dynamic AND must be in the HTML (SEO, OG tags, first-paint) | SSR the page. Fetch at request time, render into template. |
| Page has too many route combinations to prerender (millions of products) | SSR. Or SSG the top-N + SSR the long tail. |
| Auth / session state must gate rendering | SSR. Cookies are server-side. |
| You’re tempted to “SSR because it feels modern” | Don’t. Stay static, use islands. |

The Worker / Netlify-Function question specifically

Three distinct setups people confuse:

  1. Astro SSR deployed as a Worker — @astrojs/cloudflare. The whole Astro app is compiled into a Worker. .astro frontmatter is the Worker code. There’s no “calling a Worker” — the page is the Worker.
  2. Static Astro + separate Workers — Astro is SSG, deployed to CDN. Separate Workers / Netlify Functions exist at /api/* for dynamic bits. Pages call them via client-side fetch (from islands) or build-time fetch (if the data can be baked).
  3. Astro SSR calling OTHER Workers — Astro page runs on Cloudflare/Netlify, and its frontmatter fetches a second Worker in a different service. Classic microservice pattern. Works fine, just adds a network hop.

zmod is squarely #2. A hypothetical “live product stock” feature could go either #2 (client-side from an island) or switch to #3 (SSR the product page, fetch stock during render). The first is simpler; the second gives you SEO-visible stock numbers.

So your framing — “it can use Workers or Netlify Function result in a page inside the Astro template” — is precisely correct. That’s the core of what SSR is for. The content layer stays frozen, the template becomes dynamic, and live data shows up by await fetching it at render time.

Me

hmm, so SSR is literally… SSR? are there 2 patterns?

A: real SSR — the request makes the HTML on the server side, by Workers or Netlify Functions or the like.
B: it’s called SSR, but it’s SSG + hydration — an on-demand API request → update the page.

can we choose both with Astro? Or is it called SSR, but in fact it’s B?

Claude

Great sharpening of the question. Clean answer:

Astro SSR is genuinely A. Real SSR.

When Astro says “SSR,” it means the .astro frontmatter literally runs on a server (Node / Worker / Function) per request, and the HTML that leaves the server already contains the fresh data. Pattern A. No ambiguity.

Pattern B — “static HTML + browser-side fetch that fills in data after hydration” — exists and is very common in Astro projects, but it is not called SSR. Astro calls it islands (or “client-side data fetching in an island”). The server at that point is doing nothing per-request; it’s still serving the same bytes from the CDN to everyone, and all the dynamism lives in the browser after JS boots.

So: two different things, two different names, don’t conflate them.

The four combinations, with names

| Page mode | Runtime data fetch location | Astro name | Infra |
| --- | --- | --- | --- |
| prerender = true | none | SSG (static) | CDN only |
| prerender = true | browser (inside an island) | SSG + client islands | CDN + some API endpoint somewhere |
| prerender = false | server (inside .astro frontmatter) | SSR | SSR adapter (Node/Worker/Function) |
| prerender = false | both server AND browser islands | SSR + client islands | SSR adapter + island API endpoints |

Rows 2 and 3 both give you “live data on the page,” but through completely different mechanisms:

  • Row 2 — the HTML is the same cached bytes for everyone. The user’s browser downloads it, runs JS, the island mounts, fetch() hits an API, the DOM updates. Server sees nothing per-request. First paint is stale, then updates after JS. Bad for SEO on the dynamic piece. Cheap.
  • Row 3 — every request hits the SSR runtime. It runs the frontmatter, calls fetch server-side, builds the HTML with the fresh data baked in, returns it. User sees fresh data at first paint. Good for SEO. More expensive, slower TTFB, needs always-on infra.

Your question — “is Astro’s SSR secretly B?” — no. It’s A. If you want B, you write a static page with an island.

Both are available. You pick per-page.

You can mix them freely in one project:

  • src/pages/index.astro — prerender = true, fully static.
  • src/pages/products/[slug].astro — prerender = true, but embeds a <StockBadge client:load /> island that fetches stock in the browser. (Pattern B.)
  • src/pages/dashboard.astro — prerender = false, SSR, reads cookies, calls DB in frontmatter, returns user-specific HTML. (Pattern A.)
  • src/pages/search.astro — prerender = false, SSR, reads ?q=..., calls search service, renders results. (Pattern A.)

Per-page decision. Astro’s build produces a static output for the prerendered pages and a server bundle for the rest. You need an SSR adapter installed, but only the non-prerendered pages actually use it.

The one complication: “server islands” (Astro 5+)

Worth knowing it exists, since it blurs the line.

Astro 5 added a feature called server islands. You mark a component with server:defer and the rest of the page stays SSG (on the CDN); at request time the browser sends a follow-up request to the SSR runtime to render just that one island, and the result is stitched into the page once it arrives. It looks like:

```astro
---
// page is still prerendered — ships from the CDN
---

<Layout>
  <h1>OXI ONE MKII</h1>
  <ProductDescription />
  {/* static */}
  <StockPanel server:defer>
    {/* rendered by the SSR runtime per request */}
    <p slot="fallback">読み込み中...</p>
  </StockPanel>
</Layout>
```

This is a genuinely third thing: the page itself is SSG (row 1), but it has holes that are filled in by real SSR (row 3) at request time, without going through a client-side fetch from a browser island. It’s somewhere between B and A — the dynamic data is server-rendered (A’s strength) without paying SSR cost for the whole page (B’s strength). Think of it as “partial SSR,” or ESI/edge-side-includes for Astro.

Good for: dynamic fragments where you want server-rendered HTML without sacrificing the CDN cache for the shell. Trade-off: requires an SSR runtime for the island even though the shell is static, and the island shows a fallback until the runtime responds.

TL;DR

  • Astro SSR is real SSR. Server generates HTML per request.
  • The “static page + browser fetch” thing is islands, not SSR. Different mechanism, different name.
  • You pick per-page which mode a page runs in.
  • Server islands are a newer third option that gives you SSR-rendered fragments inside an otherwise-static page — worth knowing but not what “SSR” means in the plain sense.

So when docs say “SSR,” read it as “the server runs your template on every request.” When you see “islands” or client:* directives, that’s browser-side hydration, which is the B pattern you were naming.

自分

about server islands, is that request come as one request? so… the body part first, then the next part later, and a workers-like system handles it? but you said <p slot="fallback">読み込み中...</p> — isn’t it JavaScript hydration?

Claude

Sharp catch. You’ve spotted the thing frameworks love to be vague about. Let me be exact.

It’s two requests, not one. And yes, JS does the swap.

The mechanism is:

  1. Request 1: GET /guides/foo/ → CDN

     Returns the prerendered shell. This HTML contains:

     • Everything that’s static (header, nav, body, footer)
     • The <p slot="fallback">読み込み中...</p> as literal HTML where the island will go
     • A tiny inline <script> that knows how to fetch the island and splice it in

  2. Request 2: GET /_server-islands/StockPanel?p=<encrypted-props> → SSR runtime

     Initiated by that inline script after the page loads. Hits the Astro SSR runtime (the Worker / Netlify Function / Node process). The runtime renders just the <StockPanel> component server-side and returns its HTML fragment.

  3. DOM swap

     The inline script receives the HTML fragment and replaces the fallback node with it.
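The swap step can be pictured as a string splice (a sketch, not Astro's actual inline script: the data attribute and the wrapper convention are assumptions, and the real script mutates the live DOM):

```typescript
// Sketch of the fallback-to-fragment swap. Astro's real inline script works
// on the live DOM with its own markers; this models it as a pure string edit.
function spliceIsland(shellHtml: string, islandId: string, fragment: string): string {
  // Assumed convention: the fallback sits in a marked wrapper element.
  const marker = new RegExp(`<div data-island-fallback="${islandId}">[\\s\\S]*?</div>`);
  return shellHtml.replace(marker, () => fragment); // replacer fn avoids `$` expansion
}
```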

So your instinct is right: the fallback is real, the swap is JS-driven, and there are two HTTP requests. The “server” in “server islands” refers to who renders the island’s HTML (the SSR runtime, not the browser), not how the island arrives in the DOM (JS fetch + innerHTML-ish swap).

But it’s not “hydration” in the strict sense

Here’s where the precision matters — because you’re conflating two things that usually go together but don’t have to:

  • Hydration = the browser downloads a framework runtime (React / Preact / Svelte / Vue), attaches it to server-rendered HTML, and takes over rendering as a live component tree. Event handlers get wired up. State becomes reactive. The component is now “alive” in the browser.
  • DOM insertion = JS receives HTML from somewhere and puts it in the page. No framework takes over. It’s just HTML.

Server islands do DOM insertion, not hydration. The swapped-in HTML is inert — no React/Preact runtime is loaded for it, no state, no event handlers beyond whatever plain <a> / <form> / <button onclick=""> the HTML itself contains. If you want the island to be interactive after it lands, you’d nest a client:load island inside the server island and pay hydration cost for that inner part only.
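If you do need that inner interactivity, the nesting looks roughly like this (a sketch; StockPanel.astro, BuyButton, and the endpoint are hypothetical):

```astro
---
// Hypothetical src/components/StockPanel.astro: rendered per request by the
// SSR runtime because it's used with server:defer. The fetch runs server-side.
import BuyButton from './BuyButton'; // hypothetical React/Preact component
const stock = await fetch('https://example.com/api/stock').then((r) => r.json());
---

<p>In stock: {stock.count}</p>
{/* Only this inner island ships a framework bundle and hydrates in the browser */}
<BuyButton client:load productId={stock.id} />
```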

Contrast the two side-by-side:

|  | <StockPanel client:load /> (browser island) | <StockPanel server:defer /> (server island) |
| --- | --- | --- |
| HTML requests | 1 (CDN only) | 2 (CDN + SSR runtime) |
| Who renders the island’s HTML | Browser, using the component framework (React/Preact) | SSR runtime, using Astro’s server renderer |
| Framework runtime shipped to client | Yes (React/Preact JS bundle) | No, unless the island nests a client island |
| Data fetched… | in the browser, from useEffect or similar | on the server, in the component’s frontmatter |
| Fallback while loading | whatever you render as initial state | literal <slot name="fallback"> HTML from the prerendered page |
| SEO sees the dynamic data | No (browser fetches after load) | No (CDN cache has the fallback, not the island) — same as client island for first-paint SEO |
| CDN cachability of shell | Yes | Yes — the whole point of this mode |
| Needs SSR adapter | No | Yes — the /_server-islands/* route is SSR |
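The adapter requirement in the last row is an astro.config setting. A sketch using the Node adapter (any official adapter works; this project, being fully static with no server islands, doesn't need one at all):

```javascript
// astro.config.mjs sketch: a server-islands/SSR setup needs an adapter to
// host the server bundle (including the /_server-islands/* route).
import { defineConfig } from 'astro/config';
import node from '@astrojs/node'; // could also be Netlify, Vercel, Cloudflare, etc.

export default defineConfig({
  output: 'static',                      // shell pages stay prerendered
  adapter: node({ mode: 'standalone' }), // serves the non-prerendered routes
});
```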

The last two rows are why server islands exist. You want:

  • The shell to sit on the CDN (fast, cheap, cached for everyone),
  • The dynamic piece to be server-rendered (so it doesn’t need a React bundle in the client just to show a number),
  • without SSR-ing the whole page (which would kill CDN caching for the shell).

Server islands hit all three. The cost is: you ship an SSR runtime somewhere to serve the /_server-islands/* endpoint, and the user sees “読み込み中…” for one network round-trip before the real content lands.

Why not a single streaming response?

You might reasonably ask: “Why two separate requests? Couldn’t the server stream the shell first, then the island content second, in a single response?” Some frameworks do this (React Server Components with streaming, Next.js App Router’s streaming SSR, etc.). Astro deliberately chose two separate requests, because:

  • If the shell and island came in one response, the CDN would have to cache or bypass the whole thing as a unit. Caching the shell while keeping the island fresh is impossible with a single response.
  • Two requests let the shell be cached for months at the edge, while the island request goes straight to the SSR runtime every time (or is cached on its own shorter TTL).
  • It also keeps the shell renderable without needing an SSR runtime at all — the shell is 100% static build output. If the SSR runtime is down, the shell still loads (with the fallback visible).

It’s an architectural trade-off favoring independent cachability over single-round-trip latency. Makes sense for content sites that want CDN wins.

So: are server islands “real SSR” or “JS hydration”?

It’s genuinely a hybrid and your discomfort with the categories is warranted:

  • The island’s HTML is produced by real SSR (row 3 from the table last message — server-side rendering per request).
  • The island’s arrival in the page is JS-driven (row 2’s mechanism — client-side fetch + DOM mutation).

Neither “A” nor “B” from your previous question cleanly. It’s “A’s output glued in via B’s plumbing.” That’s why I called it a third thing — it doesn’t fit the same SSG-vs-SSR binary, because it’s SSG for the shell + SSR for a fragment + client-side orchestration to combine them.

If you want a cleaner framing: server islands are ESI (edge-side includes) for Astro, implemented in the browser instead of at the edge. The shell references a sub-fragment; something splices the sub-fragment into the shell; the user sees one composed page. Classic ESI does it at the CDN layer (Varnish, Akamai, Fastly). Astro does it in the client because CDN-level ESI isn’t universally available and doing it in the client works on any static host.

One practical caveat

Because the island arrives after the initial HTML and the swap happens in JS:

  • Users with JS disabled see the fallback forever. This is the same caveat as client islands — no-JS users are second-class citizens for the dynamic piece.
  • Layout shift is possible when the fallback’s size differs from the real island. Reserve space with CSS.
  • The island can’t be in <head> (or at least, anything it tries to put in <head> is awkward), since it’s being spliced into the body after load. Meta tags / OG tags must come from the prerendered shell.
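For the layout-shift point, reserving space can be as simple as sizing the fallback itself (a sketch; StockPanel and the 12rem figure are assumptions):

```astro
<StockPanel server:defer>
  {/* Give the fallback the island's eventual height so the swap doesn't shift layout */}
  <p slot="fallback" style="min-height: 12rem">読み込み中...</p>
</StockPanel>
```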

For SEO-critical dynamic data that must appear at first paint, server islands are the wrong tool — you want full SSR of the page (row 3), not server islands. Server islands are for dynamic data where “loads half a second later” is fine.