l-preview-ui-confirm
Visual regression comparison between preview deploy and production. Randomly picks ~40 pages from the sitemap, captures screenshots at multiple scroll positions on both sites, and uses parallel agents to identify all visual differences.
File Structure
```
l-preview-ui-confirm/
├── SKILL.md
└── scripts/
    └── capture-pages.mjs
```
Preview UI Confirmation
Compare a preview deploy against production by randomly sampling ~40 pages, capturing screenshots at multiple scroll positions, and using parallel agents to identify all visual differences.
Step 1: Determine URLs
Parse arguments or use defaults:
- Preview: first arg, or detect from current PR branch: https://<branch>--takazudomodular.netlify.app
- Production: second arg, or https://takazudomodular.com
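
A minimal sketch of that resolution, assuming Node 18+ and that the current branch name maps directly onto the Netlify subdomain (the argv indices assume `node <script> <preview> <prod>`):

```js
// Hypothetical inline resolution; the URL pattern mirrors Step 1 above.
import { execSync } from 'node:child_process';

const branch = execSync('git branch --show-current', { encoding: 'utf8' }).trim();
const previewUrl = process.argv[2] ?? `https://${branch}--takazudomodular.netlify.app`;
const prodUrl = process.argv[3] ?? 'https://takazudomodular.com';
console.log({ previewUrl, prodUrl });
```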
Step 2: Get All Pages from Sitemap
```bash
curl -s "<preview-url>/sitemap-0.xml" \
  | grep -oP '<loc>\K[^<]+' \
  | sed "s|<preview-url>||g" > /tmp/preview-ui-sitemap.txt
```
If the sitemap is unavailable, crawl the preview site for links from the homepage and key listing pages (/notes/, /guides/, /products/, /brands/, /tags/), as in the sketch below.
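A rough sketch of that fallback, assuming Node 18+ (global fetch) and path-style internal links; the seed paths mirror the listing pages named above:

```js
// Fallback crawl: collect internal hrefs from the homepage and key listing pages.
const previewUrl = process.argv[2]; // e.g. https://<branch>--takazudomodular.netlify.app
const seeds = ['/', '/notes/', '/guides/', '/products/', '/brands/', '/tags/'];
const paths = new Set();
for (const seed of seeds) {
  const html = await (await fetch(previewUrl + seed)).text();
  // Naive href extraction; adequate for same-origin, root-relative links.
  for (const [, href] of html.matchAll(/href="(\/[^"#?]+)"/g)) paths.add(href);
}
console.log([...paths].join('\n'));
```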
Count total pages. Randomly pick ~40 using `shuf -n 40`.
Ensure diversity: include at least 1 from each page type:
- Homepage (/)
- Product pages (/products/*)
- Article list (/notes/)
- Article detail (/notes/* with slug)
- Guide list (/guides/)
- Guide detail (/guides/* with slug)
- Brand pages (/brands/*)
- Tag pages (/tags/*)
- EN locale pages (/en/*)
- Standalone pages (/s/*)
Save the selected URLs to a JSON file at ~/cclogs/zmod/preview-confirm/urls.json.
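
A sketch of the sampling step, assuming the stripped sitemap paths from above are in /tmp/preview-ui-sitemap.txt; the bucket regexes are a hypothetical encoding of the page-type list:

```js
// sample-pages.mjs (hypothetical): pick ~40 paths with at least one per page type.
import { readFileSync, writeFileSync, mkdirSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const all = readFileSync('/tmp/preview-ui-sitemap.txt', 'utf8').trim().split('\n');
const buckets = [
  /^\/$/, /^\/products\/./, /^\/notes\/$/, /^\/notes\/./, /^\/guides\/$/,
  /^\/guides\/./, /^\/brands\/./, /^\/tags\/./, /^\/en\//, /^\/s\//,
];

const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];
const chosen = new Set();
// One representative per page type first, then random fill up to ~40.
for (const re of buckets) {
  const matches = all.filter((p) => re.test(p));
  if (matches.length) chosen.add(pick(matches));
}
while (chosen.size < 40 && chosen.size < all.length) chosen.add(pick(all));

const outDir = join(homedir(), 'cclogs/zmod/preview-confirm');
mkdirSync(outDir, { recursive: true });
writeFileSync(join(outDir, 'urls.json'), JSON.stringify([...chosen], null, 2));
```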
Step 3: Capture Screenshots
Run the capture script:
```bash
node .claude/skills/l-preview-ui-confirm/scripts/capture-pages.mjs \
  "<preview-url>" "<prod-url>" \
  "$HOME/cclogs/zmod/preview-confirm/screenshots" \
  "$HOME/cclogs/zmod/preview-confirm/urls.json"
```
This produces preview-top.png, preview-mid.png, preview-bot.png, prod-top.png, prod-mid.png, prod-bot.png for each page.
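
capture-pages.mjs is the source of truth; as a hedged sketch of the per-page capture it performs, assuming Playwright (consistent with the networkidle + 500ms wait in the Notes) and the 1200px desktop viewport:

```js
// Simplified per-page capture: three scroll positions per site, per page.
import { chromium } from 'playwright';

async function capture(page, baseUrl, path, dir, prefix) {
  await page.goto(baseUrl + path, { waitUntil: 'networkidle' });
  await page.waitForTimeout(500); // let lazy-loaded images settle
  const max = await page.evaluate(() => document.body.scrollHeight - window.innerHeight);
  const stops = [['top', 0], ['mid', max / 2], ['bot', max]];
  for (const [name, y] of stops) {
    if (y < 0) continue; // short page: only the "top" shot exists
    await page.evaluate((v) => window.scrollTo(0, v), y);
    await page.waitForTimeout(200);
    await page.screenshot({ path: `${dir}/${prefix}-${name}.png` });
  }
}

const browser = await chromium.launch();
const page = await browser.newPage({ viewport: { width: 1200, height: 800 } });
// ...loop over the urls.json entries, calling capture() for preview and prod, then:
await browser.close();
```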
Step 4: Spawn Comparison Agents
Split the ~40 pages into 4 batches of ~10. Spawn 4 parallel agents (use Agent tool, NOT worktree agents — these are read-only comparison agents that don’t modify code).
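
The split itself can be mechanical; a hypothetical helper, reading the urls.json written in Step 2:

```js
// Round-robin split of the sampled paths into 4 roughly even batches, one per agent.
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';

const urls = JSON.parse(readFileSync(`${homedir()}/cclogs/zmod/preview-confirm/urls.json`, 'utf8'));
const batches = Array.from({ length: 4 }, (_, i) => urls.filter((_, j) => j % 4 === i));
```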
Each agent’s prompt:
```
You are comparing screenshots of a preview deploy vs production for visual regression.
For each page in your batch, read the screenshot files and compare:
1. Read <dir>/preview-top.png and <dir>/prod-top.png — compare layout, content, styling
2. Read <dir>/preview-mid.png and <dir>/prod-mid.png (if exists)
3. Read <dir>/preview-bot.png and <dir>/prod-bot.png (if exists)
For each page, report:
- PATH: the URL path
- STATUS: "MATCH" (looks identical), "MINOR" (small differences), or "DIFF" (significant differences)
- ISSUES: list of specific differences found (if any)
Categorize issues as:
- LAYOUT: grid, positioning, spacing, responsive differences
- IMAGE: missing, broken, wrong size, wrong aspect ratio
- TYPOGRAPHY: font, size, weight, color differences
- CONTENT: missing elements, wrong text, missing components
- STYLE: border, background, shadow, color differences
- 404: page returns 404 on one site but not the other
Be specific: "Hero image is full-width on preview but left-column on prod" not "layout different".
Output a structured report as markdown.
```
Step 5: Collect and Summarize
After all agents complete, collect their reports. Create a summary:
- Pages with DIFF (significant issues to fix)
- Pages with MINOR (acceptable or expected differences)
- Pages that MATCH (no issues)
Group DIFF issues by category (LAYOUT, IMAGE, etc.) and identify patterns — e.g., if all detail pages have the same grid issue, that’s one fix, not 10.
Save the full report to ~/cclogs/zmod/preview-confirm/report.md.
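
If you want the counts tallied mechanically rather than by hand, a small sketch, assuming each agent report follows the PATH/STATUS format above and the reports sit in one directory:

```js
// Tally MATCH/MINOR/DIFF status lines across the collected agent reports.
import { readFileSync, readdirSync } from 'node:fs';

const dir = process.argv[2]; // directory holding the per-agent markdown reports
const counts = { MATCH: 0, MINOR: 0, DIFF: 0 };
for (const file of readdirSync(dir)) {
  const text = readFileSync(`${dir}/${file}`, 'utf8');
  for (const [, status] of text.matchAll(/STATUS:\s*"?(MATCH|MINOR|DIFF)"?/g)) {
    counts[status]++;
  }
}
console.log(counts);
```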
Step 6: Present Findings
Show the user:
- Total pages checked
- Match/Minor/Diff counts
- Grouped issue list with specific examples
- Suggested fix topics (for /x-wt-teams, if the user wants to fix them)
Notes
- Screenshots are at 1200px width (desktop). Mobile comparison can be added as a follow-up
- The capture script waits for `networkidle` + 500ms to ensure images load
- Pages that 404 on preview should be flagged but may be expected (not all routes may be migrated)
- Production (Next.js) pages may have client-rendered content that differs from Astro’s static output — flag these as expected differences