$ cat ~/skills/lighthouse-audit
author: Val
---
name: lighthouse-audit
description: Run Lighthouse audits against the app and produce a structured report covering Performance, Accessibility, Best Practices, and SEO. Detects routes, runs headless Chrome, parses JSON results, and compares against previous runs.
---
# Lighthouse Audit
Run Lighthouse against each page of the app and produce a structured audit report with numeric scores and Pass/Warn/Fail status for every audit.
## Instructions
1. **Detect pages to audit.** Scan `app/` for `page.tsx` files. Derive URLs from the file-system route structure (e.g., `app/page.tsx` → `/`, `app/category/page.tsx` → `/category`, `app/[slug]/page.tsx` → skip dynamic segments or ask the user for sample URLs).
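   A minimal sketch of that derivation, assuming a Node script run from the project root (route groups like `(marketing)` and other Next.js conventions are not handled here):
   ```ts
   // route-detect.ts: derive audit URLs from the Next.js app/ directory (sketch).
   import { readdirSync } from "node:fs";
   import { join, relative, sep } from "node:path";

   function findRoutes(dir: string, root = dir): string[] {
     const routes: string[] = [];
     for (const entry of readdirSync(dir, { withFileTypes: true })) {
       const full = join(dir, entry.name);
       if (entry.isDirectory()) {
         routes.push(...findRoutes(full, root));
       } else if (entry.name === "page.tsx") {
         const segments = relative(root, dir).split(sep).filter(Boolean);
         // Dynamic segments ([slug], [...slug]) need sample URLs from the user; skip them here.
         if (segments.some((s) => s.startsWith("["))) continue;
         routes.push("/" + segments.join("/"));
       }
     }
     return routes;
   }

   console.log(findRoutes("app")); // e.g. [ "/", "/category" ]
   ```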
2. **Ensure the dev server is running.** Check whether `localhost:3000` or `localhost:3001` responds (use `curl -s -o /dev/null -w "%{http_code}" http://localhost:3000`). If neither port responds, start the server with `npm run dev` in the background and wait for it to be ready.
3. **Run Lighthouse for each page.** Execute:
```
npx lighthouse <url> --output=json --output-path=lighthouse-reports/<name>.json --chrome-flags="--headless=new"
```
Use a descriptive `<name>` for each page (e.g., `home`, `category`, `detail`). If previous reports already exist in `lighthouse-reports/`, rename them with a `-v<N>` suffix before writing new ones so they can be used for comparison.
4. **Parse the JSON reports.** Read each generated JSON file and extract:
- `categories.<id>.score` (0–1, display as 0–100)
- `audits.<id>.score`, `.scoreDisplayMode`, `.numericValue`, `.displayValue`, `.details`
- `categories.<id>.auditRefs` for group and weight information
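   A sketch of that extraction, assuming the report files written in step 3 (the `any` generics are only there to keep the example short):
   ```ts
   // parse-report.ts: pull category scores, auditRefs, and per-audit results from one report (sketch).
   import { readFileSync } from "node:fs";

   function parseReport(path: string) {
     const lhr = JSON.parse(readFileSync(path, "utf8"));

     const categories = Object.fromEntries(
       Object.entries<any>(lhr.categories).map(([id, cat]) => [
         id,
         {
           score: cat.score == null ? null : Math.round(cat.score * 100), // 0–1 → 0–100
           auditRefs: cat.auditRefs as { id: string; weight: number; group?: string }[],
         },
       ])
     );

     const audits = Object.fromEntries(
       Object.values<any>(lhr.audits).map((a) => [
         a.id,
         {
           score: a.score as number | null,
           scoreDisplayMode: a.scoreDisplayMode as string,
           numericValue: a.numericValue as number | undefined,
           displayValue: a.displayValue as string | undefined,
           details: a.details,
         },
       ])
     );

     return { categories, audits };
   }

   // Usage:
   const { categories } = parseReport("lighthouse-reports/home.json");
   console.log(categories.performance?.score); // e.g. 95
   ```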
5. **Assign status to each audit:**
   - **Pass** — `score === 1` for binary audits, or `score >= 0.9` for numeric audits
   - **Warn** — `score >= 0.5 && score < 0.9` for numeric audits, or `score === null` with `scoreDisplayMode === "informative"` and actionable details
   - **Fail** — `score < 0.5` for numeric audits, or `score === 0` for binary audits
   - **N/A** — `scoreDisplayMode === "notApplicable"` or `scoreDisplayMode === "manual"`
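   The same rules as a small helper (sketch; binary vs. numeric is taken from `scoreDisplayMode`, and treating unscored audits without details as N/A is an assumption not spelled out above):
   ```ts
   // status.ts: map one audit result to Pass / Warn / Fail / N/A per the rules above (sketch).
   type Status = "Pass" | "Warn" | "Fail" | "N/A";

   function auditStatus(a: { score: number | null; scoreDisplayMode: string; details?: unknown }): Status {
     if (a.scoreDisplayMode === "notApplicable" || a.scoreDisplayMode === "manual") return "N/A";
     if (a.score === null) return a.details ? "Warn" : "N/A"; // informative audits carry no score
     if (a.scoreDisplayMode === "binary") return a.score === 1 ? "Pass" : "Fail";
     if (a.score >= 0.9) return "Pass"; // numeric audits
     if (a.score >= 0.5) return "Warn";
     return "Fail";
   }
   ```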
6. **Report findings** using the output format below.
---
## 1. Performance
### 1.1 Core Metrics (weighted)
| Metric | Audit ID | Weight |
|---|---|---|
| First Contentful Paint (FCP) | `first-contentful-paint` | 10% |
| Largest Contentful Paint (LCP) | `largest-contentful-paint` | 25% |
| Total Blocking Time (TBT) | `total-blocking-time` | 30% |
| Cumulative Layout Shift (CLS) | `cumulative-layout-shift` | 25% |
| Speed Index (SI) | `speed-index` | 10% |
Scoring thresholds (overall category score × 100):
- **Pass** (green): >= 90
- **Warn** (orange): 50–89
- **Fail** (red): < 50
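As a one-line helper (sketch):
```ts
// Category score (already scaled to 0–100) to Pass / Warn / Fail, per the thresholds above.
const categoryStatus = (score: number) => (score >= 90 ? "Pass" : score >= 50 ? "Warn" : "Fail");
```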
### 1.2 Diagnostics
Surface actionable findings from these audits when they report opportunities or issues:
- `unminified-css` — Unminified CSS
- `unminified-javascript` — Unminified JavaScript
- `unused-css-rules` — Unused CSS rules
- `unused-javascript` — Unused JavaScript
- `total-byte-weight` — Total byte weight of the page
- `bootup-time` — JavaScript execution time
- `mainthread-work-breakdown` — Main-thread work breakdown
- `long-tasks` — Long main-thread tasks
- `unsized-images` — Image elements without explicit dimensions
- `bf-cache` — Back/forward cache eligibility
- `non-composited-animations` — Non-composited animations
- `user-timings` — User timing marks and measures
Also check these opportunity and diagnostic audits for potential savings:
- `render-blocking-resources` — Render-blocking resources
- `server-response-time` — Server response time (TTFB)
- `uses-responsive-images` — Properly sized images
- `offscreen-images` — Offscreen images (lazy loading)
- `uses-optimized-images` — Efficiently encoded images
- `uses-text-compression` — Text compression (gzip/brotli)
- `dom-size` — DOM size
- `critical-request-chains` — Critical request chains
- `largest-contentful-paint-element` — LCP element breakdown
- `layout-shifts` — Layout shift sources
---
## 2. Accessibility
### 2.1 ARIA
Audit IDs: `aria-allowed-attr`, `aria-command-name`, `aria-conditional-attr`, `aria-deprecated-role`, `aria-dialog-name`, `aria-hidden-body`, `aria-hidden-focus`, `aria-input-field-name`, `aria-meter-name`, `aria-progressbar-name`, `aria-prohibited-attr`, `aria-required-attr`, `aria-required-children`, `aria-required-parent`, `aria-roles`, `aria-text`, `aria-toggle-field-name`, `aria-tooltip-name`, `aria-treeitem-name`, `aria-valid-attr-value`, `aria-valid-attr`, `duplicate-id-aria`
### 2.2 Names & Labels
Audit IDs: `button-name`, `document-title`, `form-field-multiple-labels`, `frame-title`, `image-alt`, `input-button-name`, `input-image-alt`, `label`, `link-name`, `object-alt`, `select-name`, `skip-link`, `image-redundant-alt`
### 2.3 Color Contrast
Audit IDs: `color-contrast`, `link-in-text-block`
### 2.4 Navigation
Audit IDs: `accesskeys`, `bypass`, `heading-order`, `tabindex`
### 2.5 Language
Audit IDs: `html-has-lang`, `html-lang-valid`, `html-xml-lang-mismatch`, `valid-lang`
### 2.6 Tables & Lists
Audit IDs: `definition-list`, `dlitem`, `list`, `listitem`, `td-headers-attr`, `th-has-data-cells`
### 2.7 Best Practices
Audit IDs: `meta-refresh`, `meta-viewport`, `target-size`, `landmark-one-main`, `table-duplicate-name`, `empty-heading`, `aria-allowed-role`, `identical-links-same-purpose`
### 2.8 Audio & Video
Audit IDs: `video-caption`
---
## 3. Best Practices
### 3.1 Trust & Safety
Audit IDs: `is-on-https`, `redirects-http`, `geolocation-on-start`, `notification-on-start`, `csp-xss`, `has-hsts`, `origin-isolation`, `clickjacking-mitigation`, `trusted-types-xss`
### 3.2 User Experience
Audit IDs: `paste-preventing-inputs`, `image-aspect-ratio`, `image-size-responsive`
### 3.3 Browser Compatibility
Audit IDs: `doctype`, `charset`
### 3.4 General
Audit IDs: `js-libraries`, `deprecations`, `third-party-cookies`, `errors-in-console`, `valid-source-maps`, `inspector-issues`
---
## 4. SEO
### 4.1 Crawling & Indexing
Audit IDs: `is-crawlable`, `http-status-code`, `crawlable-anchors`, `robots-txt`
### 4.2 Content Best Practices
Audit IDs: `document-title`, `meta-description`, `link-text`, `image-alt`, `hreflang`, `canonical`
### 4.3 Structured Data
Audit IDs: `structured-data`
---
## Output Format
After running all audits, produce the report in this format:
### Summary Table
```
Page: /
Category          | Score | Pass | Warn | Fail | N/A
------------------|-------|------|------|------|----
Performance       |  XX   |  X   |  X   |  X   |  X
Accessibility     |  XX   |  X   |  X   |  X   |  X
Best Practices    |  XX   |  X   |  X   |  X   |  X
SEO               |  XX   |  X   |  X   |  X   |  X
```
Repeat for each audited page. If previous reports exist in `lighthouse-reports/`, include a delta column:
```
Category          | Score |  Δ   | Pass | Warn | Fail | N/A
------------------|-------|------|------|------|------|----
Performance       |  95   |  +3  | ...
```
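A sketch of how those rows can be assembled, assuming the `parseReport` and `auditStatus` helpers sketched in the instructions above (passed in here so the snippet stands alone); the same delta also feeds the comparison section at the end of the report:
```ts
// summary.ts: per-category score, status counts, and Δ against a previous run (sketch).
type Status = "Pass" | "Warn" | "Fail" | "N/A";
type Parsed = {
  categories: Record<string, { score: number | null; auditRefs: { id: string; weight: number }[] }>;
  audits: Record<string, { score: number | null; scoreDisplayMode: string; details?: unknown }>;
};

function summaryRows(
  current: Parsed,
  previous: Parsed | undefined,
  auditStatus: (a: Parsed["audits"][string]) => Status // e.g. the helper from step 5
) {
  return Object.entries(current.categories).map(([id, cat]) => {
    const counts: Record<Status, number> = { Pass: 0, Warn: 0, Fail: 0, "N/A": 0 };
    for (const ref of cat.auditRefs) counts[auditStatus(current.audits[ref.id])]++;
    const prev = previous?.categories[id]?.score;
    const delta = prev == null || cat.score == null ? null : cat.score - prev;
    return { category: id, score: cat.score, delta, ...counts };
  });
}
```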
### Detailed Results
For each failing or warning audit, report:
```
[STATUS] Category — Audit Name
Finding: What Lighthouse found (include displayValue and numericValue where available).
Impact: Weighted metric (X%) or diagnostic.
Fix: Specific action to resolve the issue.
```
Skip passing and N/A audits in the detailed results to keep the report focused.
### Prioritized Fixes
List all Fail items first, then Warn items, ordered by impact:
1. **[Fail]** Weighted metric failures first (higher weight = higher priority)
2. **[Fail]** Binary audit failures (accessibility, best practices)
3. **[Warn]** Diagnostics with largest savings (bytes, ms)
4. **[Warn]** Remaining warnings
For each fix, include:
- The page(s) affected
- The estimated savings (if Lighthouse provides it in `details.overallSavingsMs` or `details.overallSavingsBytes`)
- A concrete remediation step
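One possible ordering helper (a sketch; `weight` comes from the category's `auditRefs`, savings fields exist only on opportunity-style audits, and the ms-plus-KiB blend used for ties is an arbitrary choice, not a Lighthouse convention):
```ts
// prioritize.ts: Fail items by weight first, then Warn items by estimated savings (sketch).
type Finding = {
  auditId: string;
  pages: string[];
  status: "Fail" | "Warn";
  weight: number;        // from categories.<id>.auditRefs
  savingsMs?: number;    // details.overallSavingsMs, when present
  savingsBytes?: number; // details.overallSavingsBytes, when present
};

function prioritize(findings: Finding[]): Finding[] {
  const savings = (f: Finding) => (f.savingsMs ?? 0) + (f.savingsBytes ?? 0) / 1024;
  return [...findings].sort((a, b) => {
    if (a.status !== b.status) return a.status === "Fail" ? -1 : 1; // all Fail before all Warn
    if (a.weight !== b.weight) return b.weight - a.weight;          // heavier metrics first
    return savings(b) - savings(a);                                 // then larger estimated savings
  });
}
```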
### Comparison (if previous reports exist)
When v1/previous reports are found in `lighthouse-reports/`, add a comparison section:
```
Page: /
Category          | Previous | Current | Change
------------------|----------|---------|-------
Performance       |    92    |   95    |   +3
Accessibility     |   100    |  100    |    0
Best Practices    |   100    |  100    |    0
SEO               |    91    |  100    |   +9
```
Highlight any regressions (negative changes) and call out improvements.