# SEOLint — SKILL.md

> Machine-readable reference for AI agents, coding assistants, and automation tools.
> Fetch the latest version: https://seolint.dev/skills.md

---

## What it does

SEOLint is an MCP server and REST API that gives Claude a persistent SEO memory for your sites. On the first scan of any domain it builds a base understanding — what the site does, who it's for, what pages exist, and what pages are missing. Every subsequent scan deepens that picture: issues are tracked as NEW, PERSISTING, or REGRESSED, recurring problems are flagged, and content gaps are surfaced from the sitemap.

**Core capabilities:**
- **Scan** any URL → structured issues with the actual broken HTML `element` + LLM-ready `fix` instruction
- **Remember** every scan → track improvements, regressions, and recurring issues per site
- **Site intelligence** → site goal, ICP, sitemap structure, page type breakdown, cross-page template problems
- **Content gaps** → missing pages suggested from your sitemap with target keywords and copy-paste page briefs
- **MCP-native** → Claude can call all tools directly from any conversation, no context switching

---

## Recommended Claude workflow

```
1. get_site_intelligence("example.com")   ← always start here — full site picture
2. get_page_suggestions("example.com")    ← what pages are missing, with briefs
3. get_open_issues("example.com")         ← what still needs fixing across all pages
4. scan_website("https://example.com/p")  ← scan a specific page
5. [fix issues using fix + element fields]
6. mark_issues_fixed(scanId, ["issue-id"]) ← mark resolved
7. get_site_status("https://example.com") ← check trend
```

**For content creation sessions:**
```
1. get_site_intelligence("example.com")   ← understand site goal and audience
2. get_page_suggestions("example.com")    ← get missing page ideas with briefs
3. [paste brief into Claude/Cursor to create the page]
4. scan_website("https://example.com/new-page") ← audit the new page after publishing
```

---

## MCP Server setup

### Claude Code (recommended)
```bash
claude mcp add seolint --env SEOLINT_API_KEY=sl_your_key -- npx -y seolint-mcp
```

### Claude Desktop (claude_desktop_config.json)
```json
{
  "mcpServers": {
    "seolint": {
      "command": "npx",
      "args": ["-y", "seolint-mcp"],
      "env": { "SEOLINT_API_KEY": "sl_your_key" }
    }
  }
}
```

Get your API key: https://seolint.dev/dashboard → Connect

---

## MCP tools (8 total)

### scan_website(url)
Scan a URL for SEO, accessibility, performance, and AEO issues. Issues are annotated NEW / PERSISTING / REGRESSED based on scan history.

Returns: array of issues with `id`, `severity`, `title`, `description`, `fix`, `element`, `status`

---

### get_site_intelligence(domain)
Full intelligence picture for a domain. **Call this at the start of every SEO session.**

Returns markdown with:
- **Site profile** — goal, ICP, primary keyword, niche, business model (AI-analyzed from homepage on first scan)
- **Sitemap** — total URL count, page type breakdown (blog_post, pricing, comparison, etc.), structural gaps, AI narrative, unscanned pages
- **Cross-page insights** — issues affecting 3+ pages (template/structural problems)
- **Scan coverage** — which page types have been scanned, avg issues per type
- **Missing pages summary** — top content gaps (use `get_page_suggestions` for full briefs)

---

### get_page_suggestions(domain)
Missing page ideas generated from the sitemap and site profile. Each suggestion includes a copy-paste brief for creating the page in Claude or Cursor.

Returns per suggestion:
- `slug` — suggested URL path (e.g. `/blog/fix-missing-h1`)
- `title` — page title
- `cluster` — topic cluster (e.g. "technical SEO")
- `intent` — `informational` | `commercial` | `transactional`
- `target_keyword` — primary keyword to target
- `reason` — why this page is missing and how it helps
- `brief` — copy-paste prompt to create the page immediately in Claude/Cursor

**Example use:** "What pages should I create to improve my SEO?" → call `get_page_suggestions`, return the briefs, let the user pick one and paste into Claude Code.

Suggestions are generated on first scan and refreshed every 14 days.

---

### get_open_issues(domain)
All unresolved issues across every scanned page of a domain. Deduplicates across scans — no re-scanning needed.

Returns: issues grouped by page URL, each with `fix` and `element`

---

### get_site_status(url)
Trend for a single URL: issue count over time, last scan date, rescan recommendation.

Returns: `{ trend, issueHistory, lastScanAge, recommendation }`

---

### list_my_sites()
All domains you've scanned, with health summary: open issues, trend, last scan date.

---

### get_site_history(url)
Full scan history for a URL. Each scan shows which issues were NEW, FIXED, or PERSISTING.

---

### mark_issues_fixed(scanId, issueIds[])
Mark specific issues as resolved. They're removed from open issues and tracked for recurrence detection.

---

## REST API

### Base URL
```
https://www.seolint.dev/api/v1
```

### Authentication
```
Authorization: Bearer sl_your_api_key
```

---

### POST /scan
Submit a URL for scanning.

```json
{ "url": "https://example.com" }
```

Response:
```json
{ "scanId": "uuid", "pollUrl": "/api/v1/scan/uuid", "reportUrl": "/scan/uuid" }
```

---

### GET /scan/:id
Poll for scan completion (every 3s until `status === "complete"`).

```json
{
  "status": "complete",
  "issues": [
    {
      "id": "missing-title",
      "category": "seo",
      "severity": "critical",
      "title": "Missing page title",
      "description": "The page has no <title> tag...",
      "fix": "Add a <title> tag between 30–60 chars with your primary keyword.",
      "element": null,
      "gated": false
    },
    {
      "id": "title-too-long",
      "category": "seo",
      "severity": "warning",
      "title": "Page title too long",
      "description": "Your title is 75 characters...",
      "fix": "Shorten to under 60 characters.",
      "element": "<title>This Is A Very Long Title That Will Get Truncated</title>",
      "gated": false
    }
  ],
  "markdown": "# Website Audit: https://example.com\n\n..."
}
```

**`element` field:** The actual HTML extracted from the page. Use this when fixing — it tells you exactly what's broken. `null` when the issue is about something missing entirely.

**Status values:** `pending` | `complete` | `error`
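
The submit-then-poll loop can be sketched as follows. `poll_scan` and its injectable `fetch` transport are illustrative helpers, not part of the SEOLint API; in a real client, `fetch` would issue an HTTP GET carrying the `Authorization: Bearer` header.

```python
import time
from typing import Callable

API_BASE = "https://www.seolint.dev/api/v1"  # base URL from the section above

def poll_scan(scan_id: str, fetch: Callable[[str], dict],
              interval: float = 3.0, max_attempts: int = 100) -> dict:
    """Poll GET /scan/:id every `interval` seconds until the scan finishes.

    `fetch` takes a URL and returns the parsed JSON response body.
    """
    for _ in range(max_attempts):
        result = fetch(f"{API_BASE}/scan/{scan_id}")
        if result["status"] in ("complete", "error"):
            return result
        time.sleep(interval)  # docs recommend polling every 3s
    raise TimeoutError(f"scan {scan_id} did not finish after {max_attempts} polls")
```

Injecting the transport keeps the loop testable; swap in `urllib.request` or `requests` for production use.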

---

### GET /scan/:id/markdown
Full audit report as `.md`. Returns `202` while pending.

---

### GET /site-intelligence?domain=example.com
Same as `get_site_intelligence` MCP tool.

Returns: `{ profile, sitemap, insights, page_type_coverage, page_suggestions, markdown }`

The `markdown` field is formatted for Claude — paste it directly into context.

---

### GET /suggest-pages?domain=example.com
Same as `get_page_suggestions` MCP tool.

Returns:
```json
{
  "domain": "example.com",
  "suggestions": [
    {
      "slug": "/blog/fix-missing-h1",
      "title": "How to Fix Missing H1 Tags",
      "cluster": "technical SEO",
      "intent": "informational",
      "target_keyword": "fix missing h1 tag",
      "reason": "You have no how-to content for H1 issues despite covering H1 theory",
      "brief": "Write a developer-focused post titled 'How to Fix Missing H1 Tags'..."
    }
  ],
  "profile": { "goal": "...", "icp": "...", "niche": "..." },
  "total_sitemap_urls": 84,
  "analyzed_at": "2026-04-05T...",
  "markdown": "# Page Suggestions: example.com\n\n..."
}
```

---

### GET /history?url=https://example.com
Scan history for a URL with issue diffs (NEW / FIXED / PERSISTING per scan).

---

### GET /open-issues?url=https://example.com
Unresolved issues from the latest scan for a URL.

---

### GET /sites
All domains you've scanned with summary stats.

---

### GET /site-status?url=https://example.com
Trend + rescan recommendation for a URL.

---

### POST /scan/:id/resolve
Mark issues as fixed.

```json
{ "issueIds": ["missing-title", "missing-og-tags"] }
```
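
A minimal client-side sketch of assembling the resolve payload (the helper name is illustrative). Checking the ids against the scan's actual issues catches typos before the request is sent:

```python
def build_resolve_payload(scan_issues: list[dict], fixed_ids: list[str]) -> dict:
    """Build the POST /scan/:id/resolve body, per the endpoint docs above.

    Rejects ids that never appeared in the scan results.
    """
    known = {issue["id"] for issue in scan_issues}
    unknown = [i for i in fixed_ids if i not in known]
    if unknown:
        raise ValueError(f"ids not in scan results: {unknown}")
    return {"issueIds": fixed_ids}
```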

---

## Issue schema

| Field | Type | Notes |
|-------|------|-------|
| `id` | string | Slug identifier, e.g. `missing-title` |
| `category` | string | `seo` \| `accessibility` \| `performance` \| `ux` \| `aeo` |
| `severity` | string | `critical` \| `warning` \| `info` |
| `title` | string | Short issue name |
| `description` | string | What's wrong and why it matters |
| `fix` | string | LLM-ready fix instruction — paste into Claude or Cursor |
| `element` | string\|null | Actual broken HTML extracted from page. Use this when fixing. |
| `gated` | boolean | `true` = paid plan required |
| `status` | string | `NEW` \| `PERSISTING` \| `REGRESSED` — only on history-aware responses |

---

## Page suggestion schema

| Field | Type | Notes |
|-------|------|-------|
| `slug` | string | Suggested URL path, e.g. `/blog/fix-missing-h1` |
| `title` | string | Suggested page title |
| `cluster` | string | Topic cluster, e.g. `technical SEO` |
| `intent` | string | `informational` \| `commercial` \| `transactional` |
| `target_keyword` | string | Primary keyword to target |
| `reason` | string | Why this page is missing and how it would help |
| `brief` | string | Copy-paste prompt for Claude/Cursor to create the page |

---

## What triggers site intelligence

Site intelligence runs automatically in the background after every scan — no extra API calls needed.

| What | When |
|------|------|
| Site profile (goal, ICP, niche) | First scan of a new domain only |
| Sitemap analysis + page suggestions | First scan, then refreshed every 14 days |
| Cross-page insights | After every scan |
| Page type classification | Every scan |

Existing users get site intelligence backfilled automatically on their next scan — no manual action needed.

---

## CLI

```bash
npx @randomcode-seolint/cli scan https://example.com
```

Pass `--json` for structured output. Set `SEOLINT_API_KEY` in your environment.
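
Assuming the `--json` output contains issues matching the issue schema above (an assumption — verify against the actual CLI output), gating a CI step on critical issues could look like:

```python
import json

def count_critical(raw: str) -> int:
    """Count critical issues in a scan's JSON output (issue schema assumed)."""
    data = json.loads(raw)
    return sum(1 for issue in data.get("issues", [])
               if issue["severity"] == "critical")
```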

---

## Pricing

| Plan | Price | Scans | Features |
|------|-------|-------|----------|
| Pro | $19/month | 50 page scans | MCP server, CLI, AI fix instructions, dashboard |
| Team | $49/month | 250 page scans | Everything in Pro + REST API, GitHub Actions, priority support |

---

## Links

- Web UI: https://seolint.dev
- API docs: https://seolint.dev/api-docs
- MCP setup guide: https://seolint.dev/blog/claude-mcp-seo-server
- Pricing: https://seolint.dev/pricing
- This file: https://seolint.dev/skills.md
