# crawler.sh

> A fast, local-first web crawler and SEO analysis tool. Crawl any website in seconds, run 16 automated SEO checks, extract content as clean Markdown, and export to JSON, CSV, or Sitemap XML — from the terminal or a native desktop app. Built in Rust. Runs entirely on your machine.

For full documentation including complete CLI usage, desktop app usage, all flags, data models, and examples, see [llms-full.txt](https://crawler.sh/llms-full.txt).

## Project details

- **Name**: crawler.sh
- **URL**: https://crawler.sh
- **Category**: Developer Tools / SEO
- **License**: Proprietary (freemium)
- **Language**: Rust (core engine and CLI), TypeScript + React (desktop frontend)
- **Platforms**: macOS (Universal — Apple Silicon and Intel). Linux coming soon.
- **Install**: `curl -fsSL https://install.crawler.sh | sh`
- **Pricing**: Free (600 pages/crawl, all features) · Pro $99/year (1,000 pages/crawl, Content Archive export)
- **Author**: Rob Austin
- **Contact**: https://crawler.sh/contact/

## What crawler.sh does

crawler.sh crawls websites using breadth-first search with configurable concurrency, depth limits, and polite delays. It stays within the same domain and records every page's status code, response time, title, meta description, headings, canonical URL, word count, and more.

After crawling, a built-in SEO analysis engine runs 16 automated checks across every page: missing/short/long/duplicate titles, missing/short/long/duplicate meta descriptions, missing H1 tags, thin content, missing content, missing viewport tags, broken internal links, redirect chains, slow pages, and missing image alt attributes.

Content extraction converts the main body of any page to clean Markdown, stripping boilerplate automatically. Results export to NDJSON, JSON, CSV, TXT, and Sitemap XML (following the sitemaps.org protocol).
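The crawl strategy described above is a plain breadth-first traversal with a depth bound. A minimal Rust sketch of the idea, using a static link map in place of live HTTP fetches — the function name and data shapes here are illustrative, not the actual engine API:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Breadth-first crawl over a pre-fetched link graph, bounded by depth.
/// In the real engine, `links` would be discovered by fetching each page;
/// a static map stands in for the network here.
fn bfs_crawl(start: &str, links: &HashMap<&str, Vec<&str>>, max_depth: usize) -> Vec<String> {
    let mut visited = HashSet::new();
    let mut order = Vec::new();
    let mut queue = VecDeque::new();
    visited.insert(start);
    queue.push_back((start, 0usize));

    while let Some((url, depth)) = queue.pop_front() {
        order.push(url.to_string()); // record the page in crawl order
        if depth == max_depth {
            continue; // depth limit reached: keep the page, don't expand its links
        }
        for &next in links.get(url).into_iter().flatten() {
            // `insert` returns false for already-seen URLs, so each page is queued once
            if visited.insert(next) {
                queue.push_back((next, depth + 1));
            }
        }
    }
    order
}

fn main() {
    let mut links = HashMap::new();
    links.insert("/", vec!["/about", "/blog"]);
    links.insert("/blog", vec!["/blog/post-1", "/"]);
    let pages = bfs_crawl("/", &links, 1);
    // Depth 0: "/"; depth 1: "/about", "/blog"; "/blog/post-1" is depth 2, so it is skipped
    assert_eq!(pages, vec!["/", "/about", "/blog"]);
    println!("{:?}", pages);
}
```

The real crawler adds concurrency and per-request delays on top of this loop, but the visited-set plus depth-tagged queue is the core of any BFS crawl.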
## Interfaces

- **CLI** — Four subcommands: `crawl` (site crawling), `seo` (SEO analysis with CSV/TXT export), `info` (crawl file stats), and `export` (format conversion). Install with `curl -fsSL https://install.crawler.sh | sh`.
- **Desktop app** — Native macOS app built with Tauri 2 and React 19. Live crawl feed, SEO issue dashboard, page-status charts, and a content viewer.

## Pricing

- **Free**: up to 600 pages per crawl, with all features including SEO analysis and every export format.
- **Pro** ($99/year): up to 1,000 pages per crawl, plus Content Archive export.

## Platform availability

macOS (Universal — Apple Silicon and Intel). Linux coming soon.

## Documentation

- [Getting Started](https://crawler.sh/docs/): Overview and quick-start guide
- [Installation](https://crawler.sh/docs/installation/): How to install and uninstall the CLI and desktop app
- [CLI Overview](https://crawler.sh/docs/cli/): CLI architecture and usage
- [CLI Features](https://crawler.sh/docs/cli/features/): Crawl engine, subcommands, SEO checks, content extraction, and the `.crawl` format
- [CLI Reference](https://crawler.sh/docs/cli/reference/): All flags, the page data model, crawl events, and output formats
- [Desktop Overview](https://crawler.sh/docs/desktop/): Desktop app architecture and usage
- [Desktop Features](https://crawler.sh/docs/desktop/features/): Dashboard cards, SEO analysis, and the content viewer
- [Desktop Reference](https://crawler.sh/docs/desktop/reference/): Settings, data model, and export formats

## Product pages

- [Homepage](https://crawler.sh/): Main landing page
- [Product](https://crawler.sh/product/): Feature overview for the CLI and desktop app
- [Technical SEO Audit](https://crawler.sh/technical-seo-audit/): SEO audit feature page covering all 16 checks
- [Download](https://crawler.sh/download/): Desktop app download and CLI install command
- [Roadmap](https://crawler.sh/roadmap/): Planned features and development timeline
- [FAQ](https://crawler.sh/faq/): Common questions about installation, configuration, and usage

## Blog

- [The Complete Guide to Technical SEO Audits](https://crawler.sh/blog/technical-seo-audits/): Covers crawlability, indexation, site speed, structured data, and how to automate audits with crawler.sh

## Company

- [About](https://crawler.sh/about/): Mission, team, and background
- [Contact](https://crawler.sh/contact/): Bug reports, feature requests, and questions
- [Privacy Policy](https://crawler.sh/privacy-policy/)
- [Terms of Service](https://crawler.sh/terms-of-service/)