Changelog
February 23, 2026

v0.1.0: Initial Release of crawler.sh

The first public release of Crawler: a fast, local web crawler with a command-line interface, SEO analysis, and multiple export formats.

Mehmet Kose

Introducing Crawler

Crawler is a fast web crawler that runs locally on your machine. It crawls any website and gives you structured data about every page - status codes, titles, meta descriptions, canonical URLs, redirect chains, and more.

Features at Launch

Crawling

  • Concurrent, breadth-first crawling with configurable depth and page limits
  • Domain-constrained - stays on the same host, no runaway crawls
  • Redirect chain detection and loop identification
  • Configurable concurrency and request delay
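
As a sketch of how these options might be combined (the flag names below are illustrative assumptions, not the documented interface; run `crawler crawl --help` for the real option names):

```
# Hypothetical flags for illustration only:
#   --depth        how many link levels to follow from the start URL
#   --max-pages    hard page limit (no runaway crawls)
#   --concurrency  number of parallel requests
#   --delay        pause between requests, in milliseconds
crawler crawl https://example.com --depth 3 --max-pages 500 --concurrency 8 --delay 250
```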

SEO Analysis

  • 18 SEO check categories including missing titles, duplicate descriptions, noindex pages, and more
  • Site-level checks for robots.txt and sitemaps
  • Export results as TXT or CSV

Export Formats

  • NDJSON (.crawl) - streaming format, one JSON object per line
  • JSON - single file with all pages
  • Sitemap - XML sitemap format
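
Because NDJSON puts one complete JSON object on each line, a .crawl file can be streamed through standard Unix tools without loading the whole crawl into memory. A minimal sketch, assuming `url` and `status` fields (the actual .crawl schema may differ):

```shell
# Fake three-page crawl result for demonstration; field names are assumptions.
cat > sample.crawl <<'EOF'
{"url":"https://example.com/","status":200}
{"url":"https://example.com/old","status":301}
{"url":"https://example.com/missing","status":404}
EOF

# Count pages per status code with plain awk (no jq required).
# Prints one "status count" pair per line; order is unspecified.
awk -F'"status":' '{split($2, a, "}"); counts[a[1]]++}
  END {for (s in counts) print s, counts[s]}' sample.crawl
```

The same pattern (split on a field separator, aggregate, print in `END`) works for tallying titles, redirect targets, or any other per-page field.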

CLI Commands

  • crawler crawl - crawl a site
  • crawler info - analyze a .crawl file (status distribution, response times, redirect audit)
  • crawler export - convert between formats
  • crawler seo - run SEO analysis on crawl data
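
The commands are designed to chain through the .crawl file. A hypothetical end-to-end session (the exact arguments and output file names are assumptions, not documented behavior):

```
crawler crawl https://example.com       # crawl, producing an NDJSON .crawl file
crawler info example.com.crawl          # status distribution, response times, redirect audit
crawler seo example.com.crawl           # run the SEO checks against the crawl data
crawler export example.com.crawl        # convert to JSON or an XML sitemap
```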

Install

curl -fsSL https://install.crawler.sh | sh

Or download directly from crawler.sh/download.

Wrap-up

An SEO audit shouldn't slow you down. Crawler aims to fit into your workflow, whether you're auditing a new build, untangling redirect chains, or launching updates at 2am.

If that sounds like the kind of tooling you want to use, try Crawler.
