v0.1.0: Initial Release of crawler.sh
The first public release of Crawler - a fast, local web crawler with a CLI, SEO analysis, and multiple export formats.
Introducing Crawler
Crawler is a fast web crawler that runs locally on your machine. It crawls any website and gives you structured data about every page - status codes, titles, meta descriptions, canonical URLs, redirect chains, and more.
Features at Launch
Crawling
- Concurrent, breadth-first crawling with configurable depth and page limits
- Domain-constrained - stays on the same host, no runaway crawls
- Redirect chain detection and loop identification
- Configurable concurrency and request delay
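A typical invocation might look like the sketch below. The flag names here are illustrative only - check `crawler crawl --help` for the actual options:

```shell
# Hypothetical flags shown for illustration; names may differ in the real CLI.
crawler crawl https://example.com \
  --depth 3 \          # maximum link depth from the start URL
  --limit 500 \        # stop after 500 pages
  --concurrency 8 \    # parallel requests
  --delay 200ms        # pause between requests to the host
```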
SEO Analysis
- 18 SEO check categories including missing titles, duplicate descriptions, noindex pages, and more
- Site-level checks for robots.txt and sitemaps
- Export results as TXT or CSV
Export Formats
- NDJSON (.crawl) - streaming format, one JSON object per line
- JSON - single file with all pages
- Sitemap - XML sitemap format
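Because NDJSON puts one JSON object per line, standard line-oriented tools work on a `.crawl` file without a full JSON parser. A small sketch (the field names `url` and `status` are assumptions for illustration, not the documented schema):

```shell
# Build a tiny sample .crawl file with hypothetical fields.
cat > sample.crawl <<'EOF'
{"url":"https://example.com/","status":200,"title":"Home"}
{"url":"https://example.com/old","status":301,"title":""}
{"url":"https://example.com/missing","status":404,"title":""}
EOF

# One object per line means grep can count broken pages directly.
grep -c '"status":404' sample.crawl   # → 1
```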
CLI Commands
- crawler crawl - crawl a site
- crawler info - analyze a .crawl file (status distribution, response times, redirect audit)
- crawler export - convert between formats
- crawler seo - run SEO analysis on crawl data
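The commands compose into a simple crawl-then-analyze workflow. A sketch, assuming each command reads and writes `.crawl` files (exact arguments may differ - see each command's `--help`):

```shell
# Hypothetical end-to-end workflow; argument names are illustrative.
crawler crawl https://example.com        # produces example.com.crawl
crawler info example.com.crawl           # status distribution, redirect audit
crawler seo example.com.crawl            # run the SEO checks
crawler export example.com.crawl sitemap # convert to an XML sitemap
```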
Install
curl -fsSL https://install.crawler.sh | sh

Or download directly from crawler.sh/download.
Wrap-up
A crawler shouldn't slow you down. Crawler aims to fit into your workflow - whether you're auditing a site before launch, chasing down redirect loops, or shipping a fix at 2am.
If that sounds like the kind of tooling you want to use, try Crawler.
Crawler runs locally on your machine. Use the CLI or the desktop app — your workflow, your terms.