Comparisons
Compare crawler.sh against cloud scrapers and other crawling tools.
Most tools that turn a website into Markdown bill per page and run in someone else's cloud. crawler.sh runs locally, so the marginal cost per page is zero and pages never leave your machine. Pick a comparison below to see how it stacks up.
crawler.sh vs Crawl4AI
Python crawler with headless Chrome vs single-binary desktop and CLI with a custom JavaScript engine.
Read comparison
crawler.sh vs Firecrawl
Cloud scraping API vs local-first desktop and CLI for the same job: turning websites into clean Markdown for AI.
Read comparison
crawler.sh vs Jina Reader
Single-URL cloud reader vs local crawler with full-site sweep and Markdown archive export.
Read comparison
crawler.sh vs MarkItDown
Full-site Markdown extraction with metadata vs a single-file converter from Microsoft.
Read comparison
crawler.sh vs Playwright
Single-binary crawler with built-in orchestration vs a browser automation library you wire yourself.
Read comparison
crawler.sh vs ScrapingBee
Flat-fee local crawler vs a per-credit proxy API that routes traffic through vendor infrastructure.
Read comparison
crawler.sh vs Screaming Frog
Lightweight desktop crawler with Markdown export and a CLI vs the classic Java-based SEO spider with report-centric output.
Read comparison
What we compare and why it matters
Every comparison on this page is built around the same set of questions we hear from teams evaluating a web crawler: where it runs, how much it costs per page, whether it renders JavaScript, and what the output looks like. We test each tool ourselves and publish the results as living documents, updated whenever pricing or features change.
The biggest difference between crawler.sh and most alternatives is architecture. crawler.sh is a single static binary that runs on your machine. That means no API keys, no per-page billing, and no data leaving your laptop. Cloud scrapers are convenient for one-off calls, but their costs scale linearly with volume and every page you fetch passes through someone else's infrastructure. If you are building a RAG pipeline, training a model, or auditing a site for SEO, running the crawler locally gives you predictable costs and full control over the data.
If you want to see how the numbers look for your own site, you can run crawler.sh for free up to 1,000 pages. No sign-up required.