Changelog
March 3, 2026

v0.2.3: Custom User-Agent Support

You can now set a custom User-Agent header for your crawls. Override the default with a CLI flag or in the desktop app via Settings.

Mehmet Kose

What’s New in v0.2.3

Some websites block or rate-limit requests from non-browser User-Agents, returning 403 or 429 responses. Previously, Crawler always identified itself as crawler.sh/0.1 with no way to change it.

You can now set a custom User-Agent string for any crawl. This is useful when:

  • A site blocks the default crawler User-Agent
  • You want to test how your site responds to specific bots
  • You need to identify your crawl traffic in server logs with a custom string

When no custom value is set, crawls continue to use the default crawler.sh/0.1 User-Agent.

CLI Usage

Pass the --user-agent flag to the crawl command:

crawler crawl https://example.com --user-agent "Mozilla/5.0 (compatible; MyCrawler/1.0)"

You can combine it with any other flags as usual:

crawler crawl https://example.com \
--user-agent "Mozilla/5.0" \
--max-pages 500 \
--concurrency 10
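If you set a distinctive string, you can later pick your crawl's requests out of a server's access log. A minimal sketch, assuming a common combined-log format where the User-Agent appears in each line (the log lines and helper below are hypothetical):

```python
# Hypothetical sketch: count access-log lines produced by your crawl,
# identified by the custom User-Agent substring.
def count_crawl_hits(log_lines, user_agent="MyCrawler/1.0"):
    """Count log lines whose User-Agent field contains the given string."""
    return sum(1 for line in log_lines if user_agent in line)

sample = [
    '203.0.113.7 - - [03/Mar/2026] "GET / HTTP/1.1" 200 "Mozilla/5.0 (compatible; MyCrawler/1.0)"',
    '198.51.100.2 - - [03/Mar/2026] "GET /about HTTP/1.1" 200 "Mozilla/5.0 (Windows NT 10.0)"',
]
print(count_crawl_hits(sample))  # → 1
```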

Desktop App

Open the Settings card and enter your custom User-Agent in the User-Agent text field. Leave it empty to use the default. The setting persists across sessions.

Wrap-up

Your tools shouldn't slow you down. Crawler aims to fit into your workflow, whether you're auditing a site's content, testing how it responds to specific bots, or launching updates at 2am.

If that sounds like the kind of tooling you want to use, try Crawler.
