v0.3.0: Smart Retry and Exponential Backoff
Crawler now automatically retries failed requests with exponential backoff and uses browser-like TLS fingerprints for better compatibility with protected sites.
What’s New in v0.3.0
Crawling real-world sites means dealing with flaky responses - rate limits, temporary server errors, and connection resets. Previously, Crawler treated every failure as final and moved on.
Now, failed requests are automatically retried with exponential backoff. This means transient 429 (Too Many Requests) and 5xx errors are handled gracefully without you having to re-run the entire crawl.
Retry Behavior
- Failed requests are retried up to 3 times
- Each retry waits progressively longer before trying again
- Only transient errors trigger retries - permanent failures like 404 fail immediately and are not retried
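The retry loop above can be sketched as a small wrapper. This is an illustrative sketch, not Crawler's actual implementation: `retry`, `backoff_delay`, and the transient-status set are hypothetical names chosen for the example.

```python
import random
import time

# Status codes typically treated as transient; 4xx errors like 404 are permanent.
TRANSIENT_STATUSES = {429, 500, 502, 503, 504}

def backoff_delay(attempt: int, base: float = 1.0, jitter: float = 0.0) -> float:
    """Delay before retry `attempt`: base * 2**attempt (1s, 2s, 4s, ...) plus jitter."""
    return base * 2 ** attempt + random.uniform(0, jitter)

def retry(fetch, is_transient, max_retries: int = 3, sleep=time.sleep):
    """Call fetch(); on a transient error, back off exponentially and try again.

    Permanent errors and exhausted retries re-raise the original exception.
    """
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except Exception as exc:
            if not is_transient(exc) or attempt == max_retries:
                raise  # permanent failure (e.g. 404) or retries exhausted
        sleep(backoff_delay(attempt))
```

A caller would pass its HTTP fetch as `fetch` and a predicate that checks the response status against the transient set; injecting `sleep` also makes the backoff schedule easy to test.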
Browser-Like Request Headers
Requests now use browser-compatible TLS fingerprints and Chrome-style request headers. This significantly improves success rates on sites that block non-browser traffic based on TLS handshake characteristics.
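The header side of this is straightforward to illustrate. The sketch below builds a request with representative Chrome-like headers (the exact values are assumptions, not Crawler's actual set). Note that headers alone don't change the TLS fingerprint - matching the handshake itself requires an HTTP client that mimics the browser's TLS stack.

```python
import urllib.request

# Representative Chrome-like headers; illustrative values, not Crawler's exact set.
CHROME_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def browser_like_request(url: str) -> urllib.request.Request:
    """Build a request that sends browser-style headers instead of the library default."""
    return urllib.request.Request(url, headers=CHROME_HEADERS)
```

Sites that fingerprint at the TLS layer will still reject this, since Python's TLS handshake differs from Chrome's; that is the gap the browser-compatible TLS fingerprinting in this release addresses.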
Who Benefits
- SEO professionals crawling client sites behind CDNs with aggressive bot protection
- Content teams archiving sites that rate-limit automated requests
- Developers testing against staging environments with intermittent connectivity
Wrap-up
Flaky sites shouldn't slow you down. Crawler aims to fit into your workflow - whether you're auditing client sites, archiving rate-limited content, or kicking off a crawl at 2am.
If that sounds like the kind of tooling you want to use, try Crawler.