May 6, 2026

v0.7.6: Faster JavaScript Rendering

JS rendering performance pass: smarter readiness detection, shared asset cache across the crawl, faster page parsing, plus prompt copy buttons in SEO/AEO.

Mehmet Kose
4 min read

What’s New in v0.7.6

This release rolls up three patches’ worth of performance work on the JavaScript rendering path (v0.7.4, v0.7.5, v0.7.6) plus a couple of desktop UX tweaks. JS-render crawls of large sites should feel noticeably faster, especially when many pages share the same scripts or hydrate via similar fetch calls.

Smarter Readiness Detection

JavaScript-heavy pages used to be rendered against a fixed 5-second timer pump regardless of what the page actually did. Static pages waited longer than they needed to; SPAs that hydrated content from an async fetch chain were sometimes serialized half-built.

The render loop now picks the right exit condition for the page (see the sketch after this list):

  • Static pages that finish their work synchronously exit as soon as the timer queue empties, instead of waiting out a fixed budget.
  • SPAs that hydrate via setTimeout, fetch, or Promise chains keep working until the network goes quiet for a short period (default: 500 ms with no fetch activity).
  • Pages with runaway timers still hit a hard 5-second ceiling so a misbehaving site cannot wedge a worker.
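
Here is a minimal sketch of that exit logic in Rust. Page, pump, and the timer/fetch bookkeeping are illustrative stand-ins for whatever the engine actually tracks, not its real API:

    use std::time::{Duration, Instant};

    const HARD_CEILING: Duration = Duration::from_secs(5);
    const NETWORK_QUIET: Duration = Duration::from_millis(500);

    /// Hypothetical handle onto one rendering page.
    struct Page {
        pending_timers: usize,
        last_fetch: Option<Instant>, // None until the page issues a fetch/XHR
    }

    impl Page {
        /// Run one slice of the page's JS event loop (stubbed here).
        fn pump(&mut self) {}
    }

    fn render_until_ready(page: &mut Page) {
        let start = Instant::now();
        loop {
            page.pump();

            // Runaway timers: a misbehaving page hits the hard ceiling.
            if start.elapsed() >= HARD_CEILING {
                break;
            }
            // Static page: timer queue drained, nothing left to wait on.
            let timers_idle = page.pending_timers == 0;
            // SPA: no fetch/XHR activity within the quiet window.
            let network_quiet = page
                .last_fetch
                .map_or(true, |t| t.elapsed() >= NETWORK_QUIET);
            if timers_idle && network_quiet {
                break;
            }
        }
    }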

For most crawls the change is invisible: the average page just renders faster, and the slow ones render more reliably. SPAs that previously shipped partial markdown should now produce complete extractions.

Shared Asset Cache Across the Whole Crawl

Crawling 500 pages of a site that imports the same bundle.js on every page used to issue 500 separate HTTP GETs for that file. The connection pool kept TLS handshakes cheap, but every page still paid for the full request and response.

v0.7.6 adds an in-memory asset cache that lives for the duration of the crawl session and is shared by every page render (sketched after the list below):

  • External <script src="..."> bodies are cached on first fetch.
  • JavaScript-driven fetch() and XMLHttpRequest GET responses are cached by URL.
  • Failures, oversized responses, and any non-GET method (POST, PUT, etc.) bypass the cache so transient errors cannot poison subsequent pages.
  • The cache is bounded (512 entries / 64 MB) with first-in-first-out eviction.
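
A minimal sketch of such a cache, assuming a plain HashMap plus an insertion-order queue; the names and bookkeeping are illustrative, not the engine's actual implementation:

    use std::collections::{HashMap, VecDeque};

    const MAX_ENTRIES: usize = 512;
    const MAX_BYTES: usize = 64 * 1024 * 1024; // 64 MB total budget

    /// Crawl-scoped cache of GET response bodies, keyed by URL.
    struct AssetCache {
        entries: HashMap<String, Vec<u8>>,
        order: VecDeque<String>, // insertion order, for FIFO eviction
        bytes: usize,
    }

    impl AssetCache {
        fn get(&self, url: &str) -> Option<&[u8]> {
            self.entries.get(url).map(Vec::as_slice)
        }

        /// Store a successful GET body. Failures, non-GET methods, and
        /// oversized responses are filtered out before this point.
        fn insert(&mut self, url: String, body: Vec<u8>) {
            if body.len() > MAX_BYTES || self.entries.contains_key(&url) {
                return;
            }
            // FIFO: evict the oldest entries until the new body fits.
            while self.entries.len() >= MAX_ENTRIES
                || self.bytes + body.len() > MAX_BYTES
            {
                let Some(oldest) = self.order.pop_front() else { break };
                if let Some(evicted) = self.entries.remove(&oldest) {
                    self.bytes -= evicted.len();
                }
            }
            self.bytes += body.len();
            self.order.push_back(url.clone());
            self.entries.insert(url, body);
        }
    }

FIFO keeps the eviction bookkeeping trivial, and since most sites serve the same small set of bundles on every page, recency tracking would likely buy little here.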

On a typical SPA where every route loads the same chunked bundle and the same /api/config call, this turns every page after the first from “fetch everything again” into “look it up in a hashmap.” The desktop app holds the engine across crawls, so the cache also helps if you run several crawls back-to-back against the same site.

Faster Page Parse Path

The per-page setup that runs before scripts execute got two fixes from a perf audit:

  • CSS selectors are now compiled once. The HTML link extractor used to compile its 13 CSS selectors fresh on every page; they are now cached at startup. On a 10,000-page crawl, that is roughly 130,000 redundant selector compilations eliminated (first sketch after this list).
  • HTML is now parsed once per page, not twice. External script discovery used to do its own full HTML parse just to enumerate <script src> tags, then throw that parse away before the real worker parsed the page again. The discovery pass is now a streaming tokenizer walk that builds no tree (second sketch after this list). For a typical 200-500 KB SPA payload this saves 5 to 20 ms per page; on a 1,000-page render crawl, roughly 5 to 20 seconds of wall time.
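
As an illustration of the first fix, compiled selectors can live in a lazily initialized static so parsing happens exactly once per process. The scraper crate and the selector list here are assumptions, not the extractor's actual code:

    use std::sync::LazyLock;
    use scraper::Selector;

    /// Compiled once on first use and shared by every page thereafter.
    static LINK_SELECTORS: LazyLock<Vec<Selector>> = LazyLock::new(|| {
        ["a[href]", "link[rel=canonical]", "area[href]", "iframe[src]"]
            .iter()
            .map(|s| Selector::parse(s).expect("static selector is valid"))
            .collect()
    });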
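
The second fix rides a streaming HTML tokenizer. As a rough tree-free illustration, here is a hand-rolled scan that handles only the simplest case (lowercase tags, double-quoted src attributes); a real tokenizer also copes with comments, quoting variants, and casing:

    /// Collect external script URLs without building a DOM tree.
    fn external_script_urls(html: &str) -> Vec<&str> {
        let mut urls = Vec::new();
        let mut rest = html;
        while let Some(pos) = rest.find("<script") {
            let tag_start = &rest[pos..];
            // Inspect only this tag's opening bracket.
            let Some(end) = tag_start.find('>') else { break };
            let tag = &tag_start[..end];
            if let Some(src) = tag.find("src=\"") {
                let after = &tag[src + 5..]; // skip past `src="`
                if let Some(close) = after.find('"') {
                    urls.push(&after[..close]);
                }
            }
            rest = &tag_start[end + 1..];
        }
        urls
    }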

Desktop UX Updates

Three small but useful changes in the desktop app:

  • Per-issue Prompt copy buttons in the SEO and AEO modal. Each detected issue now has its own “Copy” button that puts a ready-to-paste prompt on your clipboard, scoped to that single issue and its affected URLs. Useful for handing one finding to an AI assistant without copying the whole report.
  • Animated counters on live scan numbers during a crawl. The current page count, link count, and issue count now roll up to their new values instead of jumping, which makes it easier to follow a long crawl at a glance.
  • Content Freshness card trimmed to a tighter layout that fits more pages in the same vertical space.

Other Changes

  • All Rust crates pass clippy with -D warnings.
  • Desktop frontend lints clean.

About crawler.sh

crawler.sh is a fast Rust-based web crawler and SEO auditing tool that runs entirely on your own machine. Use the CLI for automation, scripts, and CI pipelines, or the desktop app for a visual dashboard with live crawl progress, SEO issue charts, and one-click exports.

Every release ships across both the CLI and the desktop app. Download the latest version or run crawler update from the terminal to upgrade an existing install.
