May 6, 2026

v0.7.7: Clean Stop in the Desktop App

Cancelling a crawl in the desktop app now actually stops it, and starting a new crawl no longer leaks pages from the previous one into the live feed.

Mehmet Kose
2 mins read

What’s New in v0.7.7

Version 0.7.7 ships a small but high-impact bug fix in the desktop app. If you ever started a crawl on a large site, hit Cancel, and then started a new crawl on a different site, you may have noticed pages from the old crawl bleeding into the new one’s live feed. That is now fixed.

What was happening

Clicking Cancel only signalled the crawl to stop; the workers, queue, and event stream kept running in the background while the UI flipped back to the idle state. Starting a new crawl spun up a second pipeline alongside the first, and the live feed received events from both, so a fresh crawl on Site B would show URLs from the previous crawl of Site A.
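The failure mode is easy to reproduce with plain Rust threads and a single shared `std::sync::mpsc` channel. This is a minimal sketch under assumptions: the site labels, URLs, and channel choice are illustrative, not crawler.sh’s actual internals.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of the pre-0.7.7 bug: both crawl sessions feed one shared channel,
// so the live feed cannot tell their events apart.
fn stale_events_in_feed() -> usize {
    let (tx, rx) = mpsc::channel();

    // "Cancelled" crawl of Site A: its worker was only signalled, never
    // joined, so it keeps emitting events in the background.
    let old_tx = tx.clone();
    thread::spawn(move || {
        for i in 0..3 {
            old_tx.send(("site-a", format!("https://a.example/{i}"))).ok();
        }
    });

    // Newly started crawl of Site B reuses the same channel.
    thread::spawn(move || {
        for i in 0..3 {
            tx.send(("site-b", format!("https://b.example/{i}"))).ok();
        }
    });

    // The live feed drains the shared channel and sees both sessions;
    // count how many stale Site A events reach the "new" feed.
    rx.iter().filter(|(site, _)| *site == "site-a").count()
}

fn main() {
    // All three Site A events leak into a feed that should show only Site B.
    assert_eq!(stale_events_in_feed(), 3);
}
```

Because both senders write into one receiver, no amount of UI-side state can keep the sessions apart; the fix has to happen at the channel level.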

On a small site the leftover queue drained quickly enough to go unnoticed. On a large e-commerce site with thousands of pages it was very visible: the live feed looked scrambled, and the previous site kept receiving background requests after Cancel.

What changed

Clicking Cancel now waits for the crawl to fully shut down before returning control. The Cancel button shows a brief “Stopping…” state during this window, and the Start button stays disabled until the previous crawl has fully wound down. A 5-second cap means even a hung HTTP request cannot make Cancel feel stuck.
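The stop-and-wait pattern can be sketched with a cancel flag plus an acknowledgement channel bounded by `recv_timeout`. The function name, the 10 ms work loop, and the flag-based cancellation are assumptions for illustration, not the app’s actual shutdown code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

// Sketch of the new stop path: signal the worker, then block (with a
// 5-second cap) until it confirms that it has fully wound down.
fn stop_and_wait() -> bool {
    let cancelled = Arc::new(AtomicBool::new(false));
    let (done_tx, done_rx) = mpsc::channel();

    let flag = Arc::clone(&cancelled);
    thread::spawn(move || {
        // Worker loop: keep processing pages until the cancel flag is set.
        while !flag.load(Ordering::Relaxed) {
            thread::sleep(Duration::from_millis(10)); // simulate crawl work
        }
        done_tx.send(()).ok(); // acknowledge full shutdown
    });

    cancelled.store(true, Ordering::Relaxed); // the Cancel click

    // The "Stopping…" window: wait for the acknowledgement, but never
    // longer than 5 seconds, so a hung request cannot make Cancel feel stuck.
    done_rx.recv_timeout(Duration::from_secs(5)).is_ok()
}

fn main() {
    assert!(stop_and_wait()); // worker wound down within the cap
}
```

The key difference from the old behaviour is the blocking `recv_timeout`: the caller does not return to the idle state until the worker has acknowledged shutdown, or the cap has expired.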

Each crawl session now also has its own private event channel, so events from a previous session cannot land in a new session’s UI even under unusual timing.
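Session-scoped channels can be sketched in a few lines: each crawl gets a fresh channel, and when a session is cancelled the UI drops its receiver, so a straggling worker from the old session has nowhere to deliver events. The function and URLs are illustrative assumptions, not crawler.sh’s real event plumbing.

```rust
use std::sync::mpsc;

// Sketch of per-session event isolation: one private channel per crawl.
fn session_isolation_holds() -> bool {
    // Session 1 starts, then is cancelled: the UI drops its receiver,
    // which closes that session's channel.
    let (old_tx, old_rx) = mpsc::channel::<&str>();
    drop(old_rx);

    // Session 2 starts with its own private channel.
    let (new_tx, new_rx) = mpsc::channel::<&str>();
    new_tx.send("https://b.example/").unwrap();

    // A straggling worker from session 1 cannot reach the new feed:
    // sending on a channel whose receiver is gone returns an error.
    let stale_blocked = old_tx.send("https://a.example/stale").is_err();
    let fresh_ok = new_rx.try_recv() == Ok("https://b.example/");
    stale_blocked && fresh_ok
}

fn main() {
    assert!(session_isolation_holds());
}
```

With this shape, isolation holds even under unusual timing: the old session’s events fail at the send site rather than relying on the UI to filter them out.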

Net effect

  • Click Cancel, click Start on a different URL: the live feed shows only the new crawl’s pages.
  • Cancelling a large crawl actually stops the background work instead of letting it drain on its own.
  • Rapid Stop then Start cycles no longer leave orphaned workers running.

No changes to the CLI, the API, or the crawling engine itself. This is purely a desktop app lifecycle fix.

About crawler.sh

crawler.sh is a fast Rust-based web crawler and SEO auditing tool that runs entirely on your own machine. Use the CLI for automation, scripts, and CI pipelines, or the desktop app for a visual dashboard with live crawl progress, SEO issue charts, and one-click exports.

Every release ships across both the CLI and the desktop app. Download the latest version or run crawler update from the terminal to upgrade an existing install.
