Crawler Desktop Overview
What is Crawler Desktop?
Crawler Desktop is a native macOS desktop application for crawling websites, analyzing SEO issues, visualizing HTTP status codes, browsing extracted content, and exporting results, all through a real-time interactive interface.
Install
Download the macOS DMG (a universal binary that runs on both Apple Silicon and Intel):
Open the .dmg file and drag Crawler to your Applications folder.
Uninstall
Drag Crawler from your Applications folder to the Trash.
Standard Workflow
1. Enter a URL: Type the target URL in the top input bar and click Start Crawl. The https:// prefix is added automatically.
2. Configure settings: Use the Settings card to adjust Max Pages, Max Depth, Concurrency, Delay, and Extract Content before starting.
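Delay and Concurrency trade politeness against speed. As a rough back-of-the-envelope sketch only: the formula below assumes Delay is the pause each worker takes between its own requests and uses a hypothetical average response time, neither of which is documented behavior of Crawler.

```python
def estimated_crawl_seconds(max_pages: int, concurrency: int, delay_s: float,
                            avg_response_s: float = 0.5) -> float:
    """Rough lower bound on crawl duration.

    Assumption (not documented): each of the `concurrency` workers waits
    `delay_s` between its own requests, and pages split evenly across workers.
    """
    per_page = delay_s + avg_response_s          # time one worker spends per page
    return max_pages * per_page / concurrency    # pages are spread across workers

print(estimated_crawl_seconds(max_pages=500, concurrency=5, delay_s=1.0))  # → 150.0
```

Raising Concurrency or lowering Delay shortens the crawl but increases load on the target site.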
3. Monitor the crawl: Watch the Live Feed card for real-time URL discovery with color-coded status badges. The Page Status card shows a donut chart of HTTP status codes.
4. Analyze SEO: The SEO Issues card aggregates problems across all crawled pages. Expand it to browse all 16 issue categories with per-URL details.
5. Export results: Use the Downloads card to export as JSON Archive or Sitemap XML. Use the SEO Issues expanded view for CSV or TXT export.
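The exact schema of the JSON Archive isn't specified here, but if it exposes a list of page records with a URL and status code (a hypothetical shape, used purely for illustration), summarizing an export takes a few lines of Python:

```python
import json
from collections import Counter

# Hypothetical archive shape; the real export's field names may differ.
sample = '''{"pages": [
  {"url": "https://example.com/",        "status": 200},
  {"url": "https://example.com/a",       "status": 200},
  {"url": "https://example.com/missing", "status": 404}
]}'''

archive = json.loads(sample)
# Count how many crawled pages returned each HTTP status code.
counts = Counter(page["status"] for page in archive["pages"])
print(dict(counts))  # → {200: 2, 404: 1}
```

Check the field names in your own export before reusing this snippet.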
Dashboard Navigation
- Click any card to expand it into an overlay with full details
- Press Escape or click the backdrop to close the expanded view
- Hover over rows in Live Feed and Page Status to reveal Copy URL and Open in browser actions
- A Reset button appears after a crawl completes so you can clear results and start over
Tutorial: Full Desktop Walkthrough
1. Open the desktop app: Launch Crawler from your Applications folder.
2. Configure and start a crawl: Enter a URL in the top input bar. Adjust settings in the Settings card (Max Pages, Concurrency, etc.), then click Start Crawl.
3. Monitor progress:
- Live Feed: watch URLs appear in real time with color-coded HTTP status badges (green for 2xx, yellow for 3xx, red for 4xx/5xx)
- Page Status: see a donut chart of the status code distribution
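The badge color scheme can be expressed as a small helper, handy when post-processing exported results outside the app. The fallback for 1xx codes is an assumption, since the docs only define colors for 2xx, 3xx, and 4xx/5xx:

```python
def badge_color(status: int) -> str:
    """Mirror the Live Feed badge colors: green 2xx, yellow 3xx, red 4xx/5xx."""
    if 200 <= status < 300:
        return "green"
    if 300 <= status < 400:
        return "yellow"
    if status >= 400:
        return "red"
    return "gray"  # 1xx: color not specified in the docs, assumed neutral

print(badge_color(301))  # → yellow
```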
4. Review SEO issues: Click the SEO Issues card to expand it. Browse the 16 issue categories in an accordion view. Each category lists affected URLs with copy and open-in-browser actions.
5. Browse extracted content: Click the Content card to open the split-pane content viewer. The left panel lists pages with word counts; the right panel renders the extracted Markdown.
6. Export results: Use the Downloads card to export:
- JSON Archive: complete crawl data with metadata
- Sitemap XML: a standard sitemap for search engines
Use the SEO Issues expanded view to export issues as CSV or TXT.
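The Sitemap XML export follows the standard sitemaps.org format. A minimal sitemap of that shape, with placeholder URLs, can be built with Python's standard library (the real export may also include optional tags such as lastmod):

```python
import xml.etree.ElementTree as ET

# Standard sitemaps.org namespace; every sitemap's root <urlset> declares it.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

urlset = ET.Element("urlset", xmlns=NS)
for page in ["https://example.com/", "https://example.com/about"]:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page  # <loc> is the only required child

xml = ET.tostring(urlset, encoding="unicode")
print(xml)
```

The same structure is what search engines expect when you submit the exported sitemap.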
7. Reset and start over: Click the Reset button to clear all results and reconfigure for a new crawl.