25 Ways to Use Screaming Frog SEO Spider Tool & Crawler Software

tony

When it comes to site crawling, troubleshooting, analytics, scraping, and a host of other crucial site functions, the Screaming Frog SEO Spider is the most distinguished software for the job. Its breadth and abundance of options can, however, be overwhelming. Though not an exhaustive list, below are 25 core ways to use Screaming Frog.

Crawl an entire website

Screaming Frog is perfect for crawling an entire website; just tick the ‘crawl all subdomains’ checkbox under the configuration menu.

Crawl particular subdirectories and subdomains

To crawl specific subdirectories and subdomains, navigate to the configuration menu and use the include and exclude settings, which accept RegEx, to limit the crawl to the subdirectories and subdomains of interest.
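As a sketch (the domain and paths below are hypothetical), an include rule that limits a crawl to a blog subdirectory and an exclude rule that drops a staging subdomain would each be entered as one full regular expression per line:

```
Include:  https://www\.example\.com/blog/.*
Exclude:  https://staging\.example\.com/.*
```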

Generate a list of redirected domains

By using the Reverse Internet function, it is possible to list all domains redirecting to a particular destination. One can also obtain a list of sites sharing the same name servers, IP addresses or GA code.

Identify competitor domains

The Reverse Internet function, in conjunction with Scraper and Google Docs, can also be used to obtain a comprehensive list of competitor sites.

List all site pages

To get a list of all site pages, set Screaming Frog to crawl HTML only by unchecking JavaScript, images, Flash files, and CSS in the configuration menu.

Save crawl settings

Individuals tend to have custom but repetitive crawling needs. Screaming Frog offers the option of saving personal configurations, avoiding the chore of re-optimizing the same settings for every crawl.

Crawl humongous sites

Screaming Frog is not natively built to crawl excessively large sites, but the feat is achievable by bumping up the memory allocation and splitting the crawl up subdirectory by subdirectory.
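In older Windows builds, the memory ceiling lived in the ScreamingFrogSEOSpider.l4j.ini file next to the executable (newer releases expose a memory setting in the interface); raising the Java heap flag there is the usual fix. The value below is only an example, so size it to your machine’s spare RAM:

```
-Xmx8g
```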

Crawl cookie-enabled sites

By navigating to the ‘advanced’ tab under configuration and ticking the ‘allow cookies’ checkbox, the software can crawl sites that require cookies.

Crawl at different speeds

Setting the crawl rate under ‘speed’ in the configuration menu makes it possible to crawl sites hosted on older, lower-performance servers without overloading them.

Crawl password-protected pages

Under configuration, select ‘advanced’ and make sure the ‘request authentication’ option is checked; the tool will then prompt for credentials whenever it encounters a page that requires them.

Crawl via proxy

Screaming Frog allows crawling via proxy, which can be activated by selecting ‘proxy’ in the configuration menu and entering the desired proxy details. From the same menu, one can also set a different user agent.

Identify redirected links

To identify redirected links, navigate to the ‘response codes’ tab at the top upon completion of crawling and filter the results by ‘redirection (3xx).’ All redirecting links will be displayed.

Identify broken links

After crawling, sort the results in the ‘internal’ tab by status code. You will see all the 301s, 404s, and so on. Selecting any URL and clicking the ‘in links’ tab at the bottom shows all pages linking to it, along with related data that gives insight into the broken links.
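As a rough sketch of working with the exported crawl data (the file and column names below are assumptions, so check the headers of your own export), a few lines of Python separate redirects from broken links:

```python
import pandas as pd

# Assumed Screaming Frog CSV export and column names —
# verify against your own file before running.
df = pd.read_csv("internal_all.csv")

redirects = df[df["Status Code"].between(300, 399)]  # 3xx redirects
broken = df[df["Status Code"].between(400, 499)]     # 4xx broken links

print(redirects[["Address", "Status Code"]])
print(broken[["Address", "Status Code"]])
```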

Discover pages with little content

Upon completion of crawling, filter the results by HTML in the ‘internal’ tab. Then sort the ‘word count’ column in ascending order to surface the pages with the fewest words.
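Using the same assumed export and column names as the sketch above, thin pages can be surfaced with a single sort:

```python
import pandas as pd

# Same assumed export and column names as the earlier sketch.
df = pd.read_csv("internal_all.csv")

# The 20 pages with the fewest words, thinnest first.
thin = df.sort_values("Word Count").head(20)
print(thin[["Address", "Word Count"]])
```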

Locate all CSS files on site

To find all CSS files, tick the ‘check CSS’ checkbox in the configuration menu before crawling. Then filter the results in the ‘internal’ tab by CSS.

Locate all JavaScript files on site

A list of all JavaScript files can be generated using the same process, ticking ‘check JavaScript’ instead.

Generate a list of all image links on a page

After crawling the site, select the page of interest at the top, then choose the ‘image info’ tab at the bottom to view all of its image links.

Locate linked PDFs

Simply filter the crawl results in the ‘internal’ tab by PDF to view all linked files.

Locate pages with social share buttons

Before starting the crawl, create a custom filter by heading to ‘custom’ under the configuration menu. The filter should contain the markup of the social share buttons you are interested in.
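The filter itself is just a string searched for in the page source. The snippets below are illustrative only; grab the actual markup from your own pages’ share buttons:

```
facebook.com/sharer
twitter.com/share
linkedin.com/shareArticle
```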

Determine duplicate URLs, titles and meta descriptions

Screaming Frog allows one to determine the above duplicates. Filter the results in the URL, meta description or page title tabs, each of which offers a ‘duplicate’ filter option.

Find pages with audio/video content

To locate pages with audio or video content, use a snippet of the embed code to create a custom filter in the configuration menu before running the spider.

Determine pages with excessively long titles/URLs/meta descriptions

After the crawl concludes, filter the results under the page titles, URLs and meta description tabs using the ‘over 70 characters’ option to find all the overly long titles, URLs and descriptions.

Verify robots.txt functionality

When obeying robots.txt, Screaming Frog prioritizes directives aimed at its own user agent over those for Googlebot or the global user agent. Set the user agent in the configuration settings accordingly to verify robots.txt functionality.
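For an independent spot check of what a given user agent may fetch, Python’s standard-library robots.txt parser works well (the domain and path below are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Placeholder site — point this at the robots.txt you want to verify.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()

# Compare how different user agents are treated for the same URL.
for agent in ("Screaming Frog SEO Spider", "Googlebot", "*"):
    ok = rp.can_fetch(agent, "https://www.example.com/private/page")
    print(f"{agent}: {'allowed' if ok else 'blocked'}")
```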

Develop an XML Sitemap or check an existing one

After crawling, select ‘advanced export’ then ‘XML Sitemap,’ and save the file. To inspect it, or an existing sitemap, open the file in MS Excel as a read-only XML table.
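For a quick sanity check outside Excel, a short script can list every URL the sitemap contains (the standard sitemap.org namespace is assumed, and the file name is a placeholder):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace; "sitemap.xml" is a placeholder file name.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
tree = ET.parse("sitemap.xml")

# Print every <loc> so stale or missing URLs stand out.
for loc in tree.getroot().findall("sm:url/sm:loc", NS):
    print(loc.text)
```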

Identify malware and spam

Work out the footprint the spam or malware leaves in the page source, then search for it by creating a custom crawl filter under the configuration menu.

Screaming Frog can do far more than the above, including locating slow pages, verifying site migration success and troubleshooting non-ranking aspects of a site, among others. With all the functions available, the learning curve is steep, but mastering the tool is extremely beneficial.