3 Ways You Should Be Using Screaming Frog

by Jack Reid on Wednesday 26 April 2017
In this post, Jack Reid takes us through the top three ways to use Screaming Frog to optimise your SEO strategy.

Screaming Frog's SEO Spider has been around since 2010 and is a household name in most SEOs' tool arsenals. Alternatives exist, such as the long-running Xenu's Link Sleuth, while new kids on the block like Botify offer a similar, cloud-based service.


If you are fairly new to the tool, you've probably pasted in a URL and let the spider crawl it, returning a wealth of on-site analytical data. This post will take you through a few actionable ways to utilise Screaming Frog in your campaigns.


External Link Checking
For all the link builders out there, one of the most upsetting things is discovering that your hard-earned links have disappeared some time after you curated them. You may well be keeping a spreadsheet of your link building efforts – run those URLs through the tool using the Configuration > Custom > Search tab. Here's a step-by-step example to make it clearer:

Let's say your client is the BBC. Go into Configuration > Custom > Search and set the filter to Does Not Contain with the value www.bbc.co.uk.

Switch over to Mode > List and upload your list of links, either from a file or by pasting them in at the top of the program. In this very crude example, we're saying the BBC has had links in the past on Wikipedia and ITV.
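
The list itself is just a plain text file with one URL per line – for this example, something along these lines (both URLs are stand-ins for wherever your links actually live):

    https://en.wikipedia.org/wiki/BBC
    https://www.itv.com/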

[Screenshot: JRBLOG2.jpg]

Run the crawl, then navigate to the Custom tab in the internal pane.

What you should see here are the pages from your list above whose source does not contain "www.bbc.co.uk" – in other words, pages where your link may have been removed.


Here is what we find. No surprise that ITV appears in this tab, i.e. they are not linking out to the BBC.

[Screenshot: JRBLOG3.jpg]
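
If you ever want to sanity-check the same logic outside Screaming Frog, it's simple to script. Here is a minimal Python sketch – using the requests library, an assumption on my part rather than anything bundled with the tool – that fetches each page from the list above and flags any whose source no longer contains the client domain:

    import requests

    # Pages that should contain a link to the client (same list as above).
    PAGES = [
        "https://en.wikipedia.org/wiki/BBC",
        "https://www.itv.com/",
    ]
    CLIENT_DOMAIN = "www.bbc.co.uk"  # the "Does Not Contain" search string

    for url in PAGES:
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException as err:
            print(f"{url}: could not fetch ({err})")
            continue
        # Mirror the Does Not Contain filter: flag pages whose raw HTML
        # no longer mentions the client domain anywhere. Note this checks
        # the unrendered source only; links added by JavaScript would need
        # a rendered crawl.
        if CLIENT_DOMAIN not in html:
            print(f"{url}: link to {CLIENT_DOMAIN} appears to have gone")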

If you regularly incorporate this check into your monthly work, you can get in touch with webmasters as soon as a link disappears – doing this in a timely fashion means they are more likely to reinstate your link.

Auditing
In 2017, the number of consumers on mobile devices is ever increasing, so it's imperative to audit with mobile-first recommendations in mind rather than desktop. To do so in Screaming Frog, a few configuration changes are needed.


Under Configuration > Rendering, change to JavaScript rendering. The timeout should be set to 5 seconds (from tests, this appears to be what Google uses as the document object model snapshot time for headless browser rendering), and the window size to your device of choice (Google Mobile: Smartphone).

You'll also want to change your user-agent HTTP header to Googlebot for Smartphones, to mirror how Google will interpret your page.

Crawling like this obviously takes 5 seconds or longer per URL, so be wary of this and perhaps restrict the crawl to a select list of URLs. Once you have crawled your page(s), change to the Rendered Page view tab in the bottom pane. You will see a smartphone-shaped snapshot of how each page is rendered, and you can audit from there.
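
For a sense of what a rendered crawl is doing under the hood, here is a hedged sketch of the same idea using Playwright – a separate headless-browser library, not part of Screaming Frog: load a page with a smartphone-sized viewport and a Googlebot-for-smartphones user-agent, wait 5 seconds for the DOM to settle, then take a snapshot. The exact user-agent string Google uses changes over time, so treat the one below as illustrative.

    from playwright.sync_api import sync_playwright

    # Illustrative Googlebot-for-smartphones user-agent string; Google
    # revises this periodically, so check their documentation for the
    # current version.
    GOOGLEBOT_MOBILE_UA = (
        "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
        "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
        "+http://www.google.com/bot.html)"
    )

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(
            user_agent=GOOGLEBOT_MOBILE_UA,
            viewport={"width": 360, "height": 640},  # smartphone window size
        )
        page.goto("https://www.example.com/")  # placeholder URL
        page.wait_for_timeout(5000)            # the 5-second snapshot delay
        page.screenshot(path="rendered-mobile.png", full_page=True)
        browser.close()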


Custom Extraction
One of the newer features is the ability to scrape custom on-page content. This is great for scraping data such as product counts on e-commerce sites or social buttons from a blog – the list goes on. Essentially, you can scrape any on-page element for which you can find the CSSPath, XPath or regex.


The first thing to do is decide what on-page content you are after. Right-click on the content you wish to scrape and select Inspect Element. There is also a neat little tool in the top left of the interface which looks like a cursor and can help you identify your element of choice.


Once you've got your element of choice highlighted, right-click > Copy > XPath or Outer HTML.
Please note, some elements do require knowledge of XPath to extract them properly – take a look at w3schools' XPath tutorial to find out more.
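
As a flavour of what those expressions look like, here are a few illustrative XPath patterns (the class name below is hypothetical – substitute whatever Inspect Element shows in your own markup):

    //h1/text()                                the main page heading
    //div[@class="product-count"]/text()       a hypothetical product-count element
    //a[contains(@href, "twitter.com")]/@href  the href of any Twitter links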

Once copied, paste your XPath, CSSPath or regex into the extractor and select which part of the element you require.

There can be a bit of trial and error when you first begin, so try a range of options and, with a bit of tinkering, you should get what you want. If you are interested in scraping more widely, Mike King's guide to scraping every single page is comprehensive.
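
One way to cut down that trial and error is to test an expression locally before pasting it into the extractor. Here is a minimal Python sketch, assuming the requests and lxml libraries and using placeholder values for the URL and XPath:

    import requests
    from lxml import html

    URL = "https://www.example.com/"  # placeholder page to test against
    XPATH = "//h1/text()"             # the expression you plan to use

    # Fetch the raw (non-JavaScript-rendered) HTML and evaluate the XPath,
    # as a quick sanity check before configuring it in Custom Extraction.
    tree = html.fromstring(requests.get(URL, timeout=10).content)
    matches = tree.xpath(XPATH)

    print(f"{len(matches)} match(es) for {XPATH}")
    for match in matches:
        print(repr(match))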


Once you get the hang of using multiple custom extractors, there is an option to save your current configuration. If you regularly do audits, this can be useful!

[Screenshot: JRBLOG7.jpg]

Navigate to the right-hand side of the Screaming Frog interface, or to the custom columns, to find your custom extracted data ready for use.
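
If you export that data to CSV, it drops straight into a spreadsheet or a script. As a hedged example, a couple of lines of Python with pandas – the filename and extractor column name below are placeholders, since both depend on what you called them when exporting and configuring the extractor:

    import pandas as pd

    # Placeholder filename and extractor column name: match these to your
    # own export and Custom Extraction configuration.
    df = pd.read_csv("custom_extraction.csv")
    print(df[["Address", "Product Count 1"]].head())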


There's heaps more you can do with the tool, and loads of resources online. If you want to learn more, a good place to start is the user guide and FAQs.