SEO is all too often reduced to the pursuit of keywords, the production and optimization of great content, and the acquisition of links. But it is also a great deal of analysis: analysis of your traffic, your site's internal linking, its indexing, user behavior, the optimization of your pages, the links shaping your site's environment, and much the same for your competitors' sites. This is why a powerful site analysis tool is a considerable asset for optimizing your site.

As the title of this article suggests, I use Screaming Frog's SEO Spider software (the free version is very comprehensive!) for my analyses, and I will show you 7 techniques for obtaining crucial SEO information with this tool. The tool can be downloaded for free here.


If you are new to technical site optimization and expressions like “hreflang”, “JavaScript”, or “404” give you a headache, I recommend taking a look at the additional resources I have taken care to add at the end of this article.

Now is the time to lift the hood of your site and get your hands dirty!

  1. Find broken URLs and links
  2. Optimize your internal linking
  3. Identify unnecessary redirects
  4. Check how Googlebot processes some of your pages
  5. Spot hreflang errors when your site is available in multiple languages
  6. Identify mixed content on your site
  7. Check the depth and accessibility of your pages

1. Find broken URLs and links

Pixar Studios’ 404 page – Credit: creativebloq.com

Broken URLs (returning a 404 error) are notorious as the bane of SEO experts. In reality, the 404 error code is far from a tragedy for your SEO. It is, however, a matter of knowing which pages return this type of error and ensuring that their links to and from other pages, internal or external, harm neither the user experience nor that of the bots crawling your site.

Frequent analysis of your site will help you prevent these bad experiences.

How to use Screaming Frog SEO Spider to identify broken URLs?

1. Start by crawling your domain or a part of it.

2. Click on the “Response codes” tab and, under it, apply the “Client Error (4xx)” filter to see an overview of 4xx errors.

3. In the menu, click on “Bulk export”, “Response Codes” and “Client Error (4xx) Inlinks” to download a CSV file listing all these errors.

4. Open the CSV file with Google Sheets to identify the various errors, their status (404, 403, 410, etc.), the type of content concerned (image or AHREF), their source, and their destination. This report is all the more useful since it includes both internal and outgoing links; see the sketch below for a quick way to sort through it.

You can follow the same process to identify other HTTP codes (3xx and 5xx).
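Once exported, you do not have to sort the CSV by hand. Here is a minimal Python sketch, assuming the file is saved as client_error_4xx_inlinks.csv and contains “Source” and “Destination” columns (column names vary between Screaming Frog versions, so check the first line of your CSV), which ranks the pages carrying the most broken links:

```python
import csv
from collections import Counter

# Count which pages carry the most broken links. Column names are an
# assumption based on the export described above -- adjust if needed.
broken_by_source = Counter()

with open("client_error_4xx_inlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        broken_by_source[row["Source"]] += 1

# The pages worth fixing first are those with the most broken links.
for source, count in broken_by_source.most_common(10):
    print(f"{count:>4} broken link(s) on {source}")
```

This gives you a prioritized to-do list rather than a wall of rows.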

2. Optimize your internal linking

Credit: stock.adobe.com

Understanding how your URLs link together is essential for optimizing your site. It allows you to offer a logical structure to your users and to the bots that browse your pages.

Among other things, this involves grouping (or at least interlinking) the pages dealing with the same broad subject. Read Olivier Andrieu's excellent article on semantic cocoons to better understand what makes a site's structure optimized.

How to use Screaming Frog SEO Spider to get specific information about a URL?

Once the crawl of your site, or of a section of it, is complete, right-click on any result, then click “Export” to obtain several options. Among these, the most useful for mapping your internal links are:

  • “Inlinks” – every internal link pointing to the selected URL, with its source page and anchor;
  • “Outlinks” – every link from the selected URL to other pages.

This information will be invaluable if you wish to visualize and structure your “topic clusters” (groups of pages interconnected around the same subject).
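To go beyond page-by-page exports, you can also work from a bulk export of all your internal links (via the “Bulk Export” menu; the exact report name varies by version). Here is a minimal Python sketch, assuming that export is saved as all_inlinks.csv with “Source” and “Destination” columns (check your header), which counts the links flowing between the top-level sections of a site, a rough proxy for how well your topic clusters are interconnected:

```python
import csv
from collections import defaultdict
from urllib.parse import urlparse

def section(url: str) -> str:
    """First path segment, e.g. https://example.com/blog/post -> 'blog'."""
    path = urlparse(url).path.strip("/")
    return path.split("/")[0] if path else "(home)"

# Tally internal links between site sections.
links_between = defaultdict(int)
with open("all_inlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        links_between[(section(row["Source"]), section(row["Destination"]))] += 1

for (src, dst), n in sorted(links_between.items(), key=lambda kv: -kv[1]):
    print(f"{src:>20} -> {dst:<20} {n} link(s)")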

3. Identify unnecessary redirects

Credit: Ecommerce Yogi

Redirects are particularly useful for optimizing the user journey in the event of a change in structure.

However, it is sometimes difficult to keep track of your site's history, especially if it is already a few years old and has undergone several modifications. As a result, redirect chains can be created without anyone noticing (URL A redirects to URL B, which in turn redirects to URL C).

The trouble with these redirect chains is that they slow down bots on your site, thereby eating into your “crawl budget”. And because this budget is limited, it is important to optimize it as much as possible.

In this case, you want to make URL A -> URL B -> URL C become URL A -> URL C.

Be aware that some redirect chains are even more subtle than that and can be as simple as: http://URL-A -> https://URL-A -> https://URL-B

Here's how to spot these chains using Screaming Frog:

1. After submitting your site to an analysis, click in the top menu on “Reports” then “Redirect Chains” to download the CSV file.

2. The CSV file includes the source, the URL concerned and the number of redirects to the destination URL.

The goal is therefore to clean up as many of these unnecessary redirects as possible by replacing the redirect URLs with the destination URLs.
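With a long report, a few lines of Python can isolate the worst offenders. A minimal sketch, assuming the report is saved as redirect_chains.csv and that the column names match the fields described above (check your CSV header, as they vary between versions):

```python
import csv

# Flag chains with more than one hop so they can be collapsed into a
# single redirect. "Source" and "Number of Redirects" are assumptions
# based on the report fields described above.
with open("redirect_chains.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        hops = int(row.get("Number of Redirects") or 0)
        if hops > 1:
            print(f"{hops} hops starting at {row['Source']} "
                  "-- link straight to the final URL instead")
```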

4. Check how Googlebot processes some of your pages

Test of different Javascript frameworks and their compatibility with search engines – Credit: Bartosz Góralewicz

Believe it or not, what you see on a page of your site can be interpreted differently by search engines. Several factors influence the way a page is processed by browsers as well as by search engines. Among these factors are the programming methods and languages used (JavaScript, Ajax, etc.), as well as the rules applied to your site (through the robots.txt file, for example).

When you publish a new page, you must make sure that it is properly fetched and rendered (Google's famous “fetch and render”), especially if this page has a different structure from those published and verified before.

Here’s how to ensure that your page is properly optimized on this technical aspect:

1. Click on “Configuration”, then on “User Agent”.

2. Select one of the Googlebot options.

3. Enter a URL and start the analysis.

4. Click on the URL in the results.

5. Finally, select the “Rendered Page” tab at the bottom of the screen.

From there you can see:

  • A screenshot of the page as rendered by Googlebot
  • Each resource on the page (CSS, JS, images, etc.) and its status code (200, 307, etc.)

The big advantage of this option is that it gives you a quick overview of all the pages analyzed.
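If a resource seems to be missing from the rendered page, robots.txt is often the culprit. Python's standard library can run a quick check; here is a small sketch, where the domain and the JS path are hypothetical placeholders to replace with your own:

```python
from urllib import robotparser

# Quick check: is a given resource blocked for Googlebot by robots.txt?
# (One of the factors mentioned above that can change what Google renders.)
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # replace with your domain
rp.read()

for url in [
    "https://www.example.com/",
    "https://www.example.com/assets/app.js",  # hypothetical JS resource
]:
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{verdict}: {url}")
```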

5. Spot hreflang errors when your site is available in multiple languages

Credit: Sourcecon

To expand your activity internationally, developing versions of your site in several languages is naturally encouraged. From a technical point of view, however, there are many factors to consider. Among these, the hreflang attribute is surely the most important for SEO.

The hreflang attribute allows you to communicate linguistic or regional variants of the same page to search engines. In this way, internet users will be directed to the most appropriate linguistic or regional version when they use a search engine.

And there are many sites containing errors at this level!

Here is how to identify these errors with the Screaming Frog analysis tool:

1. Go to “Reports” then “Hreflang” and select the type of error you want to check.

2. If the downloaded CSV file is empty, it means that no error has been identified by Screaming Frog. Otherwise, here are the most common mistakes:

  • “Errors” – This report shows all hreflang attributes that do not return an HTTP 200 code (no response, blocked by the robots.txt file, HTTP code of type 3xx, 4xx or 5xx) or that are not linked from the site;
  • “Missing Confirmation Links” – This report shows the pages with a missing confirmation link, and which pages are affected.

Note: This detail is very important when it comes to internationalizing your pages, and it remains one of the most common mistakes. If a link tag on page B includes an hreflang URL pointing to page A, that same page A should return a “confirmation link” to page B. An example with the Venngage Infographies site below; a sketch for checking this automatically follows the list.

The es.venngage.com site (site B) follows the same procedure to avoid any missing confirmation links.
  • “Inconsistent Language Confirmation Links” – This report shows the pages whose confirmation links are inconsistent with the target language or region.
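As promised above, here is a hedged way to check a confirmation link yourself with Python's standard library. The two URLs are hypothetical placeholders; the logic is simply that if page A declares page B as an alternate, page B must declare page A back:

```python
from html.parser import HTMLParser
import urllib.request

class HreflangCollector(HTMLParser):
    """Collect the href of every <link rel="alternate" hreflang="..."> tag."""
    def __init__(self):
        super().__init__()
        self.alternates = {}  # {hreflang value: href}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate" and "hreflang" in a:
            self.alternates[a["hreflang"]] = a.get("href")

def hreflang_map(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    collector = HreflangCollector()
    collector.feed(html)
    return collector.alternates

# Page A declares page B as an alternate, so page B must link back to page A.
page_a = "https://example.com/page"      # hypothetical language versions
page_b = "https://es.example.com/page"
if page_a not in hreflang_map(page_b).values():
    print(f"Missing confirmation link: {page_b} does not point back to {page_a}")
```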

6. Identify mixed content on your site

Example of a message displayed by Firefox when opening a page containing mixed content – Credit: MozillaZine-fr

Mixed content refers to a secure page (https) that loads non-secure elements (http). When search engines like Google spot mixed content on a site, they consider the security of the user experience on that site to be degraded.

The latest versions of the main browsers generally warn a site's visitors when it contains mixed content (using a pop-up, sometimes an entire page, displaying a message of the type “This connection is not secure”).

Here's how to identify mixed content with Screaming Frog SEO Spider:

1. Click in the top menu on “Reports” then “Insecure content” to download the report.

2. Replace these resources and links with their https versions, or remove them, to improve the quality of your site.
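You can also run a rough double-check on a single page without a crawler. A naive sketch with Python's standard library (the URL is a placeholder, and the pattern only catches hard-coded src attributes; stylesheets loaded through link href count as mixed content too, so extend it as needed):

```python
import re
import urllib.request

# Fetch an https page and list hard-coded http:// resources (scripts,
# images, iframes, etc. loaded via a src attribute).
page = "https://www.example.com/"  # replace with a page from your site
html = urllib.request.urlopen(page).read().decode("utf-8", "replace")

for match in re.finditer(r'src=["\'](http://[^"\']+)["\']', html):
    print("Insecure resource:", match.group(1))
```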

7. Check the depth and accessibility of pages on your site

Credit: My Ranking Metrics

Knowing how accessible your pages are should, in my opinion, be a crucial part of your SEO strategy. A bad structure is often characterized by deep URLs with an inconsistent path. A site whose pages are easily accessible will often stand out against a site whose navigation is awkward.

To optimize your site, you therefore need both:

  • Minimize the path to your deep pages as much as possible;
  • Keep the navigation to these same pages consistent.

Here's how to check the accessibility of your pages with Screaming Frog SEO Spider:

1. Launch the analysis of your site, then right-click on one of the URLs.

2. Click on “Export” and select “Crawl Path Report”.

This report shows you the shortest route used by Google spiders and other search engines to reach your deep pages.

The question you should then ask yourself, for each of these results, is:

“Is there a shorter and equally consistent path?”

If the answer is yes, over to you!
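To complement this URL-by-URL report with a site-wide view, the export of the “Internal” tab includes a crawl depth value for each URL. A minimal sketch, assuming that export is saved as internal_all.csv with a “Crawl Depth” column (check your header for the exact name), which draws a quick depth histogram:

```python
import csv
from collections import Counter

# Count how many pages sit at each click depth from the start URL.
depths = Counter()
with open("internal_all.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("Crawl Depth", "").isdigit():
            depths[int(row["Crawl Depth"])] += 1

for depth in sorted(depths):
    bar = "#" * min(depths[depth], 60)
    print(f"depth {depth}: {bar} ({depths[depth]} pages)")
```

A long tail of pages at depth 5 or more is usually the sign of a structure worth flattening.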

Screaming Frog includes many more options that are worth exploring. Among the most interesting, we can mention:

  • JavaScript site analysis (to identify possible blocked resources);
  • Editing your page titles and meta descriptions via its “SERP Snippet” emulator;
  • Auditing XML sitemaps.

And as promised, here are some additional resources to get started in technical SEO: