This article provides everything you need to boost the SEO of your website. It includes our professional SEO audit checklist, instructions on how to use it, and a step-by-step guide explaining exactly what to do.
If you want Google to crawl, index and rank your website you’ll love this post. Just make sure you read it carefully and work through every step one by one.
The epicentre of any SEO audit should be a comprehensive checklist so that nothing is missed or forgotten.
Here’s an SEO audit checklist we use to great effect – click here to download!
Instructions:
Like most things in digital marketing, there are a lot of ways to do this.
The good news is that everything in this audit is possible with just two pieces of software.
This is by far the best web crawler for SEO – powerful, uncluttered, reliable and easy to use.
Click here to install Screaming Frog.
*An annual licence for the paid version is comparatively very cheap and a no-brainer for anyone who’s serious about SEO. It’s also the only paid SEO auditing software you will ever need.
This is where a website’s presence in Google Search results is monitored, maintained and troubleshot.
You’ll need a Google account to sign up, and your domain needs to have been added as a property. If it hasn’t, simply sign in, go to ‘Add property’ and enter your domain under ‘Domain’:
Add the website address in the bar at the top where it says ‘Enter URL to spider’.
Change to ‘Subdomain’ (unless you want to crawl an ‘Exact URL’ or ‘All Subdomains’, which are not the focus in this article).
Then, click ‘Start’.
Can the website pages appear in Google Search results?
The most effective way to check this is a simple ‘site:’ search. Open Google Search in a web browser (such as Chrome) and enter:
site:yourdomain (including the www. if that is being used)
Check that the main pages are showing in the results, i.e. homepage, about, service/product pages, blog, contact, etc.
Also, pay attention to how many results there are – does the number make sense?
If not, pay more attention to the number of URLs in Screaming Frog (Crawl Data > Internal > HTML) compared to the number of results in the SERPs.
Go back to the URL inspection tool in GSC to ‘Test Live URL’ and learn what the issue is:
This is the preferred option if you want to investigate a particular URL.
Simply click on ‘URL inspection’ within GSC and enter the details. You can check if it’s indexed, as well as ‘request indexing’ if need be.
We don’t just want Google to index page URLs – it needs to index our actual content.
Some of the most widely used web development languages, like JavaScript, need to be tuned carefully so that they’re Google friendly.
Check a web page by adding its URL to the following line of code in a web browser address bar:
https://google.com/search?q=cache:https://entertheURL/
Make sure all the main pages on the website are indexable and that the cached version of the page that Google serves is as expected.
A page needs to return a 200 status code (from the web server) for search engine indexing. In other words, we don’t want 4xx and 5xx error codes.
Use Screaming Frog to compile a list of all the pages with error codes by clicking the ‘Response Codes’ tab. Sort in descending order to see the 5xx and 4xx codes easily, then highlight and export to Excel.
Clean up the list and take remedial action.
For example, a 404 error code means the page is no longer available, so the URL should be redirected to the most relevant live page.
Redirects serve two main purposes: they send users and crawlers from a dead or moved URL to a live equivalent, and they pass the old URL’s ranking signals on to the new one.
It’s a good idea to do a quick check of the robots.txt file, just to make sure nothing untoward is living in there. This is especially important if the site seems to be having indexation issues.
A robots.txt file controls which files crawlers can access on your site. Add ‘/robots.txt’ to the end of your domain to view it for your website:
https://yourdomain/robots.txt
If all crawlers can access all files, it should look something like this:
User-agent: *
Disallow:
Sitemap: https://www.paladinmarketing.co.uk/sitemap.xml
Anything else and it might be blocking some access, i.e. restricting crawling and indexing.
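For contrast, here’s a hypothetical robots.txt that blocks all crawlers from one directory – the ‘/private/’ path and the domain are placeholders, not taken from any real site:

User-agent: *
Disallow: /private/
Sitemap: https://yourdomain/sitemap.xml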
If you’re unsure whether there’s an issue, or want to double check something, there is a robots.txt file checker within GSC.
Go to GSC > Settings > Robots.txt > Open Report
It’s best practice to have all your indexable URLs (i.e. those you want to rank in Google Search) listed in an XML sitemap file. It’s not a technical requirement but makes discovering the pages for crawling as easy as possible.
Add sitemap.xml onto your domain to take a look:
https://yourdomain/sitemap.xml
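For reference, a minimal XML sitemap looks something like this (the URLs are placeholders for your own indexable pages):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain/</loc>
  </url>
  <url>
    <loc>https://yourdomain/about/</loc>
  </url>
</urlset>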
Did you notice the sitemap file location in the robots.txt file example above?
Once a sitemap lists all the URLs for indexing, it makes sense to ensure that search engine crawlers can find it. Just add the ‘Sitemap:’ line (with your own domain) to your robots.txt file, as in the example above, and it’s good to go.
Finally, submit your sitemap to Google so it can be used for crawling and indexing.
Would Google find your sitemap without manual submission?
Yes, but nowhere near as quickly. Also, Google provides an Index Coverage Report which can be handy if problems are encountered.
The most valuable pages on your website should be accessible in three clicks (i.e. links) or less.
This applies to all types of pages but obviously excludes things like product configurations or other dedicated user journeys. Therefore, internal links (including navigation) between pages need to be checked.
Choose an end point that a user could be interested in.
Start on the website homepage before counting how many clicks it takes to arrive at the chosen destination.
If you can get there via a few links, so can Google’s crawler.
A canonical tag specifies a single preferred version of a page when it’s available via multiple URLs. It tells a search engine which one to focus on and index, thereby preventing duplicate content that wastes ‘crawl budget’.
SEO best practice is to have canonical tags on every URL, based on two simple rules: a unique, indexable page should carry a self-referencing canonical tag, and any duplicate or variant version of that page should carry a canonical tag pointing to the preferred version.
Note – these also apply to domain patterns i.e. with or without the ‘www.’
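As a quick illustration, a self-referencing canonical tag sits in the page’s <head> and looks something like this (the URL is a placeholder):

<link rel="canonical" href="https://www.yourdomain.com/example-page/" />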
Screaming Frog is brilliant for auditing canonicalization:
There’s a bunch of different things it enables you to do but the most important for an SEO audit is finding ‘missing’ canonical tags. Once you know this you can add them and direct crawlers to the correct pages.
So, as usual Google gets the final say with canonicals too.
Even if your user-defined canonicals are clearly marked for every page, Google won’t necessarily respect them. It uses more than just your tags as canonicalization signals, such as redirects, URL patterns, links, etc.
The easiest way to see if Google respects your canonicals is in GSC > Indexing > Pages:
Click on the ‘Duplicate, Google chose different canonical than user’, then the magnifying glass on a URL to inspect further.
The Golden Rule of Canonical Tags
Be careful when applying canonical tags as it’s easy to send mixed signals to Google without realizing it.
If a page is not to be indexed (e.g. pagination pages that carry the ‘noindex’ tag), you can’t then canonicalize it to a page that is to be indexed.
Thus, the golden rule goes: do not canonicalize ‘noindexed’ pages to a URL that is indexable.
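To make the mixed signal concrete, this is the sort of combination to avoid in a single page’s <head> – a ‘noindex’ directive alongside a canonical pointing at an indexable URL (the URL is a placeholder):

<!-- Conflicting signals on one page – avoid this -->
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://www.yourdomain.com/category-page/" />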
Whilst users rarely take much notice of the URLs in the address bar, it’s good practice to keep them as simple as possible. This is because simple, descriptive URLs are easier for people to read and share, and they help search engines understand what a page is about.
Google provides guidance on what it regards as good URL structures if you’re struggling.
To check if JavaScript (JS) is in use on a website or web page, simply turn it off in your browser settings then refresh and browse. A page will look the same if it doesn’t use any – if it does, the JS content will have disappeared.
There are three tools to check if Google can render a page with JavaScript present:
Screaming Frog is by far the most powerful and comprehensive, not to mention easy to carry out. Go to the JavaScript tab then filter by ‘Pages with Blocked Resources’.
Export a list of issues and pass to whoever developed the website.
Metadata is not seen by users on the front end of the website. Basically, it’s information about the information on the page, so search engines can understand the content better.
After crawlability, it’s the most important aspect of SEO.
All indexable pages should have a meta title (a.k.a. page title) that meets a few simple criteria: it should be unique, under 60 characters and contain the page’s focus keyword.
Go to the ‘Page Titles’ tab in Screaming Frog which provides lots of insight including the most important issues like: Missing, Duplicate, and Over 60 Characters.
Ensure there is a meta title for each indexable page that is: unique, under 60 characters and contains the page’s focus keyword.
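For reference, the meta title lives in the page’s <head>. The example below is purely hypothetical – the business name and keyword are made up for illustration:

<title>Emergency Plumber in Leeds | Example Plumbing Co</title>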
Meta descriptions are not as important as they used to be because nowadays Google often pulls content dynamically from a page for SERP descriptions.
But it will use yours if it deems it to be better than what it can generate itself (for a given search query). Therefore, optimizing them makes sense. Make sure each one includes the page’s focus keyword and gives the searcher a clear reason to click through.
Also, make sure they’re unique otherwise it’s likely they’ll be ignored by Google.
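As a sketch, a meta description is another tag in the <head>; the copy below is purely illustrative and carries on the hypothetical plumber example from above:

<meta name="description" content="Need an emergency plumber in Leeds? Example Plumbing Co offers 24/7 call-outs with no hidden fees. Get a free quote today.">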
A favicon is the little icon of a company’s logo that sits up in the web browser tab or next to the page title on mobile. It’s a tiny graphic that is purposely put in place by web developers.
Whilst these have no technical impact on SEO, they can play a part by attracting more click throughs in the search results. This is especially true if you’ve invested in raising your brand awareness.
A quality favicon can encourage organic click-throughs, which is good for SEO, especially if your content delivers an awesome UX and is optimized for conversion.
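Adding one is usually a single line in the <head> – the file name and path here are placeholders:

<link rel="icon" href="https://yourdomain/favicon.ico">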
Structured mark-up is not a ranking factor in its own right.
That said, it helps Google understand the content of a page and is also used to generate rich snippets (which skyrocket click throughs in the SERPs). Google lists the various types it supports here.
To pass this item in the audit, you need to validate the structured data on all major website pages (e.g. home, sales pages, blog, product pages, etc.) using the appropriate Google tools.
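As an illustration, structured data is usually added as a JSON-LD script in the page’s <head>; the organisation details below are placeholders you’d swap for your own:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Plumbing Co",
  "url": "https://yourdomain/",
  "logo": "https://yourdomain/logo.png"
}
</script>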
And that’s it! That’s everything you need to be sure that your website is following SEO best practices and, in doing so, performing at its best for you!
We hope you found this post helpful. Happy SEO-ing (and you can thank us later!)