There are a few certainties in life: you will always have to pay your taxes, and no-one will ever be shy about voicing their opinion on “SEO”. The main problem with these opinions is that they typically amount to the same thing – rehashed, loosely formed opinion with little to no substance.

If you are truly serious about developing a strong web presence which acquires visibility in search, then you need to be focussed on what really matters – and in a lot of instances, that means undoing what shouldn’t have been done in the first place.

Canonicalisation:

It’s a pretty big word, but from an SEO point of view all it really means is choosing a single “canonical” (preferred) version of every URL on your website.

  • Search engines can interpret WWW URLs and non-WWW URLs as distinct URLs – 301 redirect (via .htaccess) all non-WWW URLs to their WWW counterparts (see the sketch after this list).
  • Search engines can interpret HTTPS (secure) pages on your website as distinct URLs. If you’re wondering whether this problem affects you, just use the following advanced search operator: “site:example.com inurl:https”. If it does, add a canonical tag to the affected pages pointing back to the preferred version (see the snippet after this list).
  • Search engines can interpret default pages, e.g. /index.aspx, as separate URLs – 301 redirect all default pages back to the main URL, e.g. http://www.example.com/
  • Search engines can treat trailing slash URLs as distinct from non-trailing slash URLs, e.g. http://www.example.com can be considered distinct from http://www.example.com/ – simply 301 redirect all non-trailing slash URLs to their trailing slash counterparts (again, see the sketch after this list).
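
Putting the redirect rules above into practice, here is a minimal .htaccess sketch. It assumes an Apache server with mod_rewrite enabled, and that the WWW, trailing-slash format is your preferred version – treat it as a starting point rather than a drop-in implementation:

    # Minimal .htaccess sketch – assumes Apache with mod_rewrite enabled,
    # and www.example.com with trailing slashes as the preferred format
    RewriteEngine On

    # 1. 301 redirect non-WWW URLs to their WWW counterparts
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    # 2. 301 redirect default pages (e.g. /index.aspx) back to the main URL;
    # this rule must come before the trailing slash rule below
    RewriteRule ^(.*/)?index\.aspx$ http://www.example.com/$1 [R=301,NC,L]

    # 3. 301 redirect non-trailing slash URLs to their trailing slash
    # counterparts, skipping real files such as images and stylesheets
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} !/$
    RewriteRule ^(.*)$ http://www.example.com/$1/ [R=301,L]

Note that, as written, a non-WWW URL without a trailing slash will be redirected twice (once per rule); on a production site you would combine the conditions so a single 301 does the whole job.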
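
For the HTTPS issue, the canonical tag is simply a line in the <head> of the page. A minimal sketch is below – the URL shown (/page/) is a hypothetical example, and you would point it at whichever version of the page you want indexed:

    <!-- Self-referencing canonical tag: tells search engines that this
         URL is the preferred version, whichever URL the page was reached on -->
    <link rel="canonical" href="http://www.example.com/page/" />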

There are more examples, but these are a few of the more important issues. Why is all of this important? Well, basically it all revolves around that commonly used term: “duplicate content”. If you have duplicate content on your website (the same page of content accessible at two or more different URLs), then you can conceivably (though not always) end up with two pages competing against each other for position in the Search Engine Results Pages (SERPs).

Click Depth and Accessibility:

How accessible is your website? Search engines utilise web crawlers (which are essentially computer programs) – these crawlers follow millions upon millions of links, and they typically originate from the USA. If you are targeting different geographical regions, do not use IP-based redirection: a crawler arriving from a US IP address will only ever be shown your US content, and your regional pages may never be crawled.

You need to make sure that the most important content on your website is easily reached by the web crawler, which essentially means you have to avoid burying your content deep within your website hierarchy.

The best way to do this is to utilise your homepage (which, from a crawler’s perspective, is just another page on your website) and ensure that your important content (the content you want to rank in search) doesn’t fall more than five clicks away from your homepage.

There are lots of different ways to achieve this, including creating additional sub-categories, increasing the number of products listed per page (for e-commerce websites), reducing the amount of pagination available on blog pages, and so on.

Ultimately, however, if you want all of your most important content to be accessed, then you need to be focussed on making it as easy as possible for a search engine to get to. Another good option is to upload an XML sitemap; this is imperative for larger websites, e.g. those with over 10,000 pages.
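
To illustrate, a minimal XML sitemap (following the sitemaps.org protocol) looks like the one below – the URLs and date are placeholders, and on a real website the file would normally be generated automatically (a single sitemap file can list up to 50,000 URLs):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <!-- loc is the only required element per URL -->
        <loc>http://www.example.com/</loc>
      </url>
      <url>
        <loc>http://www.example.com/important-page/</loc>
        <!-- lastmod, changefreq and priority are optional hints -->
        <lastmod>2012-01-01</lastmod>
        <changefreq>weekly</changefreq>
      </url>
    </urlset>

Upload the file to the root of your website (e.g. http://www.example.com/sitemap.xml), then reference it from robots.txt with a “Sitemap:” line or submit it through the search engines’ webmaster tools.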
