Natural Search Blog


SEO Best Practices – Correlation Testing

Hi everyone,
This post is about the latest correlation testing done by the great folks at SEOmoz regarding search engine ranking factors. Presented below is a list of SEO best practices with some startling revelations. It is not exhaustive, but it covers the crucial factors.

SEOmoz has been indexing the web with their killer app Linkscape and analysing the data to arrive at solid results. Even though the SEO field has lots of grey areas, solid data of this kind shows that search engine optimisation can be a definitive science in certain areas rather than purely a skilful craft.

The search engine ranking factors that were analysed are outlined below:

1) Title Tag:
Best Practice:

Primary Keyword – Secondary Keywords | Brand
Or
Brand Name | Primary Keyword and Secondary Keywords

If the targeted keyword is in an extremely competitive industry, it is good to put it at the beginning of the title. If the targeted keyword is not very competitive and the brand can help attract clicks on the SERPs, then the brand name can appear at the start of the title tag, followed by the keyword phrase.
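
As a rough sketch of these two patterns (my own illustration, not something from the SEOmoz post), the small Python helper below assembles a title string from a primary keyword, optional secondary keywords and a brand name, switching the order based on how competitive the keyword is:

```python
def build_title(primary, secondary, brand, competitive=True, max_length=70):
    """Assemble a title tag string using the two recommended patterns.

    Competitive keyword:      "Primary - Secondary | Brand"
    Less competitive keyword: "Brand | Primary and Secondary"
    """
    secondary_part = ", ".join(secondary) if secondary else ""
    if competitive:
        keywords = f"{primary} - {secondary_part}" if secondary_part else primary
        title = f"{keywords} | {brand}"
    else:
        keywords = f"{primary} and {secondary_part}" if secondary_part else primary
        title = f"{brand} | {keywords}"
    # The 70-character cut-off is an assumption about how much of a title the SERPs display.
    return title[:max_length]


print(build_title("Running Shoes", ["Trail Shoes"], "Acme Sports", competitive=True))
# Running Shoes - Trail Shoes | Acme Sports
print(build_title("Running Shoes", ["Trail Shoes"], "Acme Sports", competitive=False))
# Acme Sports | Running Shoes and Trail Shoes
```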

2) Usefulness of H1 tags:
The H1 tag is more useful to users than to search engines. This is a hard-hitting result, considering that the H1 tag has long been treated as an important on-page factor that search engines were assumed to weight heavily.

This test reveals that the H1 tag is still important for informational hierarchy and semantic purposes, helping users identify the heading under which the content on a page appears. But search engines do not give the H1 tag the same ranking weight as was originally estimated.

It could be that the widespread use of CSS has led to H1 tags being styled to look like something other than what they originally stand for: prominent headings.

3) Usefulness of Nofollow:
Using the nofollow attribute for internal PageRank sculpting to control the flow of link juice is no longer recommended. It remains useful for deterring spammers in user-generated content.
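
As a minimal sketch of that second point (assuming the user-generated HTML is available as a string; the class and function names are my own, not from the post), the snippet below uses Python's standard library to force rel="nofollow" onto every link in a comment before it is published:

```python
from html.parser import HTMLParser


class NofollowRewriter(HTMLParser):
    """Rebuild an HTML fragment, forcing rel="nofollow" on every <a> tag."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = [(k, v) for k, v in attrs if k != "rel"] + [("rel", "nofollow")]
        rendered = "".join(f' {k}="{v}"' if v is not None else f" {k}" for k, v in attrs)
        self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)


def nofollow_user_content(fragment):
    rewriter = NofollowRewriter()
    rewriter.feed(fragment)
    return "".join(rewriter.out)


comment = 'Great post! Visit <a href="http://example.com/spam">my site</a>.'
print(nofollow_user_content(comment))
# Great post! Visit <a href="http://example.com/spam" rel="nofollow">my site</a>.
```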

4) Usefulness of Canonical Tag:
The canonical tag is still in its infancy and is best used to provide a hint to the search engines about duplicate content. From public statements, it is also believed to deplete between 1% and 10% of link juice, much as a 301 redirect does.

For example, if page A has 10 link points and is redirected to page B with a 301, page B would receive roughly 9 to 9.9 of those link points.
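
To make that arithmetic concrete (a sketch using the 1% to 10% depletion figure quoted above; the numbers are illustrative rather than measured):

```python
def remaining_link_points(points, loss_low=0.01, loss_high=0.10):
    """Range of link points that survive a 301 redirect, assuming 1%-10% depletion."""
    return points * (1 - loss_high), points * (1 - loss_low)


low, high = remaining_link_points(10)
print(low, high)  # 9.0 9.9 -> page B keeps roughly 9 to 9.9 of page A's 10 points
```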

Since the search engines are still testing the canonical tag, it would be wise not to rely too heavily on it in these early days. Proven architectural solutions, such as placing duplicate content behind a hash fragment (#) at the end of the URL so it is not indexed by the search engines, are a safer bet.

5) Use of Alt Text in Images:
The recommendation is clearly to include good relevant alt text with images on pages that are targeting competitive keywords. This is based on two reasons.

Firstly, alt text helps users with disabilities understand what an image, and by extension the page, is about, and it gives machines semantic information that makes the content more useful. Secondly, the correlation data at hand shows the alt attribute to be a far more important ranking factor than was assumed.

In the past, excessive abuse of the alt attribute through keyword stuffing had almost convinced many SEOs to dismiss it entirely. But the latest tests show otherwise.
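
For pages targeting competitive keywords, a quick audit for missing alt text can be scripted with Python's standard library. A sketch (the class name and the sample markup are mine, not from the post):

```python
from html.parser import HTMLParser


class AltTextAudit(HTMLParser):
    """Collect <img> tags that have no alt text, or only an empty one."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if not (attrs.get("alt") or "").strip():
            self.missing_alt.append(attrs.get("src", "(no src)"))


audit = AltTextAudit()
audit.feed('<img src="shoes.jpg"><img src="logo.png" alt="Acme Sports logo">')
print(audit.missing_alt)  # ['shoes.jpg']
```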

6) Use of Meta Keywords tag:
This has long been abused and has lost its relevance as a ranking factor. But the recommendation here is to use the meta keywords tag if the site owner is not worried about making her site’s use of keywords public. Moreover, Yahoo still considers it one of several factors that influence ranking on its SERPs. The data shows that Google and Bing still ignore the meta keywords tag.

7) Use of Parameter Driven URLs:
The recommendation is not to use them unless absolutely necessary (for example, where the CMS in use relies on parameters). Even then, URLs should be restricted to a maximum of two parameters.

Spiders find it hard to parse parameter-driven URLs and often encounter duplicate content through them. The correlation data reveals that pages with static URLs tend to rank higher on the SERPs.
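
A quick way to flag URLs that exceed the suggested two-parameter limit, sketched with Python's standard library (the limit comes from the recommendation above; the function itself is hypothetical):

```python
from urllib.parse import urlparse, parse_qsl


def too_many_parameters(url, limit=2):
    """Return True if the URL carries more query parameters than the limit."""
    return len(parse_qsl(urlparse(url).query)) > limit


print(too_many_parameters("http://example.com/shoes?colour=red&size=9"))          # False
print(too_many_parameters("http://example.com/p?id=12&cat=3&sess=abc&sort=asc"))  # True
```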

8) Usefulness of Footer Links:
Footer links are to be used sparingly and a limit of 25 relevant navigational links is recommended. This number is not a hard limit. The intent behind the keywords chosen for such links is an important issue.

Google has been known to penalise sites with keyword-stuffed footer links, and its algorithms can detect this manipulation. In the testing, only footers with more than 25 links that looked spammy attracted Google’s automated penalty.
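
The 25-link figure is easy to check for. A sketch, assuming the footer sits inside a <footer> element (the element choice and the class name are assumptions on my part):

```python
from html.parser import HTMLParser


class FooterLinkCounter(HTMLParser):
    """Count anchor tags appearing inside a <footer> element."""

    def __init__(self):
        super().__init__()
        self.in_footer = False
        self.footer_links = 0

    def handle_starttag(self, tag, attrs):
        if tag == "footer":
            self.in_footer = True
        elif tag == "a" and self.in_footer:
            self.footer_links += 1

    def handle_endtag(self, tag):
        if tag == "footer":
            self.in_footer = False


counter = FooterLinkCounter()
counter.feed("<footer>" + '<a href="/page">link</a>' * 30 + "</footer>")
if counter.footer_links > 25:
    print(f"{counter.footer_links} footer links - consider trimming to about 25 relevant ones")
```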

9) Use of Javascript and Flash:
Using JavaScript or Flash for navigation that search engines must follow to index a site’s pages is not recommended.

The main reason proposed is that search engines still parse Flash and JavaScript far less reliably than HTML. It is a risk that can easily be avoided, and navigation built with Flash or JavaScript is associated with lower search engine rankings.

10) Use of 301 Redirects:
The best way to redirect pages is with a 301 redirect, though it has an inherent disadvantage: it depletes roughly 1% to 10% of link juice. That is still far better than a JavaScript or 302 redirect, which passes no juice at all. Meta refreshes appear to perform similarly to 301s, but the search engines still prefer 301 redirects.
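
One way to confirm that an old URL really answers with a 301 rather than a 302 or a meta refresh is to request it without following redirects. A sketch using Python's standard library (the URL is a placeholder):

```python
from http.client import HTTPConnection
from urllib.parse import urlparse


def redirect_status(url):
    """Request a URL without following redirects; return the status code and Location header."""
    parts = urlparse(url)
    connection = HTTPConnection(parts.netloc, timeout=10)
    connection.request("HEAD", parts.path or "/")
    response = connection.getresponse()
    return response.status, response.getheader("Location")


# A permanently moved page should report something like (301, "http://example.com/new-page").
status, location = redirect_status("http://example.com/old-page")
print(status, location)
```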

11) Blocking Content from Search Engines:
The meta robots tag (noindex, follow) is a better method of blocking pages from the bots than robots.txt. Robots.txt should be used sparingly, only in cases where the meta robots tag cannot be employed.

The main reason behind this conclusion is that robots.txt stops the spiders from seeing what is on the blocked page, even though they still know the page exists. Such pages can then appear on the SERPs as just a URL with no title or description, sometimes referred to as partially indexed pages (PIP).

Another negative is that any link juice a blocked page has accrued cannot flow onward: the bots cannot see the links on the blocked page, cannot follow them, and so the juice is never distributed through them.

Using noindex, follow in the meta robots tag keeps the page out of the index but allows link juice to flow to other pages through the links on it. This is highly recommended.
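
The markup involved is a single tag in the page's <head>. A minimal sketch that emits it for, say, a printer-friendly duplicate (the helper function is my own illustration):

```python
NOINDEX_FOLLOW = '<meta name="robots" content="noindex, follow">'


def head_for_blocked_page(title):
    """Build a minimal <head> for a page that should stay out of the index
    but still pass link juice through the links it contains."""
    return f"<head><title>{title}</title>{NOINDEX_FOLLOW}</head>"


print(head_for_blocked_page("Printer-friendly version"))
# <head><title>Printer-friendly version</title><meta name="robots" content="noindex, follow"></head>
```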

12) Google SearchWiki’s Effect on Rankings:
SearchWiki does not have any influence on rankings. The recommendation is not to spend time or resources on it.

Google SearchWiki allows a user to customise her search results by re-ranking, deleting, adding to, and commenting on them. The user has to sign in to her Google account, and the customisations only affect how results appear for her while logged in, i.e. personalised search.

13) Negative Links from Bad Neighbourhood:
The effect of links from bad neighbourhoods to good neighbourhoods is minimal, provided the links are one-way and not reciprocal.

Tests show that the search engines can easily identify the hubs in a particular industry and recognise the legitimate resources. All good sites receive spammy links; it is a fact of life on the web. As long as spammy inbound links are a small proportion of the total and the site is reasonably well linked to the legitimate resource hubs for its industry, there is no cause for concern.

14) Relationship between Traffic and Rankings:
User traffic metrics for a given site do not determine that site’s rankings. Though traffic and rankings are highly correlated, one does not cause the other: popular websites attract more links, and it is the links that drive the higher rankings.

Search engines have tried metrics like the time a user spends on a page for ranking results, but this turned out to be a noisy signal. The absence of clicks on a result in their SERPs is a more trusted signal for them when improving their search results.

The recommendation is not to trust unique visitor counts (as opposed to total visits), since these can be undermined by fundamental flaws in web analytics software. It is much better to look at traffic by source and compare it from one month to the next.
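
A sketch of the kind of month-to-month, per-source comparison being suggested, with entirely made-up figures:

```python
# Visits by traffic source; the numbers are invented purely for illustration.
last_month = {"organic": 12400, "direct": 3100, "referral": 1800}
this_month = {"organic": 13950, "direct": 3000, "referral": 2100}

for source in last_month:
    change = (this_month[source] - last_month[source]) / last_month[source] * 100
    print(f"{source:>8}: {last_month[source]:>6} -> {this_month[source]:>6} ({change:+.1f}%)")
```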

The tests conducted shed interesting light on several assumptions behind the SEO best practices outlined above. The full SEOmoz post on SEO Best Practices is an interesting read, along with the comments that follow it.

Ravi Venkatesan is a senior pay per click marketing consultant at Netconcepts, a well established and highly trusted Auckland search marketing consultancy that endeavours to provide best-practice white hat SEO and pay per click services.

