I’ve been seeing indications that, over the past couple of years, Google has shifted the weighting of the roughly 200 signals they blend into their ranking soup. It used to be that PageRank, along with the number of keyword references on a page, were among the strongest signals determining which page comes up highest in the search results, but I’ve seen more and more cases where PageRank and keyword density appear weaker than they once were. I see a lot of reasons to believe that quality ratings are now weighted more heavily in rankings, particularly for more popular search keywords. Google continues to lead the pack in the search marketplace, so their evolution will likely pull their competitors in similar directions, too.
So, what is my evidence that Google’s quality criteria are becoming more influential in their rankings than PageRank and other classic optimization elements? Read on and I’ll explain. Nearly since inception, Google has focused strongly on usability and user experience to create a service that Marissa Mayer (Google’s Vice President of Search Products & User Experience) has described as a useful tool made approachable and accessible by layering a usable interface on top of it. They haven’t stopped working to improve the user experience on their site; they’re practicing continuous quality improvement in this area. Way back in 2002, Mayer reported:
“we’re user testing almost every week. We’ll do a site-wide test once a month or so, with some tasks, but more free-form, just to see where people go, where they encounter problems. The other three weeks of the month, we test specific features.”
You can safely bet that they’ve increased or evolved their user testing methods here in 2006. Google has been hiring Quality Evaluators, Quality Raters, and Usability Researchers at an astonishing pace during this past year.
In 2005, Henk van Ess reported various details of instructions that Google provides to their human evaluators to use in rating the quality of pages appearing in their search results. His first two entries on the subject were eye-openers for many:
Google, Yahoo, and MSN all use some human evaluators in their ongoing fight against spam sites. John Battelle relates in his book The Search (2005), p. 240, that Yahoo exercises editorial discretion to customize some types of content for various keyword SERPs. He also wrote, “Google sees the problem as one that can be solved mainly through technology — clever algorithms and sheer computational horsepower will prevail. Humans enter the search picture only when algorithms fail — and then only grudgingly.” Yet Google, more than any other engine, has apparently expanded the role of humans in evaluating their search results, and that role has gone beyond just red-flagging spam sites. They are apparently rating pages on quality and appropriateness for the search keyword as well.
Wall Street reports that Google recently changed their internal focus from deploying more products to improving core services and making them integrate better with each other. You can read the subtext here: improve the quality of their core products, and their central product is search.
They’ve also turned attention to the quality of the pages that ads running on their network link to. Jeremy Schoemaker (aka “Shoemoney”) and others have just reported that Google is now rating the quality of AdWords landing pages. (Read the AdWords blog entry on the subject, too.) This is another indicator of their dedication to the quality of the user experience — they’re essentially penalizing advertisers whose landing pages are lower quality.
So, there’s plenty of evidence that Google is now obsessed with taking the quality of their SERPs to the next level by testing whether the pages appearing in results are apropos, and then altering the rankings of the results to better target users’ desires and intentions. How do they do this?
You can read the documents exposed on Henk’s Search Bistro site for specifics, but I can summarize a bit:
- Their evaluators are presented the search results page for a keyword. They must check out each of the top pages appearing for that keyword term, and vote on whether the page is good quality or not.
- If the page content seems inappropriate for the keyword, it will get a bad rating for that keyword.
- They particularly will give a bad rating to pages which are primarily composed of ads.
- They particularly will give a bad rating to pages which are primarily composed of affiliate links, unless the site has included some significant value-add content in conjunction with the affiliate content.
- Google wants the pages linked in their SERP for a keyword to not all contain identical content, because they believe this would be a bad user experience. So, syndicated content appearing on multiple sites may only rank well for the originator or most authoritative site, according to how their algorithms identify authoritativeness.
- Pages which are hiding keywords or using misleading TITLE text or META descriptions may expect to be negatively rated. (I recently blogged in detail about the new emphasis on META descriptions.)
- It’s quite possible that Google could be using a process like TrustRank, wherein their evaluators rate a sample set of pages from a large site, and the resulting average trust rating is then applied across all the site’s pages. (Some of us have speculated that the mysterious “IndyRank” element accidentally exposed in a Google error page could refer to a human rating value for a site or page.) If this is occurring, your entire site could suffer if you have one section that scores low quality under their guidelines.
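The sampling-and-averaging idea in that last point can be sketched in a few lines. To be clear, everything below is speculative: the function names, the 0.0–1.0 rating scale, and the 50/50 blend with a classic rank signal are all my own illustrative assumptions, not anything Google has published.

```python
# Hypothetical sketch of a TrustRank-style site score: human raters score only
# a sample of a site's pages, and the average is then applied site-wide.
# The scale and the blending weights are assumptions, purely for illustration.

def site_trust_score(sampled_ratings):
    """Average the human quality ratings (assumed 0.0-1.0) for a sample of pages."""
    if not sampled_ratings:
        raise ValueError("need at least one rated page")
    return sum(sampled_ratings) / len(sampled_ratings)

def adjusted_rank(base_rank, trust):
    """Blend a page's classic rank signal with the site-wide trust average.
    The 50/50 weighting here is purely illustrative."""
    return 0.5 * base_rank + 0.5 * trust

# One low-quality section drags every page's score down:
ratings = [0.9, 0.8, 0.2]          # two good pages, one ad-heavy page
trust = site_trust_score(ratings)  # ~0.63 for the whole site
print(round(adjusted_rank(0.9, trust), 3))  # even a strong page is pulled down
```

The point of the toy model is the last line: under this kind of scheme, a single low-quality section lowers the average, and every page on the site inherits the penalty.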
The end result of these trends is that the practice of SEO is transforming from a stew of technical wizardry into the application of good user-centered design methods. Sure, we’ll still need to ensure that pages include actual text content and expose database content through crawlable links. But tech tricks for increasing the relevancy of a page for a keyword may not be sustainable over the long term.
Stephan Spencer and others have said that the SEO industry might not even survive long-term, because SEO is parasitically dependent upon the failures of designers to build usable websites, and the failures of search engines to perfectly rank pages for their subject matter.
I’ve attended Search Engine Strategies conferences for many years, and I’ve heard engineers from each of the major engines recommend that webmasters concentrate on usability more than on how to game the SERPs, but Google appears to be actually quantifying usability. If they master this, it really could be the end of the SEO industry. The other SEs will just play follow-the-leader when this happens.
Are you an SEO professional who is ready for this sea change? Are you educated about usability testing and user-centered design? One book I recommend on the subject is Steve Krug’s “Don’t Make Me Think”.