Natural Search Blog


Blindfolded SEO Audit Part 1

SEO consultants spend a lot of time looking at websites. Like web designers, SEOs definitely “see” websites very differently from the average web user. Some days it feels a little like the Matrix: instead of seeing the streaming code, you see the people, cars and buildings that the code signifies. Having also done web design, I find the effect heightened even more, though perhaps inverted … instead of seeing shoes, cookware, and dog collars, I see title tags, heading tags, URL constructs and CSS.

Like any skill, though, it takes continual honing and refining, along with ongoing education. This is part of the concept behind the 60-Second Website Audit: training the eye to quickly identify key SEO issues and potential problem areas.

I’ve joked that, after so many audits, SEO consultants could probably do them blindfolded. So, whip out the blindfold and let’s put that to the test.

(more…)

AT&T Acquires YP.com for $3.85 Million

AT&T has acquired YP.com for $3.85 million. I distinctly recall when AT&T previously bought YellowPages.com for $100 million back in 2004. Does this make sense?!?

Back in 2004, I laughed and laughed and laughed, and I told coworkers that it was a huge waste of money, because, I said, “they won’t be able to buy themselves into the top position for searches for ‘Yellow Pages’.” SuperPages.com long held that distinction under my SEO direction, and I knew that buying the term in a domain name alone would not undo all the work we’d done to rank at the top for it. As time passed, however, YellowPages.com has indeed deposed the SuperPages forerunner. (more…)

Amazon’s Secret to Dominating SERP Results

Many e-tailers have looked with envy at Amazon.com’s sheer omnipresence within the search results on Google. Search for any product ranging from new book titles, to new music releases, to home improvement products, to even products from their new grocery line, and you’ll find Amazon links garnering page 1 or 2 rankings on Google and other engines. Why does it seem like such an unfair advantage?

Can you keep a secret? There is an unfair advantage. Amazon is applying conditional 301 URL redirects through their massive affiliate marketing program.

Most online merchants outsource the management and administration of their affiliate program to a provider who tracks all affiliate activity using special tracking URLs. These URLs typically break the link association between affiliate sites and merchant site pages. As a result, most of these merchants’ natural search traffic comes from brand-related keywords, as opposed to long tail keywords. Most merchants can only imagine the sudden natural search boost they’d get from tens of thousands of existing affiliate sites deeply linking to their website pages with great anchor text. But not Amazon!

Amazon’s affiliate (“associate”) program is fully integrated into the website. So the URL that you get by clicking from Guy Kawasaki’s blog, for example, to buy one of his favorite books from Amazon doesn’t route you through a third-party tracking URL, as would be the case with most merchant affiliate programs. Instead, you’ll find it links to an Amazon.com URL (to be precise: http://www.amazon.com/exec/obidos/ASIN/0060521996/guykawasakico-20), with the notable associate’s name at the end of the URL so Guy can earn his commission.

However, request that same page with your user agent set to Googlebot (via a user-agent switcher, for example), and you’ll see what Googlebot (and other crawlers) receive for that URL: http://www.amazon.com/Innovators-Dilemma-Revolutionary-Business-Essentials/dp/0060521996 delivered via a 301 redirect. That’s the same URL that shows up in Google when you search for this book title.
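If you want to reproduce that check yourself, here is a minimal sketch, assuming Python’s third-party requests library and using the associate URL quoted above, that fetches the page with a typical Googlebot User-Agent string and inspects the redirect rather than following it:

```python
import requests

# Associate-tagged URL from the post; the User-Agent string below is a
# commonly seen Googlebot identifier, used here purely for illustration.
url = "http://www.amazon.com/exec/obidos/ASIN/0060521996/guykawasakico-20"
headers = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
}

# allow_redirects=False lets us inspect the raw response instead of following it.
response = requests.get(url, headers=headers, allow_redirects=False)

print(response.status_code)              # a conditional setup returns 301 here
print(response.headers.get("Location"))  # ...pointing at the keyword-rich /dp/ URL
```

A 200 response with no Location header, by contrast, would suggest the page is being served the same way to everyone.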

So if you are a human coming in from affiliate land, you get one URL used to track your referrer’s commission. If you are a bot visiting this URL, you are told these URLs now redirect to the keyword URLs. In this way, Amazon is able to have its cake and eat it too – provide an owned and operated affiliate management system while harvesting the PageRank from millions of deep affiliate backlinks to maximize their ranking visibility in long tail search queries.
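To make the mechanics concrete, here is a minimal sketch of what such a conditional redirect boils down to. This is my own illustration of the technique, not Amazon’s actual code; the bot pattern and function names are hypothetical:

```python
import re

# Hypothetical list of crawler user-agent fragments for illustration only.
BOT_PATTERN = re.compile(r"googlebot|slurp|msnbot", re.IGNORECASE)

def handle_associate_url(user_agent, tracking_url, keyword_url):
    """Illustrative only: decide what an associate-tagged request receives.

    tracking_url -- the /exec/obidos/ASIN/... style URL that credits the associate
    keyword_url  -- the keyword-rich /dp/ style URL that appears in the SERPs
    """
    if BOT_PATTERN.search(user_agent or ""):
        # Crawlers are told the URL has permanently moved to the keyword version,
        # so link equity from affiliate backlinks consolidates on one canonical URL.
        return ("301 Moved Permanently", {"Location": keyword_url})
    # Human visitors get the tracking URL's page so the associate earns a commission.
    return ("200 OK", {"Content-Type": "text/html"})
```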

(Note I’ve abstained from hyperlinking these URLs so bots crawling this content do not further entrench Amazon’s ranking on these URLs, although they are already #4 in the query above!).

So is this strategy ethical? Conditional redirects are generally a no-no because they send mixed signals to the engine – is the URL permanently moved or not? If it is, but only for bots, then you are crossing the SEO line. But in Amazon’s case it appears that searchers, as well as general site users, also get the keyword URL, so it is merely the affiliate-referred visitors who get an “old” URL. If that’s the case across the board, it would be difficult to argue that Amazon is abusing the concept; rather, they have cleverly engineered a solution to a visibility problem that other merchants would replicate if they could. In fact, from a searcher’s perspective, were it not for Amazon, many of the long tail product queries consumers conduct would return zero recognizable retail brands to buy from, with all due respect to PriceGrabber, DealTime, BizRate, NexTag, and eBay.

As a result of this long tail strategy, I’d speculate that Amazon’s natural search keyword traffic distribution looks more like 40/60 brand to non-brand, rather than the typical 80/20 or 90/10 distribution curve most merchants (who lack affiliate search benefits) receive.

Brian


Google Takes RSS & Atom Feeds out of Web Search Results

Google announced this week that they have started removing RSS & Atom feeds from their search engine results pages (“SERPs”) – something that makes a lot of sense in terms of improving quality/usability in their results. (They also describe why they aren’t doing that for podcast feeds.)

This might confuse search marketers about the value of providing RSS feeds on one’s site for the purposes of natural search marketing. Here at Netconcepts, we’ve recommended using RSS for retail sites and blogs for quite some time, and we continue to do so. Webmasters often pull in syndicated feeds to provide helpful content and utilities on their sites, so offering feeds can help you gain external links pointing back to your site when those webmasters display your feed content on their pages.

Google has removed RSS feed content from its regular SERPs, but that doesn’t reduce the benefit of the links produced when those feeds are adopted and displayed on other sites. When developers use RSS and Atom feeds, they pull in the feed content and typically redisplay it on their pages in regular HTML formatting. When those pages link back to you, as many feed-display pages do, the links pass PageRank back to the site originating the feeds, building up its ranking value.
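For illustration, here is a minimal sketch of what a typical feed-display script on a consuming site does, using Python’s standard library and a hypothetical feed URL. The point is that each item ends up as an ordinary, crawlable HTML link back to the originating site:

```python
from urllib.request import urlopen
from xml.etree import ElementTree as ET

FEED_URL = "http://www.example.com/feed.rss"  # hypothetical feed address

with urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Each <item> becomes an ordinary HTML link on the consuming site's page --
# exactly the kind of crawlable backlink that passes PageRank.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="")
    link = item.findtext("link", default="")
    print('<li><a href="%s">%s</a></li>' % (link, title))
```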

So, don’t stop using RSS or Atom feeds!


Advice on Subdomains vs. Subdirectories for SEO

Matt Cutts recently revealed that Google is now treating subdomains much more like subdirectories of a domain, in the sense that Google wishes to limit how many results from a single site show up for a given keyword search. In the past, some search marketers attempted to use keyworded subdomains as a method for improving referral traffic from search engines, deploying many keyword subdomains for the terms they hoped to rank well for.

Not long ago, I wrote an article on how some local directory sites were using subdomains in an attempt to achieve good ranking results in search engines. In that article, I concluded that most of these sites were ranking well for other reasons not directly related to the presence of the keyword as a subdomain — I showed some examples of sites which ranked equally well or better in many cases where the keyword was a part of the URI as opposed to the subdomain. So, in Google, subdirectories were already functioning just as well as subdomains for the purposes of keyword rank optimization. (more…)

Double Your Trouble: Google Highlights Duplication Issues

Maile Ohye posted a great piece on Google Webmaster Central about the effects of duplicate content caused by common URL parameters. There is great information in that post, not least of which is that it validates exactly what a few of us have stated for a while: duplication should be addressed because it can water down your PageRank.
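As a rough sketch of the kind of cleanup this implies, content-neutral parameters can be stripped so that duplicate URLs collapse to one canonical form. The parameter names below are hypothetical examples, not a definitive list:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical examples of parameters that create duplicate URLs without
# changing the page content; a real list depends on the site in question.
IGNORED_PARAMS = {"sessionid", "sid", "affid", "sort", "ref"}

def canonicalize(url):
    """Strip content-neutral parameters so duplicate URLs collapse to one."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

# Both variants reduce to the same canonical URL:
print(canonicalize("http://www.example.com/widgets?sessionid=ABC123&color=blue"))
print(canonicalize("http://www.example.com/widgets?color=blue&ref=partner42"))
```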


Maile suggests a few ways of addressing dupe content, and she also reveals a few details of Google’s workings that are interesting, including: (more…)

Now MS Live Search & Yahoo! Also Treat Underscores as Word Delimiters

Earlier, I highlighted Stephan’s report on Matt Cutts revealing that Google now treats underscores as word separators. Now Barry Schwartz has done a fantastic follow-up, asking each of the other search engines whether they also treat underscores like dashes and other white-space characters, and they’ve confirmed that they handle them similarly. This is another incremental shift in search engine optimization!
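In practical terms, “treating underscores as word delimiters” just means a URL slug is tokenized into the same keywords whether it uses underscores or hyphens. A toy sketch of that tokenization (my own illustration, not any engine’s actual code):

```python
import re

def slug_tokens(slug):
    """Split a URL slug into keyword tokens, treating _ and - alike."""
    return [token for token in re.split(r"[_\-]+", slug.lower()) if token]

# Previously only the hyphenated form reliably exposed the individual words;
# now both forms yield the same tokens to the engines.
print(slug_tokens("blue_suede_shoes"))   # ['blue', 'suede', 'shoes']
print(slug_tokens("blue-suede-shoes"))   # ['blue', 'suede', 'shoes']
```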

I’ve previously opined that classic SEO may become extinct in favor of usability, and announcements like this fluid handling of underscores tend to support that premise. Google, Yahoo! and MS Live Search have been actively reducing barriers to indexation and ranking through changes like this, plus improved handling of redirection and myriad other changes which both obviate the need for technical optimizers and reduce the ability to artificially influence rankings through technical tweaks.

I continue to think that the need for SEOs may decrease until they’re perhaps no longer necessary, so natural search marketing shops will likely evolve into site-building/design studios, copywriting teams, and usability research firms. The real question is: how soon will it happen?


To Have WWW or Not To Have WWW – That is the Question

Over time, I’ve become a fan of the No-WWW Initiative.

What is that, you might ask? It’s a simple proposal for sites to do away with the WWW-dot-domainname format for URLs and go with the non-WWW version of the domain instead. Managing your site’s main domain/subdomain name is one basic piece of search engine optimization, and this initiative can be a guide for deciding which domain name will become the dominant one for a site. A sketch of the redirect involved appears below; read on for more info…
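As a sketch of how the non-WWW choice gets enforced, here is a minimal, generic WSGI middleware using the hypothetical hostname example.com. It illustrates the single 301 redirect involved and is not a prescription for any particular server setup:

```python
class NoWwwRedirect:
    """Hedged sketch: 301-redirect www.example.com requests to the bare domain."""

    def __init__(self, app, bare_host="example.com"):  # example.com is hypothetical
        self.app = app
        self.bare_host = bare_host

    def __call__(self, environ, start_response):
        host = environ.get("HTTP_HOST", "").split(":")[0]
        if host == "www." + self.bare_host:
            path = environ.get("PATH_INFO", "/") or "/"
            query = environ.get("QUERY_STRING", "")
            location = "http://" + self.bare_host + path + ("?" + query if query else "")
            # One permanent redirect tells both users and crawlers which
            # hostname is canonical.
            start_response("301 Moved Permanently", [("Location", location)])
            return [b"Moved Permanently"]
        # Requests already on the bare domain pass straight through.
        return self.app(environ, start_response)
```

Wrapping your application object as NoWwwRedirect(app) is all this sketch assumes; the same rule could just as easily live in the web server’s configuration.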

(more…)

Subdomains for Local Directory Sites?

Earlier this week, my column on “Domaining & Subdomaining in the Local Space – Part 1” went live at Search Engine Land. In it, I examine how a number of local business directory sites are using subdomains with the apparent aim of getting extra keyword ranking value from them. Typically, they pass the names of cities in the third-level domain names (aka “subdomains”).

In that installment, I conclude that subdomaining for the sake of keyword ranking has no real benefit.

This assertion can really be extended to all other types of sites as well, since the ranking criteria the search engines use are not limited to local info sites. Keywords in subdomains simply have no major benefit.

SEO firms used to suggest that people deploy their content onto “microsites” for all their keywords – a different domain name to target each one. This just isn’t a good strategy. Focus on improving the quality of content for each keyword, each on its own page, and work on your link-building efforts (quality link-building, not unqualified bad-quality links). Piles of keyword domains or subdomains are no quick route to ranking well.


Dupe Content Penalty a Myth, but Negative Effects Are Not

I was interested to read a column by Jill Whalen this past week on “The Duplicate Content Penalty Myth” at Search Engine Land. While I agree with her assessment that there really isn’t a Duplicate Content Penalty per se, I think she perhaps failed to address one major issue affecting websites in relation to this.

Read on to see what I mean.

(more…)
