Natural Search Blog


Yellow Pages Guerilla Ad Campaign

I was speaking at the Search Engine Strategies (“SES”) Conference in Toronto a couple of weeks ago, and was impressed by the YellowPages.ca booth in the exhibit hall:

YellowPages.ca booth at SES Toronto Conference

YellowPages.ca Search Graph

I’ve seen other, equally large booths for online yellow pages companies, but this one seemed particularly attention-getting and inviting. The glowing yellow desk and the simple design made the booth very friendly-looking, and the geek in me was drawn to the near-real-time search volume graph they had playing on one screen. (more…)

Local Search Behemoth InfoSpace Cashed Out

Last year when InfoSpace decided to sell off Switchboard, other directories, and their mobile services, I wondered if they were just cashing out. Yesterday’s New York Times article, “Once an Internet Giant, InfoSpace Dismantles Itself”, would appear to verify that they did indeed cash out.

InfoSpace (more…)

Privacy Policy Could Be Site Quality Signal

Privacy Policies & Personal Data

Search engines have increasingly gotten involved in protecting end users from hostile and intrusive elements on the internet, and they’ve become more active in privacy issues as consumers get more educated about data privacy. Ask.com has tried to differentiate itself by being progressive about communicating its data retention policy and by enabling users to define how long their data is retained, for instance, while Google has revised its data retention policy and worked aggressively to block or warn end users about websites containing adware, spyware, and other exploits. Yahoo! even recently paired up with McAfee to assess and improve the safety of sites displayed in its search results.

One aspect of search rankings I’ve written about before is the theory of a site quality score — a score very likely applied by Google (and, to lesser degrees, by Yahoo! and Microsoft Live Search) to quantify how much an engine may trust a site, both for ranking purposes and for users’ safety. A number of factors might feed into such a quality score (including ratings from Google’s human quality auditors), and one major factor could be a site’s Privacy Policy. (more…)
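To illustrate how trivially such a signal could be derived, here’s a minimal, purely hypothetical sketch in Python (using the third-party requests library) that checks whether a site’s homepage mentions a privacy policy at all. This is my own speculation about one conceivable input, not anything Google has confirmed doing:

```python
import requests  # third-party HTTP client: pip install requests

PRIVACY_HINTS = ("privacy policy", "privacy-policy", "/privacy")

def has_privacy_policy(homepage_url: str) -> bool:
    """Crude heuristic: fetch the homepage and scan the raw markup for
    any mention of a privacy policy (e.g., a footer link to /privacy).
    A real quality-scoring pipeline would be vastly more sophisticated."""
    html = requests.get(homepage_url, timeout=10).text.lower()
    return any(hint in html for hint in PRIVACY_HINTS)

# Example: one boolean input among many hypothetical quality factors
print(has_privacy_policy("http://www.example.com/"))
```

A real engine would presumably weigh many such factors together; this only shows how cheaply a “has a privacy policy” bit could be computed at crawl time.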

Google Maps Introduces User Review Snippets in Listings

The Google Lat Long Blog announced today that they’ve introduced little snippets of a user review with each business listing for which they have review data:

User Review Snippets in Google Maps

To me, this seems like a bit of an experimental feature, since I tend to want to see a sampling of multiple reviews to get a balanced picture of what to expect from a business. Of course, one can click through and view multiple reviews, but why would I only want to see one sample — is something being done to select the most typical review for the business, or are they chosen at random? (more…)

Whitepages.com Acquiring Snapvine, Focuses On Community Development

WhitePages.com Snapvine Merger

WhitePages.com is acquiring Snapvine, a service that allows people to associate audio files with various resources like social networks, photos, text, and blogs. Snapvine facilitates voice blogs, similar to podcasting, but perhaps with a little greater ease.

WhitePages states on their blog that they’ll use Snapvine’s technology to provide their users with free, private voicemail boxes. In addition, WhitePages will roll out other features such as email and SMS services.

I think this signals that WhitePages.com will be pursuing community development as an ongoing strategy to maintain and build their traffic. This could be a really strong strategy — encouraging community engagement could drive up usage and associated ad revenues considerably for the residential listings directory. WhitePages.com also offers yellow pages directory service through a partnership with Idearc’s Superpages.com.

Considering the rise of Twitter and other mobile phone services, VOIP applications like Snapvine could be poised to be the next big thing.

The Seattle Post-Intelligencer reports that the deal likely comes in below previous valuations for Snapvine.

SMX Advanced Keynote Addresses

We’re at the SMX Advanced conference here in Seattle this week. It’s been very interesting, fun and educational.

Both keynote interviews were noteworthy.

First, on Tuesday morning, Danny Sullivan interviewed Kevin Johnson, the President of the Platform & Services Division at Microsoft:

Danny & Kevin - Keynote Interview

Johnson spoke about their new Live Search Cashback program, which offers consumers a cash-back rebate on purchases made online. Johnson stated that Microsoft feels the future of online search marketing is headed in that direction. He also mentioned a number of times that Microsoft is dedicated to the concept of multiple choices in the marketplace for software and search services — something which made a lot of audience members chuckle a bit.

Related to Microsoft’s Cashback program, (more…)

Should Businesses Rename Themselves For Better Search Traffic?

Mike Blumenthal has a great article this week going over how businesses may opt to rename themselves for purposes of local search engine optimization within Google Maps.

As he mentioned, I’d previously listed this idea in my somewhat tongue-in-cheek post on “Extreme Local Search Optimization Tactics” some time back.

While my Tactics were intended to be a bit over-the-top, the tactic is indeed likely to work to varying degrees in different search engines and internet yellow pages directories, as Mike outlines. I should note that I only endorse the engineering of business names for purposes of branding and for targeting business-category, product, or service terms for offerings the company involved actually provides. (more…)

Amazon’s Secret to Dominating SERP Results

Many e-tailers have looked with envy at Amazon.com’s sheer omnipresence within the search results on Google. Search for any product ranging from new book titles, to new music releases, to home improvement products, to even products from their new grocery line, and you’ll find Amazon links garnering page 1 or 2 rankings on Google and other engines. Why does it seem like such an unfair advantage?

Can you keep a secret? There is an unfair advantage. Amazon is applying conditional 301 URL redirects through their massive affiliate marketing program.

Most online merchants outsource the management and administration of their affiliate program to a provider who tracks all affiliate activity, using special tracking URLs. These URLs typically break the link association between affiliate and merchant site pages. As a result, most natural search traffic comes from brand related keywords, as opposed to long tail keywords. Most merchants can only imagine the sudden natural search boost they’d get from their tens of thousands of existing affiliate sites deeply linking to their website pages with great anchor text. But not Amazon!

Amazon’s affiliate (“associate”) program is fully integrated into the website. So the URL that you get by clicking from Guy Kawasaki’s blog, for example, to buy one of his favorite books from Amazon doesn’t route you through a third-party tracking URL, as would be the case with most merchant affiliate programs. Instead, you’ll find it links to an Amazon.com URL (to be precise: http://www.amazon.com/exec/obidos/ASIN/0060521996/guykawasakico-20), with the notable associate’s name at the end of the URL so Guy can earn his commission.

However, request that same page with your browser’s user agent spoofed to Googlebot, and you’ll see what Googlebot (and other crawlers) get when they request that same URL: http://www.amazon.com/Innovators-Dilemma-Revolutionary-Business-Essentials/dp/0060521996, delivered via a 301 redirect. That’s the same URL that shows up in Google when you search for this book title.
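If you’d like to verify this yourself, here’s a quick observation sketch in Python (using the requests library) that fetches the affiliate URL twice, once as an ordinary client and once with a Googlebot user-agent string, without following redirects. Keep in mind that Amazon’s handling may have changed since this was written, and real Googlebot verification often involves more than the user-agent header:

```python
import requests  # third-party HTTP client: pip install requests

AFFILIATE_URL = "http://www.amazon.com/exec/obidos/ASIN/0060521996/guykawasakico-20"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

# Fetch the same URL as a browser and as Googlebot, without following
# redirects, and compare the status codes and Location headers returned.
for label, headers in (("browser", {}), ("googlebot", {"User-Agent": GOOGLEBOT_UA})):
    resp = requests.get(AFFILIATE_URL, headers=headers, allow_redirects=False, timeout=10)
    print(label, resp.status_code, resp.headers.get("Location"))
```

If the conditional redirect is in place, the bot-style request should come back as a 301 with the canonical keyword URL in the Location header, while the browser-style request returns the affiliate page directly.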

So if you are a human coming in from affiliate land, you get one URL used to track your referrer’s commission. If you are a bot visiting that URL, you are told it now redirects permanently to the keyword URL. In this way, Amazon is able to have its cake and eat it too – provide an owned-and-operated affiliate management system while harvesting the PageRank from millions of deep affiliate backlinks to maximize its ranking visibility across long tail search queries.
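For readers curious what such a scheme might look like server-side, here is a minimal, hypothetical sketch using Python and Flask. To be clear, this is my own illustration of the general technique, not Amazon’s actual implementation; the route shape, URL mapping, and bot-detection logic are all invented for the example:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

# Hypothetical mapping from an ASIN to its canonical keyword-rich URL.
CANONICAL = {
    "0060521996": "/Innovators-Dilemma-Revolutionary-Business-Essentials/dp/0060521996",
}

BOT_TOKENS = ("googlebot", "slurp", "msnbot")  # crude user-agent sniffing

@app.route("/exec/obidos/ASIN/<asin>/<associate_id>")
def affiliate_landing(asin, associate_id):
    ua = (request.headers.get("User-Agent") or "").lower()
    target = CANONICAL.get(asin)
    if target and any(token in ua for token in BOT_TOKENS):
        # Crawlers get a 301 to the canonical keyword URL, consolidating
        # the PageRank of affiliate deep links onto a single URL.
        return redirect(target, code=301)
    # Human visitors get the affiliate-tracked page, crediting the associate.
    return f"Product page for ASIN {asin} (referral credited to {associate_id})"
```

The design point worth noting is that the 301 consolidates the equity of every affiliate deep link onto the one URL searchers actually see, which is exactly the have-its-cake-and-eat-it-too effect described above.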

(Note I’ve abstained from hyperlinking these URLs so bots crawling this content do not further entrench Amazon’s ranking on these URLs, although they are already #4 in the query above!).

So is this strategy ethical? Conditional redirects are generally a no-no because they send mixed signals to the engine – is the URL permanently moved or not? If it is, but only for bots, then you are crossing the SEO line. But in Amazon’s case it appears that searchers, as well as general site users, also get the keyword URL, so it is merely the affiliate visitors who get an “old” URL. If that’s the case across the board, it would be difficult to argue that Amazon is abusing this concept; rather, they have cleverly engineered a solution to a visibility problem that other merchants would replicate if they could. In fact, from a searcher’s perspective, were it not for Amazon, many long tail product queries consumers conduct would return zero recognizable retail brands to buy from, with all due respect to PriceGrabber, DealTime, BizRate, NexTag, and eBay.

As a result of this long tail strategy, I’d speculate that Amazon’s natural search keyword traffic distribution looks more like 40/60 brand to non-brand, rather than the typical 80/20 or 90/10 distribution curve most merchants (who lack affiliate search benefits) receive.

Brian

Syndicate Your Articles and Blog Posts Without Getting Burned

Have you ever been really impressed with an article or a blog post you’ve read online? Did you link to the article? Or did you copy and paste the content into your own blog, blockquote it, then add your own commentary?

Syndicated content can be a nightmare for SEO, for several reasons. First, there are many different ways to give author attribution. Some may pass link juice to the author; some may not. Many times it’s the home page of the author’s blog or company site that receives the juice, rather than the source article. Second, multiple copies of the same article can result in duplicate content which, in turn, may confuse the spiders and dilute an article’s ability to rank well. In my interview with Matt Cutts, I asked the famed engineer and head of Google’s Webspam team whether it’s better to have the syndicated copies link to the original article on the author’s site, or whether a link to the author’s home page is just as good. Matt answers…

I would recommend linking to the original article on the author’s site. The reason is: imagine if you have written a good article and it is so nice that you have decided to syndicate it out. Well, there is a slight chance that the syndicated article could get a few links as well, and could get some PageRank. And so, whenever Googlebot or Google’s crawl and indexing system sees two copies of that article, a lot of the time it helps to know which one came first; which one has higher PageRank.

So if the syndicated article has a link to the original source of that article, then it is pretty much guaranteed the original home of that article will always have the higher PageRank, compared to all the syndicated copies. And that just makes it that much easier for us to do duplicate content detection and say: “You know what, this is the original article; this is the good one, so go with that.”

Sounds like common sense, doesn’t it? Well, in this case, it’s a little more than that. By intentionally linking to the author’s source article (rather than the generic home page), you are telling the bots that that is the “true” original author of the article. So, like Matt Cutts suggested, if other articles pop up elsewhere, the bots can easily determine what the “authoritative” source is, passing the authority on to the author and helping them get the credit they deserve.
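If you manage syndication programmatically, the fix is as simple as ensuring the attribution footer deep-links to the article’s canonical URL rather than to the blog’s home page. Here’s a trivial Python sketch; the function name and markup are just illustrative, not any particular platform’s API:

```python
def with_attribution(article_html: str, title: str, canonical_url: str) -> str:
    """Append an attribution footer that deep-links to the original
    article (not the home page), so engines can identify the source."""
    footer = (
        f'<p class="syndication-source">Originally published as '
        f'<a href="{canonical_url}">{title}</a>.</p>'
    )
    return article_html + footer
```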

For more great tips from my interview, you can listen to the audio podcast with Matt Cutts. The interview is a little over thirty minutes long.

Happy syndicating!

Town Forces Google To Remove Pics From Street View

North Oaks, a small town in Minnesota, demanded that Google remove pictures of the town from its Street View service.

North Oaks - end of the line for Google Street View

The town apparently has a strong desire to remain private, and since most streets in it are privately owned, they threatened Google with trespassing citations if the pics weren’t removed.

I’m surprised they would attempt to use a trespassing citation for this, since it seems a little bit odd to have open, ungated streets for people to drive along, even with the “No Trespassing” signs. I would think a more viable claim would be copyright infringement, if the town could show that buildings or streets were marked as private, or if it charges some sort of entry fee for vehicles passing through. A similar claim is apparently possible if one takes photos of buildings or places for which admission fees are charged to see or enter — reselling such photos or making money off of them in some way is apparently actionable under copyright law.

Privacy groups have complained about Google Street View since it was introduced, and Google recently responded by deploying software that blurs individuals’ faces in Street View pics.
