Natural Search Blog

Tips for Local Search Engine Optimization for Your Site

Increasingly, businesses are becoming aware of Local Search and how optimizing for this channel is vital for those with local outlets. Each of the main search engines has focused effort on its local search tools as a key strategy for continuing growth in online advertising, and the subject has become important enough to merit a special Search Engine Strategies Conference devoted to it tomorrow in Denver. The importance of Local Search is further underscored by stats issued in a press release today by comScore, showing that Local Search continues to gain market share.

So, how exactly could one optimize towards Local Search?

Read on and I’ll outline a few key tips.

Robots Meta Tag Not Well Documented by Search Engines

Those of us who do SEO have been increasingly pleased with the various search engines for providing tools and protocols that help us direct, control, and manage how our sites are indexed. However, the search engines still need to keep much of their workings secret for fear of exploitation by ruthless black-hats seeking to improve page rankings for keywords regardless of appropriateness. This often leaves the rest of us with tools that work in some limited cases but come with little or no documentation on how they actually behave in the complex real world. The Robots META tag is a case in point.

The idea behind the protocol was simple and convenient. It’s sometimes hard to use a robots.txt file to manage all the types of pages delivered by large, dynamic sites. So, what could be better than using a tag directly on a page to tell the search engine whether to spider and index the page or not? Here’s how the tag should look if you wanted a page to NOT be indexed, and for links found on it to NOT be crawled:

<meta name="ROBOTS" content="noindex,nofollow">

Alternatively, here’s the tag if you wanted to expressly tell the bot to index the page and crawl the links on it:

<meta name="ROBOTS" content="index,follow">

But, what if you wanted the page to not be indexed, while you still wanted the links to be spidered? Or, what if you needed the page indexed, but the links not followed? The major search engines don’t clearly describe how they treat these combinations, and the effects may not be what you’d expect. Read on and I’ll explain how using this simple protocol with the odd combos can have some undesirable effects.
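Whatever the engines do with the odd combinations, the directives a page declares are easy to inspect yourself. Here’s a minimal sketch, using only Python’s standard library, that pulls the robots directives out of a page’s HTML; the sample page and the class name are illustrative only:

```python
# Extract robots META directives ("noindex", "follow", etc.) from HTML
# using only the standard library. Directive names and matching are
# treated case-insensitively, as in the examples above.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            # Directives are comma-separated inside the content attribute
            self.directives.update(
                d.strip().lower() for d in content.split(",") if d.strip()
            )

page = '<html><head><meta name="ROBOTS" content="noindex,follow"></head></html>'
parser = RobotsMetaParser()
parser.feed(page)
print(sorted(parser.directives))  # ['follow', 'noindex']
```

Running this against your own pages is a quick way to verify that a template is actually emitting the combination you intended.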


New WordPress Plugin for SEO

I’ve just released “SEO Title Tag”, a plugin for WordPress. As the name implies, it allows you to optimize your WordPress site’s title tags in ways not supported by the default WordPress installation.

Get the plugin now: SEO Title Tag WordPress Plugin

I’d love your feedback, as this is my first WordPress plugin.


Need more traffic? Try Image Search Optimization

With all the focus on optimization of textual page content and near-obsessive concentration on text-oriented web search engine results pages (“SERPs”), most webmasters and SEOs neglect an area of their potential repertoire which could provide a lot of benefit to their site and business: image search optimization.

One aspect of effective optimization is to keep your eyes open for all the various avenues for referral traffic which can convert to a sale on your site. Depending upon the products or services you offer, it may be very valuable to consider the possibilities of optimizing for the Image Search utilities offered by the various search engines. Even if your site isn’t a product or services website, if you’re looking to increase organic referral traffic, optimizing for image search could work well for you. Read on and I’ll explain…
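One of the simplest on-page signals for image search is the alt attribute, since it’s among the few textual cues the engines can read for an image. As a starting point, here’s a small audit sketch (standard library only; the sample HTML is purely illustrative) that flags images missing descriptive alt text:

```python
# Audit sketch: list <img> tags whose alt attribute is missing or empty.
# Image search engines rely heavily on such textual signals.
from html.parser import HTMLParser

class ImgAltAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if not attrs.get("alt", "").strip():
            # Record the src so you know which image needs attention
            self.missing_alt.append(attrs.get("src", "(no src)"))

html = (
    '<img src="widget.jpg" alt="Blue ceramic widget">'
    '<img src="logo.gif">'
)
auditor = ImgAltAuditor()
auditor.feed(html)
print(auditor.missing_alt)  # ['logo.gif']
```

Descriptive file names and surrounding captions matter too, but an alt-text audit like this is an easy first pass over an existing site.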

The end for Google bombing?

Reports are coming in that “Google bombing” doesn’t really work any more. Specifically, the theory is that now at least one of the words in the hyperlink text has to appear on the page being linked to for the Google bomb to still be effective. But if that’s the case, why is George W. Bush’s bio page still #1 for “miserable failure” in Google? I scoured his page for the word “miserable” and the word “failure” and found neither one! 😉 Look here and you’ll see that Google couldn’t find the words either.

In any event, do bear in mind that focusing all your search optimization efforts on offpage factors while neglecting the onpage factors won’t bode well for your Google positions, I’m sure of that. And if you’ve hired an SEO firm that’s overly focused on linkbuilding and not giving any attention to fixing your website’s search engine unfriendliness, you should probably reevaluate your decision.


Search Engine Optimization as an industry shouldn’t exist

Maybe I’m being a bit provocative here, but I don’t see SEO as a viable industry long-term.

Would you hire a company to produce a shoddy TV commercial for you just to turn around and hire a TV commercial optimization company to fix it?

If not, why would you be amenable to such a scenario with your web site? It just doesn’t make sense.

Search engine friendliness, as well as usability, should be a core competency of web developers. Don’t hire a web vendor that isn’t going to do a proper job of the website development from the get-go — including making the site “sing” for the search engines.

SEO isn’t a black art like Seth Godin opines. It’s scientific, measurable, and testable. And there’s a wealth of information online about SEO, freely available. There’s no excuse for a web design firm to deliver anything but a website that’s been optimized for search engines.


Spiders like Googlebot choke on Session IDs

Many ecommerce sites have session IDs or user IDs in their page URLs. This tends to cause pages either to not get indexed by search engines like Google, or to get included many times over, clogging up the index with duplicates (this phenomenon is called a “spider trap”). Furthermore, having all these duplicates in the index causes the site’s importance score, known as PageRank, to be spread out across the duplicates (this phenomenon is called “PageRank dilution”).

Ironically, Googlebot regularly gets caught in a spider trap while spidering one of its own sites – the Google Store (where they sell branded caps, shirts, umbrellas, etc.). The URLs of the store are not very search engine friendly: they are overly complex and include session IDs. This has resulted in 3,440 duplicate copies of the Accessories page and 3,420 copies of the Office page, for example.

If you have a dynamic, database-driven website and you want to avoid your own site becoming a spider trap, you’ll need to keep your URLs simple. Try to avoid having any ?, &, or = characters in the URLs. And try to keep the number of “parameters” to a minimum. With URLs and search engine friendliness, less is more.
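If stripping session IDs from your URLs entirely isn’t feasible, you can at least normalize the URLs you emit in links or redirects. Here’s a minimal sketch using Python’s standard library; the parameter names in SESSION_PARAMS and the example URL are assumptions for illustration — real sites use all sorts of names:

```python
# Strip session-tracking parameters from a URL so every visitor
# (and every spider) sees one canonical address for the same page.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical list of common session parameter names; adjust for your site.
SESSION_PARAMS = {"sessionid", "sid", "jsessionid", "phpsessid"}

def strip_session_params(url):
    parts = urlsplit(url)
    # Keep only the query parameters that aren't session trackers
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = "http://example.com/store/item?category=caps&sid=A1B2C3"
print(strip_session_params(url))
# http://example.com/store/item?category=caps
```

The same normalization logic can drive a 301 redirect from the session-laden URL to the clean one, so spiders consolidate PageRank onto a single copy of each page.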
