Welcome to Natural Search Blog
Natural Search Blog provides articles on search engine optimization, including keyword research, on-page factors, link-building, social media optimization, local search optimization, image search optimization, and mobile SEO.
In addition to natural search optimization topics, we also cover internet marketing, ecommerce, web design, usability, and technology.
Recent Entries
Tempest in a Local Teacup
Okay, so in the ongoing minor brouhaha sparked by my “Extreme Local Search Optimization Tactics”, Dave Naffziger has posted a rebuttal of my recent post.
Just to clarify, if there was any doubt, and to steer unwary search engine optimization newbies away from bad practices, I’m posting another follow-up rebuttal of the rebuttal of the rebuttal. Terribly recursive, I know, but bear with me and you might find this entertaining and informative. (more…)
Posted by Chris of Silvery on 02/05/2007
Filed under: Best Practices, Local Search, Local Search Optimization, Yellow Pages | Tags: black-hat-seo, Local Search, local-search-engine-optimization, local-SEO, Online-Yellow-Pages, SEO
Local Search Mentions in the News
It was cool that Greg Sterling mentioned one of my projects this past week: IdearcLocal.com (the site was previously known as “VZlocal.com”, prior to our recent divestment from Verizon).
It’s always gratifying to have one’s work get noticed!
In a less-than-glowing mention of me, David Naffziger, VP of Strategy and BizDev at Judy’s Book, was critical of my recent article on Extreme Local Search Optimization Tactics. He apparently feels that some of these tips could result in “spamming” online directory listings. I beg to differ, of course. (Not to be too pedantic, but his use of the word “spam” is inaccurate, since spam is the mass-mailing of unsolicited commercial email. My posting had nothing to do with email. Heh!)
Read on for my rebuttal on this and some more local search news.
Posted by Chris of Silvery on 02/01/2007
Filed under: Google, Local Search, Local Search Optimization | Tags: black-hat-seo, idearc, Local Search
PR for your PR: Publicity for Improved PageRank
After a company has engineered its website to enable search engine spidering, it may then graduate to understanding the importance of link-building. But businesses often look for quick technical tricks to achieve those vital inbound links without considering classic offline business strategies. Press releases and similar types of publicity can significantly help with link-building, and should be a major component of a business’s search marketing arsenal.
One question that frequently comes up in search engine optimization is “How can we get a new domain name to rank well and rapidly?” You may have heard of “The Sandbox” in relation to SEO — this is the concept that newer domain names are not yet trusted by search engines, so pages hosted on those domains may not rank as well as would be expected for unique keyword combinations. Getting a good number of inbound links can break a domain out of the sandbox effect, but link-building takes time. Most shortcuts won’t work in this area, and you should run screaming in the other direction if someone promises otherwise, since participation in link networks can get you penalized by the major engines.
But there is one shortcut that not only can work, but is allowed by the search engines: publicity. While the sudden appearance of hundreds or thousands of inbound links to a new domain name could raise red flags with search engines, the exception is when those inbound links come from recognized news sites and blogs. The search engines recognize “burstiness” — the sudden influx of links — in cases where a site has attracted popular attention and lots of articles and blog postings have come out on a particular subject.
Whether you’re trying to launch a new domain name or increase your site’s overall ranking in the search engines, publicity is one of the most effective methods around. Read on and I’ll outline some tips for getting good PR — both kinds!
Posted by Chris of Silvery on 01/23/2007
Filed under: Link Building, Marketing, PageRank | Tags: pr, Press-Release-Optimization, Press-Releases, publicity, seo-sandbox
Leveraging Wikipedia for SEO: it’s no longer about the link juice
Recently when I blogged about the SEO benefits of contributing to Wikipedia, I alluded to some of the complex strategies and tactics around creating entries, keeping your edits from getting reverted, etc.
One of the benefits that can no longer be gained is link juice. That’s because rel=nofollow has just been instituted across all of Wikipedia and its sister sites (such as Wikinews).
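For anyone unfamiliar with the mechanics, nofollow is just a rel attribute on the anchor tag that tells engines not to pass link credit through the link. Here is a minimal sketch of applying it wholesale to links in a chunk of HTML; this is illustrative only, not MediaWiki’s actual implementation:

```php
<?php
// Illustrative only (not MediaWiki's actual code): add rel="nofollow"
// to every anchor tag that doesn't already carry a rel attribute,
// which is the net effect of Wikipedia's change on outbound links.
// A crude regex pass is fine for a sketch; a real HTML parser would
// handle edge cases better.
function nofollow_links( $html ) {
    return preg_replace(
        '/<a\b(?![^>]*\brel=)([^>]*)>/i',  // anchors lacking a rel attribute
        '<a rel="nofollow"$1>',
        $html
    );
}

echo nofollow_links( '<a href="http://example.com/">example</a>' );
// Outputs: <a rel="nofollow" href="http://example.com/">example</a>
?>
```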
Does that mean you no longer need to concern yourself with Wikipedia? Heck no! It is still a valuable source of traffic and, just as importantly, credibility. To have a Wikipedia entry for your company show up in the top 10 in Google for your company name gives a nice credibility boost. Even better if the coverage on your entry is favorable!
Wikipedia is still key to the discipline of “reputation management.” By understanding the ins and outs of Wikipedia — navigating the landmines of notability criteria, not contributing your company’s entry yourself, disambiguation pages, redirects, User pages, Talk pages, etc. — you can potentially influence what is said about you on Wikipedia. Furthermore, if web pages that are critical of your company occupy spots on the first page of the SERPs, you can push them out and replace them with your Wikipedia entries. Because Wikipedia holds so much authority and TrustRank, it’s comparatively easy to get an entry into the top 10 for many keywords.
Back to the nofollowing of external links… I don’t think SEOs will leave Wikipedia any time soon because of this new development, even though that was Jimbo Wales’ hope.
There is still significant incentive for SEOs to edit (and manipulate) Wikipedia so long as Wikipedia holds the top spot for important keywords such as “marketing” in Google.
Posted by stephan of stephan on 01/23/2007
Filed under: Link Building, PageRank | Tags: Link Building, link-gain, PageRank, reputation-management, TrustRank, Wikipedia
Getting 404 errors with Ultimate Tag Warrior?
If you’re running WordPress and you care about SEO, then you’re probably running the Ultimate Tag Warrior plugin too. If you don’t know what I’m talking about, then read my SEO tip on blog tagging.
There’s been a long-standing bug in WordPress 2.x, ever since WordPress switched to internal rewrites instead of external rules in .htaccess. The bug is that Ultimate Tag Warrior displays 404 errors (File Not Found) on tag pages when you have rewriting of local tag URLs turned on (in Options > Tags in the WordPress admin). The bug usually only manifests itself when you are using custom permalinks (i.e. if you’ve selected “Custom” from the Permalink Options in the WordPress admin).
Well I’ve got good news! I’ve figured out the problem!
(more…)
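The full write-up is behind the jump, but here is a rough sketch of the general mechanism at play. This is illustrative only, and not necessarily the exact fix described in the full post: under WP 2.x, a plugin has to register its URL patterns with WordPress’s internal rewriter rather than rely on .htaccess, and the query_vars and rewrite_rules_array filters are the standard hooks for that (the /tag/ pattern below is hypothetical):

```php
<?php
// Illustrative sketch (not necessarily the fix described after the jump):
// teach WordPress's internal rewriter about a /tag/foo/ URL pattern.

// Register "tag" as a query variable WordPress will accept.
function utw_query_vars( $vars ) {
    $vars[] = 'tag';
    return $vars;
}
add_filter( 'query_vars', 'utw_query_vars' );

// Add the tag pattern to WordPress's internal rewrite rule array.
function utw_rewrite_rules( $rules ) {
    $new_rules = array( 'tag/([^/]+)/?$' => 'index.php?tag=$matches[1]' );
    return $new_rules + $rules;  // array union: our rules take precedence
}
add_filter( 'rewrite_rules_array', 'utw_rewrite_rules' );

// Note: WordPress caches its rewrite rules, so they must be regenerated
// once after a change like this (e.g. by re-saving the Permalink options).
?>
```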
Posted by stephan of stephan on 01/20/2007
Filed under: Blog Optimization | Tags: tagging, Ultimate-Tag-Warrior, WordPress
Could Newspapers Own Local Search Through Better SEO?
Don Dodge, Director of BizDev for Microsoft’s Emerging Business Team, just wrote an article on how “Newspapers should own local search results”. I wasn’t entirely sure from his column whether he meant they “should” own local as in “they’re the traditional experts at local info, so it’s surprising they don’t own local search,” or as in “they should own local because they’re the ideal owners of it.” I think he meant that it’s simply surprising they aren’t bigger contenders in local search, and if that’s what he was driving at, I tend to agree.

I also think he’s right — they don’t own local search in great part because they don’t think globally and they’re crappy at the SEO side of the game. But I’d go so far as to say that they should NOT assume they can own local anymore — that kind of mindset is just what’s hampering them now. Yeah, they’d be better off if they improved their SEO, but at this point that would just be a band-aid.
Posted by Chris of Silvery on 01/16/2007
Filed under: Local Search Optimization, Search Engine Optimization, SEO, Yellow Pages | Tags: Local Search, Newspapers, SEO, Yellow Pages
Extreme Local Search Optimization Tactics
I make it a point to follow blogs and conference sessions to see what everyone recommends for “Local Search Optimization”, and I have to say that most of it is repetitive and too limited. Most folks who write about this subject have said little more than “put a business’s address and phone number on all of its site’s pages” and “update/enhance the business’s information in all the major directory sites”. A lot of the focus is on search marketing, and very little has been outlined for local search optimization beyond the usual aspects of traditional natural search optimization.
Similarly, I previously wrote on the subject and just added a marginally unique spin by suggesting that local biz sites should follow the hCard Microformat when adding the address and contact info to their site’s pages. Yet, I think all of us who work in local SEO have not really pushed the envelope much with these limited suggestions, and we haven’t really outlined a lot of the other areas where savvy webmasters and businesses could make themselves even more optimal for the local search paradigm. Local Search is a unique beast, and in many ways is more complex than pure keyword search, so why hasn’t anyone addressed some of the unique aspects that could really drive a local business’s online referrals higher via optimizations?
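If you haven’t seen hCard markup before, here’s roughly what it looks like in practice. This is a sketch only: the helper function and the business details are placeholders I made up, but the class names (vcard, fn, adr, and so on) come from the hCard microformat spec:

```php
<?php
// Sketch: emit hCard-formatted contact info for a local business page.
// The class names follow the hCard spec; the business details below
// are placeholders.
function print_hcard( $name, $street, $city, $state, $zip, $phone ) {
    printf(
        '<div class="vcard">
            <span class="fn org">%s</span>
            <div class="adr">
                <span class="street-address">%s</span>,
                <span class="locality">%s</span>,
                <span class="region">%s</span>
                <span class="postal-code">%s</span>
            </div>
            <div class="tel">%s</div>
        </div>',
        htmlspecialchars( $name ),
        htmlspecialchars( $street ),
        htmlspecialchars( $city ),
        htmlspecialchars( $state ),
        htmlspecialchars( $zip ),
        htmlspecialchars( $phone )
    );
}

print_hcard( 'Acme Plumbing', '123 Main St', 'Dallas', 'TX', '75201', '214-555-0100' );
?>
```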
So, I’m pulling out all the stops and posting some strategies here that could inch a local business past its competition. Some of these tips are not for the faint of heart, and may require changing things about your business that people traditionally don’t consider changing just to improve referrals from online search. Read on and I’ll give you an insider’s tips for some extreme local optimizations!
Posted by Chris of Silvery on 01/11/2007
Filed under: Local Search Optimization, Search Engine Optimization, SEO, Yellow Pages | Tags: Local Search, local-search-engine-optimization, local-search-engines, local-SEO, Yellow Pages
New WordPress Plugin for tracking offline impact of SEO
We just released a new WordPress plugin, Replace by Referrer, which lets you track the effectiveness of SEO and other online marketing activities by replacing text on your landing page based on the referrer (i.e. which search engine or site referred the visitor). So, for example, you might want to offer a different toll-free phone number depending on the search engine used by the visitor. That would give you the ability to track the number of phone inquiries delivered by each search engine. Pretty cool, eh?
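The plugin itself does considerably more, but conceptually the trick boils down to something like the sketch below. This is not the plugin’s actual code, and the engine list and phone numbers are placeholders:

```php
<?php
// Conceptual sketch only (not the plugin's actual code): choose which
// toll-free number to display based on the referring search engine,
// so phone inquiries can be attributed to each engine.
function tracked_phone_number() {
    $referrer = isset( $_SERVER['HTTP_REFERER'] ) ? $_SERVER['HTTP_REFERER'] : '';

    // Placeholder numbers, one per engine being tracked. A crude
    // substring match on the referrer is enough for a sketch.
    $numbers = array(
        'google' => '1-800-555-0101',
        'yahoo'  => '1-800-555-0102',
        'msn'    => '1-800-555-0103',
    );
    foreach ( $numbers as $engine => $number ) {
        if ( $referrer !== '' && stripos( $referrer, $engine ) !== false ) {
            return $number;
        }
    }
    return '1-800-555-0100';  // default for direct and other traffic
}

echo 'Call us toll-free: ' . tracked_phone_number();
?>
```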
It’s free and open source. Download it now for your WordPress blog or site. Enjoy!
Posted by stephan of stephan on 01/01/2007
Filed under: Blog Optimization, Tracking and Reporting | Tags: SEO, tracking, WordPress, WordPress-plugins
Hey Google: Nofollow is for when I don’t vouch for the link’s quality
I’ve said before that I don’t agree with Google’s tough stance on link buying and their use of “nofollow” to mark financially influenced links (here and here). One of my favorite white-hat SEO bloggers, Rand Fishkin, is also on Google’s case for it. A key argument that Rand makes:
Nofollow means “I do not editorially vouch for the quality of this link.” It does NOT mean “financial interest may have influenced my decision to link.” If that were the case, fully a quarter of all links on the web would require nofollow (that’s a rough guess, but probably close to the mark). Certainly any website that earns money via its operation, directly or indirectly is guilty of linking to their own material and that of others in the hopes that it will benefit them financially. It is not only unreasonable but illogical to ask that webmasters around the world change their code to ensure that once the chance of financial benefit reaches a certain level (say, you’re about 90% sure a link will make you some money), you add a “nofollow” onto the link.
You go, Rand! Tell those Googlers a thing or two! 😉
Despite all this, Google is the one that holds the keys to the kingdom, so we have to abide by their rules, no matter how “unreasonable” and “illogical.” That’s why my January column for Practical Ecommerce goes into some detail explaining Google’s stance on link buying and the risks involved. I’ll post a link once the article comes out in a few days.
Posted by stephan of stephan on 12/28/2006
Filed under: Google, Link Building | Tags: Google, link-buying, nofollow
Interview with Google about duplicate content
The following is an excerpt of a video conversation between Vanessa Fox, Product Manager of Google Webmaster Central, and Rand Fishkin, CEO and co-founder of SEOmoz, about Google and duplicate content. It further confirms Adam Lasnik’s position that duplicate content triggers a filter, not a penalty. The full video can be found here.
Rand Fishkin: Duplicate content filter, is that the same or different to a duplicate content penalty?
Vanessa Fox: So I think there is a lot of confusion about this issue. I think people think that if Google sees information on a site that is duplicated within the site, then there will be some kind of penalty applied. There are a couple of different ways this can happen. One is if you have subpages that seem to have a lot of content that is the same, e.g. a local-type site that says here is information about Boulder and here’s information about Denver, but it doesn’t actually have any information about Boulder; it just says Boulder in one place and Denver in the other, and otherwise the pages are exactly the same. Another scenario is where you have multiple URLs that point to the same exact page, e.g. a dynamic site. So those are two times when you have duplicate content within a site.

Fishkin: So would you call that a filter or would you call that a penalty? Do you discriminate between the two?

Fox: There is no penalty. We don’t apply any kind of penalty to a site that has that situation. I think people get more worried than they should about it, because they think, oh no, there’s going to be a penalty on my site because I have duplicate content. But what is going to happen is some kind of filtering, because in the search results page we want to show relevant, useful pages instead of showing ten URLs that all point to the same page, which is probably not the best experience for the user. So what is going to happen is we are going to only index one of those pages. So, in the instance where there are a lot of URLs that all point to the same exact page, if you don’t care which one of them is indexed, then you don’t have to do anything; Google will pick one, we’ll index it, and it will be fine.

Fishkin: So let’s say I was looking for the optimal Google experience and I was trying to optimize my site to the best of my ability. Would I then say, well, maybe it isn’t so good for me to have Google crawling pages of my site that I know are duplicates (or very similar); let me just give them the pages I know they will want?

Fox: Right, so you can do that; you can redirect versions… we can figure it out, it’s fine, we have a lot of systems. But if you care which version of the site is indexed, and you don’t want us to hit your site too much by crawling all these versions, then yeah, you might want to do some things: you can submit sitemaps and tell us which version of the page you want, you can do a redirect, you can block with robots.txt, and you can avoid serving us session IDs. I mean, there are a lot of different things you could do in that situation. In the situation where the pages are just very similar, you want to make the pages as unique as possible, which is a different solution to a similar sort of problem. You want to go, okay, how can I make my page about Boulder different from my page about Denver? Or maybe I just need one page about Colorado if I don’t have unique information about the two cities.
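To make one of Fox’s suggestions concrete, a 301 redirect is the standard way to consolidate duplicate URL variants onto a single version. A minimal sketch, with a placeholder canonical URL:

```php
<?php
// Minimal sketch: 301-redirect any non-canonical variant of this page
// (e.g. with session IDs or an alternate hostname) to one canonical URL,
// so search engines see exactly one copy. The URL is a placeholder.
$canonical = 'http://www.example.com/widgets/';
$requested = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];

if ( $requested !== $canonical ) {
    header( 'HTTP/1.1 301 Moved Permanently' );
    header( 'Location: ' . $canonical );
    exit;
}
?>
```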
Posted by Gabriel of Gabriel on 12/28/2006
Filed under: Google | Tags: duplicate-content, duplicate-pages, Google, Vanessa-Fox, Webmaster-Central