
Dupe Content Penalty a Myth, but Negative Effects Are Not

I was interested to read a column by Jill Whalen this past week on “The Duplicate Content Penalty Myth” at Search Engine Land. While I agree with her assessment that there really isn’t a Duplicate Content Penalty per se, I think she left one major related issue affecting websites unaddressed.

Read on to see what I mean.

[Image: Hercules Fights the Original Duplicate Content Beast]

Sure, she’s right that webmasters don’t have to be afraid if their applications have created multiple page URLs which all contain identical or near-identical content. Websites do this all the time, and search engines aren’t penalizing them for it. (The exception is page-scrapers who steal other sites’ content for redisplay; a scraper’s page might get penalized or simply ranked lower as being non-authoritative for its content.) But webmasters *do* still need to be concerned with duplicate content, because it can affect their overall traffic and rankings.

Quite simply, PageRank continues to be one big factor in ranking one page versus another for keyword searches. Most sites have only so much PageRank to spread across all of their pages. If you double the number of pages on your site, you may be effectively cutting each page’s PageRank in half. And if you deploy duplicate copies of all your pages willy-nilly, you’ll have watered down your pages’ PageRank scores for no good reason.
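To make the dilution concrete, here’s a minimal sketch using a simplified, textbook power-iteration PageRank (not Google’s actual implementation) over a hypothetical three-page site. The page names and link structure are made up purely for illustration; the point is that when every content page also exists at a duplicate URL, the home page’s outbound “vote” is split across twice as many URLs, and each content URL ends up with a noticeably lower score than it would have earned on its own.

```python
# Toy power-iteration PageRank over a hypothetical site, with and without
# duplicate URLs for each page. Purely illustrative; not Google's algorithm.

DAMPING = 0.85
ITERATIONS = 50

def pagerank(graph):
    """graph: dict mapping each page to the list of pages it links to."""
    n = len(graph)
    ranks = {page: 1.0 / n for page in graph}
    for _ in range(ITERATIONS):
        new_ranks = {page: (1 - DAMPING) / n for page in graph}
        for page, outlinks in graph.items():
            if not outlinks:
                continue
            share = DAMPING * ranks[page] / len(outlinks)
            for target in outlinks:
                new_ranks[target] += share
        ranks = new_ranks
    return ranks

# One URL per page: the home page's vote is split three ways.
clean_site = {
    "home": ["a", "b", "c"],
    "a": ["home"],
    "b": ["home"],
    "c": ["home"],
}

# Every content page also lives at a second, duplicate URL, so the same
# vote is now split six ways and each individual URL earns less.
duplicated_site = {
    "home": ["a", "a-dupe", "b", "b-dupe", "c", "c-dupe"],
    "a": ["home"], "a-dupe": ["home"],
    "b": ["home"], "b-dupe": ["home"],
    "c": ["home"], "c-dupe": ["home"],
}

for label, graph in (("clean", clean_site), ("duplicated", duplicated_site)):
    ranks = pagerank(graph)
    print(label, {page: round(score, 3) for page, score in sorted(ranks.items())})
```

Run it and you’ll see the content URLs in the duplicated version each score roughly half of what their clean-site counterparts do, which is the “watering down” described above.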

I wish Jill had mentioned this: dupe content may not cause a website to be penalized, but it’s still an important factor when it comes to optimizing a site’s pages to rank better and bring in more traffic. Her article seems to leave one with the feeling that since there’s no penalization, webmasters just don’t need to worry about duplication at all.

I say, what do webmasters care if it’s called “penalization” or not, if the end result is still unnecessarily lower rankings in SERPs?                             

If you don’t know what duplicate content is, you should know that there are a number of things which can cause it to occur in web applications. Primarily, if you have multiple different URLs which all present the same page content, and all of these URLs can be found and indexed by search engine spiders, then you have a duplicate content problem. Here are some common examples:

- The same page reachable both with and without the “www.” on your hostname.
- Session IDs or tracking parameters appended to URLs, so that every visit (and every spider crawl) can produce a new URL for the same page.
- Print-friendly or “email this page” versions of articles living at their own URLs.
- The same product or article reachable through multiple category paths or sort orders.
- Default documents, such as /index.html and / both serving the home page.

These are just some examples; there are many more cases possible.
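If you want to check your own site for this, one simple (and admittedly rough) approach is to group together any URLs that return byte-for-byte identical content. The URLs and responses below are hypothetical stand-ins; in practice you’d feed in URLs pulled from your logs or a crawl and fetch each one.

```python
# Rough sketch: group URLs that serve identical content by hashing the
# HTML they return. The crawl results below are hypothetical; in a real
# check you would fetch each URL (e.g., with urllib) instead.
import hashlib
from collections import defaultdict

crawl_results = {
    "http://www.example.com/widgets": "<html>widget catalog</html>",
    "http://example.com/widgets": "<html>widget catalog</html>",
    "http://www.example.com/widgets?sessionid=ABC123": "<html>widget catalog</html>",
    "http://www.example.com/about": "<html>about us</html>",
}

groups = defaultdict(list)
for url, html in crawl_results.items():
    digest = hashlib.md5(html.encode("utf-8")).hexdigest()
    groups[digest].append(url)

for urls in groups.values():
    if len(urls) > 1:
        print("Same content served at:", ", ".join(urls))
```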

There are a handful of ways you can fight duplication problems or mitigate their effects on PageRank:

- Pick one canonical URL for each piece of content and 301-redirect the other variants to it (for example, redirecting the non-www hostname to the www version); a rough sketch of this follows below.
- Link to each page with only one consistent URL form throughout your site’s navigation and sitemaps.
- Keep session IDs and tracking parameters out of the URLs you expose to spiders, or strip them before they ever appear in a link.
- Block the duplicate versions from being crawled with robots.txt, or mark them with a “noindex” robots meta tag, so only the main copy gets indexed.

There are a number of other solutions out there, depending on the nature of your duplication problem. In most cases, fixing duplication is going to be a bit of a technical clean-up job, but the benefit to your overall page rankings and referral traffic may be significant.
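As an example of the redirect approach from the list above, here’s a minimal sketch of a URL-normalizing 301 redirect, written as a bare WSGI application so it isn’t tied to any particular framework. The preferred hostname and the parameters being stripped are assumptions chosen purely for illustration; the idea is simply that any request for a non-canonical variant gets permanently redirected to the one URL you want spiders to index.

```python
# Minimal sketch: 301-redirect duplicate URL variants (wrong hostname,
# session/tracking parameters, trailing slashes) to a single canonical URL.
# CANONICAL_HOST and STRIP_PARAMS are illustrative assumptions.
from urllib.parse import parse_qsl, urlencode

CANONICAL_HOST = "www.example.com"       # hypothetical preferred hostname
STRIP_PARAMS = {"sessionid", "ref"}      # hypothetical parameters to drop

def canonical_url(path, query_string):
    """Build the one URL form we want spiders to index."""
    params = [(k, v) for k, v in parse_qsl(query_string) if k not in STRIP_PARAMS]
    path = "/" + path.strip("/") if path.strip("/") else "/"
    url = "http://" + CANONICAL_HOST + path
    if params:
        url += "?" + urlencode(params)
    return url

def app(environ, start_response):
    host = environ.get("HTTP_HOST", CANONICAL_HOST)
    path = environ.get("PATH_INFO", "/")
    query = environ.get("QUERY_STRING", "")

    requested = "http://" + host + path + ("?" + query if query else "")
    canonical = canonical_url(path, query)

    if requested != canonical:
        # Permanent redirect tells spiders to consolidate onto the canonical URL.
        start_response("301 Moved Permanently", [("Location", canonical)])
        return [b""]

    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html>the single, canonical copy of this page</html>"]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, app).serve_forever()
```

On most sites you’d accomplish the same thing with your web server’s own redirect rules rather than application code, but the logic is the same: one URL answers with content, and every duplicate variant answers with a 301 pointing at it.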

You don’t need to worry about being “penalized” for duplicate content within your site — you’re not going to be delisted for it. But *do* worry about how it affects the SERP rankings for your content.
