Natural Search Blog


Welcome to Natural Search Blog

Natural Search Blog provides articles on search engine optimization, including keyword research, on-page factors, link building, social media optimization, local search optimization, image search optimization, and mobile SEO.

In addition to natural search optimization topics, we also cover internet marketing, ecommerce, web design, usability, and technology.

Recent Entries

Robots Meta Tag Not Well Documented by Search Engines

Those of us who do SEO have been increasingly pleased with the various search engines for providing tools and protocols that help us direct, control, and manage how our sites are indexed. However, the search engines still have a significant need to keep much of their workings secret, out of fear of being exploited by ruthless black-hats who will seek to improve page rankings for keywords regardless of appropriateness. This often leaves the rest of us with tools that can be used in some limited cases, but with little or no documentation telling us how those tools actually operate in the complex real world. The Robots META tag is a case in point.

The idea behind the protocol was simple and convenient. It’s sometimes hard to use a robots.txt file to manage all the types of pages delivered up by large, dynamic sites. So, what could be better than using a tag directly on a page to tell the search engine whether to spider and index the page or not? Here’s how the tag should look if you wanted a page NOT to be indexed, and for links found on it NOT to be crawled:

<meta name="robots" content="noindex,nofollow">

Alternatively, here’s the tag if you wanted to expressly tell the bot to index the page and crawl the links on it:

<meta name="robots" content="index,follow">

But, what if you wanted the page to not be indexed, while you still wanted the links to be spidered? Or, what if you needed the page indexed, but the links not followed? The major search engines don’t clearly describe how they treat these combinations, and the effects may not be what you’d otherwise expect. Read on and I’ll explain how using this simple protocol with the odd combos had some undesirable effects.
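To make the four combinations concrete, here's a hypothetical sketch of how a crawler might parse the tag's content attribute. This is purely illustrative logic of my own, not any engine's documented behavior; the whole point of the post is that real handling of the odd combos is undocumented.

```python
# Hypothetical parser for a Robots META tag's content attribute.
# Real search engines' handling of odd combinations is undocumented.

def parse_robots_meta(content):
    """Parse a robots meta content string into (index, follow) booleans."""
    directives = {d.strip().lower() for d in content.split(",")}
    index = "noindex" not in directives    # default: index unless told otherwise
    follow = "nofollow" not in directives  # default: follow unless told otherwise
    return index, follow

# The four combinations discussed above:
print(parse_robots_meta("noindex,nofollow"))  # (False, False)
print(parse_robots_meta("index,follow"))      # (True, True)
print(parse_robots_meta("noindex,follow"))    # (False, True): don't index, do crawl links
print(parse_robots_meta("index,nofollow"))    # (True, False): index, don't crawl links
```

Whether the engines actually treat the mixed combinations this cleanly is exactly the open question.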


Flickr Adds Geotagging Features

In a move that proves that the people behind flickr are still channeling the Web 2.0 mass consciousness, flickr announced this week that they’re adding geotagging features to their already-robust suite of image management products.

As you may recall, I previously blogged a bit about the rise of geotagging, particularly geotagging of photos, and I had said that it seemed to be a really strong idea with a lot of potential uses. It’s gratifying to see that a service like flickr (and a company like Yahoo!) also believes that it will be strategically beneficial.

The share of the online populace who have been geotagging, or who even know about it, is likely quite low, I’d guess (partly because most people don’t have a GPS device to tell them a location’s longitude and latitude). Now that a top-ranked photo site is supporting it expressly, droves of users will become educated about it and experiment with it. By doing this, flickr is propelling the trend into the mainstream, increasing the likelihood that it’ll be more widely adopted.

Flickr’s new geotagging utilities were built by mashing up their image management utilities with Yahoo! Maps, allowing users to drag pix onto the mapped location where an image was taken in order to associate the photo with the geotag. It also appears that users can now use a graphic map as a navigational interface to browse geographic locations and then pull up any publicly available photos associated with that location. Read on for more info.
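The underlying data model is simple enough to sketch in a few lines: a photo gets a latitude/longitude pair, and map browsing becomes a bounding-box query. All names and numbers here are my own toy illustration, not flickr's actual API or implementation.

```python
# Toy model of photo geotagging: attach lat/lon to a photo, then find
# photos inside a map view's bounding box. (Illustrative; not flickr's API.)

photos = {}  # photo_id -> (latitude, longitude)

def geotag(photo_id, lat, lon):
    """Associate a photo with the place it was taken."""
    photos[photo_id] = (lat, lon)

def photos_in_box(south, west, north, east):
    """Return ids of photos whose geotag falls inside the bounding box."""
    return [pid for pid, (lat, lon) in photos.items()
            if south <= lat <= north and west <= lon <= east]

geotag("golden_gate.jpg", 37.8199, -122.4783)
geotag("eiffel.jpg", 48.8584, 2.2945)

# Browsing a map of the San Francisco Bay Area:
print(photos_in_box(37.0, -123.0, 38.5, -121.5))  # ['golden_gate.jpg']
```

Dragging a photo onto a map is just a friendlier way of calling something like `geotag()`; panning the map is the bounding-box query.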


Will Google Keep Minority Report from Happening? Eric Schmidt’s Chat with Danny Sullivan

This morning at the Search Engine Strategies Conference 2006 in San Jose, Danny Sullivan interviewed the Google CEO, Eric Schmidt, in the conference’s main keynote session. Others such as the Search Engine Roundtable have reported on most of the content of that session, but one little thing Danny mentioned particularly grabbed my attention. Read on, and I’ll elaborate….

A Conversation with Eric Schmidt of Google


Sneak Peek: Chasing The Long Tail of Natural Search

Phew! After seven long months slogging away, we will finally, officially release the long-awaited white paper “Chasing the Long Tail of Natural Search” next Monday (Aug 7th) at SES San Jose and the Etail Philadelphia show.

One is always a little cautious about postulating grand theories into the wide world. But after studying over 1 million unique unbranded keywords across 25 major retailer search programs, we couldn’t resist. We refer to the concept we outline as “Page Yield Theory”: the underpinning notion that the “long tail” of unbranded search keyword traffic is inextricably linked to a website’s number of uniquely indexable pages. To those of us who subscribe to the “every-page-should-sing-its-own-song” philosophy, that seems like an obvious statement.

Yet the challenge behind it, and the impetus for the research, arose from the fact that many (unoptimized) well-branded multichannel retailers have tens or hundreds of thousands of unique, indexed website pages. However, most of their natural search traffic (usually over 90%) comes from searches related to their own company name. How could such strong brands and massive websites produce so little traffic for generic terms, that is, terms other than the company name?
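The arithmetic behind that puzzle is easy to sketch with made-up numbers (these figures are illustrative only, not from the white paper):

```python
# Illustrative numbers (not from the white paper): a big retailer with
# many indexed pages whose natural-search traffic is almost all branded.

indexed_pages = 100_000
monthly_natural_visits = 500_000
branded_share = 0.90  # "usually over 90%" of traffic is brand-name searches

unbranded_visits = monthly_natural_visits * (1 - branded_share)
yield_per_page = unbranded_visits / indexed_pages  # unbranded visits per indexed page

print(f"Unbranded visits/month: {unbranded_visits:,.0f}")
print(f"Unbranded visits per indexed page: {yield_per_page:.2f}")
# 100,000 indexed pages yielding only 50,000 unbranded visits works out
# to half a generic-term visit per page per month.
```

That per-page yield is the quantity the "Page Yield" framing puts at the center: more uniquely indexable pages should mean more long-tail traffic, and when the yield is this low something is wrong.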


Attending the 2006 Search Engine Strategies Conference in San Jose?

I’ll be attending the Search Engine Strategies Conference in San Jose next week. Drop me a line if you’d like to meet me during the conference!

There are a handful of sessions I’m interested in sitting in on, and I’m looking forward to having dinner one night with some of my old friends from college who work in Silicon Valley.

Some of you may be interested to know that Stephan Spencer is scheduled to appear on a panel on Blog & Feed Search SEO, though I’m thinking I’ll have to miss that in order to attend the simultaneous session on Duplicate Content and Multiple Site Issues. Sorry, Stephan! 😉


Fishing for “link love” with “link bait”

Yum…. Link Bait.

I agree with Eric Ward: the phrase “link bait” just sounds UGLY.

But the technique works. Who can resist linking to the uproariously funny “The top 10 unintentionally worst company URLs”? Apparently I can’t, because I just did!

One of my favorite “link bait” morsels of late has been this how-to post about link baiting, by the inimitable Rand Fishkin. Go and gobble it up!


A window into Google through error messages: PageRank vectors and IndyRank

There’s been plenty of speculation posted to the blogosphere on the recently discovered cryptic Google error message; my favorites being from Wesley Tanaka and from Teh Xiggeh.

What intrigues me most in the Google error message is the references to IndyRank and to PageRank possibly being a vector. Regarding IndyRank, Stuart Brown suspects it means an ‘independent ranking’ — a “human-derived page ranking scoring, independent of the concrete world of linking and keywords”.

Regarding a PageRank vector, Wesley hypothesizes:

“If page rank is actually a vector (multiple numbers) as opposed to a scalar (single number) like everyone assumes (and like is displayed by the toolbar). It would make sense — the page rank for a page could store other aspects of the page, like how likely it is to be spam, in addition to an idea of how linked-to the page is. The page rank you see in the google toolbar would be some scalar function of the page rank vector.”
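Wesley's idea is easy to sketch: each page carries a vector of per-page signals, and the toolbar value is just some scalar function of that vector. The signals and weights below are entirely made up for illustration; nobody outside Google knows what the real components would be.

```python
# Sketch of the "PageRank as a vector" hypothesis: a page is scored on
# several axes, and the toolbar shows a scalar collapse of them.
# The signal names and weights here are invented for illustration.

def toolbar_score(rank_vector, weights=(0.8, -0.5)):
    """Collapse a (link_strength, spam_likelihood) vector to one scalar."""
    return sum(w * v for w, v in zip(weights, rank_vector))

page_a = (0.9, 0.1)  # strongly linked-to, unlikely to be spam
page_b = (0.9, 0.8)  # equally linked-to, but probably spam

print(round(toolbar_score(page_a), 2))  # 0.67
print(round(toolbar_score(page_b), 2))  # 0.32
```

The point of the sketch: two pages identical on the link axis can still show very different "PageRank" once the other components of the vector are folded in, which would explain a lot of toolbar oddities.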

Of course the Google engineers are probably laughing at all this.


Yahoo update beefs up on authority sites

Aaron Wall posted about how Yahoo!’s recent algorithm update has apparently increased weighting factors for links and authority sites.

Predictably, a number of folks have complained in the comments on Yahoo!’s “Weather Report” blog post about the update. Jeremy Zawodny subsequently posted that their search team was paying close attention to the comments, which is always nice to hear.

Coincidentally, I’d also just recently posted about Google’s apparent use of page text to help identify a site’s overall authoritativeness for particular keywords/themes.

As they say, there’s nothing really new under the sun. I wonder whether the search engines are all returning to an authority/hub focus in algorithm development. It’s a strong concept and useful for ranking results, so the methodology for identifying authorities and hubs is likely here to stay.
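The authorities-and-hubs method alluded to here is Jon Kleinberg's HITS algorithm: good hubs link to good authorities, and good authorities are linked to by good hubs, computed by iterating the two scores to a fixed point. Here's a minimal power-iteration sketch on a toy link graph (the graph itself is invented for illustration):

```python
# Minimal HITS (hubs & authorities) sketch, after Kleinberg.
# links[page] = pages it links out to; toy graph for illustration.

links = {
    "hub1": ["auth1", "auth2"],
    "hub2": ["auth1", "auth2"],
    "auth1": [],
    "auth2": ["auth1"],
}

hubs = {p: 1.0 for p in links}
auths = {p: 1.0 for p in links}

for _ in range(20):  # power iteration until scores settle
    # authority score: sum of hub scores of the pages linking to you
    auths = {p: sum(hubs[q] for q in links if p in links[q]) for p in links}
    # hub score: sum of authority scores of the pages you link to
    hubs = {p: sum(auths[t] for t in links[p]) for p in links}
    # normalize so the scores don't blow up
    na = sum(v * v for v in auths.values()) ** 0.5
    nh = sum(v * v for v in hubs.values()) ** 0.5
    auths = {p: v / na for p, v in auths.items()}
    hubs = {p: v / nh for p, v in hubs.items()}

best_auth = max(auths, key=auths.get)
print(best_auth)  # auth1, which has the most hub links pointing in
```

Whatever Yahoo! and Google are actually doing is surely far more elaborate, but this mutual-reinforcement loop is the core of the authority/hub idea.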

Through the Scanner Darkly

Seems strange, but there are only two degrees of separation between me and the late, famous science fiction author Philip K. Dick (“PKD”). If you aren’t familiar, Dick was the author of a number of stories which have since been made into major films such as Blade Runner, Total Recall, Minority Report, and the recently released A Scanner Darkly. I have just two degrees of separation from Philip K. Dick because of my “spare time” work writing a soon-to-be-published book about two of his friends and protégés, Tim Powers and James P. Blaylock. In the course of writing that book (A Comprehensive Dual Bibliography of James P. Blaylock & Tim Powers), I asked the authors questions about their old friend Dick, and I spoke with other friends of his as well. He was apparently a very interesting character: brilliant, and more than a bit mysterious as well. PKD had a few unusual religious visions and appeared to suffer occasionally from paranoia and other schizophrenic bouts.

Last weekend, I got to see the most recent film inspired by a Philip K. Dick story, A Scanner Darkly, directed by Richard Linklater. The film was really great, telling a futuristic story of an undercover cop who becomes addicted to the drug of choice of his surveillance subjects, and is then required to spy on himself in the course of his investigation. The undercover cops all wear camouflage suits which morph together features from millions of individuals to obscure their identities from others and from each other. The film is astoundingly well made, and is pretty entertaining overall.

I saw that Nelson Minar, one of Google’s engineers, is also apparently a reader of PKD, and he blogged his impressions about A Scanner Darkly, too. He agrees that it’s good, though we part ways on one point: he thinks it won’t appeal to people who haven’t read the book, and I think it will. It strikes too many chords with people, even today, and the actors’ humor in the early parts saves it from being too dry or boring.

Dick’s stories still seem relevant, over twenty years after his death. His stories contrasted realistic characters against a twisted reality where commercialism and technology seem to have evolved past a reasonable point. He played around with the nature of reality itself, and his work seemed to segue smoothly into the cyberpunk movement, which I previously posted about.


Towards a New Cyberpunk Reality

I recently discovered something interesting about my company, Verizon.

Do you remember the old Oliver Stone TV mini-series from the early 90s called “Wild Palms”? It was about a dystopian future America where a fascist political group has risen to power, headed up by a senator who founded a new philosophy called “Synthiotics” or “New Realism”, which apparently involves the next stage of human evolution and virtual reality (VR).

The Senator, named Anton Kreutzer, owns a company named Mimecom which has developed some sort of advanced VR and 3-D display technology which they are about to deploy to households through a television company, called Channel 3, in a new drama series they’ve named “Church Windows”. They seem to be using Church Windows as a platform for propagandizing Synthiotic tenets, as well. The Senator is seeking one last piece of technology from Japan, a “Go Chip”, which will essentially give him eternal life and seal up his political power. The Go Chip is named after the game of Go, an ancient Chinese strategy game that has been used by artificial intelligence researchers as a test case for building systems which can learn and imitate human intelligence (though they don’t really spell out that AI tie-in during the series).

[Photo: Sony Center at night]
Berlin’s Sony Center in Potsdamer Platz reflects the global reach of a Japanese corporation. Much cyberpunk action occurs in urbanized, artificial landscapes, and “city lights at night” was one of the genre’s first metaphors for cyberspace (in Gibson’s Neuromancer).

The Wild Palms series was likely intended to be a very cutting-edge, conceptual story inspired in large part by the cyberpunk movement in science fiction. One of the prime “founders” of the cyberpunk movement, the author William Gibson, actually puts in a cameo appearance in the series as well. Oliver Stone likely intended the story to use semiotic literary devices too, since many of the plot items and names seemed intended to carry multiple layers of meaning.

Here’s where fiction begins to turn into reality. MimEcom was the name of an actual ecommerce/hosting/technology firm that was later started up in San Francisco, and considered IPOing in 2000, though the dot-bombs happened, and it halted plans to go public.

Later, MimEcom changed their company name to “Totality”.

In about 2005, Totality was acquired by MCI. MCI was merged into Verizon later in 2005. The Totality part was folded under the Verizon Business division of the company.
