Yahoo!’s Search Blog announced yesterday that they were making some final changes to their spider (named “Slurp”), standardizing their crawlers to provide a common DNS signature for identification/authorization purposes.
Previously, Slurp’s requests might have come from IP addresses associated with inktomisearch.com; now they should all come from IPs associated with domains in this standard syntax:
What will this mean for most of us? In most cases, probably nothing. Few sites currently perform reverse DNS lookups to check whether search engine spiders are actually coming from the IPs/domains they’re supposed to, except when those spiders get impolite enough to request too many pages per second. Most people identify bots only by their User-Agent strings.
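For sites that do want to go beyond User-Agent matching, the reverse lookup itself is simple. Here’s a minimal Python sketch of the idea (the helper names and the crawler-domain suffix in the example are my own illustrative assumptions, not anything published by the search engines):

```python
import socket

def crawler_hostname(ip):
    """Reverse-resolve an IP address to its hostname, or None if the lookup fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return None

def hostname_matches(hostname, suffixes):
    """Check whether a resolved hostname ends in one of the expected crawler domains."""
    return hostname is not None and hostname.endswith(suffixes)
```

A request claiming to be a crawler could then be screened with something like `hostname_matches(crawler_hostname(ip), (".crawl.yahoo.net",))`, where the suffix tuple holds whatever domains you trust.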
In fact, Yahoo’s provision of this authoritative bot ID syntax is more advanced than Google’s! Google recommends that people identify its bot (aka “Googlebot”) solely through the User-Agent string, which is a bit unsatisfactory for a lot of webmasters out there. I’ve heard quite a number of webmasters ask what IP address block to expect Googlebot requests to originate from, and Google wouldn’t provide them with an authoritative answer.
Of course, one could take a visiting bot’s IP address, say “66.249.66.1”, and perform a Network WHOIS lookup on it to find out if it’s in a block owned by Google. The Network WHOIS for 66.249.66.1 returns the following info (lookup info provided by Hexillion’s Domain Dossier):
OrgName: Google Inc.
Address: 1600 Amphitheatre Parkway
City: Mountain View
NetRange: 66.249.64.0 – 66.249.95.255
NetType: Direct Allocation
OrgTechName: Google Inc.
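That sort of lookup can even be scripted rather than done by hand. Here’s a rough Python sketch that queries ARIN’s WHOIS server directly over TCP port 43 (the function names and the simple line-oriented parser are my own assumptions; real WHOIS output varies quite a bit by registry):

```python
import socket

def whois_query(ip, server="whois.arin.net"):
    """Send a raw WHOIS query for an IP address over TCP port 43 and return the response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((ip + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_whois(text):
    """Pull 'Key: Value' fields out of a WHOIS response into a dict (first value wins)."""
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith(("#", "%")):
            key, _, value = line.partition(":")
            fields.setdefault(key.strip(), value.strip())
    return fields
```

With that, checking `parse_whois(whois_query(ip)).get("OrgName")` against “Google Inc.” automates the same lookup Domain Dossier performs.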
While webmasters could do this lookup for requests from bots displaying the Googlebot user-agent string, it’s still very unsatisfactory because Google does not state that all requests necessarily come from IP blocks that are identifiably owned by Google. So, webmasters would be nervous about blocking something that claimed to be Googlebot yet came from non-Google IP address ranges. After all, it’s possible that Google could have purchased IP addresses and domain names through a proxy in order to perform various types of investigative page requests on sites.
There are cases where hostile dataminers set their user-agent strings up to masquerade as major search engine spiders, so this newly authoritative method for IDing the bots places Yahoo one step ahead of the game for those webmasters who feel the need to ban the bad guys who are scraping their site’s content or requesting pages fast enough to mount a de facto denial-of-service attack.
. . . . . . . . . . . . . . . . . . . .
UPDATE: incrediBILL, one of the moderators at WebmasterWorld, kindly pointed out to me that Matt Cutts had provided the same sort of Googlebot authentication method via the Webmaster Central Blog not long ago. I wish that Google would update its webmaster help section to reflect the same information, if this is indeed intended to be a trustworthy method for authenticating Googlebot. With the instructions found only in the blog and not in the actual help section, one is still left with the uncomfortable feeling that this is perhaps an informal method that might not hold true in all cases, or that it could abruptly change. Hopefully, they’ll update the help pages so everything will be in sync!
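The method Matt Cutts described boils down to a forward-confirmed reverse DNS check: reverse-resolve the requesting IP, require the hostname to end in googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A Python sketch of that check (the function name and the injectable resolver parameters are my own; the domains are the ones described in the Webmaster Central post, as I understand it):

```python
import socket

def verify_googlebot(ip,
                     reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                     forward=socket.gethostbyname):
    """Forward-confirmed reverse DNS check for a request claiming to be Googlebot.

    1. Reverse-resolve the requesting IP to a hostname.
    2. Require the hostname to end in googlebot.com or google.com.
    3. Forward-resolve that hostname and confirm it maps back to the same IP,
       so a scraper can't simply fake its own reverse DNS records.
    """
    try:
        host = reverse(ip)
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return forward(host) == ip
    except socket.gaierror:
        return False
```

The forward-confirmation step in (3) is what makes this trustworthy: a hostile dataminer can make its IP reverse-resolve to anything, but it can’t make Google’s own DNS zone point that hostname back at its IP.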