Colin Cochrane

Colin Cochrane is a Software Developer based in Victoria, BC, specializing in C#, PowerShell, Web Development and DevOps.

Del.icio.us Bans Search Engine Spiders

It appears that within the past 2-3 days the popular social bookmarking site del.icio.us has started blocking the major search engine spiders from crawling its site.  This isn't a simple robots.txt exclusion, but rather a 404 response that is now being served based on the requesting User-Agent.

While I was doing some Photoshop work for a site of mine tonight, I needed to grab some custom shapes to use to make some icons.  I recalled having bookmarked a good resource for custom shapes in del.icio.us, but after searching my bookmarks using my del.icio.us add-in for Firefox, I couldn't find it, so I pulled up my browser and went to my profile page on del.icio.us to do a search.  To my surprise, I was greeted with this:

404 Errors with the User-Agent set to Googlebot

After confirming I hadn't mistyped the URL, I checked out the homepage and found that all was fine there.  However, upon trying to perform a search, I was confronted with the same 404 error, and received the same response when trying to navigate to any page other than the homepage. 
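What I was seeing, a 404 on everything except the homepage while sending a crawler User-Agent, is easy to reproduce as a toy server-side check. This is purely a hypothetical sketch of the pattern, not del.icio.us's actual code:

```python
# Hypothetical sketch of User-Agent-based blocking, not del.icio.us's
# actual implementation: requests from known crawler User-Agents get a
# 404 on every path except the homepage, while everyone else is served
# normally.
BLOCKED_AGENTS = ("googlebot", "slurp", "teoma", "msnbot")

def response_status(user_agent: str, path: str) -> int:
    """Return the HTTP status such a server might send."""
    is_crawler = any(bot in user_agent.lower() for bot in BLOCKED_AGENTS)
    if is_crawler and path != "/":
        return 404  # crawler UA on a non-homepage path: blocked
    return 200      # homepage, or a normal browser UA: served

print(response_status("Mozilla/5.0 (compatible; Googlebot/2.1)", "/search"))  # 404
print(response_status("Mozilla/5.0 (Windows; rv:2.0) Firefox", "/search"))    # 200
```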

At this point I was thinking that there might have been some server issues going on with del.icio.us, but that didn't line up with my Firefox add-in still showing my bookmarks.  I then noticed that my User-Agent switcher add-in was active (not sending the default User-Agent header), and remembered that I had set it to switch my User-Agent to Googlebot earlier in the day while checking whether another site was cloaking (it was).

I reset the User-Agent switcher so it was sending my normal User-Agent header, tried accessing my page again, and was surprised to see that it was no longer responding with a 404 error.  Puzzled by this, I took a look at del.icio.us's robots.txt and found that it was disallowing Googlebot, Slurp, Teoma, and msnbot for the following:

Disallow: /inbox
Disallow: /subscriptions
Disallow: /network
Disallow: /search
Disallow: /post
Disallow: /login
Disallow: /rss
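For what it's worth, the effect of rules like these can be checked programmatically. Here's a sketch using Python's standard-library robots.txt parser against the excerpt quoted above (the live file may have differed):

```python
# Check what the quoted robots.txt rules disallow for a given
# User-Agent, using Python's standard-library parser. The rules below
# mirror the excerpt above; the live del.icio.us file may have differed.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /inbox
Disallow: /subscriptions
Disallow: /network
Disallow: /search
Disallow: /post
Disallow: /login
Disallow: /rss
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("Googlebot", "/search"))  # False: explicitly disallowed
print(parser.can_fetch("Googlebot", "/"))        # True: the homepage is not listed
```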

Seeing that the robots.txt was blocking these search engine spiders, I tried accessing del.icio.us with my User-Agent switcher set to each of the disallowed User-Agents and received the same 404 response for each one.  I thought that there might have been some obscure issue with the add-in that was leading to this behaviour, so I popped open Fiddler, a nifty HTTP debugging proxy that I use to sniff HTTP headers.  Fiddler has a convenient feature that allows you to create HTTP requests manually, so I created a simple set of request headers and made HEAD and GET requests using the different User-Agents listed in the robots.txt.  I received the same responses as before.

HEAD Request using Googlebot User-Agent
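The same experiment can be scripted instead of run through Fiddler. This is a sketch, not the exact requests I sent, and example.com stands in for the real host:

```python
# Reproduce the Fiddler experiment in code: send the same HEAD request
# with different User-Agent headers and compare the status codes. This
# is a sketch; example.com stands in for the real host.
import urllib.error
import urllib.request

def fetch_status(url: str, user_agent: str) -> int:
    """Return the HTTP status code for a HEAD request with the given UA."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent},
                                 method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 404 when the User-Agent is blocked

# Example (requires network access):
# fetch_status("https://example.com/search", "Mozilla/5.0 Firefox")
# fetch_status("https://example.com/search",
#              "Googlebot/2.1 (+http://www.google.com/bot.html)")
```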

My interest was definitely piqued at this point.  I ran a site: command against del.icio.us in Google, restricted to the past 24 hours, and found results as fresh as 15 hours old.

Recent Google search results for a site: command run against del.icio.us

Running a normal site: command on del.icio.us revealed numerous results that Google had a cached version of, many of which were as recent as three days old.

This evidence seems to indicate that del.icio.us has recently started blocking the major search engine spiders from crawling its site, by way of the requesting User-Agent.  Given the recent crawl dates and cache dates, it looks like this started happening within the past 2-3 days.  This raises some questions about the intentions of del.icio.us, and perhaps Yahoo!.  With Yahoo! recently integrating del.icio.us bookmarks into its search results, this could be an attempt to enhance the effectiveness of that new feature by preventing competing search engines from indexing content from del.icio.us.  While Yahoo!'s Slurp bot is also blocked, it's unlikely that Yahoo! would need to crawl the content of one of its own sites, as Yahoo! actually owns del.icio.us.

What are your thoughts on this?

Comments (16) -

  • ltdraper

    2/16/2008 10:02:27 PM |

    Wow.  Makes me wonder if it's worth the trouble of bookmarking in del.icio.us if it's only going to be indexed by Yahoo.  Do they have a search engine?  :}

  • Samsara

    2/17/2008 12:23:15 PM |

    This raises some questions about the intentions of del.icio.us, and perhaps Yahoo!.  With Yahoo! recently integrating del.icio.us bookmarks into its search results, this could be an attempt to enhance the effectiveness of that new feature by preventing competing search engines from indexing content from del.icio.us.

    Looks like that's what it is. It's annoying to have to read about it through an accidental discovery rather than del.icio.us or Yahoo making some announcement. But I guess that would have made them look petty.

  • Andy

    2/17/2008 10:10:14 PM |

    Del.icio.us are busy working on the roll-out of their next big update (according to their blog, anyway), so hopefully this is a temporary measure. Some other functionality, like linkrolls, also seems to be affected.

    I've always found their search engine to be pretty slow so often I'll rely on Google if I need to do lots of searches in quick succession. I'll be disappointed if that happens - and likely to switch to ma.gnolia or diig.

  • Michael VanDeMar

    2/17/2008 11:31:49 PM |

    Colin, I commented on Sphinn but wanted to comment here as well. That robots.txt hasn't changed since Dec 24th, and only blocks the unimportant stuff.

    I think Sebastian nailed it when he said that it was most likely blocked by IP, to only get people who spoofed bot user agents.

  • Colin Cochrane

    2/18/2008 12:46:03 AM |

    It still raises the question of why they would go to the effort to block people who were spoofing those User-Agents.

  • Michael VanDeMar

    2/18/2008 1:19:11 AM |

    Perhaps, but a) that is a very reasonable thing to do, no reason not to block people who are spoofing, and b) it's a completely different issue than them preventing other search engines from spidering/indexing their content.

  • Sebastian

    2/18/2008 2:13:40 AM |

    It makes sense to block requests with spoofed crawler UA because some scrapers "emulate" crawlers. If in a week or so we can't find crawler fetches after Feb/13 that's worth further investigation.
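    For context, the check Sebastian is describing, telling a genuine crawler apart from a spoofed User-Agent, is usually done with a reverse-DNS lookup plus forward confirmation. A minimal sketch, assuming Google's documented googlebot.com / google.com hostnames:

```python
# The standard check for a genuine Googlebot (rather than a spoofed
# User-Agent): reverse-DNS the requesting IP, confirm the hostname is
# under googlebot.com or google.com, then forward-resolve it and make
# sure it maps back to the same IP. A sketch, not production code.
import socket

GOOGLE_DOMAINS = (".googlebot.com", ".google.com")

def hostname_is_google(host: str) -> bool:
    """Does a reverse-DNS hostname belong to Google's crawler domains?"""
    return host.endswith(GOOGLE_DOMAINS)

def is_genuine_googlebot(ip: str) -> bool:
    try:
        host = socket.gethostbyaddr(ip)[0]             # reverse lookup
        if not hostname_is_google(host):
            return False
        return ip in socket.gethostbyname_ex(host)[2]  # forward-confirm
    except OSError:
        return False  # lookup failed: treat as not a real crawler
```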

  • Colin Cochrane

    2/18/2008 2:22:53 AM |

    My thoughts exactly Sebastian.  It will be interesting to see if any new fetches start appearing in the next few days.

  • Dan Thies

    2/19/2008 6:21:29 AM |

    Colin, they wouldn't necessarily be trying to block *people* who are spoofing those user agents, they'd be trying to block proxy servers that are delivering their content to real bots under the proxy's URLs. Some background:

    They used to "cloak" a robots meta tag - unless you were an actual validated spider, you got noindex, nofollow.
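    That cloaked meta tag amounts to a one-line branch on whether the visitor passed spider validation. A hypothetical sketch of the pattern Dan describes:

```python
# Hypothetical sketch of the "cloaked" robots meta tag Dan describes:
# visitors that fail spider validation get noindex,nofollow, so copies
# of the page served through proxies are not indexed under the proxy's
# URLs.
def robots_meta(is_validated_spider: bool) -> str:
    if is_validated_spider:
        return '<meta name="robots" content="index,follow">'
    return '<meta name="robots" content="noindex,nofollow">'

print(robots_meta(False))  # <meta name="robots" content="noindex,nofollow">
```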

  • Chrisitan

    2/19/2008 6:32:46 AM |

    Hi There Navneet,
    I just wanted to add my two cents about something I noticed over the weekend as well.  I did some searching on Yahoo on Sunday (over the same time period) and noticed that Yahoo was inserting the number of Delicious bookmarks into their search results.  For example, when I did a search for "business blogs", I noticed that some listings actually had the number of Delicious bookmarks next to them.  So I think it's fair to say that they have been experimenting with feeding Delicious bookmarks into their search results to offer more comprehensive results, much like Google's universal search.  However, I noted that our site "" (which has several Delicious bookmarks) did not have the little icon by its search results.  So I'm not sure how they are determining which results get the bookmark icon at the moment, but it will be interesting to see what Yahoo decides to roll out. :)

    Best Regards,


  • Colin Cochrane

    2/19/2008 9:54:03 AM |

    Thanks for the feedback Dan.  That link brought me up to speed on that angle of things.

  • Samsara

    3/9/2008 9:50:17 AM |

    Does this mean delicious is *allowing* the indexing of users' bookmarks or not? I haven't been keeping up. A friend told me his "people who link to him" count went down, but I didn't bother checking his stats.

  • Felix

    6/7/2008 11:05:50 AM |

    Come on ... they really do?


Pingbacks and trackbacks (13)+

Comments are closed