Colin Cochrane

Colin Cochrane is a Software Developer based in Victoria, BC specializing in C#, PowerShell, Web Development and DevOps.

IIS 7 Site Won't Start After Upgrading to Vista Service Pack 1

After letting Service Pack 1 install overnight, I logged in to my machine this morning looking forward to exploring some of the new features added to IIS 7.0.  Unfortunately there was a small problem with one of the local web applications that I host from my machine.  Simply put, the application refused to start in IIS, and each attempt to start it resulted in a modal pop-up informing me that the process was in use.  A quick peek at the error log showed the following:

[Screenshots of the error log entries]

After a quick search I found a KB article over at Microsoft that addressed the problem.  As directed by the article, I popped open the command prompt and ran netstat -ano to get a list of which processes were listening for network traffic on which ports.  The first entry on the list identified the problem process.
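For reference, the command looks like this (the findstr filter to narrow the output down to port 80 is optional and just my own shortcut, not something the KB article requires):

netstat -ano
netstat -ano | findstr :80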

[Screenshot: netstat -ano output]

(The screencap was taken after the problem was fixed, so the PID is not the same.)  I opened up the Task Manager to find out which process was at fault, and it turned out that Skype was listening for incoming traffic on port 80 for some silly reason.  I closed Skype, attempted to start the website, and lo and behold, it worked.  I started Skype again, and everything was back to normal.

I thought this may be of use to anyone who encounters a similar problem.

Internet Explorer 8 Beta 1: First Impressions

Since the IE development team released the first beta version of Internet Explorer 8 for developers last week, I've had the chance to play around with this latest incarnation of the Internet Explorer family.  While most of the focus has been on the improved support for web standards, which is immediately evident even in this early beta, there are many more new features and enhancements that make it look like IE8 is shaping up into a solid browser.

1) The Acid2 Test

First off, I'll confirm that yes, IE8 does pass the Acid2 test.

[Screenshot: IE8 rendering the Acid2 test correctly]

2) Loosely-Coupled IE (LCIE)

How many times have we all come across this classic?

[Screenshot: a familiar Internet Explorer crash dialog]

Nothing was quite as annoying as having multiple tabs/browsers open and having an error on one page, or with a plugin, cause all of them to crash.  The IE development team has addressed this with a collection of internal architecture changes called Loosely-Coupled IE.  In a nutshell, LCIE means that the browser frame (everything other than the tabs) and the browser tabs now live in separate processes, so if you visit a website that disagrees with one of your plugins, it doesn't bring down IE entirely.  Rather, you will see the classic "Internet Explorer has stopped working" window:

[Screenshot: the "Internet Explorer has stopped working" dialog]

This is now followed by the problem tab being recovered, rather than the entire browser:

[Screenshot: the crashed tab being recovered]

3) Session Recovery

IE8 also offers a new session recovery option for those instances where you have an "unexpected" end to a session.

[Screenshot: IE8's session recovery option]

4) Domain Highlighting

A new feature of the address bar now highlights what IE considers the "owning domain" of the site you are currently viewing.  At first it may appear strange, and somewhat unnecessary, but when you consider situations where unscrupulous webmasters use subdomains to fool the user into thinking they are at a site they are not (such as www.domain.com.path.realdomain.com), this feature becomes a subtle but useful visual cue that quickly draws your attention to the true domain of the site you are browsing.

[Screenshot: the address bar highlighting the owning domain]

5) IE8 Developer Tools

A nice addition for web developers is the built-in IE8 Developer Tools, the successor to the IE Developer Toolbar.  It features some nice upgrades over the toolbar, such as the ability to change rendering modes on the fly.

[Screenshot: switching rendering modes in the IE8 Developer Tools]

There is also a beefed-up style trace, which breaks down each style being applied to a selected element (showing you which element the definition is inherited from, and which stylesheet it is located in) and allows you to toggle the application of specific style definitions.

[Screenshots: the style trace for a selected element]

Unfortunately the developer tools are not as comprehensive as the phenomenal Firefox add-on Firebug, but they are still a big step in the right direction and provide the functionality needed to tackle most style-based issues with a web page.

 

The IE8 beta is quite stable, so I encourage you to give it a try and see what you think.  I'll be posting more impressions as I continue using the beta, but I'd love to hear some more opinions on it, so please feel free to share your experiences and impressions in the comments section.

.NET Code Tips: Converting A UNIX Timestamp To System.DateTime

After having to deal with UNIX timestamps in an application I am currently developing, I realized that there are probably a few people out there wondering how to convert a UNIX timestamp into a usable System.DateTime in a .NET application.

Well, the good news is that it's quite simple.  All a UNIX timestamp represents is the number of seconds since January 1st, 1970, 12:00:00 AM (UTC).  So all we have to do is create a new System.DateTime structure, set it to 1/1/1970 12:00:00 AM, and use the AddSeconds() method to tack on the timestamp.

Visual Basic:

Function ConvertTimestamp(ByVal timestamp As Double) As DateTime
    Return New DateTime(1970, 1, 1, 0, 0, 0).AddSeconds(timestamp)
End Function

C#:

static DateTime ConvertTimestamp(double timestamp)
{
    return new DateTime(1970, 1, 1, 0, 0, 0).AddSeconds(timestamp);
}

 

Keep in mind that this method will return the time as Coordinated Universal Time (UTC), so if you want to convert the value to local time you can simply modify the procedure as follows:

Visual Basic:

Function ConvertTimestamp(ByVal timestamp As Double) As DateTime
    Return New DateTime(1970, 1, 1, 0, 0, 0).AddSeconds(timestamp).ToLocalTime()
End Function

C#:

static DateTime ConvertTimestamp(double timestamp)
{
    return new DateTime(1970, 1, 1, 0, 0, 0).AddSeconds(timestamp).ToLocalTime();
}

It's that easy to turn a UNIX timestamp into a .NET System.DateTime object. 
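For a quick sanity check, here's a usage example using the C# version of ConvertTimestamp above (the timestamp value is arbitrary):

// 1200000000 seconds after the UNIX epoch corresponds to 2008-01-10 21:20:00 UTC.
DateTime utcTime = ConvertTimestamp(1200000000);
Console.WriteLine(utcTime.ToString("u")); // prints "2008-01-10 21:20:00Z"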

Happy coding!

The World's Most Appropriate Image Alt Attribute

As all good web developers know, accessibility is a very important consideration with an ever-increasing proportion of the population that is old and decrepit. One of the more important accessibility features is the image alt attribute, which is used to describe an image for visually impaired users. It's always nice to find a site that has accessibility in mind by providing good descriptions of an image within the alt attribute text.

 This particular example has got to be the best I have ever come across, and is an inspired choice of descriptive text that makes it clear to the elderly user what shenanigans are going on within the picture.

 

You didn't think I could top that, did you?


This interesting commentary on modern family life within the UK can be found here: http://www.dailymail.co.uk/pages/live/femail/article.html?in_article_id=514809&in_page_id=1879.

Note: unfortunately, the alt text has been changed since this screenshot was taken. The good SEO strategy of a related high-value keyword in the picture filename remains, however.

del.icio.us Bans Search Engine Spiders

It appears that within the past 2-3 days the popular social bookmarking site del.icio.us has started blocking the major search engine spiders from crawling its site.  This isn't a simple robots.txt exclusion, but rather a 404 response that is now being served based on the requesting User-Agent.

While I was doing some Photoshop work for a site of mine tonight I needed to grab some custom shapes to use to make some icons.  I recalled having bookmarked a good resource for custom shapes in del.icio.us, but after searching my bookmarks using my del.icio.us add-in for Firefox I couldn't find it, so I pulled up my browser and went to my profile page on del.icio.us to do a search.  To my surprise, I was greeted with this:

[Screenshot: del.icio.us returning 404 errors with the User-Agent set to Googlebot]

After confirming I hadn't mistyped the URL, I checked out the del.icio.us homepage and found that all was fine there.  However, upon trying to perform a search, I was confronted with the same 404 error, and received the same response when trying to navigate to any page other than the homepage. 

At this point I was thinking that there might have been some server issues going on with del.icio.us, but that didn't line up with my Firefox add-in still showing my bookmarks.  I then noticed that my User-Agent switcher add-in was active (not sending the default User-Agent header), and remembered that I had set it to switch my User-Agent to Googlebot because I was checking another site earlier today to see if it was cloaking (it was).

I reset the User-Agent switcher so it was sending my normal User-Agent header and tried accessing my del.icio.us page again and I was surprised to see that it was no longer responding with a 404 error.  Puzzled by this, I took a look at del.icio.us' robots.txt and found that it was disallowing Googlebot, Slurp, Teoma, and msnbot for the following:

Disallow: /inbox
Disallow: /subscriptions
Disallow: /network
Disallow: /search
Disallow: /post
Disallow: /login
Disallow: /rss

Seeing that the robots.txt was blocking these search engine spiders, I tried accessing del.icio.us with my User-Agent switcher set to each of the disallowed User-Agents and received the same 404 response for each one.  I thought that there might have been some obscure issue with the add-in that was leading to this behaviour, so I popped open Fiddler, a nifty HTTP debugging proxy that I use to sniff HTTP headers.  Fiddler has a convenient feature that allows you to create HTTP requests manually, so I created a simple set of request headers and made HEAD and GET requests using the different User-Agents listed in the robots.txt.  I received the same responses as before.

[Screenshot: a HEAD request made with the Googlebot User-Agent]
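For anyone who wants to reproduce the check without Fiddler, a few lines of .NET will do the same job.  This is just an illustrative sketch (the URL is a placeholder and the User-Agent is a typical Googlebot string), not the exact requests I made:

using System;
using System.Net;

class UserAgentCheck
{
    static void Main()
    {
        // Placeholder URL; any del.icio.us page other than the homepage showed the behaviour.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://del.icio.us/some-username");
        request.Method = "HEAD";
        request.UserAgent = "Googlebot/2.1 (+http://www.google.com/bot.html)";

        try
        {
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine((int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // 4xx and 5xx responses surface as WebExceptions.
            HttpWebResponse errorResponse = ex.Response as HttpWebResponse;
            if (errorResponse != null)
            {
                Console.WriteLine((int)errorResponse.StatusCode); // 404 with the spoofed User-Agent
            }
        }
    }
}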

My interest was definitely piqued at this point.  I ran a site: command against del.icio.us in Google, restricted to the past 24 hours, and found results as fresh as 15 hours old.

[Screenshot: recent Google search results for a site: command run against del.icio.us]

Running a normal site: command on del.icio.us revealed numerous results that Google had a cached version of, many of which were as fresh as three days old.

This evidence seems to indicate that del.icio.us has recently started blocking the major search engine spiders from crawling its site, based on the requesting User-Agent.  Given the recent crawl dates and cache dates, it looks like this started happening within the past 2-3 days.  This raises some questions as to the intentions of del.icio.us, and perhaps of Yahoo! itself.  With Yahoo! recently integrating del.icio.us bookmarks into its search results, this could be an attempt to enhance the effectiveness of that new feature by preventing competing search engines from indexing content from del.icio.us.  While Yahoo!'s Slurp bot is also blocked, it's unlikely that Yahoo! would need to crawl the content of one of its own sites, as Yahoo! actually owns del.icio.us.

What are your thoughts on this?

ASP.NET Custom Errors: Preventing 302 Redirects To Custom Error Pages

 
You can download the HttpModule here.
 
Defining custom error pages is a convenient way to show users a friendly page when they encounter an HTTP error such as a 404 Not Found, or a 500 Server Error.  Unfortunately ASP.NET handles custom error pages by responding with a 302 Temporary redirect to the error page that was defined. For example, consider an example application that has IIS configured to map all requests to it, and has the following customErrors element defined in its web.config:
 
<customErrors mode="RemoteOnly" defaultRedirect="~/error.aspx">
  <error statusCode="404" redirect="~/404.aspx" />
</customErrors>

If a user requested a page that didn't exist, then the HTTP response would look something like:

http://www.domain.com/non-existant-page.aspx --> 302 Found
http://www.domain.com/404.aspx  --> 404 Not Found
Date: Sat, 26 Jan 2008 03:08:21 GMT
Server: Microsoft-IIS/6.0
Content-Length: 24753
Content-Type: text/html; charset=utf-8
X-Powered-By: ASP.NET
 
As you can see, there is a 302 redirect that occurs to send the user to the custom error page.  This is not ideal for two reasons:

1) It's bad for SEO

When a search engine spider crawls your site and comes across a page that doesn't exist, you want to make sure you respond with an HTTP status of 404 and send it on its way.  Otherwise you may end up with duplicate content issues or indexing problems, depending on the spider and search engine.

2) It can lead to more incorrect HTTP status responses

This ties in with the first point, but can be significantly more serious.  If the custom error page is not configured to respond with the correct status code, then the HTTP response could end up looking like:

http://www.domain.com/non-existant-page.aspx --> 302 Found
http://www.domain.com/404.aspx  --> 200 OK
Date: Sat, 26 Jan 2008 03:08:21 GMT
Server: Microsoft-IIS/6.0
Content-Length: 24753
Content-Type: text/html; charset=utf-8
X-Powered-By: ASP.NET
 
This would almost guarantee duplicate content issues for the site with the search engines, as the search spiders are simply going to assume that the error page is a normal page like any other.  Furthermore, it will probably cause some website and server administration headaches, as HTTP errors won't be accurately logged, making them harder to track and identify.
I tried to find a solution to this problem, but I didn't have any luck finding anything, other than people who were also looking for a way to get around it.  So I did what I usually do, and created my own solution.
 
The solution comes in the form of a small HTTP module that hooks onto the HttpApplication.Error event.  When an error occurs, the module checks whether the error is an HttpException.  If it is, then the following process takes place:
  1. The response headers are cleared (context.Response.ClearHeaders()).
  2. The response status code is set to match the actual HttpException.GetHttpCode() value (context.Response.StatusCode = HttpException.GetHttpCode()).
  3. The customErrors section from the web.config is checked to see if the HTTP status code (HttpException.GetHttpCode()) is defined.
  4. If the status code is defined in the customErrors section, the request is transferred, server-side, to the custom error page (context.Server.Transfer(customErrorsCollection.Get(statusCode.ToString).Redirect)).
  5. If the status code is not defined in the customErrors section, the response is flushed, immediately sending the response to the client (context.Response.Flush()).

Here is the source code for the module.

Imports System.Web
Imports System.Web.Configuration

Public Class HttpErrorModule
  Implements IHttpModule

  Public Sub Dispose() Implements System.Web.IHttpModule.Dispose
    'Nothing to dispose.
  End Sub

  Public Sub Init(ByVal context As System.Web.HttpApplication) Implements System.Web.IHttpModule.Init
    AddHandler context.Error, New EventHandler(AddressOf Context_Error)
  End Sub

  Private Sub Context_Error(ByVal sender As Object, ByVal e As EventArgs)
    Dim context As HttpContext = CType(sender, HttpApplication).Context
    If (context.Error.GetType Is GetType(HttpException)) Then
      ' Get the Web application configuration.
      Dim configuration As System.Configuration.Configuration = WebConfigurationManager.OpenWebConfiguration("~/web.config")

      ' Get the <customErrors> section.
      Dim customErrorsSection As CustomErrorsSection = CType(configuration.GetSection("system.web/customErrors"), CustomErrorsSection)

      ' Get the collection of defined custom errors.
      Dim customErrorsCollection As CustomErrorCollection = customErrorsSection.Errors

      Dim statusCode As Integer = CType(context.Error, HttpException).GetHttpCode()

      ' Clear the existing response headers and set the correct status code.
      context.Response.ClearHeaders()
      context.Response.StatusCode = statusCode

      If (customErrorsCollection.Item(statusCode.ToString) IsNot Nothing) Then
        context.Server.Transfer(customErrorsCollection.Get(statusCode.ToString).Redirect)
      Else
        context.Response.Flush()
      End If

    End If

  End Sub

End Class
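
For those who prefer C#, here is a rough equivalent of the module.  This is a sketch translated from the VB source above rather than the downloadable binary, so treat it accordingly:

using System;
using System.Web;
using System.Web.Configuration;

public class HttpErrorModule : IHttpModule
{
    public void Dispose()
    {
        // Nothing to dispose.
    }

    public void Init(HttpApplication context)
    {
        context.Error += new EventHandler(Context_Error);
    }

    private void Context_Error(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        HttpException httpException = context.Error as HttpException;
        if (httpException == null)
            return;

        // Get the <customErrors> section from the web.config.
        System.Configuration.Configuration configuration =
            WebConfigurationManager.OpenWebConfiguration("~/web.config");
        CustomErrorsSection customErrorsSection =
            (CustomErrorsSection)configuration.GetSection("system.web/customErrors");
        CustomErrorCollection customErrorsCollection = customErrorsSection.Errors;

        int statusCode = httpException.GetHttpCode();

        // Clear the existing headers and respond with the real status code.
        context.Response.ClearHeaders();
        context.Response.StatusCode = statusCode;

        CustomError customError = customErrorsCollection.Get(statusCode.ToString());
        if (customError != null)
        {
            context.Server.Transfer(customError.Redirect);
        }
        else
        {
            context.Response.Flush();
        }
    }
}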

The following element also needs to be added to the httpModules element in your web.config (replace the attribute values if you aren't using the downloaded binary):

<httpModules>
<add name="HttpErrorModule" type="ColinCochrane.HttpErrorModule, ColinCochrane" />
</httpModules>

And there you go! No more 302 redirects to your custom error pages.

Web Standards: The Ideal And The Reality

There has been a flurry of reactions to the IE8 development team's recent announcement about the new version-targeting meta declaration that will be introduced in Internet Explorer 8.  In an article I posted on the Metamend SEO Blog yesterday, I looked at how this feature could bring IE8 and web standards a lot closer together and find the ideal balance between backwards-compatibility and interoperability.  Many, however, did not share my optimism and saw this as another cop-out by Microsoft that would continue to hold back the web standards movement.  Given that this is a topic involving both Internet Explorer/Microsoft and web standards, I naturally came across a lot of heated discussion.  As I read more and more of it I was once again reminded of how many people take an unreasonably hard stance on the issue of web standards and browser support.  When it comes to a topic as complex as web standards and interoperability it is crucial to consider all factors, both theoretical and practical, otherwise the discussion inevitably ends up taking on a "you're with us or against us" mentality that does little to benefit anyone.

The Ideal

Web standards are intended to bring consistency to the Web.  The ultimate ideal is a completely interoperable web, independent of platform or agent.  The more realistic ideal is a set of rules for the creation of content that, if followed, would ensure consistent presentation regardless of the client's browser.  This would allow web developers who followed these rules to be safe in the knowledge that their content would be presented as they intended for all visitors.

The Reality

Web standards are attempting to bring consistency to what is an enormously complex and vast collection of mostly inconsistent data.  Even with more web pages being created that are built on web standards, there is still, and will always be, a subset of this collection that is non-standard.  There will never be an entirely interoperable web, nor would anyone reasonably expect there to be.  The reasonable expectation is that web standards are adopted by those who develop new content, or modify existing content, and that major web browsers will be truly standards-compliant in their presentation, so that web developers need not worry about cross-browser compatibility.

One aspect that is often forgotten is the average internet user.  They don't care about standards, DOCTYPEs or W3C recommendations.  All they care about is being able to visit a web site and have it display correctly, as they should.  This is what puts the browser developers in a bind, because the browser business is competitive and it's hard to increase your user base if most pages on the web break when viewed with your product.  A degree of backwards-compatibility is absolutely essential, and denying that is simply ignorant.  This leads to something of a catch-22, however, because on the other side of the coin are the website owners who may not have the resources (be it time or money), or simply lack the desire, to redevelop their sites.  They are unlikely to make a substantial investment to bring their sites up to code for the sole reason of standards-compliance unless there is a benefit in doing so, or a harm in not doing so.  While the more vigorous supporters of web standards may wag their fingers at Microsoft for spending time worrying about backwards compatibility, you can be sure that if businesses were suddenly forced to spend tens of thousands of dollars to make their sites work in IE, Microsoft would be on the receiving end of a lot more than finger wagging.

I admit this was a minor rant.  As a supporter of web standards, I get a great deal of enjoyment out of good, honest discourse regarding their development and future.  This makes it all the more frustrating to read article after article and post after post that take close-minded stances, becoming dams in the flow of discussion.  The advancement of web standards is, and can only be, a collaborative effort, and this effort will be most productive when everyone enters into it with their ears open and their egos left at the door.

Please Don't Urinate In The Pool: The Social Media Backlash

The increasing interest of the search engine marketing community in social media has resulted in more and more discussion about how to get in on the "traffic goldrush".  As an SEO, I appreciate the enthusiasm in exploring new methods for maximizing exposure for a client's site, but as a social media user I am finding myself becoming increasingly annoyed with the number of people that are set on finding ways to game the system.

The Social Media Backlash

My focus for the purposes of this post will be StumbleUpon, which is my favourite social media community by far.  That said, most of what I say will be applicable to just about any social media community, so don't stop reading just because you're not a stumbler.  Within the StumbleUpon community there has been a surprisingly strong, and negative, reaction to those who write articles/blog posts that explore methods for leveraging StumbleUpon to drive the fabled "server crashing" levels of traffic, or dissect the inner workings of the stumbling algorithm in order to figure out how to get that traffic with the least amount of effort and contribution necessary.

"What Did I Do?"

When one of these people would end up on the receiving end of the StumbleUpon community's ire, they would be surprised.  Instinctively, with perfectly crafted link-bait in hand, they would chronicle how they fell victim to hordes of angry stumblers, and express their disappointment while condemning the community for being so harsh.  Then, anticipating the inevitable rush of traffic their tale would attract to their site, they would hit the "post" button and quickly submit their post to their preferred social media channels.  What they didn't realize was that they were proving the reason for the community's backlash the instant they pressed "post".

Please Don't Urinate In The Pool

To explain that reason, we need to look at the reason people actually use StumbleUpon.  The biggest reason is the uncanny ability that it has for providing its users with a virtually endless supply of content that is almost perfectly targeted to them.  When this supply gets tainted, the user experience is worsened, and the better that the untainted experience is, the less tolerant the users will be of any tainting.

To illustrate, allow me to capitalize on the admittedly crude analogy found in the heading of this section.  Let's think of the StumbleUpon community as a group of friends at a pool party.  They are having a lot of fun, enjoying each other's company, when they discover someone has been urinating in the pool.  The cleaner the water was before, the more everyone is going to notice the unwelcome "addition" to the water.  When they find out who urinated in the pool, they are going to be understandably angry with them.  To stretch this analogy a little further, you can be damned sure that they wouldn't be happy when they found out that someone was telling everyone methods for strategically urinating in certain areas of the pool in order to maximize the number of people who would be exposed to the urine.

For anyone who was in the group of friends, and actually used and enjoyed the pool, the idea of urinating in it wouldn't even be an option.  Or, in the case of StumbleUpon, someone who actually participated in the community and enjoyed the service, wouldn't want to pollute it.

Catching Unwanted Spiders And Content Scraping Bots In ASP.NET


If you have a blog that is even moderately popular then you have likely fallen victim to some form of content scraping.  Ever since it became possible to earn money through ads on a website there have been people trying to find ways to cheat the system.  The most widespread example of this comes in the form of splogs and similar spam-based websites, which consist only of ads from Google AdSense and duplicated content that is scraped from other sites.  In this post I will share a method you can use to identify "evil" spiders and content scraping bots that are wasting your website's resources.

I'll start off by defining what is considered an "evil" spider/bot.  For our purposes here, we'll be looking at spiders and bots that ignore robots.txt and nofollow when crawling a site.  These are spiders and bots that offer you no value in exchange for crawling your site, as the major search engines use spiders and bots that respect these rules (with the unique exception of MSN, which employs a certain bot that presents itself as a regular user in order to identify sites that present different content to search engine spiders than to users).

Of these valueless spiders, some are almost certainly going to be some form of content scraping bot, which is sent to literally copy the content of your site for use elsewhere.  It is in your best interest to limit how much of your content gets scraped because you want visitors coming to your site, not some spam-filled facsimile.

This method of identifying unwanted spiders involves creating a trap, which can be set up as follows:

1) Create a Hidden Page

To identify these undesired visitors you need to isolate them.  Create a page on your site, but do not link to it from anywhere just yet.  For the purposes of my examples, I'll call our example page "trap.aspx".

[Screenshot: the example trap.aspx page]

Now you want to disallow this page in your robots.txt.

[Screenshot: robots.txt with the trap page disallowed]
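
The relevant robots.txt entry would look something like this (a minimal example, assuming the trap page sits at the root of the site):

User-agent: *
Disallow: /trap.aspx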

With this trap page disallowed in the robots.txt, good spiders will be prevented from crawling it.  What is needed now is a link to the trap page with the rel="nofollow" attribute, which should be placed on your home page for maximum effect.  The link must be invisible to users, otherwise you might mistake an unwitting visitor for a bad spider.

<a rel="nofollow" href="/trap.aspx" style="display:none;"></a>

This creates a situation in which the only requests for "/trap.aspx" will be from a spider or bot that ignores both robots.txt and nofollow, which is exactly the kind of bots we want to identify.

2) Create a Log File

Create an XML document and name it "trap.xml" (or whatever you want) and place it in the App_Data folder of your application (or wherever you want, as long as the application has write-access to the directory).  Open the new XML document and create an empty root-element "<trapRequests>" and ensure it has a complete closing tag.

<?xml version="1.0" encoding="utf-8"?>
<trapRequests>
</trapRequests>
 
You can use whatever method is best for you to log the requests; you do not need to use an XML document.  I am using XML for the purposes of this example.

3) Log What Gets Caught In The Trap

With the trap in place, you now want to keep track of the requests being made for "trap.aspx".  This can be accomplished quite easily using LINQ to XML, as illustrated in the following example:

Imports System.Xml.Linq

Partial Class trap_aspx
  Inherits System.Web.UI.Page

  Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    LogRequest(Request.UserHostAddress, Request.UserAgent)
  End Sub

  Private Sub LogRequest(ByVal ipAddress As String, ByVal userAgent As String)
    Dim logFile As XDocument

    Try
      logFile = XDocument.Load(Server.MapPath("~/App_Data/trap.xml"))
      logFile.Root.AddFirst(<request>
                              <date><%= Now.ToString %></date>
                              <ip><%= ipAddress %></ip>
                              <userAgent><%= userAgent %></userAgent>
                            </request>)
      logFile.Save(Server.MapPath("~/App_Data/trap.xml"))
    Catch ex As Exception
      My.Log.WriteException(ex)
    End Try
  End Sub

End Class

This code sets it up so every request for this page is logged with:

  1. The Date and Time of the request.
  2. The IP address of the requesting agent.
  3. The User Agent of the requesting agent.

You can, of course, customize what information is logged to your preference.  The code will need to be adjusted if you are using a different storage method.  Once done, you will end up with an XML log file (or your custom store) with every request to "trap.aspx" that will look like:

<?xml version="1.0" encoding="utf-8"?>
<trapRequests>
<request>
<date>12/30/2007 12:54:20 PM</date>
<ip>1.2.3.4</ip>
<userAgent>ISCRAPECONTENT/1.2</userAgent>
</request>
<request>
<date>12/30/2007 2:31:51 PM</date>
<ip>2.3.4.5</ip>
<userAgent>BADSPIDER/0.5</userAgent>
</request>
</trapRequests>
 
Now you've set your trap, and any unwanted bots and spiders that find it will be logged.  You are then free to use the logged data to deny access by IP address, User-Agent, or whatever other criteria you decide is appropriate for your site.
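
As a rough illustration of that last step, here is one way the logged IP addresses could be used to turn requests away.  This is only a sketch in C#; the blocked-ips.txt file, the Global.asax approach, and the 403 response are my own assumptions rather than part of the trap itself:

// Global.asax.cs - assumes ~/App_Data/blocked-ips.txt holds one IP address per line,
// compiled from the addresses that were caught by trap.aspx.
using System;
using System.Collections.Generic;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    private static HashSet<string> blockedIps = new HashSet<string>();

    protected void Application_Start(object sender, EventArgs e)
    {
        // Load the blocklist once when the application starts.
        string path = HostingEnvironment.MapPath("~/App_Data/blocked-ips.txt");
        if (File.Exists(path))
        {
            blockedIps = new HashSet<string>(File.ReadAllLines(path));
        }
    }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Turn away any request coming from an address that was caught by the trap.
        if (blockedIps.Contains(Request.UserHostAddress))
        {
            Response.StatusCode = 403;
            Response.End();
        }
    }
}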

GMail Security Exploit Allows Backdoor Into Your Account

I normally don't discuss topics related to online security, but I recently came across a post over at Sphinn that detailed how David Airey had his domain hijacked thanks to a security exploit in GMail, and I wanted to help make sure as many people as possible are made aware of it.  I recommend that any of you who use GMail read David's post detailing the trouble this has caused him, and make sure that your GMail account hasn't been compromised.

Not the most festive post for an early Christmas morning, but I'll wish everyone a Merry Christmas anyways.