I don’t envy the work of developers and system administrators who are responsible for the security of a software system. For starters, security is hard to think about conceptually: you have to ask yourself, again and again, how might someone wreck everything I’ve built? It’s also just plain hard to do. Software security pits one person, team, or organization against a wild world of software exploitation. In a fight like that, the discovery of a serious security vulnerability isn’t a possibility; it’s an inevitability.
So it’s not surprising that 2010 saw several high-profile security incidents. The two that stand out in my mind are Firesheep and the Gawker Media compromise. Firesheep, a Firefox extension released in October, demonstrated just how easy it is to hijack someone’s browser session over public wifi. In December, more than a million Gawker Media users’ passwords were publicly distributed. These two incidents stand out not only for their recency but for the way they demonstrated that security is difficult to get right and difficult to explain when things go wrong. Both illustrated that users themselves have very little control over how secure they are on the web, and that many critical components of security are arcane and complex.
In the case of Firesheep, many people learned that sessions which aren’t protected by SSL (that little padlock in your browser window) are vulnerable to attack over public, unencrypted wifi. With Firesheep installed, someone can sit down in a Starbucks and, within minutes, snoop on Facebook accounts, webmail, and more. It’s a difficult problem to fix: few non-login pages are served over SSL, and the cost of doing so can be high. But Firesheep exposed another difficulty as well: it’s incredibly hard to teach someone how to tell whether their session is secure. Several layers must be secured: transport, browser, and network. I’ve forgotten the risks while using public wifi myself; how do you teach a less technical audience to do as well or better? Take a look at this account of an attempt to actively warn users of the risks by hijacking their sessions. If that doesn’t work, what would?
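To make the mechanics concrete, here is a minimal sketch (in Python, not from any site mentioned above) of the cookie attribute at the heart of the problem. Firesheep works because session cookies sent over plain HTTP are visible to anyone on the same open network; a cookie marked `Secure` is only ever sent over HTTPS, so a wifi sniffer never sees it. The function and names here are illustrative, not any real site’s code.

```python
def set_cookie_header(name, value, secure=True, http_only=True):
    """Build a Set-Cookie header string.

    Secure:   browser sends the cookie only over HTTPS, so it never
              appears on an unencrypted wifi link.
    HttpOnly: cookie is hidden from page scripts, limiting theft via
              injected JavaScript.
    """
    parts = [f"{name}={value}"]
    if secure:
        parts.append("Secure")
    if http_only:
        parts.append("HttpOnly")
    return "Set-Cookie: " + "; ".join(parts)


print(set_cookie_header("session_id", "abc123"))
# Set-Cookie: session_id=abc123; Secure; HttpOnly
```

Of course, `Secure` only helps if the site actually serves its pages over SSL in the first place, which is exactly the cost problem described above.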
Likewise, consider the Gawker Media compromise: user account data was illicitly accessed and distributed, revealing a huge number of passwords and email addresses to the world. Though the passwords were not stored in cleartext, Gawker Media stored them with a DES-based crypt hash (for those interested), a method that made many of them trivially decipherable. Users whose exposed passwords were reused elsewhere risked having those accounts compromised as well. Had Gawker Media used a stronger method of storing passwords, that might not have been the case. In the end, Gawker Media prepared a 26-question FAQ to help users understand not only the effects of the compromise, but also what to do about it.
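Why does the choice of hash matter so much? A rough sketch in Python (this is illustrative, not Gawker’s actual code): a cheap, fast hash lets an attacker test enormous numbers of guesses per second against a leaked database, and DES crypt additionally truncates passwords to eight characters, shrinking the search space further. A deliberately slow, salted scheme (PBKDF2 below; bcrypt and scrypt are similar in spirit) makes each guess expensive, so the same dictionary attack becomes impractical.

```python
import hashlib
import os

def fast_hash(password):
    # Stand-in for a cheap scheme; MD5 here only to illustrate speed.
    return hashlib.md5(password.encode()).hexdigest()

def dictionary_attack(leaked_hash, wordlist):
    """Recover a password from a cheap, unsalted hash by trying guesses."""
    for guess in wordlist:
        if fast_hash(guess) == leaked_hash:
            return guess
    return None

def slow_hash(password, salt=None, iterations=100_000):
    """A salted, deliberately slow key-derivation function (PBKDF2)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

# With a fast hash, a tiny wordlist cracks a common password instantly.
leaked = fast_hash("monkey")  # an example of a weak, common password
print(dictionary_attack(leaked, ["password", "123456", "monkey"]))  # → monkey
```

The per-guess cost is the whole point: at 100,000 iterations each, a billion-word attack that took seconds against the fast hash takes the attacker orders of magnitude longer, and the per-user salt stops precomputed tables from working at all.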
In both cases, users were faced with a problem largely out of their control, but with potentially frustrating and costly effects. Those responsible for the security of Facebook, Gmail, and Gawker Media had headaches, but so too did those responsible for explaining what had happened. And I’m not sure anyone did an entirely satisfactory job of it. I think, more and more, the problem of security isn’t just the problem of security. It’s also a problem of helping people understand what security means and what it means to not have it.