Many people are discussing HushMail these days, in light of the recent story: data from the secure email service was handed over to the FBI, which used it to bust an illegal steroid ring.
Of course those people are all bad, and it’s good that they have been caught. But this brings back the old question about the security of HushMail and similar services.
Bruce Schneier wrote an essay about this back in 1999 (when HushMail first appeared), so these issues are pretty well known by now. The lesson is straightforward: having your own client is better than trusting the “untrusted” provider to supply the code for you.
So the obvious question is: why bother? Why not use Yahoo/Gmail instead of HushMail? And why not use OpenID instead of SlashID? Those are trusted providers that have access to your sensitive data - but at least they are upfront about it. And they promise not to give your data to anyone.
This is a real question that I heard from quite a few security people while discussing SlashID, and it was posed as a serious objection to its usefulness. The argument went something like this: “If stealing a user’s password is not ‘cryptographically hard’, then it’s essentially a ‘no-op’. If so, I’m just as well off with OpenID as with SlashID or any other assertion-based solution. Why bother developing a new scheme?”
To answer that, we first need to agree that, generally, an untrusted service (a hypothetical one, even if SlashID doesn’t yet qualify) is better than a trusted one - in fact, much better. This is simply because an untrusted service can’t harm me even if it wants to, whereas a trusted one promises not to harm me, even though it could. I would always prefer lack of ability over a promise, because promises can be broken, while abilities cannot magically appear. I think this point is also well understood today.
Now, let’s see what it takes to make SlashID untrusted “again”. Very simple - write a browser plugin that does whatever SlashID does, release it under the GPL, and make it downloadable from somewhere else - not from our website. That’s it - we’re untrusted again. And this time it’s “for real” - no “no-ops” or other tricks. If the FBI shows up and wants us to disclose some data, guess what: we’d love to help, but we have no control whatsoever beyond shutting down the accounts in question (which, by the way, would not help much, since we are not a single point of failure - but that’s a topic for a different post…). No changes to the protocol, and no code changes on the Relying Party or the Identity Provider, are required.
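To make the idea concrete, here is a minimal sketch of client-side, assertion-based authentication - not SlashID’s actual protocol, just an illustration of the general shape. All names and parameters here are hypothetical. The point is that the secret is derived from the password entirely on the client, and only a one-time response to a challenge ever leaves the user’s machine - so whoever ships the client code, the provider has nothing to hand over:

```python
import hashlib
import hmac
import os

# Hypothetical sketch (NOT SlashID's real protocol): the client derives
# a secret from the password locally; the password itself never travels.

def derive_secret(password: str, user_id: str) -> bytes:
    # Key stretching happens entirely on the client side.
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), user_id.encode(), 100_000
    )

def make_assertion(secret: bytes, challenge: bytes) -> str:
    # Answer the Relying Party's fresh challenge; the response is
    # useless for replay once the challenge changes.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(expected_secret: bytes, challenge: bytes, assertion: str) -> bool:
    # Constant-time comparison on the verifier's side.
    return hmac.compare_digest(
        make_assertion(expected_secret, challenge), assertion
    )

# Client side:
secret = derive_secret("correct horse", "alice@example.com")
challenge = os.urandom(16)  # issued fresh by the Relying Party
assertion = make_assertion(secret, challenge)
```

In this symmetric sketch the verifier still holds the derived secret, so it is weaker than a real assertion scheme (which would use public-key signatures); the sketch only shows why the *code* doing the derivation can come from anyone, which is exactly what the GPL plugin achieves.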
In other words, in SlashID’s case, a certain amount of trust is embedded in the particular implementation of the technology, but not in the technology itself. So in essence, even if SlashID does not eliminate trust completely, it puts trust into a form from which it can easily be removed in the (hopefully near) future.
To me, that’s a big deal - in fact, no less important than actually eliminating the trust. Think about how you handle garbage at home: most of your time and effort is spent collecting it in one place so that it can conveniently be taken out later in a single operation. This is what we want to do with trust - eliminate it eventually, but one step at a time.