The Fastmail security mindset
“Security” is a word that gets bandied around a lot in the IT world, often with little actual thought or substance behind its use. The phrase “we take your privacy and security seriously” is the preamble to many a mea culpa from companies who, frankly, didn’t.
Fastmail has always been an engineering-focused company, from the top down. As such, there is a strong no-bullshit culture and an intense dislike of security theatre.
Our approach to security is to proactively develop and adopt any measures which meaningfully improve the confidentiality, availability or integrity of our customers’ data. We are not interested in implementing things that sound good in marketing spiel but don’t actually help, or may even actively hurt, our customers’ security. We also strongly believe that usability is part of security: to be secure, we need to make it easy to stay safe and hard to get wrong.
As an example of this mindset, we were one of the early adopters of opportunistic TLS encryption of SMTP connections when sending and receiving mail. This prevents a passive man-in-the-middle attacker from snooping on your data, making mass surveillance much harder.
This even protects against interception of metadata: someone watching our outbound connections might know only that Fastmail connected to Gmail, for example. A lot of email is sent between us by many different users, so observing this connection does not leak much information. (Interestingly, this is where there is safety in numbers: if you and your intended recipient both hosted your own email on individual servers, encrypting the connection wouldn’t really hide who the message is from or to!)
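The “opportunistic” part of this boils down to a simple decision: upgrade the connection to TLS whenever the remote server advertises STARTTLS, and carry on in the clear when it doesn’t. A minimal sketch in Python, purely for illustration (real MTAs implement this natively, and the connection helper below is a hypothetical outline, not our actual delivery code):

```python
# Sketch of opportunistic TLS on an outbound SMTP connection.
import smtplib
import ssl

def should_starttls(advertised_extensions):
    """Upgrade whenever the remote server advertises STARTTLS."""
    return any(e.lower().startswith("starttls") for e in advertised_extensions)

def connect_opportunistic(host, port=25):
    """Connect to a remote MX, upgrading to TLS if the server offers it."""
    smtp = smtplib.SMTP(host, port, timeout=30)
    smtp.ehlo()
    if should_starttls(smtp.esmtp_features):
        # Opportunistic mode: encrypt even if the certificate can't be
        # verified, since the alternative is sending in the clear. This
        # defeats passive snooping, but not an active man-in-the-middle.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        smtp.starttls(context=ctx)
        smtp.ehlo()  # re-issue EHLO over the now-encrypted channel
    return smtp
```

Note the comment in the middle: accepting an unverified certificate is exactly why opportunistic TLS only stops passive attackers, which is the gap MTA-STS is designed to close.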
Supporting encrypted SMTP meaningfully improved the confidentiality of our customers’ data without impacting our users’ workflow. And there’s still more we can do in this area! Initiatives like MTA-STS will allow us to further protect against active man-in-the-middle attacks on mail delivery, all without impacting usability.
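MTA-STS works by having a domain publish a policy over HTTPS telling sending servers that its mail must be delivered over verified TLS, plus a DNS record announcing that the policy exists. A sketch of what that looks like, with placeholder domains and values:

```
# DNS TXT record announcing the policy (the id changes whenever the
# policy is updated):
_mta-sts.example.com.  IN TXT  "v=STSv1; id=20190429T010101"

# Policy file, served at https://mta-sts.example.com/.well-known/mta-sts.txt:
version: STSv1
mode: enforce
mx: *.example.com
max_age: 604800
```

Because the policy is fetched over authenticated HTTPS and cached for `max_age` seconds, an active attacker can no longer simply strip the STARTTLS offer from the SMTP conversation and force delivery in the clear.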
Just as important as what we do do is what we don’t. For example, we don’t do full message encryption (e.g. PGP) in the browser. In theory it means you “don’t have to trust us”. However in reality, every time you open your email you would be trusting the code delivered to your browser. If the server were compromised, it could easily be made to return code that intercepted and sent back your password next time you logged in; it could even just do this for specific users. It is very unlikely that a user would notice.
We therefore don’t believe this offers a meaningful increase in security, and can be actively harmful in a number of ways. It reduces availability, because if you forget your password we cannot help you recover access to your own mail. It makes phishing (by far the biggest cause of compromised accounts) much harder to filter out.
It can also be seriously dangerous when users misunderstand the security characteristics. For example, if you were a journalist working undercover in certain countries, you may justifiably require secure, anonymous communication. “Encrypted email” sounds like just the thing you need. But if your mail host doesn’t proxy images to hide your IP, someone could simply send you a message which, when opened, made your device connect directly to their servers. This reveals your IP address, which can often be used to fairly precisely determine your location, and sends cookies that may allow them to correlate your email address with visits to other sites on the web. That’s a much bigger risk.
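The image-proxying defence mentioned above amounts to rewriting remote image references in a message so that the reader’s client talks to the mail host’s server rather than the sender’s. A minimal sketch in Python (the proxy endpoint URL is a made-up placeholder, and a real implementation would parse HTML properly and handle far more than bare `src` attributes):

```python
# Sketch: rewrite remote image URLs so they are fetched via a proxy,
# hiding the reader's IP address and cookies from the sender.
import re
from urllib.parse import quote

PROXY = "https://proxy.example.com/image?url="  # hypothetical endpoint

def proxy_images(html):
    """Rewrite http(s) img src attributes to go through the proxy."""
    def rewrite(match):
        original_url = match.group(2)
        return match.group(1) + PROXY + quote(original_url, safe="") + match.group(3)
    return re.sub(
        r'(<img\b[^>]*\bsrc=")(https?://[^"]+)(")',
        rewrite,
        html,
        flags=re.IGNORECASE,
    )
```

With this in place, the sender’s “tracking pixel” only ever sees a connection from the proxy, not from the reader’s device.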
Ultimately, security is a process, not a checkbox. We are always looking for further measures that will help secure our customers’ sensitive data. But we don’t do stuff just to check a marketing box. It may lose us a few customers enticed by razzle-dazzle claims, but we feel better about the integrity of our service.