Enabling instead of Blocking – How Etsy’s Approach to Security is Different

Stefania Druga, one of my fellow attendees of “Code as Craft: Crafting an Effective Security Organisation”, a talk given by Rich Smith, Director of Security at Etsy, on Tuesday at Etsy’s Berlin offices, has already written a post about it. Since she does a good job of summarizing several of the talk’s key points, her post and Rich’s slides will give you a good idea of the content as a whole. Here, I want to focus on a single aspect that stuck in my mind after the talk.


Same, same – but different

The idea that people, the human factor, are crucial to IT security, which Rich mentioned early on in his talk, is, in my opinion, no longer all that novel. I came across this theme frequently while “binge listening” to about 100 episodes of Gary McGraw’s Silver Bullet podcast from the last couple of years, and while watching talks from the Chaos Communication Congress and DEFCON, after I had decided I wanted to become more knowledgeable about security. The idea is an essential starting point for improving security, and by now it is probably present in many organizations, albeit perhaps to an insufficient extent. However, I think most organizations draw the wrong conclusions and next steps from it.

Same idea, with different conclusions

Many organizations “factor in” human behavior in their approach to security, yet Etsy seems to handle it quite differently. The idea of security as an enabler rather than a blocker stands out as the crucial difference. Elsewhere, the default still seems to be blocking or restricting, often with a growing number of security layers, to counteract every possible human error (or malicious intent) at different levels. People are taken into account, but in a way that drives an increasing divergence between the (intended) security policies and measures and the “security reality”, the actual practice in the organization. Just like “bolting security on” instead of “building security in” (to borrow the title of one of Gary McGraw’s books), this is, of course, not a promising route to take.

If the security reality in an organization diverges from the intended security in this way (through blocking), the organization also loses the ability to change its security measures and policies effectively. The ultimate outcome may well be theoretical compliance on paper combined with a real-world mess. Once the security team becomes the “last to know”, it is relegated to fighting fires, and any proactive approach to security becomes close to impossible. When adding more and more layers of blocking seems to be the way forward, that is often an indicator that explaining security to users has failed at the levels previously deemed sufficient. To take a very simple example: if users start writing their passwords on post-it notes because they are required to change them very frequently (and they do not understand why that might be necessary), you have achieved the opposite of what you wanted.

Let me know your thoughts on enabling versus blocking security cultures on Twitter!