IT Security Differently

Compliance and regulations are one way to achieve IT security. If one looks at industries that have been around for a very long time and have very high stakes, for example commercial airline travel, mining, or oil & gas, one finds compliance and regulations everywhere. It is how safety is managed in these environments. I have always been fascinated by safety incidents and have read many reports about them; unlike IT security incident reports, they are almost always free to read and very detailed. See, for example, the now very famous Challenger accident report (“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”) or the similarly famous, more recent AF-447 accident report. These are fascinating reads, and if you are willing to read between the lines, they all talk about systems issues, not a single person making a single mistake.

Human Errors and the Dangers of Going Down and In

Sidney Dekker talks a lot about safety, trust and accountability. I strongly suggest reading The Field Guide to Understanding ‘Human Error’. In it, he talks a lot about going “up and out” instead of “down and in” when an accident happens. That is, instead of trying to pin down the exact line of code that ultimately allowed the remote code execution, and who exactly wrote it, he advocates mapping out the people, processes and technologies that allowed that code not only to be written, but to be approved, deployed to production, and to stay there until it ultimately allowed the security incident to happen. Once these are mapped out, he advocates inviting the practitioners who create security at all these junctions to collaboratively find a way to reduce the risk of the incident happening again.

Because here is the issue with going down and in: you blame the person who committed the code, fix the code, berate/fire/demote them, and you are now “done”. Anybody who works in security knows that’s not going to fix the issue. Where there is one SQLi, there are usually many more. When you see no AWS Config rules, you usually expect the AWS deployment to be nearly devoid of security auditing and of detective and reactive controls. That’s just how people (and hence organisations) work. We need to do better than just blaming the individual.
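
As a small, concrete illustration of what such a detective control looks like, here is a sketch only, assuming boto3 is installed, AWS credentials are configured, and the AWS Config recorder is already running in the account; enabling a single AWS-managed Config rule that flags publicly readable S3 buckets can be as simple as:

    # Sketch: enable one AWS-managed Config rule as a detective control.
    # Assumes boto3 is installed, credentials are set up, and the AWS Config
    # recorder is already running in this account and region.
    import boto3

    config = boto3.client("config")

    config.put_config_rule(
        ConfigRule={
            # The rule name is an arbitrary choice for this example.
            "ConfigRuleName": "s3-bucket-public-read-prohibited",
            "Description": "Flags S3 buckets that allow public read access.",
            "Source": {
                "Owner": "AWS",  # AWS-managed rule, no custom Lambda required
                "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
            },
        }
    )

The point is not this particular rule; it is that such rules exist, are cheap to turn on, and their complete absence tells you something about how the environment is being run.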

As far as I understand it, what Dekker advocates, translated to security, is to make your practitioners embrace rather than fear the security support available in your organisation, and to have security engineers engage with these practitioners so that they better understand what to look out for, from potential security vulnerabilities to indicators of compromise, and are not afraid to report their concerns. If people fear that reporting an issue will lead to complicated and unnecessary security audits, or that their work (which ultimately everybody prides themselves on) will be taken over and potentially taken out of their hands if they do report, then you will not hear of potential issues early enough to remediate them before they become serious.

Culture of Security

The above runs in parallel with what Dekker says about minor accidents in the workplace: the more of these that are reported, the fewer major accidents happen. Remember, BP and Transocean managers were on board the Deepwater Horizon to celebrate seven years without a lost-time accident when it blew up. This is as counter-intuitive in safety as it is in security, yet it makes sense in both, and for the very same reasons.

The more you sweep your small issues under the carpet, the more likely you are to have bigger ones down the road, because you are actively discouraging a safety/security culture in your organisation. Instead, an open discussion of both safety and security lets practitioners voice their concerns and lets processes be adapted to address them, allowing for higher production throughput and higher safety/security, and, crucially, for engagement from the very practitioners you must rely on to create security. Because here is the real catch: security is created every single day, by the people who write the code, review it, deploy it, and maintain and monitor the deployed environment.

Engaging the Sharp End

IT security engineers cannot read every code commit, be present at every deployment, or check every change to every deployed environment. The people performing these actions are at what the safety industry calls “the sharp end”: practitioners making security-critical decisions every single day. What IT security engineers can do, however, is educate those who are there, give them support when they need it, and, most importantly, foster a culture where asking for help at the sharp end is not a recipe for having your work snatched away “to be done right”, or for a new process to be created without involving the practitioners and then imposed on them. Instead, the security engineers, and the culture around them, should assure whoever reports an issue that they will be made part of the solution, whatever it may be.


PS: Thanks to everyone, colleagues and otherwise, who made me think about all of the above… it’s still a work in progress :)