On Testing and Security Engineering

I have been working in a large organization for quite a while now, and I have seen a lot of testing going on. As an information security engineer, I naturally aligned with testing, and indeed information security assurance does align well with it: it is done on a continuous basis, its results usually mean work for developers, operations people, system architects, and others, and not caring about it is equivalent to accepting unknown risks. Since I have been working in an environment where testing was paramount, I have been digging deeper and deeper into the testing literature.

Testing and Information Security, Worlds Apart

What surprised me most is how far removed security engineering is from the testing community in general. Just listen to the keynote of the largest testing conference and be astounded: no security folks in the audience at all. As a white-hat hacker I can understand where this comes from: information security seems to have evolved on its own, with little influence from the testing crowd. When I chose Information Security for my Masters, it was the very first year such a Masters was taught at my university. At the largest European security conference, the Chaos Communication Congress in Germany, which I attend every year, I have never met any testers. A lot of the security know-how seems to have come out of the underground, people hacking together and creating knowledge from scratch. If you look at one of the main reference books, Security Engineering by Ross Anderson (first published 2001), you will scarcely find a reference to testing or the testing literature.

Commonalities and Differences

I find this strict separation eerie. First of all, there is incredible overlap. For example, if you want to properly fuzz a complicated system, say Mozilla's JavaScript engine, you need the component under test (in this case, SpiderMonkey) built as a stand-alone module so it can be fed input directly. Such stand-alone executables are normally created by and for testers (or developers who care about testing, see TDD) so that modules can be tested in isolation. Without such an executable, trying to fuzz Firefox's JavaScript engine would be quite futile: Firefox startup times would kill your fuzzing session's efficiency. A minimal harness along these lines is sketched below. Similarly, the protocol and deployment diagrams used for functional and performance testing and validation are extremely useful for security architecture reviews. Note that testers almost always know more about the overall system and the context it is deployed in than developers, who in turn tend to be more focused on individual subsystems.
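To make the point concrete, here is a minimal sketch of what such a harness might look like. It is not Mozilla's actual fuzzing setup: the shell path, its -e flag, and the naive input generator are placeholders for whatever your own build and strategy provide. The point is simply that a stand-alone executable lets you try thousands of inputs cheaply.

```python
import random
import subprocess

# Placeholder: a stand-alone JavaScript shell built from the engine under test.
JS_SHELL = "./js"

def random_js(depth=3):
    """Generate a small, deliberately naive random JavaScript snippet."""
    if depth == 0:
        return random.choice(["0", "1", "'a'", "null", "undefined", "[]", "{}"])
    a, b = random_js(depth - 1), random_js(depth - 1)
    return random.choice([
        f"({a} + {b})",
        f"({a})[{b}]",
        f"(function() {{ return {a}; }})({b})",
    ])

def fuzz(iterations=1000):
    for i in range(iterations):
        snippet = random_js()
        try:
            # Feeding the shell directly costs milliseconds per case;
            # launching a full browser would cost seconds.
            proc = subprocess.run([JS_SHELL, "-e", snippet],
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            print(f"[{i}] hang: {snippet}")
            continue
        # On POSIX, a negative return code means the process died from a
        # signal (e.g. a segfault) -- exactly what a fuzzer is hunting for.
        if proc.returncode < 0:
            print(f"[{i}] crash (signal {-proc.returncode}): {snippet}")

if __name__ == "__main__":
    fuzz()
```

The interesting part is that the hard prerequisite, a module that can be driven in isolation, is a testing artifact, not a security one.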

Secondly, handling security findings the same way you handle high/medium/low priority issues caught by testers improves the chances that security is taken into consideration at the right level and in the right way. Just as you wouldn't ship a game that crashes after 10 minutes of play (and you might remove the crashing functionality as a stopgap), you wouldn't ship a system with an SQL injection vulnerability in it (and you might remove the functionality that accepts the unescaped input as a stopgap); the sketch after this paragraph illustrates the point. In other words, insecurity is created by some functionality (note that no functionality means perfect security), and that functionality can be changed, disabled, or restricted to allow for lower-risk operation. Or indeed, the risk can simply be accepted, as it sometimes is for functional or deployment issues. In this sense, information security engineers highlight risks that management can then decide to accept, mitigate, or eliminate. This is very similar to, say, a functional issue that could endanger the reputation of the company, or even human life in the case of a medical or safety device, if it were released.
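As an illustration of how a security defect is just another defect rooted in specific functionality, here is a hedged sketch (hypothetical table and field names, using Python's standard sqlite3 module) of an injectable query next to the same functionality in restricted, parameterized form:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Unescaped input concatenated into SQL: the functionality that creates the risk.
    # A username like "' OR '1'='1" turns the query into "return every row".
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn, username):
    # Same functionality, restricted: the driver binds the value as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "' OR '1'='1"
    print("vulnerable:", find_user_vulnerable(conn, malicious))  # leaks all rows
    print("fixed:     ", find_user_fixed(conn, malicious))       # returns nothing
```

Seen this way, the injection is a defect against a requirement ("only return the matching user"), which is exactly the shape of defect testers report every day.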

Finally, it must be acknowledged that information security differs from ordinary testing in a number of ways. First of all, the risks are often larger and harder to control for. If you inadvertently delete your customers' data, it is a shame and you might lose some customers; if that same data ends up in someone else's hands, you will be in for a much rougher ride. Systems that are low-risk from a functional standpoint can carry large security risks, as one can see time and again on the Full Disclosure mailing list. Similarly, testing of systems, even end-to-end, often does not need to take into consideration a number of factors that security folk have to: the legal landscape, trust in third parties such as governmental organisations, the incentives of both inside and outside threat agents, and so on. In general, a security engineer needs to take a broader context into account and may need to make calls on higher-than-average risk mitigation strategies. But note that risk mitigation is still the key component here, and controlling for risk often requires changes not unlike what testers might demand of developers to fix performance, deployment, or functional issues.

Conclusions

Successful companies seem to have capitalized on some of this divide. Codenomicon clearly appeals to the testing crowd with its intuitive interface and the way it can be integrated into test automation. The same can be said of Nessus. Both have packaged up information security know-how into something usable by testers who are not necessarily experts in information security.

Given the divide between these two crowds, I believe there is an opportunity here. Testing is a well-established area with a large knowledge base. Similarly, information security can now be considered a well-established discipline with its own certifications, knowledge base, and so on. Intermingling the two crowds, and hence some of the know-how, would, I think, serve both communities quite well. There is some obvious low-hanging fruit, such as more effective fuzzing and automated test case generation; one such bridge is sketched below. The non-obvious opportunities are probably more interesting, and I believe they will only come about once more bridges have been built.
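As one example of that low-hanging fruit, a property-based testing tool such as Hypothesis (a Python library; the header parser below is a hypothetical stand-in for a real component) lets testers generate adversarial inputs automatically, which is essentially fuzzing expressed in a tester's vocabulary:

```python
from hypothesis import given, strategies as st

def parse_header(raw: str) -> dict:
    # Hypothetical component under test: a naive "key: value" header parser.
    result = {}
    for line in raw.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(":")
        result[key.strip()] = value.strip()
    return result

# Hypothesis generates hundreds of arbitrary inputs per run and shrinks any
# failing case to a minimal example -- automated test case generation that
# doubles as lightweight fuzzing.
@given(st.text())
def test_parser_never_crashes_and_returns_dict(raw):
    assert isinstance(parse_header(raw), dict)
```

A tester can run this with their usual test runner, while a security engineer can read the same property as "no input crashes the parser"; the same artifact serves both crowds.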