The Information Security Policy Trap



InfoSec genius Ben Tomhave wrote:

"It's that time of year again: time to update the policies! This annual exercise is always a source of great enjoyment for me (no, not really). After all, there's nothing like having the non-technical flailing about as they try to force-feed technical requirements down the throats of IT without explaining, justifying, or providing any factual basis for asking. If there's something most techies love, it's an over-the-top policy recommended by external auditors.

Quite frankly, policies are the precursor to, and embodiment of, the checkbox-compliance mindset. We all know how well that's worked out for us thus far. I mean, looking at all the data breaches we're not having thanks to compliance and policies, right? Hahaha... oh.

One of the biggest problems with these annual policy-update exercises is that "policies" are rarely defined properly within the enterprise. Instead, you get a jumble of policies, standards, baselines, processes, and procedures, all crammed into some monolithic document that some know about, few review, and even fewer follow.

Policies should, definitionally, be a statement of desired risk management strategy. They must articulate the business case and basis for understanding operational risk and how the enterprise desires to manage that risk. These statements should establish the limits known as risk tolerance, risk capacity, and risk appetite, along with a means of measuring against those limits to ensure that IT is operating within the defined parameters.
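To make that measurable framing concrete, here is a purely illustrative sketch (the metric names, thresholds, and code are hypothetical, not something any particular policy dictates): the policy states the limits, and operations simply report whether their metrics fall within them.

```python
# Hypothetical sketch: a policy defines risk limits; operations reports metrics
# against them. Names and thresholds are illustrative, not from any standard.

RISK_LIMITS = {
    "unpatched_critical_vulns_max_age_days": 30,   # risk tolerance
    "pct_systems_outside_baseline_max": 5.0,       # risk appetite
}

def within_limits(metrics: dict) -> dict:
    """Compare reported operational metrics to the policy's risk limits."""
    return {
        name: metrics.get(name, float("inf")) <= limit
        for name, limit in RISK_LIMITS.items()
    }

if __name__ == "__main__":
    reported = {
        "unpatched_critical_vulns_max_age_days": 42,
        "pct_systems_outside_baseline_max": 3.1,
    }
    print(within_limits(reported))
    # {'unpatched_critical_vulns_max_age_days': False,
    #  'pct_systems_outside_baseline_max': True}
```

The point of the sketch is the separation of concerns: the policy owns the limits, while the "how" of staying inside them belongs to the operational teams.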

Policies are not the place to document specific technical controls (or practices). Those statements must exist within the other, lower levels of documentation and, for the most part, should be owned by the operational teams, which ensures they have an incentive to adhere to their own defined practices.

A quick path to failure is having non-technical compliance people telling technical people how to operate their IT without actually understanding what they're talking about. As if we didn't have enough credibility challenges, let's just make it worse by trying to inject incompetence and misunderstanding into a technical environment. That should work out great.

This line of thought leads me to three significant fallacies I've consistently encountered.

Fallacy #1: If You Write Them, They Will Comply

One of the most amusing notions is that of the aspirational security policy. "If we write this policy at this level, then everyone will come into line. We just need the strength of a policy to force people into a new way of behavior." All well and good until you immediately start issuing exceptions to the policy because people can't comply. Or, worse, your policy isn't really a policy at all, but rather a standard, baseline, process, procedure, etc.

The fact of the matter is that policies should not be viewed as "something to comply with" but rather as "the risk management boundaries within which we must operate." Policies should clearly articulate the risk management strategy, and then everything else (such as technical standards) should provide the implementation details that demonstrate meeting those expectations.

A perfect example is the SOX audit. SOX 404 does not specify technical controls. It establishes a high-level objective to guard against fraud in financial systems. Unfortunately, the AICPA, ISACA, and the auditor community have spun that to mean specific technical practices when, really, the focus should be on a handful of specific, auditable capabilities like monitoring for configuration state changes, monitoring for attacks and anomalous access/traffic, and demonstrating overall process integrity through automated methods/mechanisms. However, these things are a bit more "squishy" and require a lot more technical savvy to audit, which leads to devolving the requirements into checklists, which in turn can be adopted by people lacking cluefulness and forced down the throats of ops teams that are already under siege from other quarters. It's a lose-lose proposition, and I marvel that anybody willingly enters the IT space anymore, figuring they just don't realize what they're in for... but I digress...
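For a sense of what a capability like "monitoring for configuration state changes" can look like in practice, here is a minimal, hypothetical sketch; the watched paths and baseline file are placeholders, not a prescribed control.

```python
# Minimal, hypothetical example of monitoring configuration state changes:
# hash watched files and report any drift from a stored baseline.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("/etc/ssh/sshd_config"), Path("/etc/sudoers")]  # placeholder paths
BASELINE_FILE = Path("config_baseline.json")

def snapshot() -> dict:
    """Return {path: sha256} for every watched file that exists."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in WATCHED if p.exists()
    }

def drift() -> list:
    """List watched files whose hash differs from the recorded baseline."""
    baseline = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}
    return [path for path, digest in snapshot().items() if baseline.get(path) != digest]

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps(snapshot(), indent=2))
        print("Baseline recorded.")
    else:
        changed = drift()
        print("Config drift detected:" if changed else "No drift.", changed or "")
```

A capability like this is auditable (did drift get detected and acted on?) without the audit dictating which tool or script performs the check.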

Fallacy #2: Policies Stop Incidents

Show me a documented policy that has stopped a data breach. No, really... I'll wait. Policies don't stop breaches, nor should they! Policies establish the overall risk management context within which specific practices should be established.

Technical and administrative controls stop incidents. Policies are neither of these things. Processes and procedures are administrative controls; they align to the desired performance characteristics set forth in policies, and the same goes for technical controls. What this means is that a policy, at best, provides a one-off means of protection against incidents, but because you don't implement policies so much as align to the standard of performance they set, they do not prevent anything.

I asked this question on Twitter and one response back pointed to an example of a "policy" that said "don't ever give your password out, IT will never ask for it" as an example of a policy preventing an incident. It was highlighted that this "policy" typically resides inside "employee policies." My response and counter-argument is that this isn't a "policy" so much as a security awareness talking point, and that it's now lumped in with HR "policies," which are distinctly different from security/technical policies. What this really amounts to is an administrative control ("awareness statement") that is placed within an HR context for the purposes of making people aware of what to expect.

Maybe it's nitpicking, but the point is this: the policy statement/objective is to minimize the incidence of compromised accounts because they have a negative impact on the business, and one of the administrative controls implemented to meet this objective is the awareness initiative telling people not to give out their passwords. Undoubtedly, there will be several other technical and administrative controls to further meet that high-level objective.
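To illustrate that layering, here is a rough, hypothetical structure (the objective, controls, and measurement are illustrative only) mapping a single policy objective to the administrative and technical controls that implement it:

```python
# Hypothetical mapping of one policy objective to its implementing controls.
# Control names and the measurement are illustrative only.
OBJECTIVE = {
    "policy_objective": "Minimize the incidence of compromised accounts",
    "administrative_controls": [
        "Awareness statement: IT will never ask for your password",
        "Recurring phishing-awareness training",
    ],
    "technical_controls": [
        "Multi-factor authentication on remote access",
        "Alerting on anomalous login patterns",
    ],
    "measurement": "Compromised accounts per quarter vs. defined risk tolerance",
}

for layer in ("administrative_controls", "technical_controls"):
    print(layer)
    for control in OBJECTIVE[layer]:
        print(f"  - {control}")
```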

Fallacy #3: You Can Just Adopt Standard X For Policies

First, why would you want to abdicate decisional authority over how your organization functions to a third-party entity that knows nothing about your organization? Second, most standards start with a "scoping" phase that requires you to first understand and define your business requirements, which is what your policies should be articulating (anything more than that gets beyond the role of a policy and into the land of standards, baselines, etc.). Third, while various standards and lists of practices can be instructive, such as for security architecture, it's rarely a good idea to blindly adopt them wholesale without heavily customizing them to meet your organization's needs.

Yes, it's true that lists of practices like the CCS Critical Security Controls (formerly SANS) can be informative in terms of making specific technical architecture decisions, but this is most definitely not the realm of policies. Policies must articulate why specific changes are necessary/important and what business risk management objective is being achieved. It's a common misconception among certain populations (*cough* auditors *cough*) that one can simply checklist-away all of the world's ills. Of course, if this were true, and if securing the enterprise were really so easy, then we wouldn't need to have this conversation.

---
Policies typically are nothing more than a trap. Some folks mistakenly believe in aspirational policies, which cannot be enforced and are therefore null and void (and enforcing them arbitrarily can lead to legal issues). Others think that writing a policy will magically change practices or corporate culture. Again, a logical trap, in that the policies don't actually do anything. If the policy isn't a direct change in practices, then it's a one-off, which we know definitely does not result in change.

The proper use of a policy is to articulate business requirements and objectives, which are then met through the implementation of technical and administrative controls. These controls should be owned by the implementing teams, which allows them the flexibility to come up with feasible solutions. The policy should provide a means for measuring that "risk" is within the defined limits, but must stop short of specifying the "how." Sadly, this perspective is often misunderstood and misrepresented, leading to the annual circus of "policy updates." Fun times."

So now that you are clearer about the difference between Policy, Procedure, and Awareness, it's time to deploy new-school security awareness training and step all employees through it ASAP. Find out how affordable this is today and be pleasantly surprised.

Get A Quote Now

 

Cross-posted from the genius Ben Tomhave at The Falcon's View blog.

