
Why don't risk management programs work?

When the moderator of a panel discussion at the recent RSA conference asked the audience how many thought their risk management programs were successful, only a handful raised their hands. So Network World Editor in Chief John Dix asked two of the experts on that panel to hash out in an email exchange why these programs don't tend to work. 

Alexander Hutton is director of operations risk and governance at a financial services firm (that he can't name) in the Greater Salt Lake City area, and Jack Jones is principal and Co-Founder of CXOWARE, Inc., a SaaS company that specializes in risk analysis and risk management.


Jones: Risk management programs don't work because our profession doesn't, in large part, understand risk. And without understanding the problem we're trying to manage, we're pretty much guaranteed to fail. The evidence I would submit includes:

* Inconsistent definitions for risk. Some practitioners seem to think risk equates to outcome uncertainty (positive or negative), while others believe it's about the frequency and magnitude of loss. Two fundamentally different views. And although I've heard the arguments for risk = uncertainty, I have yet to see a practical application of the theory to information security. Besides, whenever I've spoken with the stakeholders who sign my paychecks, what they care about is the second definition. They don't see the point in the first definition because in their world the "upside" part of the equation is called "opportunity" and not "positive risk".

* Inconsistent use of terminology. This relates, in part, to the previous point. If we don't understand the fundamental problem we're trying to manage then we're unlikely to firmly understand the elements that contribute to the problem and establish clear definitions for those elements. I regularly see fundamental terms like threat, vulnerability, and risk being used inconsistently, and if we can't normalize our terms, then there seems to be little chance that we'll be able to normalize our data or communicate effectively. After all, if one person's "threat" is another person's "risk" and yet another person's "vulnerability", then we have a big problem. How much credibility would physics have if physicists were inconsistent in their use of fundamental terms like mass, weight and velocity?

* The Common Vulnerability Scoring System (CVSS) is my favorite whipping post, but only because it's perhaps the most widely used model. There are others that are just as bad, if not worse. CVSS, for example, claims to evaluate the risk associated with its findings, but nowhere in its measurements or formulas does it consider the likelihood of an attack. Without that variable, it misses the mark entirely. It has other problems, too -- performing complex math on ordinal values, accounting for variables in the wrong part of its equations, and so on. At least the folks who oversee CVSS recognize some of its problems and are trying to evolve it over time.
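To make the gap concrete, here is a toy sketch in Python. The scoring formula is invented for illustration (it is not the real CVSS math): a severity-only score stays identical whether a system is attacked daily or once a decade, while a frequency-times-magnitude estimate does not.

```python
# Toy illustration -- NOT the actual CVSS formula. A severity-only score
# ignores how often an attack actually occurs, so two very different
# exposures can receive identical ratings.

def severity_only_score(impact, exploitability):
    """Hypothetical severity score on a 0-10 scale (invented for illustration)."""
    return min(10.0, impact * 0.6 + exploitability * 0.4)

def annualized_risk(attacks_per_year, expected_loss_per_event):
    """Frequency-and-magnitude view: expected loss per year."""
    return attacks_per_year * expected_loss_per_event

# Same vulnerability characteristics on two systems...
print(severity_only_score(9.0, 8.0))   # identical score for both systems

# ...but wildly different attack likelihoods, hence different risk:
print(annualized_risk(50, 10_000))     # internet-facing: attacked often
print(annualized_risk(0.1, 10_000))    # isolated lab box: rarely attacked
```

The point is not the particular weights, which are arbitrary here, but that no severity-only formula can distinguish the last two cases.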

* Every time (and I do mean every time) I get to look at the entries in an organization's risk register, I see a fundamental problem. Most of the entries reflect control deficiencies like "failure to patch in a timely manner." The problem is these risk registers also require the user to provide a likelihood and impact rating for the issue and the users invariably rate the likelihood of that deficiency occurring, and the impact of some event that might occur as a result. That's like saying every time the batteries on a smoke detector fail the house will burn down. The result in most cases is grossly overstated risk ratings, which leads either to people ignoring the risk register because they intuitively sense it's inaccurate or, maybe worse, actually letting it guide their decisions. If you're going to use likelihood and impact ratings, it only makes sense to do so on scenarios that represent an actual loss event -- e.g., compromise of sensitive data via a malware attack.
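The smoke-detector fallacy above can be shown with a few lines of arithmetic. All the numbers below are invented for illustration; the contrast is between rating the control deficiency and rating the actual loss-event scenario.

```python
# Sketch with invented numbers: the risk-register mistake is pairing the
# likelihood of a CONTROL DEFICIENCY ("we miss patch windows") with the
# impact of a LOSS EVENT (an actual breach).

p_missed_patch_window = 0.8    # the deficiency occurs often
p_breach_given_missed = 0.02   # ...but rarely turns into a loss event
breach_impact = 500_000        # assumed loss if a breach does occur

# Smoke-detector fallacy: deficiency likelihood x loss-event impact
overstated = p_missed_patch_window * breach_impact

# Rating the actual loss-event scenario instead
loss_event_risk = p_missed_patch_window * p_breach_given_missed * breach_impact

print(overstated)       # grossly overstated (~400k)
print(loss_event_risk)  # ~8k -- fifty times smaller
```

With these (hypothetical) inputs, the register entry overstates the exposure fifty-fold, which is exactly why readers start ignoring it.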

Let me add one more thing that might help put this into perspective. To manage an organization cost-effectively, decision-makers have to make well-informed decisions. To make well-informed decisions, they have to be able to compare the issues on their plates, including opportunities, operational costs, and risk issues of different flavors. To make those comparisons effectively, they need meaningful measurements (apples to apples), and meaningful measurements require an accurate model of the problem being measured -- a model that tells you what to measure and how to use the measurements. Recognizing that no model is perfect, our industry has operated from models that are so badly broken that the ability to manage risk cost-effectively is a complete crapshoot.

Hutton: Why do most risk management programs fail? My take:

1.) We think we understand risk. But, similar to Jack's thoughts, the reality is: what is risk? What creates it, and how is it measured? These things are, in and of themselves, evolving hypotheses. Our practitioners, industry groups like (ISC)2 and ISACA, standards bodies like NIST and ISO -- all their efforts are focused on telling you what to do, when the fact is, they shouldn't be. Formalizing risk standards and models is counter-productive to innovation.

An analogy: What if, 100 years ago, the International Standards Organization for Physics (ISOP) had settled on J.J. Thomson's plum pudding model of atomic theory (in which atoms were thought to contain electrons embedded in a positively charged mass), and then decided not to apply the scientific method to disprove that model? Now, what if ISOP created a document that formalized the plum pudding model, and industry and science simply had to take it as "the way to do things"? And what if practitioners suffered negative incentives should they think of innovating beyond that model? That's exactly what our industry is doing to us. And the current geopolitical marketing around "cyber" isn't helping.

2.) We don't know how to value a risk and metrics program. There is a Catch-22 around ROI: most people won't invest in risk and metrics until they understand the value (the business case). But getting the value statements needed to make that business case requires a strong investment in a risk and metrics program.

3.) Bias. Without strong data and formal methods that are widely recognized as useful and successful, the Overconfidence Effect (a serious cognitive bias) runs deep and strong. Combined with the stress of our shrinking money and time resources, this Overconfidence Effect leads to a generally dismissive attitude toward formalism. In fact, I've seen it take hold even when the practitioner has some of the best data in the world at their fingertips!

Thus we find ourselves (as an industry) in a similar Catch-22 to the above: we don't get the strong formal methods we may all agree we want in order to be data-driven, because we don't believe that we personally need them. But until we recognize that we need them, we won't contribute to, and thus receive, their development.

4.) Laziness. Most people want this all handed to them on a plate. If we're realistic with ourselves, we're all waiting for some 1U box to come along and deliver our risk and metrics for us. We don't want to actually work for a rational approach to security. In the meantime, it's much easier to buy a bunch of managed services and 1U appliances and roll the dice, hoping that tomorrow isn't the day we get owned.

Jones: As usual, Alex nails some critical and, in some ways subtle, points. I particularly like his observation that our industry thinks it understands risk. This creates numerous challenges, not the least of which is that I suspect it's much more difficult getting people to shift paradigms than to adopt a net-new paradigm. 

So, it seems that "all" we have to do to make infosec risk programs successful is:

  • Fix a flawed belief system (or systems)
  • Resolve a chicken-vs-egg problem related to metrics
  • Compensate for human bias
  • Make it simple enough for people who want it handed to them on a platter

No problem. 

Actually, the good news is we're beginning to see more mature approaches to risk, although it feels like painfully slow progress sometimes. There are also methods for dealing with human bias, if people are willing to learn and apply those methods. As for simplicity, it's not as hard as it seems. Some of the difficulty is perception only, and some of the rest can be resolved with time. Of course, I'm skeptical that a 1U box for risk will ever be the end game. 

The bad news is there is tremendous inertia to overcome, especially since the infosec profession is not the only risk discipline that doesn't fundamentally "get" risk. This presents a challenge because I commonly hear people say that, "Risk has been dealt with for a long time, so we should just do what other disciplines have done." 

Great idea, in theory. But we have to be very careful about how much faith we put into existing risk models, particularly operational risk models. Some of the widely used stuff out there is laughable when it's put under a magnifying glass. I'd be curious about whether Alex has had the same observations.

A final point I'll make is that every infosec program is a risk program, whether we choose to recognize and treat it that way or not, because at the end of the day the only value proposition infosec policies, processes, and technologies have is their effect on an organization's loss exposure -- the frequency and magnitude of loss.

The problem is, as an industry we don't commonly put it in those terms and we haven't been measuring, managing, and expressing it in those terms. As a result, the policies, processes, and technologies that we use are not well understood in terms of their effect on that value proposition, which means that the cost-effectiveness of most infosec/risk programs is a crapshoot. Do you agree, Alex?
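One way to start measuring in those frequency-and-magnitude terms is a simple Monte Carlo estimate of annual loss exposure. This is a hedged sketch only: the distributions and every parameter below are invented for illustration, not drawn from any real program or from the methods discussed here.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_annual_loss(mean_events_per_year, loss_low, loss_high, trials=10_000):
    """Average yearly loss from an assumed event frequency and a uniform
    per-event loss range (all parameters are assumptions, for illustration)."""
    total = 0.0
    for _ in range(trials):
        # Crude binomial stand-in for an event-count distribution; a real
        # model would fit frequency and magnitude distributions to data.
        events = sum(1 for _ in range(100)
                     if random.random() < mean_events_per_year / 100)
        total += sum(random.uniform(loss_low, loss_high) for _ in range(events))
    return total / trials

# ~2 events/year at $5k-$50k each => loss exposure of roughly $55k/year
print(round(simulate_annual_loss(2.0, 5_000, 50_000)))
```

The number itself matters less than the habit: once exposure is expressed in dollars per year, a control's value can be stated as the change in that number, which is exactly the apples-to-apples comparison decision-makers need.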

Hutton: Regarding Jack's question if I agree that we have to be careful about how much faith we put into existing risk models, I would say it depends <grin>. Uncertainty is hard regardless of discipline. What I have found is that some disciplines, in theory at least, have a more rational approach to how they try to understand that uncertainty than others. Some are very scientific, others not so much. The message I've been stumping for the past few years is that our industry should be very pro-science. 

Now, "How to be pro-science?" "What does it mean to be pro-science as an industry?" There aren't easy answers to these questions. And we shouldn't expect "easy." The search for truth, the search for knowledge and meaning... these quests are rarely simple or easy.

But yes, I look at much of what is called "risk management" and laugh, because the only other alternative is to weep. As to Jack's other question -- whether I agree that the cost-effectiveness of most infosec/risk programs is a crapshoot -- yes, absolutely. But more than that, I think a question has been forming for some time about what the role of a risk management program actually is. Its formalization has been very control-focused, thanks in no small part to the "GRC" meme. But if you take the mindset that governance should be driven by metrics, and that *all* metrics (governance, performance, etc.) have some risk meaning (even if we don't have a model that directly accounts for it yet), then it may be time to remove the control focus and switch to a data-science focus.

What that means is a great question. And an exciting one.

Jones: I couldn't agree more with Alex's statement about this being an exciting time for those in our industry who are focused on the risk perspective. We have the opportunity to break new ground -- establish a new science, if you will. What could be more fun than that? There's still so much to figure out! 

Of course, there are significant challenges too, some of which we've talked about or alluded to here already. For example, you'd better come to the table with thick skin because people are going to be sniping at you constantly. You'll be challenging conventional "wisdom" and the status quo, and that makes you a target. You'd also better be comfortable with being proven wrong because, well, sometimes you will be. 

The upside is significant, though. The industry seems to be firmly headed toward an adoption of risk, particularly quantitative statements of risk. So if someone wants to be well-positioned for jobs and promotions in the future, and/or if you want to put your stamp on the next generation of information risk management, then this is a great time. 

And for those who are concerned that maybe they don't have a strong enough math background for this stuff: rest easy. Math isn't the challenge. What you do need are critical thinking skills -- the ability to think beyond the superficial veneer of current practices. This requires a willingness to look at what the industry (and sometimes you, yourself) have been doing for years and realize it doesn't make any sense. Sometimes it's been embarrassingly wrong. Challenge, continually challenge, "best practices."

Hutton: Let me end with this: the key to success in security and risk for the foreseeable future is going to be data science. In fact, in my opinion, all the hype around "Big Data" is sorely misplaced. Let me explain. For the past 20 years we've prioritized the mere existence of controls over the skillful operation of those controls. We've become a culture of "installers" and, to wit, we've built a false religion around how our controls "protect" us at the expense of really understanding how they "inform" us.

It's worth noting that our approach to the concept of compliance feeds this culture, our approach to creating standards feeds this culture, our approach to audit feeds this culture... we have multiple perverse incentives that cause us not to focus on what has demonstrably been shown to secure: skillful operations. The good news is that one thing that can change this culture is a move toward a data-centric, evidence-based risk management approach.

The bad news is that this myopic installer/protection focus problem we have is going to be accentuated as CISOs go out and invest in the technology of big data without understanding the people and process needs of risk and security data science.
