Contemplating My Goals for Risk Management Education

September 26th, 2010

As I am getting ready to teach the debut session of my new SANS MGT442 course this week, I have been thinking a lot about my goals for the course. It has been a long road to develop it; I think I first pitched the idea to SANS in June of 2009. Around that time I felt that risk management wasn’t getting nearly enough attention in the information security field, and that too many professionals didn’t know the first thing about assessing risk. We had organizations running a Qualys scan and thinking that was a risk assessment, and security managers escalating every vulnerability to executive management like it was the end of the world. Now, over a year later, every security conference has plenty of presentations with risk in the title, and risk has become almost as big a buzzword as virtualization or cloud. I have even seen some great strides forward in the research and implementation of some really robust and advanced risk analysis models, but are we any better off than a year ago?

Every time I have to interview candidates for an open position, I am amazed how many have nothing more than the equivalent of a DeVry training in information security. They have a bunch of tools in their toolbox, and they know the so-called “best practices” for applying them. The problem is that they have no idea how to really analyze a situation and consider solutions outside the normal model. It’s like taking your car to the dealership these days: if the computer doesn’t say anything is wrong with the car, the mechanics have no idea how to troubleshoot the strange noise coming from under the hood. Security practitioners still follow a simple methodology: find vulnerability, patch vulnerability. They also have a long list of things that aren’t ever allowed, but ask them for a creative way to mitigate a risk without just saying no, and they are lost.

As I really think about my goals for this new two-day course on information security risk management, my number one goal has always been to educate the field about how to look at a problem, understand the real risks, and find a solution that meets the business needs while keeping the risk level in an acceptable range.

Since I recently joined the Society of Information Risk Analysts, I have been exposed to some really fantastic work that will surely move our field toward the level of maturity and precision that we desperately need. But I look at the gaps in knowledge and skills in the field, and I know that the audience for my class just isn’t going to be ready to digest that depth on the first pass. First we need to help the profession understand and develop basic risk models that can move their security programs beyond the “village elder” approach to risk predictions. If we can provide a strong foundation for dissecting a risk and building a security program around risk management, then it should be trivial to substitute more precise analysis models later, when the organization is ready. In my experience, the organization has a hard enough time absorbing the basic concepts of residual risk and compensating controls; if you also throw in advanced concepts like the difference between likelihood and frequency, you will lose them completely. I have seen so many security programs try to take on too much too fast, only to see it rejected by the corporate culture. I have found more success setting out a long-term goal, which may include a sophisticated quantitative risk model, but keeping that vision to yourself. You need to slowly lead the organization toward that end, in small, bite-sized chunks that they can digest. If you structure your risk program right, you will have all the foundational steps in place to keep raising the level of precision as the business finds the limitations of the simple models for itself.
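To make that concrete, here is a minimal sketch of the kind of basic model I have in mind, written in Python with made-up rating scales, control credits, and an invented example scenario purely for illustration (it is not the model from the course): rate likelihood and impact on a simple 1-to-5 scale, give credit for compensating controls, and translate the residual score into a rating the business can act on.

    # A deliberately simple qualitative risk model: likelihood x impact on a
    # 1-5 scale, reduced by credit for compensating controls, then mapped to
    # a rating. The scales, credit values, and example are illustrative only.

    def residual_risk(likelihood, impact, control_credit=0):
        """Inherent risk is likelihood * impact; compensating controls reduce it."""
        inherent = likelihood * impact              # 1..25
        return max(inherent - control_credit, 1)

    def rating(score):
        if score >= 15:
            return "High"
        if score >= 8:
            return "Medium"
        return "Low"

    # Hypothetical scenario: sensitive data on traveling laptops, with
    # full-disk encryption counted as a compensating control.
    score = residual_risk(likelihood=4, impact=4, control_credit=8)
    print(score, rating(score))    # -> 8 Medium

Crude as it is, a model like this already forces the conversation about likelihood, impact, and residual risk; the precision can be raised later without tearing up the foundation.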

If by the end of this course students come out understanding how to really break down a risk and how to recommend solutions that address the real exposure and not just the symptoms, I will consider the class a success. If they also understand how to implement an information security program based on these principles, then I know that our profession will be better for it. If done right, the security risk management program will be so integrated into the core business processes that the lines will start to blur between functions like security, business continuity, vendor management, and operations, to the point that security won’t feel like an island in the organization; it will just be embedded in every business decision.

Avoiding Formulaic Security

February 21st, 2010

The problem with information security these days is the emphasis on checklists and so-called “best practices” that may not be appropriate for all situations. For the sake of simplicity and consistency, the security field has evolved into a cookbook-type approach. Everyone gets the same recipe and is expected to execute it the same way, but we don’t live in a one-size-fits-all world. Instead of blindly applying so-called “best practices” across the board, we should be using risk analysis techniques to determine the best controls for our organization. The current training opportunities turn out security professionals who know which activities to perform and which patterns to follow, but can’t tell you why. The problem is that they have no idea what to do when a situation doesn’t fit their patterns, or, even worse, they apply the same checklists even when they don’t address the actual risks. Have you ever interviewed someone who is very technically savvy and asked them why they do something a certain way? The scary thing is that most people can’t explain why. They have just always done it that way, or been told to do it that way, and never questioned it.

Take a common rule: if you are transferring sensitive data over the network, you need to encrypt it every time. But why are you encrypting it? What problem are you trying to solve? What risk are you trying to mitigate? Having checklists and baselines makes it easy for security novices to apply a minimal level of protection without having to understand the intricacies of information security, and it also provides a basis for auditing. But just think about how many times you have gotten a recommendation from an auditor or third-party consultant, and it is clear that they don’t understand the real risks for your organization. I can’t tell you how many times I have seen recommendations that identify a “high” risk that should really be listed as low if you understand the business model of the organization.

We need to train our senior security professionals to perform a real risk analysis and not just accept the established cookbooks for security. Even NIST seems to be moving in this direction with the latest draft of their SP800-37 guide for Certification and Accreditation, which is now entirely based on a risk management approach. This is clearly the future of the field: more dynamic and flexible approaches to security that base recommendations on the particular risks of each scenario, not a single pattern for the entire field. Just look at the Payment Card Industry; I don’t think anyone would say that the PCI requirements have made retail companies more secure, just compliant.

As the threat landscape continues to shift, your old checklists and formulas for information security just aren’t going to cut it any more. If you want to stay ahead or just keep up, you need to understand the fundamental components of a solid information security program, and decide how to apply them given the particular risks to your organization. I think that “best practices” should be considered dirty words in the field. How can a single list of “best practices” possibly apply to my organization and yours in the same way?

Modern Information Security Challenges

December 6th, 2009

There are several challenges in our evolving environments that make it difficult to adequately protect our resources. Among these many challenges, I think the following are worth mentioning:

  1. Blending of corporate and personal lives — It is harder to differentiate between your work life and personal life as the work day has less of a distinct start and end. For example, employees use company email for some personal communications, and some employees may be issued a BlackBerry or cell phone that they also use for limited personal purposes. Many people may not even have a home computer and use their company-issued laptop for everything, including running personal software like their tax software. On the flip side, some employees may bring a personal laptop into the office and try to plug it in.
  2. Inconsistent enforcement of policies — Many organizations either haven’t enforced their policies in the past, or have done so inconsistently depending on the position of the employee. This causes many issues when a security function tries to crack down on violators. Hopefully you don’t work for one of those organizations that has buried its security policies on some internal website that no one ever reads.
  3. IT doesn’t own and control all devices — I alluded to this issue above with personal mobile devices, but what if the organization doesn’t provide a PDA to the sales team, so they buy their own, start storing client lists on it, and try to connect it to your wireless network in the office? What happens when you need to do an investigation on that device? Can you?
  4. Blurring of internal vs. external — The edge or perimeter of the network isn’t as clear anymore. In the past we established strong perimeter controls to regulate access into and out of the network, but now that perimeter has been pushed out to partners with extranets, to third parties with hosting services, and to employees’ homes with VPN solutions that can be used from a personal desktop. Where would you even draw the line now?
  5. Covert attacks, no longer obvious — It used to be typical for a virus infection to be big and messy, causing a lot of damage and making it immediately obvious that you were infected. Now, however, attackers are silent and stealthy. They don’t want to erase your data or take down your system; they want to slowly steal your data or use your computing power to attack other victims. They do their best to remain undetectable with rootkits and backdoor Trojans.
  6. Moving target — As we mature and get better at securing our systems, the attackers find new and creative ways to bypass our controls. As we close off the easy ways in, they develop more sophisticated attacks. It is a never-ending battle.

The threat landscape is constantly changing, and it can be easy to fall behind. Techniques and strategies that worked last year may not be enough this year. I’m not a proponent of spending every day analyzing the slightest change in threat intelligence, but your security program does need to be flexible. Take advantage of threat reports, study the major trends, and adjust your approach periodically.

Just remember that very few weaknesses or attacks are really new. Old attacks get repackaged and new buzzwords are coined. In my experience, it is usually the same fundamental attack strategies applied to new targets. We in the information security field have a habit of making the same design mistakes over and over.

What a Busy Month!

October 17th, 2009

October is turning out to be quite the busy month.  I just finished a presentation for SANS on Digital Forensics at the Holyoke Community College’s first Internet Security Awareness Conference yesterday.  We even got a few seconds of local news coverage on Channel 22 in Springfield.  If you squint really hard during the news clip, you can see me on the stage (in the dark) presenting a slide on Mobile Forensic Arsenals.

Next week I am doing a webcast with SANS on how to use risk management techniques to better manage vulnerability remediation efforts.  This is my first webcast with SANS, so I hope that it will be well attended.  It is free after all.  I wrote a brief blog article for Akibia earlier this week to introduce the topic and stir up some more interest:  Improving Vulnerability & Patch Management.

As if that wasn’t enough for one month, I will be presenting as part of the Risk Management Summit at the CSI Conference in Maryland on October 26th.  I will be participating in the panel discussion and also presenting a short discussion about How to Build a Risk Management Program from Scratch.  It is essential for any security professional to understand how a risk model can become the center of a mature information security program.  Attendees will learn how to build a Risk Management Program from scratch and the fundamental components that are required for a holistic approach.  This session will also demonstrate how to successfully approach integrating it into your environment with minimal resistance.

That same week, I am organizing an evening of free sales training focused on skills for security consultants, A Crash Course in Security Consulting.  As part of SANS’ commitment to sponsoring free educational sessions for the infosec community, they will be providing some great speakers and a hands-on excerpt from their SEC 560 Network Penetration Testing & Ethical Hacking course.  If that isn’t enough, we will have some sales training experts presenting, and beer and appetizers will be provided.  You can’t get much better than free food and training!  I think this will be a very valuable session for aspiring and current consultants in the field, especially those of you who are going out on your own.

I think that just about does it for this month.  Luckily I am finishing up my class at Northeastern next week, otherwise I don’t know when I would sleep …

New Risk Analysis Webcast with SANS

September 9th, 2009

Want to learn more about risk management?  Don’t know where to start developing your own risk model? On October 20th, I will be presenting a free webcast hosted by SANS and sponsored by Rapid7:

Changing the Way We Manage Vulnerabilities & Patching

If you are a resource administrator, then you probably spend too much time responding to new vulnerability reports and patching systems. If you are on the security side, you probably spend too much of your time tracking down the status of remediation and trying to qualify new vulnerability notifications. So how can we manage this better? This session will focus on how to take vendor and industry reports of new vulnerabilities in software and hardware, and analyze the risk to your own organization. With limited time and resources, you can’t patch everything on day 1, so how do you determine which alerts are actually critical for your environment?

The answer is to develop a risk model that takes into account the particulars of your environment. We will demonstrate how to develop your own risk criteria for severity and likelihood by analyzing some recent vulnerability notifications. By the end of this session, attendees will know how to analyze a new vulnerability report for the distinguishing characteristics that would make it a critical weakness for some, but a moderate concern for you. Armed with this knowledge, you can better focus your administrators’ efforts.
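To give a flavor of the kind of analysis the session will cover, here is a rough sketch in Python of re-rating a vendor advisory using environment-specific factors; the criteria, adjustments, and the example advisory below are all hypothetical, not the model from the webcast.

    # Rough sketch: re-rate a vendor vulnerability advisory using factors
    # specific to our environment. All criteria, adjustments, and the example
    # advisory below are hypothetical and only illustrate the idea.

    def local_priority(vendor_severity, internet_facing, exploit_public,
                       compensating_control):
        """Start from the vendor's 1-5 severity, then adjust for our environment."""
        likelihood = 1
        if internet_facing:
            likelihood += 2        # exposed to the world, not just insiders
        if exploit_public:
            likelihood += 2        # working exploit code is circulating
        if compensating_control:
            likelihood -= 2        # e.g. the vulnerable service is already blocked
        likelihood = max(1, min(likelihood, 5))

        score = vendor_severity * likelihood       # 1..25
        if score >= 15:
            return "Patch now"
        if score >= 8:
            return "Next patch cycle"
        return "Routine maintenance"

    # A vendor-rated "critical" (5) flaw in an internal-only service with no
    # public exploit and an existing compensating control in front of it.
    print(local_priority(5, internet_facing=False,
                         exploit_public=False, compensating_control=True))
    # -> Routine maintenance

The same advisory might be a drop-everything event for an organization whose vulnerable service faces the Internet, which is exactly the point: the criteria have to be yours.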

Register here:

https://www.sans.org/webcasts/changing-way-we-manage-vulnerabilities-and-patching-92834