
Impact driven risk management and business continuity


InfoSec management has hit the perfect storm. Just as IT has exploded, budgets have imploded. Our IT environments are growing due to consumerization, cloud computing, and sprawl. Our teams and budgets are shrinking due to the economic downturn. We are in an even tougher spot today than we were in 2008, when I began talking about the importance of driving information security from risk management.

We still have the same fundamental problems. How can we pitch the idea to our executive team? How can we secure this growing IT environment, when it is next to impossible to know what any one piece of it means to the business?

I have been driving my efforts from impact. We cannot defend what we do not understand. By mapping IT to the business activities it enables, and by performing an impact analysis, we can understand what all the stuff we are responsible for actually does. We can then tune the cost of security controls around the value of the IT assets.

Business continuity and risk management then flow naturally from this deeper understanding of our organization and how IT enables our organization to fulfill its mission.

These are #secbiz — or security in the business — concerns. I’d argue that impact driven risk management is a #secbiz approach. I made that argument today at a conference in Grand Rapids. I posted a copy of my talk online.

How asteroids falling from the sky improve security (GrrCon 2011)

The next steps that I gave in my talk are continuing the conversation on Twitter under #sira and #secbiz, as well as joining the Society of Information Risk Analysts (SIRA) and the SecBiz group on LinkedIn. Please send some feedback my way. Is this a good approach? What would make it better?


Note: This is the same talk. However, due to technical difficulties, the recording online is not the one I gave in person. Same message, different outtake.

Out and About: GrrCon


I will be out at the GrrCON conference in Grand Rapids on Friday, September 16. I am giving a session in the InfoSec Management track on business continuity planning and risk management. Sound boring? No worries. There is an amazing lineup of speakers covering a wide range of topics. Hope to see you there.

How asteroids falling from the sky improve security
An asteroid fell from the sky and the data center is now a smoking crater. At least, that’s the scenario that launches your business continuity planning. BCP asks the questions: what do we have, what does it do, what is the risk, and what is the value? The answers to these questions are also essential building blocks of a risk management program. This presents an opportunity for the savvy information security professional. In this session, we will look at ways to co-opt business continuity to advance an organization’s information security.

Disaster recovery metrics – beyond RTO and RPO


A recording of my talk at SNW 2011 today is online:

Disaster recovery metrics – beyond RTO and RPO

Many people consider only the recovery time and recovery point, RTO and RPO, when developing their strategies. This is a problem. Left unattended, certain characteristics of a recovery strategy will cause us to miss our recovery time. So it is important to look beyond the surface.

To meet RTO, we must have sufficient time metrics. To meet RPO, we must have sufficient data metrics. And to balance the ongoing operational costs with the per-incident costs, we must have supporting scalability metrics. My talk reviewed these necessary metrics and considerations.
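As a back-of-the-envelope sketch of how these metrics fit together, a data metric and a measured time metric combine to test whether a strategy can meet its RTO. All figures below are hypothetical placeholders, not numbers from the talk:

```shell
#!/bin/sh
# Sketch: checking a recovery strategy against its RTO.
# All figures are hypothetical -- substitute your own measured values.
DATA_GB=2000          # data metric: volume that must be restored
RESTORE_GBPH=250      # time metric: measured restore throughput, GB/hour
RTO_HOURS=12          # recovery time objective

# Ceiling division: whole hours needed to restore at the measured rate.
EST_HOURS=$(( (DATA_GB + RESTORE_GBPH - 1) / RESTORE_GBPH ))

echo "Estimated restore time: ${EST_HOURS}h (RTO: ${RTO_HOURS}h)"
if [ "$EST_HOURS" -le "$RTO_HOURS" ]; then
    echo "Strategy meets RTO"
else
    echo "RTO at risk: add restore capacity (scalability) or reduce scope"
fi
```

Scaling DATA_GB up as the environment grows is where the scalability metric bites: doubling the data without doubling the throughput doubles the estimated restore time.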

Today’s talk was on the high-level management of a disaster recovery program. Back in 2008, I did a nuts-and-bolts talk about our recovery strategies. I also put my talk from SNW 2008 up, for those interested.

Evolution of disaster recovery (Video no longer available)

Making and mounting VSS snapshots in Windows Server 2008


Tech tip: The Volume Shadow Copy Service (VSS) on Windows Server 2008 can make a copy of active, open files on the fly. It works at the block level, similar to an open file agent. This works a treat if you need a quick-and-dirty command-line backup.


To make a copy of the (C:) volume:

C:\> vssadmin create shadow /for=c:


To view copies of the (C:) volume:

C:\> vssadmin list shadows /for=c:


To mount a shadow copy as a browseable folder:

C:\> mklink /d <folder name> <shadow copy volume from list>

C:\> mklink /d C:\mycopy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy6\

Note the trailing backslash on the device path; mklink will create the link without it, but the mounted folder will not open.
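
To clean up afterward (rmdir removes only the directory link, not the snapshot contents; the /oldest switch assumes the snapshot you just mounted is the oldest one on the volume):

C:\> rmdir C:\mycopy

C:\> vssadmin delete shadows /for=c: /oldest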


For more information, see:

Virtualization and BCP Webinar Today


How My Firm Reduced Costs and Delivers Agile IT Infrastructure through Virtualization

Virtualization has become an important part of many organizations’ IT strategy for 2009 and beyond.

The availability of both data and IT systems is at risk not only from natural disasters but also from power outages, human error, and hardware failures. To ensure that your company can quickly recover systems and data in the event of such an incident, it needs a reliable and cost-effective disaster recovery plan. Combining virtualization and data protection technology helps you control costs as your company grows, which is essential in any economic climate.

My firm has deployed CA ARCserve Backup and Microsoft Hyper-V Server 2008 to create a simple and scalable disaster recovery environment. The combined solution backs up around 36 terabytes of data every week.

Join this webcast to find out how CA ARCserve Backup and Microsoft Hyper-V 2008 work in tandem to protect many terabytes of data and deliver an agile, cost-effective IT infrastructure.

In this webcast, you’ll also hear:

  • How to utilize CA ARCserve Backup to restore single files
  • How CA ARCserve can back up your physical environment and restore virtual instances
  • Microsoft’s Virtualization Strategy
  • The role Microsoft’s Hyper-V plays today and what you can expect in the upcoming release of Windows Server 2008 R2
  • How in the event of a disaster, my firm is able to recover 86 physical servers to 12 standby servers in just two hours
  • How my firm has been able to minimize not only downtime but also its disaster recovery spend using this combined solution

Eric G. Pitcher
Eric Pitcher is vice president of technology strategy at CA, responsible for setting and communicating CA’s Recovery Management plans across the business unit, throughout CA and to partners and customers. Previously, Eric served as vice president of product management at CA, responsible for defining the process, requirements and product specifications for CA’s Recovery Management product lines. Prior to that, Eric worked as assistant vice president of CA’s research and development global SWAT team—a specialized task force designed to maximize the quality, customer satisfaction, and market competitiveness of CA’s storage management solutions.

Before joining CA, Eric was network and systems administrator at Universal Studios Florida and was responsible for server and network design, installation, administration and support on a network of more than 1,000 users. Eric earned a bachelor of science degree in business administration from the University of Central Florida.

Wolfgang Goerlich
J Wolfgang Goerlich, CISSP, CISA, is an information security professional who focuses on performance and availability. Mr. Goerlich is currently … the network operations and security manager. With ten years of experience, Mr. Goerlich has a solid understanding of both the IT infrastructure and the business it enables.

Isaac Roybal
Isaac Roybal is a Product Manager in Windows Server managing Server Virtualization, including Microsoft’s Hyper-V, and has been involved with IT for over twelve years. Seven of those years have been with Microsoft. Isaac’s career started in systems and network engineering, working in various capacities with VMS, Windows Server since NT 3.51, and IIS 4.

Virtualization Webinar next Monday


As I mentioned before, I have played around a bit with Hyper-V and virtualized my production and recovery systems. CA did a case study on the project. This coming Monday, April 20 at 12:00 pm Eastern, I am doing a joint webcast with CA and Microsoft. The topic is still virtualization, with the focus on disaster recovery. I doubt I will say anything new during the talk, except that it will be much briefer than some others I have given on DR. CA’s going to talk a bit about their CDP, however, which is pretty cool stuff.

DRP Training, Testing and Auditing


What role does Disaster Recovery Plan training, testing, and auditing play in a successful Business Continuity program?

Testing. Things are only known to be good at the time you check them. The time to find out that components of the DR plan are not good is not during an actual disaster. That time comes at a premium cost. No, the time to identify and correct weaknesses is during test runs, where the only cost is the testers’ time.

Training. Those testing the plan have to know what to do. Furthermore, they have to know it to an extent that executing the plan becomes second nature. This is because actual disasters are stressful affairs. It is easy to make mistakes, omit steps, or forget details when under stress. The role of planning and training is to ingrain the steps and make the plan easier to perform if needed.

Auditing. A second set of eyes is always needed, particularly when that pair of eyes belongs to an auditor. No good author would publish a book without an editor. Likewise, no good InfoSec professional should publish a plan without an auditor. A trusted third party will always find ways to improve upon your plan.

Training, testing, and auditing are fundamental in achieving the BCP/DRP objectives.

CA Case Study on our use of ARCserve and Hyper-V


After looking at several P2V-V2P solutions, we chose CA ARCserve. The choice has several benefits. The primary one is that it allows us to use a single tool for both data protection and physical/virtual conversions. Essentially, this means a flat learning curve for my team. The other benefit is that CA ARCserve is significantly less expensive than dedicated P2V tools. CA did a case study on how we use their product, and it is now online.

Relying on Third Parties for DR


Many of us rely upon vendors and third parties for our disaster recovery efforts. For example, I personally rely upon a refueling company to keep my generator topped off and a maintenance company to keep it running. Other companies rely upon shared data centers, data backup/recovery companies, and DR planners like Agility Recovery.

A weakness in these plans occurs when a regional disaster impacts multiple companies. In these scenarios, the third party may lack the capacity to handle all the requests and be overwhelmed. During the winter storm that knocked power out for five days, my data center’s fuel trucks were delayed for 48 hours and the maintenance crew for 24. Those times are well within tolerances, but well beyond normal service levels.