Archive for April, 2008

Perimeter-less Security and Clouds on the Horizon

Cloud computing is similar to what the tech industry has been calling “on-demand” or “utility” computing, terms used to describe the ability to tap into computing power on the Web with the same ease as plugging into an electric outlet in your home. But cloud computing is also different from the older concepts in a number of ways. One is scale. Google, Yahoo!, Microsoft, and Amazon.com have vast data centers full of tens of thousands of server computers, offering computing power of a magnitude never before available. Cloud computing is also more flexible. Clouds can be used not only to perform specific computing tasks, but also to handle wide swaths of the technologies companies need to run their operations. Then there’s efficiency: the servers are hooked to each other so they operate like a single large machine, and computing tasks large and small can be performed more quickly and cheaply than ever before. A key aspect of the new cloud data centers is the concept of “multitenancy”: computing tasks being done for different individuals or companies are all handled on the same set of computers. As a result, more of the available computing power is in use at any given time.

Clouds are on the horizon. I know very few data centers that host everything internally. Most, including my own, deliver a mixture of desktop applications, client-server applications, and hosted (i.e., cloud) web apps. The shift has an immediate impact on security planning. Information security architectures began with terminal-server applications and focused on strong perimeters. As apps moved to the desktops, the perimeter became a little wider and a little more porous. But we could still control the information by restricting what data lived on the desktops and using technologies like endpoint security. In fact, one might argue that many of our controls today are built around keeping information in the data center and off the desktops. The next major shift, which we are already starting to see, is moving the information from data centers to third-party hosting providers. This will only accelerate as young people weaned on MySpace and Gmail join the workforce. Another accelerant we may see in the next few years is an economic downturn. Both sociological and economic forces are moving data from controlled perimeters to uncontrolled open spaces. The clouds on the horizon are coming nearer.

The open question is this: how do we build controls in an age of perimeter-less security?

Virtualization for Disaster Recovery: Strategies

Using virtualization as a disaster recovery strategy can follow one of two scenarios:

The first scenario is VM to VM. Put a hypervisor at the production site and another at the recovery site. Run the production server in a VM. Replicate the VM's disks to the recovery site. During a disaster, boot the VM on the recovery hypervisor.
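
Here is a minimal sketch of that runbook, in Python just to make the steps concrete. The paths are hypothetical, the file copy stands in for whatever replication mechanism you actually use, and define_vm/start_vm are placeholders for your hypervisor's management interface, not a real API.

```python
import shutil

# Hypothetical locations of the production VM's virtual disk and its
# replica at the recovery site. In practice a SAN or replication tool
# would do this copy, not a script.
PRODUCTION_DISK = r"\\prod-hv01\vmstore\appserver.vhd"
RECOVERY_DISK = r"\\dr-hv01\vmstore\appserver.vhd"

def replicate_vm_disk():
    """Scheduled job: push the production VM's disk to the recovery site."""
    shutil.copy2(PRODUCTION_DISK, RECOVERY_DISK)

def fail_over(define_vm, start_vm):
    """Disaster runbook: register the replicated disk as a VM on the
    recovery hypervisor and boot it. define_vm and start_vm stand in
    for whatever management interface your hypervisor provides."""
    vm = define_vm(name="appserver-dr", disk=RECOVERY_DISK, memory_mb=4096)
    start_vm(vm)
```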

The second scenario is bare metal to VM. Put a physical server running on bare metal at the production site and stage it with the necessary VM drivers (in Hyper-V, these are called the Integration Components). Put a hypervisor at the recovery site. Replicate the disks. During a disaster, boot the server as a VM on the recovery hypervisor. This scenario requires block-level replication and a hypervisor that can read native disks. If either requirement cannot be met, there is an alternative: restore the production server into a VM using backup software that supports P2V disaster recovery, such as Acronis, ARCserve, or Backup Exec. The downside is that this option takes significantly longer.
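
The decision in the second scenario reduces to a simple check, sketched below with hypothetical callables standing in for the fast path (boot the replicated disk) and the slow path (P2V restore from backup):

```python
def recover_physical_server(has_block_replication, hypervisor_reads_native_disk,
                            boot_replicated_disk, restore_vm_from_backup):
    """Scenario-two runbook. The boolean flags are the two prerequisites
    named above; the callables are placeholders for the real tooling."""
    if has_block_replication and hypervisor_reads_native_disk:
        return boot_replicated_disk()    # near-immediate: the disks are already current
    return restore_vm_from_backup()      # fallback via P2V-capable backup software; much slower
```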

Virtualization for Disaster Recovery: Metrics

Some quick thoughts on using server virtualization for disaster recovery. The key metrics in using VMs for DR are RPO and RTO, both defined during the BIA process. One question I wrestled with was how to get a near-time RPO (a recovery point within minutes before the disaster) and a rapid RTO (recovery within an hour after the disaster).

Traditional P2V techniques rely on a live system or a nightly backup, so the RPO can be up to 24 hours. Traditional P2V also relies on writing the data back out into virtual disks, so the RTO for our average server was up to 7 hours. We addressed both challenges by keeping the storage on a back-end SAN and pointing the disks into the VM in the event of a disaster. The RPO is then near time and the RTO is an hour or less.
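
Putting rough numbers on the comparison (the traditional figures are the ones above; the SAN-backed figures are the targets, not measurements):

```python
# Traditional P2V: data is only as fresh as the last nightly backup, and
# recovery waits on writing that data back out into virtual disks.
traditional = {"rpo_hours": 24, "rto_hours": 7}

# SAN-backed VM: the disks are already current on the back-end SAN, so
# recovery is just pointing them at a VM and booting it.
san_backed = {"rpo_hours": 0, "rto_hours": 1}  # near-zero data loss, about an hour to recover

for name, m in (("Traditional P2V", traditional), ("SAN-backed VM", san_backed)):
    print(f"{name}: up to {m['rpo_hours']}h of data loss (RPO), "
          f"about {m['rto_hours']}h to recover (RTO)")
```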

The DR strategy requires native NTFS disk access and SAN support. Both VMware ESX and Hyper-V support this type of DR; Linux-based hypervisors such as Xen do not.

Effective Presentation Techniques

As a company that trades on the open market, we have many rules and regulations to follow regarding employee trading. Part of this is a yearly training session on trading compliance and the systems we provide. I sat through this a few weeks back. It was the usual session about not accepting cash or large gifts, not trading stocks that the company is currently researching or holding, et cetera.

At the end of the presentation, the head of the compliance team stood up. She thanked her staff for the presentation and thanked all of us for coming. Then she pulled out a newspaper. She described the company in the news and how similar it was to ours in size, focus, and culture. She then read from the paper. This company had a compliance lapse. The SEC fined the company millions of dollars, and fined the person who executed the trade over a hundred thousand. The room fell dead silent.

I thought this was a particularly effective technique to reinforce the idea that the rules we follow really do matter.