Archive for the ‘Virtualization’ Category

Attacking hypervisors without exploits

The OpenSSL website was defaced this past Sunday. (Click here to see a screenshot from @DaveAtErrata on Twitter.) On Wednesday, OpenSSL released an announcement that read: “Initial investigations show that the attack was made via hypervisor through the hosting provider and not via any vulnerability in the OS configuration.” The announcement led to speculation that a hypervisor software exploit was being used in the wild.

Exploiting hypervisors, the foundation of infrastructure cloud computing, would be a big deal. To date, most attacks in the public cloud are pretty much the same as those in the traditional data center. People make the same sorts of mistakes and missteps, regardless of hosting environment. A good place to study this is the Alert Logic State of Cloud Security Report, which concludes: “It’s not that the cloud is inherently secure or insecure. It’s really about the quality of management applied to any IT environment.”

Some quick checking showed OpenSSL to be hosted by SpaceNet AG, which runs VMware vCloud off of HP Virtual Connect with NetApp and Hitachi storage. It was not long before VMware issued a clarification.

VMware: “We have no reason to believe that the OpenSSL website defacement is a result of a security vulnerability in any VMware products and that the defacement is a result of an operational security error.” OpenSSL then clarified: “Our investigation found that the attack was made through insecure passwords at the hosting provider, leading to control of the hypervisor management console, which then was used to manipulate our virtual server.”

No hypervisor exploit, no big deal. Right? Wrong.

Our security controls are built around owning the operating system and hardware. See, for example, the classic 10 Immutable Laws of Security. “Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore. Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore.” Hypervisor access lets the bad guy do both. It was just one wrong password choice. It was just one wrong networking choice for the management console. But it was game over for OpenSSL, and potentially any other customer hosted on that vCloud.

It does not take a software exploit to lead to a breach. Moreover, the absence of exploits is not the absence of lessons to be learned. Josh Little (@zombietango), a pentester I work with, has long said “exploits are for amateurs”. When Josh carried out an assignment on a VMware shop recently, he got in using a situation very much like the one at SpaceNet AG: he hopped onto the hypervisor management console. The point is to get in quickly, quietly, and easily. The technique is about finding the path of least resistance.

Leveraging architectural decisions and administrative sloppiness is a valid attack technique. Scale and automation are what change with cloud computing. It is this change that magnifies otherwise small mistakes by IT operations and makes compromises like OpenSSL’s possible. Low-quality IT management becomes that much worse.

And cloud computing’s magnification effect on security is a big deal.

Private Cloud ROI

When and how does private cloud computing pay for itself? What is the return? I recently spoke with Pam Baker (@bakercom1) about this topic. Check out Pam’s article in The IT Pro: Cloud ROI: How much and how soon?

Now mixing and matching appeals to me. A team should adopt a strategy and a toolset that enables managing compute resources on-premise and at utilities. The private or public option then comes down to economics, performance, and security. The security component can be a driving factor for economics, too.

Pam quotes a telling statistic from the Aberdeen Group: “companies using private clouds eliminate 38 percent of security and compliance costs as compared to public cloud users. Further, public cloud users experience 25 percent more problems with hacking, data loss, and audit deficiencies.” In other words, organizations are not going full public any time soon, and for good reason.

Read the full article: http://www.theitpro.com/author.asp?section_id=2006&doc_id=242050


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category.

Cloud adoption and use

I am tremendously in favor of virtualization, a staunch proponent for cloud computing, and I’d automate my own life if I could. After all, we dedicated most of last year to investigating and piloting various cloud backup solutions. But take a peek at my infrastructure and you might be surprised.

Why is my team still running physical servers? Why are we using so few public resources? And tape, really?

I am not the only one who is a bit behind on rolling out the new technology. Check out the study that Forbes covered this week. “The slower adoption of cloud … reflects a greater hesitancy … remain conservative about putting mission-critical and customer data on the cloud. Regulations … may explain much of this reluctance. The prevalence of long-established corporate data centers with legacy systems throughout the US and Europe … may be another factor. Accordingly, the study confirms that overcoming the fear of security risks remains the key to adopting and benefiting from cloud applications.”

My sense is that cloud computing, in the IaaS meaning of the term, is roughly where virtualization was circa 2004. It is good for point solutions. Some firms are looking at it for development regions. Now folks are beginning to investigate cloud for disaster recovery. (See, for example, Mark Stanislav’s Cloud Disaster Recovery presentation.) These low-risk areas let IT management build competencies in the team. A next step would be moving out tier 3 apps. A few years after that, the mission-critical tier 1 apps will start to move. This will happen over the next five to eight years.

This logical progression gives the impression that I see everything moving to the cloud. As Ray DePena said this week, “Resist the cloud if you must, but know that it is inevitable.” I can see that. However inevitable cloud computing is, like virtualization, it does not fit all use cases.

Why are some servers still physical? In large part, it is due to legacy support. Some things cannot be virtualized or unplugged without incurring significant costs. In some cases, the choice is driven by the software vendor: some support contracts still mandate that they cover only physical servers. Legacy and vendors aside, some servers went physical because the performance gains outweigh the drawbacks. Decisions, decisions.

The majority of my environment is virtualized and is managed as a private cloud. Even there, however, there are gaps. Some areas are not automated and fully managed due to project constraints. We simply have not gotten there yet. Other areas probably will never be automated. With how infrequently an event occurs, and with how little manual work is needed, it does not make sense at my scale to invest the time. This is a conscious decision about where it is appropriate to apply automation.

Why are we not using more public resources? Oh, I want to. Believe me. Now, I am not keen on spending several weeks educating auditors, at least not until cloud reaches critical mass and the audit bodies catch up. But the real killer is cost. For stable systems, the economics do not make sense. The Forbes article points out that the drivers of public cloud are “speed and agility — not cost-cutting.” My team spent ten months in 2011 trying to make the economics work for cloud backup. Fast forward half a year, and we are still on tape. It is an informed decision based on the current pricing models.

Is cloud inevitable? The progression of the technology most surely is, as is the adoption of the technology in areas where it makes sense. The adoption curve of virtualization gives us some insight into the future. Today, there are successful firms that still run solely on physical servers with direct attached storage. Come 2020, as inevitable as cloud computing is, it is equally inevitable that there will be successful firms still running on in-house IT.

Many firms, such as mine, will continue to use a variety of approaches to meet a variety of needs. Cloud computing is simply the latest tactic. The strategy is striking the right balance between usability, flexibility, security, and economics.

Wolfgang

Side note: If you do not already follow Ray DePena, you should. He is @RayDePena on Twitter and cloudbender.com on the Web.

Cost justifying 10 GbE networking for Hyper-V

SearchSMBStorage.com has an article on 10 GbE. My team gets a mention. The link is below and on my Press mentions page.

For J. Wolfgang Goerlich, an IT professional at a 200-employee financial services company, making the switch to 10 Gigabit Ethernet (10 GbE) was a straightforward process. “Like many firms, we have a three-year technology refresh cycle. And last year, with a big push for private cloud, we looked at many things and decided 10 GbE would be an important enabler for those increased bandwidth needs.”

10 Gigabit Ethernet technology: A viable option for SMBs?
http://searchsmbstorage.techtarget.com/news/2240079428/10-Gigabit-Ethernet-technology-A-viable-option-for-SMBs

My team built a Hyper-V grid in 2007-2008 that worked rather nicely at 1 Gbps speeds. We assumed 80% capacity on a network link, a density of 4:1, and an average of 20% (~200 Mbps) per vm. In operation, the spec was close. We had a “server as a Frisbee” model that meant non-redundant networking. This wasn’t a concern because if a Hyper-V host failed (3% per year), it only impacted up to four vms (2% of the environment) for about a minute.

When designing the new Hyper-V grid in 2010, we realized this bandwidth was no longer going to cut it. Our working density is 12:1, with a usable density of 40:1. That meant 2.4 Gbps to 8 Gbps per node. Our 2010 model is “fewer pieces, higher reliability,” and that translates into redundant network links. Redundancy mattered more now that a good portion of our servers (10-15%) would be impacted by a single link failure.
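For anyone re-running this sizing exercise, here is the napkin math as a quick shell sketch. It uses our own ~200 Mbps per-vm average; substitute your own measurement.

per_vm_mbps=200   # our measured average per vm (about 20% of a 1 Gbps link)
for density in 4 12 40; do
    echo "${density}:1 density needs roughly $(( density * per_vm_mbps )) Mbps per node"
done
# 4:1 lands at ~800 Mbps, which fits a 1 Gbps link at 80% utilization;
# 12:1 and 40:1 land at 2.4 Gbps and 8 Gbps, which is 10 GbE territory.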

Let’s do a quick back-of-the-napkin sketch. Traditional 1 Gbps Ethernet would require 10 primary and 10 secondary Ethernet connections. That’s ten dual-port 1 Gbps adapters: 10 x $250 = $2,500. That’s twenty 1 Gbps switch ports: 20 x $105 = $2,100. Then there’s the time and materials cost for cabling all that up. Let’s call that $500. By contrast, one dual-port 10 GbE adapter is $700. We need two 10 GbE switch ports: 2 x $930 = $1,860. We need two cables ($120 each) plus installation. Let’s call that $400.

The total cost per Hyper-V host for 10 GbE is $2,960. Compared to the cost of 1 Gbps ($5,100), we are looking at a savings of $2,140. For higher density Hyper-V grids, 10 GbE is easily cost justified.
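For the record, here is that comparison as a shell sketch, using the list prices quoted above; your vendor pricing will differ.

# 1 Gbps option: ten dual-port adapters, twenty switch ports, plus cabling labor
gige=$(( 10 * 250 + 20 * 105 + 500 ))    # $5,100
# 10 GbE option: one dual-port adapter, two switch ports, two cables plus install
tengig=$(( 700 + 2 * 930 + 400 ))        # $2,960
echo "Savings per Hyper-V host: \$$(( gige - tengig ))"    # $2,140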

It took some engineering and re-organizing, but we have been able to squeeze quite a bit of functionality and performance out of the new technology. Cost savings plus enhancements? Win.

Matriux – Upgrade to 2.6.32-7 and install the GPL Hyper-V integration

These steps will install Matriux into a Hyper-V vm (2008 or 2008 R2) and integrate the network and storage adapters.

Create a Hyper-V vm with the legacy network adapter and a 10 GB vhd.
Download Matriux and install onto the local vhd.

Configure apt-get to download the Lucid (2.6.32-7) kernel.

sudo bash
nano /etc/apt/sources.list

Add the repository to the end of the sources list:

# added by -JWG- for Hyper-V integration
# The Lucid repository contains the 2.6.32-7 kernel
deb http://archive.ubuntu.com/ubuntu/ lucid main

Save the file, then update the apt list:

apt-get update

Install the kernel, then comment out the repository.

apt-cache search linux-image-2.6.32
apt-get install linux-image-2.6.32-7-generic linux-headers-2.6.32-7-generic build-essential

nano /etc/apt/sources.list

Comment out the deb line added earlier.
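Once commented out, the entry should look like this:

#deb http://archive.ubuntu.com/ubuntu/ lucid main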


Validate the kernel after rebooting to ensure we are on 2.6.32-7.

uname -r

Enable the GPL integration components.

uname -r
sudo bash
cd /lib/modules/2.6.32-7-generic/kernel/drivers/staging/hv
insmod hv_vmbus.ko
insmod hv_blkvsc.ko
insmod hv_netvsc.ko
insmod hv_storvsc.ko

Add the modules to the startup file.

nano /etc/initramfs-tools/modules

# added by -JWG- for Hyper-V integration
hv_vmbus
hv_blkvsc
hv_netvsc
hv_storvsc

update-initramfs -u
reboot

Confirm that the modules are loaded. You will have full network and disk integration. The mouse integration (Inputvsc) is currently provided by Citrix Project Satori and has not yet been patched to 2.6.32-7.

lsmod | grep vsc
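If everything loaded, hv_blkvsc, hv_netvsc, and hv_storvsc will all show up in the output. Note that hv_vmbus does not match the vsc filter, so check for it separately with lsmod | grep hv_vmbus if in doubt.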


Matriux – Downgrade to 2.6.18 and install Hyper-V’s integration components

These steps will install Matriux into a Hyper-V vm (2008 or 2008 R2) and integrate the mouse, network adapter, and storage adapter.

Create a Hyper-V vm with the legacy network adapter and a 10 GB vhd.
Download Matriux and install onto the local vhd.
Download the Linux Integration components for Windows Server 2008 R2 (LinuxIC v2.iso).
Download the Citrix Project Satori mouse driver (Inputvsc.iso).

Configure apt-get to download the previous version of the kernel. This first requires flushing and rebuilding the apt encryption keyring.

sudo bash

apt-key list
apt-key del 437D05B5
apt-key del FBB75451

Running apt-key list again should now return an empty list.

Install the Debian archive keyring.

apt-get install debian-archive-keyring

Load the signing key for the ftp.us.debian.org and security.debian.org repositories.

cd /home/tiger/.gnupg/
mv gpg.conf gpg.conf~

gpg --keyserver wwwkeys.eu.pgp.net --recv 9AA38DCD55BE302B
gpg --list-keys 9AA38DCD55BE302B
gpg --export 9AA38DCD55BE302B > 9AA38DCD55BE302B.gpg
apt-key add ./9AA38DCD55BE302B.gpg
apt-key list

Add the repositories to the end of the sources list, and update the apt list.

nano /etc/apt/sources.list

# Repository for older kernel versions
# added by -JWG- for Hyper-V integration
deb http://ftp.us.debian.org/debian etch main
deb http://security.debian.org/debian-security etch/updates main

cd /usr/src/
apt-get update

Install the kernel, then comment out the repositories.

apt-cache search linux-image-2.6.18
apt-get install linux-image-2.6.18-6-amd64 linux-headers-2.6.18-6-amd64 build-essential

nano /etc/apt/sources.list

Comment out the two deb lines added earlier.
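Once commented out, the two entries should look like this:

#deb http://ftp.us.debian.org/debian etch main
#deb http://security.debian.org/debian-security etch/updates main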


Modify the menu.lst file so it defaults to the 2.6.18-6 kernel, then reboot. (GRUB counts menu entries from zero, so adjust the default number to match wherever the 2.6.18-6 entry lands in your menu.)

nano /boot/grub/menu.lst
default 2
reboot

Validate the kernel after rebooting to ensure we are on 2.6.18-6.

uname -r

Insert the LinuxIC v2.iso disk, copy it locally, and install the drivers.

sudo bash

mkdir /opt/linux_ic
cd /opt/linux_ic
cp -R /media/CDROM/* /opt/linux_ic/
./setup.pl drivers
cat drvinstall.err

The only errors should be “make: udevcontrol: command not found” and “make: *** [install] Error 127”. These simply indicate that we will need to manually add the drivers to the initramfs modules file (/etc/initramfs-tools/modules).

Insert the Inputvsc.iso disk.

mkdir /opt/inputvsc
cd /opt/inputvsc
cp -R /media/CDROM/* /opt/inputvsc/
./setup.pl drivers
cat drvinstall.err

Again, the only errors should be related to the modules. Edit that file now.

nano /etc/initramfs-tools/modules

# added by -JWG- for Hyper-V integration
netvsc
blkvsc
storvsc
inputvsc

update-initramfs -u
reboot

Confirm that the modules are loaded. Then it is play time.

lsmod | grep vsc
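All four modules added to the startup file (netvsc, blkvsc, storvsc, and inputvsc) should appear in the output. If any are missing, go back and re-check the drvinstall.err logs from the two setup.pl runs.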

Matriux – Penetration Testing from Hyper-V

Matriux is a vulnerability assessment / penetration testing Linux distribution. The team’s beta release came out at the beginning of this month, and I have been playing around with the distro for the past couple of weeks. What can I say? I am a sucker for Latin mottos (“Aut viam inveniam aut faciam”, or “I shall find a way or make one”) and for cleanly laid out VA/PT toolsets.

The bonus, for those running Hyper-V, is that Matriux is Kubuntu-based and comes with the Jaunty kernel (2.6.28-13-generic). Setting up a Hyper-V security appliance is as simple as creating a vm, using the legacy network adapter, skipping the hard drive, and booting off the downloadable ISO. Matriux works right out of the box within Hyper-V.

You can compare this to the Slax-based VA/PT distros, which do not support the network adapter. Oftentimes, these distros do not even support the mouse. Using the Matriux Live CD in Hyper-V is a breeze. For an environment to support a demo or an occasional vulnerability assessment, you cannot ask for more.

If you are doing regular assessments, there are a couple of limitations with Hyper-V. The legacy network adapter performs at 100 Mbps (significantly slower than the 10 Gbps speed of the standard network adapter). The Live ISO is read-only, too. The mouse integration is present, but it is not the seamless integration one is used to with Windows vms. Oh, and the mouse integration does not work when connected to Hyper-V over RDP. To get full functionality, you will need to install Matriux into a vhd and install the Hyper-V integration components.

The Jaunty kernel does not support integration. You have two options: (1) downgrade Matriux’s kernel to 2.6.18 and install Hyper-V’s integration components; or (2) upgrade Matriux to the Lucid kernel (2.6.32-7) and enable the Hyper-V GPL code. Option (2) provides faster performance and is in line with the planned Matriux Beta 2, but it does not support the full mouse integration.

For those who want to cut to the chase and simply try out Matriux under Hyper-V, I have done the steps for you. You can download the security appliance from SimWitty’s website. Enjoy!

Thank you to the Matriux team for a smooth, well done security distribution beta. Thanks go, too, to Tom Houghtby for providing the Linux knowledge and guidance that made the integration possible.

jwg

Building our own cloud

I have been thinking a lot about IT service architecture. After all, my theme this year is “Security is Design”. How can we maximize the benefits of new technologies while minimizing the security risks?

Take cloud computing. The buzz is that cloud computing reduces costs and increases scalability. Cloud computing, specifically cloud hosting, does this by putting our servers in a multi-tenant environment and then charging based on utilization. Organizations get pay-as-you-go pricing, with the underlying costs shared across scores of customers (tenants). Add self-service and rapid provisioning, and you get a fast and flexible solution.

That makes the IT operations side of my brain happy. But then my IT security side chirps up.

Multi-tenancy increases security risks, as we no longer have end-to-end visibility and control coverage. Think of the property security of an apartment versus a private home. Multi-tenancy decreases responsiveness, too, as the service provider must balance the needs of his organization against the needs of yours. Think of the customer service you get from your telephone utility versus your in-house telecommunications specialist. Above and beyond that, simply by being a new architecture, cloud computing will bring an entirely new set of risks that can only be identified with time.

So how can we balance the benefits and risks of cloud computing? One way is to bring the cloud computing technologies in-house. The basics are readily available: virtualization, rapid provisioning, self-service, resource pooling, charge back. A data center built on the cloud computing model, but leveraging the best of an internal IT team: responsiveness, responsibility, and business domain knowledge.

My team has been using the terms “in-house cloud” or “private cloud” to describe our efforts to achieve this balance. This week, vendors led by EMC launched www.privatecloud.com as a resource for building such beasts. Check out their definition of private cloud. While the blog is VMware and EMC based, I wager it is only a matter of time before Microsoft and Compellent come out with comparable information.

Done right, private clouds or cloud computing built in-house will provide a smooth transition for organizations to get the benefits of this new architecture.

Virtualization and the physical security boundary

There are several laws of information security. Ask ten InfoSec pros and you will likely get ten different lists of laws, but I wager every one of them will agree on a couple of fundamentals. If an attacker can gain physical access to the computer, or if an attacker can modify the operating system, then the attacker can compromise the computer. The reason is that physical access allows an attacker to bypass the OS and its security controls and get at the data directly.

Now, switch gears and picture a virtual environment. The analog to the physical computer is the hypervisor. If an attacker can gain access to the hypervisor, he has the same abilities as if he had access to the physical computer. If an attacker can exploit the Windows or Linux server hosting Hyper-V or XenServer, then the attacker can compromise all of the virtual computers on that host.

It is a subtle shift in the way of thinking. In the past, only one server ran on one piece of hardware, and the security boundary was the server itself. Thus you would place a physical web server in the DMZ and physically wire it to the firewalls. Computers with different security postures (e.g., domain controllers) would be on separate physical hardware and wired into separate physical networks.

Thus the hypervisor should host servers that have relatively the same security posture. One should not, for instance, host domain controllers and public-facing web servers on the same hypervisor. Even if the public-facing web server is on a separate virtual network, you still run the risk of its compromise affecting the domain controllers.

The security boundary is the physical hardware running the hypervisor, not the individual virtual computer.

Installing ARCserve on Hyper-V Core

Hyper-V Core, or the Hyper-V role running on a Server Core installation of Windows Server 2008, provides only a command line interface. This makes installing management apps a bit tricky.

Take the CA ARCserve Backup agent, for example. You cannot simply log on and run the installer. Rather, you need to use the management console that comes with ARCserve (r12.5) to push the agent out to the Server Core host.

The normal caveats apply to push installations. Both the management console and the Server Core computers should be on the same network. Both computers should be in the same Windows domain (or have a domain trust relationship set up). Ensure the Windows firewall on the Hyper-V Core host is accepting inbound file sharing (CIFS) and remote procedure call (RPC) requests. Once those are in place, pushing the agent is straightforward.

Similar procedures apply to Diskeeper and anti-virus software.