Attacking hypervisors without exploits

The OpenSSL website was defaced this past Sunday. (Click here to see a screenshot from @DaveAtErrata on Twitter.) On Wednesday, OpenSSL released an announcement that read: “Initial investigations show that the attack was made via hypervisor through the hosting provider and not via any vulnerability in the OS configuration.” The announcement led to speculation that a hypervisor software exploit was being used in the wild.

Exploiting hypervisors, the foundation of infrastructure cloud computing, would be a big deal. To date, most attacks in the public cloud are pretty much the same as those in the traditional data center. People make the same sorts of mistakes and missteps, regardless of hosting environment. A good place to study this is the Alert Logic State of Cloud Security Report, which concludes: “It’s not that the cloud is inherently secure or insecure. It’s really about the quality of management applied to any IT environment.”

Some quick checking showed OpenSSL to be hosted by SpaceNet AG, which runs VMware vCloud off of HP Virtual Connect with NetApp and Hitachi storage. It was not long before VMware issued a clarification.

VMware: “We have no reason to believe that the OpenSSL website defacement is a result of a security vulnerability in any VMware products and that the defacement is a result of an operational security error.” OpenSSL then clarified: “Our investigation found that the attack was made through insecure passwords at the hosting provider, leading to control of the hypervisor management console, which then was used to manipulate our virtual server.”

No hypervisor exploit, no big deal. Right? Wrong.

Our security controls are built around owning the operating system and hardware. See, for example, the classic 10 Immutable Laws of Security. “Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore. Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore.” Hypervisor access lets the bad guy do both. It was just one wrong password choice. It was just one wrong networking choice for the management console. But it was game over for OpenSSL, and potentially any other customer hosted on that vCloud.

It does not take a software exploit to lead to a breach. Moreover, the absence of exploits is not the absence of lessons to be learned. Josh Little (@zombietango), a pentester I work with, has long said “exploits are for amateurs.” When Josh carried out an assignment on a VMware shop recently, he used a situation very much like the one at SpaceNet AG: he hopped onto the hypervisor management console. The point is to get in quickly, quietly, and easily. The technique is about finding the path of least resistance.

Leveraging architectural decisions and administrative sloppiness is a valid attack technique. Scale and automation are what change with cloud computing. It is this change that magnifies otherwise small mistakes by IT operations and makes compromises like OpenSSL’s possible. Low-quality IT management becomes even worse.

And cloud computing’s magnification effect on security is a big deal.

Building a better InfoSec community

How can we build a stronger community of speakers and leaders? I have a few thoughts. In some ways, this is a response to Jericho’s Building a better InfoSec conference post. I disagree with a couple of Jericho’s points. To be fair, he brings more experience in both attending conferences and reviewing CFPs. For that reason and others, I have a slightly different perspective.

Engagement should be encouraged, heckling discouraged. Hecklers and those looking to one-up the speaker should be run out of the room. But engagement, engagement is something different: sharing complementary knowledge, and pointing out ideas. Engagement is about raising everyone in the talk.

At the BSides Detroit conference, during OWASP Detroit meetings, and during MiSec talks, we get a lot of engagement. Rare is the speaker that goes ten or fifteen minutes without being interrupted. It is a good thing. If the audience has something to add, let’s get it in the discussion. If the speaker says something incorrect, let’s address it right off. In fact, many talks directly solicit feedback and ideas from the audience. Engagement, to me, is key to building a stronger local community.

Participation should be encouraged, waiting for rockstar status discouraged. I have seen people sit on the sidelines waiting until they had just enough experience, just enough content, just enough mojo to justify being a speaker. The only justification a community needs to accept a speaker is that the speaker is committed to putting the time into giving a great talk.

At local events, we have a mixed audience. I believe that every one of us has a unique perspective, a unique skill set, and unique knowledge. True, a pen-tester with 20 years of experience might not learn anything from someone with only a few years. Yet not all of our audience are pen-testers. It is the commitment to put together a good talk, practice it, research past talks of a similar nature, and solicit feedback that marks someone as a good presenter.

Let me give an example. At last week’s MiSec meeting, Nick Jacob presented on PoshSec. Nick (@mortiousprime) is interning with me this summer and has a total of ten weeks of paid InfoSec experience under his belt. Don’t get me wrong. Nick comes from EMU’s program and has done a lot of side work. But a 20 year veteran, Nick is not.

Nick’s talk was on applying PowerShell to the SANS Critical Security Controls. He structured his talk with engagement in mind. He covered a control and associated scripts for five or ten minutes, and then turned it over to the audience for feedback. What would the pen-testers in the room do to bypass these controls? What would the defenders do to counter the attacks? All in all, the presentation went over well and everyone left with new information and ideas. That is how to do it.

In sum, the better InfoSec communities remove the concerns speakers have about being heckled and being inadequate. A better community stresses engagement and participation. Such communities do so in ways that open up new opportunities for new members while strengthening the knowledge among those who have been in the profession a long time.

That is the trick to building a better InfoSec community.

Incident Management in PowerShell: Containment

Welcome to part three of our Incident Management series. On Monday, we reviewed preparing for incidents. Yesterday, we reviewed identifying indicators of compromise. Today’s article will cover containing the breach.

The PoshSec Steele release (0.1) is available for download on GitHub.

At this stage in the security incident, we have verified a security breach is in effect. We did this by noting changes in the state and behavior of the system. Perhaps group memberships have changed, suspicious software has been installed, or unrecognized services are now listening on new ports. Fortunately, during the preparation phase we integrated the system into our Disaster Recovery plan.


There are two concepts behind successful containment. First, use a measured response in order to minimize the impact on the organization. Second, leverage the disaster recovery program and execute the runbook to maintain services.

When a breach is identified, kill all services and processes that are not in the baseline (Stop-Process). Oftentimes attackers have employed persistence techniques, so we must set up the computer to prevent new processes from spawning (see @obscuresec’s Invoke-ProcessLock script). This stops the breach in progress and prevents the attacker from continuing on this machine.
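As a rough sketch of that baseline comparison, assuming a process baseline was captured during the preparation stage (the function and process names here are illustrative, not part of PoshSec):

```powershell
# Return process names that are running now but absent from the baseline.
# In production, the inputs would come from Import-Clixml (the saved
# baseline) and Get-Process, and each returned name would then be
# handed to Stop-Process for containment.
function Get-UnexpectedProcessName {
    param(
        [string[]]$BaselineNames,   # names captured during preparation
        [string[]]$CurrentNames     # names running at detection time
    )
    Compare-Object -ReferenceObject ($BaselineNames | Sort-Object -Unique) `
                   -DifferenceObject ($CurrentNames | Sort-Object -Unique) |
        Where-Object { $_.SideIndicator -eq '=>' } |   # present only in current state
        ForEach-Object { $_.InputObject }
}

# Illustrative use against live data; -WhatIf previews without killing:
#   Get-UnexpectedProcessName -BaselineNames $baseline -CurrentNames (Get-Process).Name |
#       ForEach-Object { Stop-Process -Name $_ -WhatIf }
```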

We now need to execute a disaster recovery runbook to resume services. Data files can be moved to a backup server using file replication services (New-DfsnFolderTarget). Services and software can be moved by replaying the build scripts on the backup server. The success metric here is minimizing downtime and data loss, thereby minimizing and potentially avoiding any business impact.

We can now move onto the network layer. If necessary, QoS and other NAC services can be set during the initial transfer. We then can move the compromised system onto a quarantine network. This VLAN should contain systems with the forensics and imaging tools necessary for the recovery process.

The switch commands for QoS, NAC, and VLAN vary by manufacturer. It is a good idea to determine what these commands are and how to execute them. A better idea is to automate these with PowerShell, leveraging the .Net Framework and libraries like SSH.Net and SharpSSH.
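One possible shape for that automation is below. The IOS-style command strings are assumptions (check them against your switch vendor's documentation), and the commented-out push uses the Posh-SSH module, one PowerShell wrapper around SSH.NET:

```powershell
# Vendor syntax varies, so generate the quarantine commands per vendor.
# These strings follow Cisco IOS conventions and are assumptions.
function New-QuarantineCommandSet {
    param(
        [string]$Interface,     # switch port of the compromised system
        [int]$QuarantineVlan    # VLAN holding the forensics and imaging tools
    )
    @(
        'configure terminal'
        "interface $Interface"
        "switchport access vlan $QuarantineVlan"
        'end'
    )
}

$commands = New-QuarantineCommandSet -Interface 'GigabitEthernet0/12' -QuarantineVlan 666

# With Posh-SSH installed, the commands could then be pushed to the
# switch (address and credentials are placeholders) -- illustrative only:
#   $session = New-SSHSession -ComputerName '10.0.0.2' -Credential (Get-Credential)
#   $commands | ForEach-Object { Invoke-SSHCommand -SSHSession $session -Command $_ }
#   Remove-SSHSession -SSHSession $session
```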

For more information about the network side of incident containment, please see Mick Douglas’s talk: Automating Incident Response. The concepts Mick discusses can be executed manually, automated with switch scripts, or automated with PowerShell and SSH libraries.

To summarize Containment, we respond in a measured way based on the value the system delivers to the organization. Containment begins with disaster recovery: fail-over the services and data and minimize the business impact. We can then move the affected system to a quarantine network, and move onto the next stage: Eradication. The value PowerShell delivers is in automating the Containment process. When minutes count and time is expensive, automation lowers the impact of a breach.

This article series is cross-posted on the PoshSec blog.

Incident Management in PowerShell: Preparation

We released PoshSec last Friday at BSides Detroit. We have named v0.1 the Steele release in honor of Will Steele. Will recognized PowerShell’s potential for improving an organization’s security posture early on. Last year, Matt Johnson — founder of the Michigan PowerShell User Group — joined Will and launched the PoshSec project. Sadly, Will passed away on Christmas Eve of 2011. A number of us have picked up the banner.

The Steele release team was led by Matt Johnson and included Rich Cassara (@rjcassara), Nick Jacob (@mortiousprime), Michael Ortega (@securitymoey), and J Wolfgang Goerlich (@jwgoerlich). You can download the code from GitHub. In memory of Will Steele.

This is the first of a five-part series exploring PowerShell as it applies to Incident Management.

So what is Incident Management? Incident Management is a practice comprised of six stages. We prepare for the incident with automation and application of controls. We identify when an incident occurs. Believe it or not, this is where most organizations fall down. If you look at the Verizon Data Breach Investigations Report, companies can go weeks, months, sometimes even years before they identify that a breach has occurred. So we prepare for it, we identify it when it happens, we contain it so that it doesn’t spread to other systems, and then we clean up and recover. Finally, we figure out what happened and apply the lessons learned to reduce the risk of a recurrence.

Formally, IM consists of the following stages: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. We will explore these stages this week and examine the role PowerShell plays in each.


The key practice in the Preparation stage is leveraging the time that you have on a project, before the system goes live. If time is money, the preparation time is the cheapest time.

Our most expensive time is later on, in the middle of a breach, or in a disaster recovery scenario. The server is in operation, the workflow is going on, and we are breaking the business by having that server asset unavailable. There is a material impact to the organization. It is very visible, from our management up to the CEO level. Downtime is our most expensive time.

The objective in Preparation is to bank as much time as possible. We want to ensure, therefore, that extra time is allocated during pre-launch for automating the system build, hardening the system, and implementing security controls. Then, when an incident does occur, we can identify and recover quickly.

System build is where PowerShell shines the brightest. As the DevOps saying goes, infrastructure is code. PowerShell was conceived as a task framework and admin automation tool, and it can be used to script the entire Windows server build process. Take the time to automate the process and, once done, place the build scripts in version control to track changes. When an incident occurs, we can then draw on these scripts to reduce our time to recover.

Once built, we can harden the system to increase the time it will take an attacker to breach our defenses. The CIS Security Benchmarks (Center for Internet Security) provide guidance on settings and configurations. As with the build, the focus is on scripting each step of the hardening. And again, we will want to store these scripts in version control for ready replay during an incident.

Finally, we implement security controls to detect and correct changes that may be indicators of compromise. For a breakdown of the desired controls, we can follow the CSIS 20 Critical Security Controls matrix. The Steele release of PoshSec automates (1) Inventory of Authorized and Unauthorized Devices; (2) Inventory of Authorized and Unauthorized Software; (11) Limitation and Control of Network Ports, Protocols, and Services; (12) Controlled Use of Administrative Privileges; and (16) Account Monitoring and Control.

The bottom line is that we baseline the areas of the system that attackers will change, store those baselines as XML files in version control, and check regularly for changes against the baseline. We use the Export-Clixml and Compare-Object cmdlets to simplify the process.
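A minimal sketch of that baseline-and-compare loop follows. Synthetic service objects are used so the sketch runs anywhere; on a real server the snapshots would come from Get-Service or similar cmdlets, and the service names here are made up:

```powershell
# At preparation time: snapshot the service state and save it as XML.
$baselinePath = Join-Path ([System.IO.Path]::GetTempPath()) 'services-baseline.xml'

$baseline = @(
    [pscustomobject]@{ Name = 'W3SVC'; Status = 'Running' }
    [pscustomobject]@{ Name = 'WinRM'; Status = 'Running' }
)
$baseline | Export-Clixml -Path $baselinePath

# During monitoring: re-read the baseline and compare to the current state.
$current = @(
    [pscustomobject]@{ Name = 'W3SVC';   Status = 'Running' }
    [pscustomobject]@{ Name = 'WinRM';   Status = 'Running' }
    [pscustomobject]@{ Name = 'EvilSvc'; Status = 'Running' }   # drift
)
$drift = Compare-Object -ReferenceObject (Import-Clixml $baselinePath) `
                        -DifferenceObject $current -Property Name, Status

# Any rows in $drift are changes against the baseline --
# candidate indicators of compromise worth investigating.
$drift
```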

At this point in the process, we are treating our systems like code. The setup and securing are completed using PowerShell scripts. The final state is baselined. The baselines, along with the scripts, are stored in version control. We are now prepared for a security incident to occur.


Next step: Identification
Tomorrow, we will cover the Identification stage. What happens when something changes against the baseline? Say, a new user with a suspicious name added to a privileged group. Maybe it is a new Windows service that is looking up suspect domain names and sending traffic out over the Internet. Whatever it is, we have a change. That change is an indicator of compromise. Tomorrow, we will review finding and responding to IOCs.

This article series is cross-posted on the PoshSec blog.

Software vulnerability lifecycle

How long does it take to go from excitement to panic? Put differently, how long is the vulnerability lifecycle?

We know the hardware side of the story. Moore’s law predicts that transistor density doubles every 18 months. On the street, factoring in leasing, this means computing power jumps up every 36 months.

Now let’s cover the software side of the story. It takes a couple of years for software ideas to be developed and to reach critical mass. We see a 24-month development cycle. Add another 6-12 months for the software to become prevalent and investigated by hackers, both ethical and not.

I made a prediction this past weekend. Some at BSides Chicago were calling this Wolf’s Law. Not me. I checked the video replay. Nope. It is simply a hunch I have. Start the clock when developers get really excited about software, tools, or techniques. Stop the clock when a hacker presents an attack at a well-known conference.

Wolf’s Hunch says it takes 36 months to go from excitement to panic.

As a security industry, the trick is to get ahead of the process. How could we engage the developers at months 1-12? One way might be to attend dev conferences. Here is how I put it at BSides Chicago:

“You know what is scary? Right now, as we are all in here talking, there is a software developer conference going on. Right now. There are a whole bunch of software developer guys talking about the next biggest thing. 36 months from now, what the developers are really excited about, we will be panicking about.”

I checked the news this morning. During this past weekend, NY Disrupt was in full swing. At approximately the time I was speaking, developers were hard at it in the Hackathon. Lots of people are excited about the results, such as Jarvis:

“Jarvis works, using APIs provided by Twilio, Weather Underground and Ninja Blocks to help you control your home and check the current conditions, headlines and what’s making news, and more, all just by dialing a number from any telephone and issuing voice commands. It’s like a Siri, but housed on Windows Azure and able to plug into a lot more functionality.”

Uh huh. A Jarvis. Voice control. Public APIs. What could possibly go wrong?

Will my hunch play out? Check back here in May 2016. My money is on a story about a rising infosec star who is demonstrating how home APIs can be misused.

Out and About: Great Lakes InfraGard Conference

I am presenting a breakout session at this year’s Great Lakes InfraGard Conference. Hope to see you there.


Securing Financial Services Data Across The Cloud: A Case Study

We came from stock tickers, paper orders, armored vehicles, and guarded vaults. We moved to data bursts, virtual private networks, and protocols like Financial Information eXchange (FIX). While our objective remains the same, protect the organization and protect the financial transactions, our methods and technologies have radically shifted. Looking back is not going to protect us.

This session presents a case study on a financial services firm that modernized its secure data exchange. The story begins with the environment that was developed in the previous decade. We will then look at high-level threat modelling and architectural decisions. A security-focused architecture works at several layers, and this talk will explore them in depth, including Internet connections, firewalls, perimeters, hardened operating systems, encryption, data integration, and data warehousing. The case study concludes with how the firm transformed the infrastructure, layer by layer, protocol by protocol, until we were left with a modern, efficient, and security-focused architecture. After all, nostalgia has no place in financial services data security.

Surviving the Robot Apocalypse

I am on the latest BSides Chicago podcast episode: The Taz, the Wolf, and the exclusives. Security Moey interviewed me about a new talk I am developing for Chicago, titled Surviving the Robot Apocalypse.

The inspiration comes from Twitter. Tazdrumm3r once said, “@jwgoerlich <~~ My theory, he’s a Terminator from the future programmed 4 good 2 rally & lead the fight against SkyNet, MI branch.” A few weeks back, we were discussing the robotics articles I wrote for Servo magazine and some ‘bots I built with my son. To which, Infosec_Rogue said, “@jwgoerlich not only welcomes our robot overlords, he helped create them.”

I can roll with that. Let’s do it.

The goal of this session is to cover software security and software vulnerabilities in an enjoyable way. Think Naked Boulder Rolling and Risk Management. Unless, of course, you didn’t enjoy Naked Boulder Rolling. In that case, imagine some other talk I gave that you enjoyed. Or some other talk someone else gave that you enjoyed. Yeah. Pick one. Got it? Surviving is like your favorite talk, only for software security principles and their applicability to InfoSec.

I hope to see you in Chicago.


Surviving the Robot Apocalypse

Abstract: The robots are coming to kill us all. That, or the zombies. One way or the other, humanity stands on the brink. While many talks have focused on surviving the zombie apocalypse, few have given us insights into how to handle the killer robots. This talk seeks to fill that void. By exploring software security flaws and vulnerabilities, we will learn ways to bypass access controls, extract valuable information, and cheat death. Should the unthinkable happen and the apocalypse not come, the information learned in this session can also be applied to protecting less-than-lethal software. At the end of the day, survival is all about the software.

Privilege management at CSO

Least Privilege Management (LPM) is in the news …

The concept has been around for decades. J. Wolfgang Goerlich, information systems and information security manager for a Michigan-based financial services firm, said it was, “first explicitly called out as a design goal in the Multics operating system, in a paper by Jerome Saltzer in 1974.”

But, it appears that so far, it has still not gone mainstream. Verizon’s 2012 Data Breach Investigations Report found that, of the breaches it surveyed, 96% were not highly difficult for attackers and 97% could have been avoided through simple or intermediate controls.

“In an ideal world, the employee’s job description, system privileges, and available applications all match,” Goerlich said. “The person has the right tools and right permissions to complete a well-defined business process.”

“The real world is messy. Employees often have flexible job descriptions. The applications require more privileges than the business process requires,” he said. “[That means] trade-offs to ensure people can do their jobs, which invariably means elevating the privileges on the system to a point where the necessary applications function. But no further.”

Read the full article at CSO: Privilege management could cut breaches — if it were used

Considerations when testing Denial of Service

Stress-testing has long been a part of every IT Operations toolkit. When a new system goes in, we want to know where the weaknesses and bottlenecks are. Stress-testing is the only way.

Now, hacktivists have been providing stress-tests for years in the form of distributed denial of service attacks. Such DDoS attacks now accompany just about any news event. As moves are underway to make DDoS a form of free speech, we can expect more in the future.

With that as a background, I have been asked recently for advice on how to test for a DDoS. Here are some considerations.

First, test on the farthest router away that you own. The “you own” part is essential. Let’s not run a DDoS across the public Internet or even across your hosting provider’s network. That is a quick way to run afoul of terms of service and, potentially, the law. Moreover, it is not a good test. A DDoS from, say, home will be bottlenecked by your ISP and the Internet backbone (1-10 Mbps). A better test is off the router interface (100-1000 Mbps).

Second, use a distributed test. A distributed test is a common practice when stress-testing. It is required to put the D in DDoS. Alright, that was a bad joke. The point is that you want to keep individual device differences, such as a bottleneck within the OS or the testing application, from skewing the test. My rule of thumb is 5:1. So if you are testing one router interface at 1 Gbps, you would want to send 5 Gbps of data via five separate computers.

Third, use a combination of traditional administration tools and the tools in use for DDoS. Stress-test both the network layer and the HTTP layer of the application. If I were to launch a DDoS test, I would likely go with iperf, loic, and hoic. Check also for tools specific to the web server, such as ab for Apache. Put together a test plan with test scripts and repeat this plan in a consistent fashion.
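For the "consistent fashion" part, the runs can be wrapped in a script so every network segment gets the identical test. A sketch is below; the flags follow iperf3 and Apache Bench (ab) conventions, and the target address and volumes are placeholders to tune for your own plan:

```powershell
# Build the fixed command list for one test run. The target and
# volumes below are placeholders, not recommendations.
function New-StressTestPlan {
    param([string]$Target)
    @(
        "iperf3 -c $Target -t 60 -P 10"          # network layer: 10 parallel streams for 60 seconds
        "ab -n 100000 -c 500 http://$Target/"    # HTTP layer: 100k requests, 500 concurrent
    )
}

# Write the plan to disk so all five test machines replay it identically
# on each segment (Internet routers, core, firewalls, web servers).
$plan     = New-StressTestPlan -Target '192.0.2.10'
$planPath = Join-Path ([System.IO.Path]::GetTempPath()) 'ddos-test-plan.txt'
$plan | Set-Content -Path $planPath
```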

Fourth, test with disposable systems. The best test machine is one with a basic installation of the OS, the test tools, and the test scripts. This minimizes variables in the test. Also, while rare, it is not unheard of for tools like loic and hoic to be bundled with malicious software. Once the test is complete, the systems used for testing should be re-imaged before being returned to service.

Let’s summarize by looking at a hypothetical example. Assume we have two Internet routers, two core routers, two firewalls, and then two front-end web servers. All are on 1 Gbps network connections. I would re-image five notebooks with a base OS and the DDoS tools. With all five plugged into the network switch on the Internet routers, I would execute the DDoS test and collect the results. Then repeat the exact same test (via script) on the core routers network, on the firewall network, and on the web server network. The last step is to review the entire data set to identify bottlenecks and make recommendations for securing the network against DDoS.

That’s it. These are simple considerations that reduce the risk and increase the effectiveness of DDoS testing.

Incog: past, present, and future

I spent last summer tinkering with covert channels and steganography. It is one thing to read about a technique. It is quite another to build a tool that demonstrates a technique. To do the thing is to know the thing, as they say. It is like the art student who spends time duplicating the work of past masters.

And what did I duplicate? I started with the favorites: bitmap steganography and communication over ping packets. I did Windows-specific techniques such as NTFS ADS, shellcode injection via Kernel32.dll, mutexes, and RPC. I also replicated Dan Kaminsky’s Base32 over DNS. Then I tossed in a few evasion techniques like numbered sets and entropy masking.

Incog is the result of this summer of fun. Incog is a C# library and a collection of demos which illustrate these basic techniques. I released the full source code last fall at GrrCon. You can download Incog from GitHub.

If you would like to see me present on Incog, including my latest work with new channels and full PowerShell integration, I am up for consideration for Source Boston 2013.


Please vote here:

This year SOURCE Boston is opening up one session to voter choice. Please select the session you would like to see at SOURCE Boston 2013. Please only vote once (we will be checking) and vote for the session you would be the most interested in seeing. Voting will close on January 15th.

OPTION 5: Punch and Counter-punch with .Net Apps, J Wolfgang Goerlich, Alice wants to send a message to Bob. Not on our network, she won’t! Who are these people? Then Alice punches a hole in the OS to send the message using some .Net code. We punch back with Windows and .Net security configurations. Punch and counter-punch, breach and block, attack and defend, the attack goes on. With this as the back story, we will walk thru sample .Net apps and Windows configurations that defenders use and attackers abuse. Short on slides and long on demo, this presentation will step thru the latest in Microsoft .Net application security.