IT Maturity: The First Ten Steps to a Secure Future



Today’s security leaders drive change across business strategy, technology, compliance and legal, and operations. Yet even as the scope has widened, the fundamental questions remain the same: Where are we today? Where are our benchmarks and targets? How can we best close the gap?

A risk-based maturity approach is often employed to answer these questions. Such a model, when fully realized, comprises the following three components:

  • Controls Framework – this could be a top-level framework such as ISO 27001/27002 or NIST 800-53, an industry framework such as NERC CIP or PCI DSS, or a third-party framework such as the CIS Critical Security Controls
  • Maturity Framework – the most common is the Capability Maturity Model Integration (CMMI); however, various standards have their own maturity frameworks, and some organizations have developed internal maturity models
  • Cultural Framework – the most common is the Security Culture Framework

Using all three frameworks yields the deepest insight into the current state and the clearest answers about potential improvements. That said, an assessment can be performed using the controls framework alone to get a quick read. It is up to the organization to determine the level of effort to invest in the assessment. For the rest of this article, we will assume that all three frameworks are in play.

In a risk-based maturity approach, having determined the frameworks, the security leader and his team then complete the following ten-step process:

  1. Assess the security program’s controls and compliance to the control framework
  2. For each implemented control, assess the current people, processes, and technologies
  3. Perform both process validation (is it functioning as designed) and technical validation (is the control sufficient) to ensure the control addresses the risk
  4. For each implemented and functioning control, assess the maturity and identify improvements
  5. Document implemented controls that are not addressing the risk, as well as missing controls
  6. Analyze the organization’s capabilities and constraints for these missing controls (see our previous article on Action-Oriented IT Risk Management)
  7. Develop a project plan for immediate, short-term, mid-term, and long-term improvements in the controls
  8. Create a communications plan and project metrics to ensure that these improvements change the culture as well as changing the security posture, using a cultural framework
  9. Execute the plan
  10. Re-assess the controls, maturity, and culture on a regular basis to adjust the plan

The above ten-step process establishes, maintains, and improves the quality of the risk management program and the overall security posture. It baselines the current program and provides a roadmap for making process and technical improvements. Each improvement is tracked technically (does it work?), procedurally (is it sustainable?), and culturally (is it performed implicitly?). Culture is key: it turns the IT risk program into a set of behaviors adopted by the entire organization. When everyone does their part to protect the organization, without the need for excessive oversight and intervention, the security leader moves away from day-to-day supervision and toward strategy and value.

Controls, maturity, culture: three levers for advancing the security program and elevating the leader’s role.

Cross-posted at http://content.cbihome.com/blog/it_maturity

Moving Tokens to the Point of Sale Can Slow Crooks


Before Target, there was TJX, the major 2007 breach that impacted about 45 million credit cards. The crime and its prevention were basic, and provide a lesson for today’s retailers that are battling a new wave of data theft.

It is easy to forget, going on a decade later, how relatively simple the TJX crime actually was. TJX’s Wi-Fi was unprotected and the wireless network allowed access to the back-end IT systems that stored credit cards in the clear in centralized databases.

Several security improvements have been made since then, of course, but the most fundamental is shifting from using credit card information to tokens in those back-end databases. Using tokens as part of a process called format-preserving tokenization meant that criminals could not just walk out the front door with the database. PCI issued guidance on tokenization, many retailers adopted it, and for a while the security controls seemed to be working.
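
To make the idea concrete, here is a conceptual sketch of what a token vault does. It is illustration only, not any vendor’s implementation: real format-preserving tokenization relies on vetted cryptographic schemes, and the function name and card number below are invented for the example.

    # Conceptual sketch: replace a 16-digit PAN with a random surrogate that
    # keeps the same length and the last four digits, and record the mapping
    # in a vault. Real systems use vetted cryptographic schemes; this is
    # illustration only.
    function New-CardToken {
        param([string]$Pan)

        $lastFour = $Pan.Substring($Pan.Length - 4)
        $random   = -join (1..12 | ForEach-Object { Get-Random -Minimum 0 -Maximum 10 })

        [pscustomobject]@{
            Token = "$random$lastFour"   # same length and character class as the PAN
            Pan   = $Pan                 # held only inside the secured vault
        }
    }

    $vaultEntry = New-CardToken -Pan '4111111111111111'
    "Back-end systems store only: $($vaultEntry.Token)"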

Until, of course, Target took TJX’s place as the splashy retail breach. Approximately 40 million credit cards were stolen in November and December 2013. Target was using format-preserving tokenization. So what happened?

Unable to get readable credit card numbers from Target’s database, the criminals went after the point of sale systems. Here, the credit cards were available in the clear. It was only after reading the card information that the token was generated and passed on to the retailer’s back-end systems. On the one hand, the impact on consumers of the TJX and Target breaches was roughly the same. On the other hand, the cost to the attacker was much higher. Rather than gaining access to one database, the criminals had to gain access to 1,700 stores and get the data back out of those secured networks.

If we want to stop attacks such as the Target breach, tokenization needs to be moved up to the point of interaction. Emerging payment methods like Apple Pay and Google Wallet do just that. The tokenization occurs when the consumer enrolls in Apple Pay or Google Wallet. The token is passed via Near Field Communication (NFC) to the point of sale and the card information is never directly exposed within the retailers’ systems. We just raised the criminal’s level of difficulty from one database to a thousand stores to millions of phones.

That is not to suggest that systems like Apple Pay and Google Wallet are the stopping point. As NFC payments become ubiquitous, so will the efforts to steal from consumers. Mass adoption is well underway, as demonstrated by the separate announcements late last year that McDonald’s and Subway are supporting NFC payments in over 40,000 locations. Not surprisingly, news has begun to surface about Apple Pay fraud, including attacks on the enrollment process and schemes to add wallets to stolen Apple devices.

Each action we take shifts the criminals’ activity. The adoption of tokenization on back-end systems moved the criminals to the point of sale systems. The adoption of NFC moves the criminals to consumers’ devices. New controls provide protection for a finite amount of time, but crime ultimately finds a way. Retailers who inspect the entire payment processing chain regularly, performing ethical hacking to find the cracks, are the retailers who avoid being the next splashy name in the news. Those that lag behind and only adopt the controls that fight the last breach remain criminals’ favorite marks.

Originally posted at: http://www.paymentssource.com/news/paythink/moving-tokens-to-the-point-of-sale-can-slow-crooks-3021519-1.html

Who Watches the Watchers? Firewall Monitoring


Even in the face of being declared dead — often and repeatedly since 2004 — the firewall remains a viable security control. De-perimeterization simply leads to a specialization of controls between IT in the cloud and IT on the ground, with the firewall taking on new roles internally. Especially for payment processing, healthcare, and energy, the firewalled network is still a key element of today’s standards and regulations.

The trouble is, all firewalls share a weakness. It isn’t in the IP stack, firmware, or interfaces. No, the weakness is much more fundamental. All firewalls depend on proper configuration and are a single change away from a breach.

Barracuda Networks is well known for its Web Application Firewalls (WAF) which protect against attacks such as SQL injection and others listed in the OWASP Top 10. Back in 2011, however, a change process went awry and disabled Barracuda’s WAF protection for its own servers. Within hours, some tens of thousands of records were stolen via an injection vulnerability on a Barracuda website. All it took was a single misconfiguration.

Tools for firewall change management, such as FireMon Security Manager 8.0, have sprung up to address these concerns. Centralizing the audit log for all changes on all firewalls is great for looking back; however, as Barracuda experienced, a breach can happen within hours. IT admins require real-time change detection and notification, which is one of the many features FireMon offers. It can also model complex changes and provide a what-if analysis, cross-referencing the firewalls with an organization’s policy and compliance obligations.
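
As a rough sketch of the watch-the-watcher idea, the following compares an exported firewall configuration against a stored baseline and alerts on any drift. The file paths, mail settings, and export mechanism are assumptions; a product such as FireMon does this continuously and across many devices.

    # Minimal drift check: diff the latest exported config against a baseline.
    # Paths and mail settings are placeholders.
    $baseline = Get-Content 'C:\FirewallBaselines\edge-fw-01.cfg'
    $current  = Get-Content '\\configserver\exports\edge-fw-01.cfg'

    $drift = Compare-Object -ReferenceObject $baseline -DifferenceObject $current

    if ($drift) {
        # Alert in real time rather than waiting for the next audit cycle
        Send-MailMessage -From 'secops@example.com' -To 'oncall@example.com' `
            -Subject 'Firewall change detected on edge-fw-01' `
            -Body ($drift | Out-String) -SmtpServer 'smtp.example.com'
    }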

Firewalls will continue to be a foundational control for an organization’s internal IT. The control for the controller, the watcher for the watcher, is secure change management. This means change planning, detection, auditing, and alerting. Operationally, it also means tracking history and the ability to troubleshoot issues by comparing changes across time. For organizations running complex segmented networks, management tools like FireMon are invaluable for preventing breach by change.

Securing The Development Lifecycle


One line. Ever since the Blaster worm snaked across the Internet, the security community has known that it takes but one line of vulnerable code. Heartbleed and iOS Goto Fail made the point again last year. Both were one line mistakes. Even the Bash Shellshock vulnerability was made possible by a small number of lines of code.

Let’s put that in perspective. Today, our thermostats have tens of thousands of lines of code. Our cars have hundreds of thousands and our operating systems have millions. The industry standard for quality is 0.69 defects per 1,000 lines of code. Any one of those defects can lead to the next Blaster or Heartbleed. And it only takes one.

To manage the risk of code-level vulnerabilities, many organizations have implemented security testing in their software development lifecycle. Such testing has touch-points in the implementation, verification, and maintenance phases. For example, an organization might:

Implement. As the code is produced and checked into source control, run static application security testing (SAST) with HP Fortify Static Code Analyzer. This identifies vulnerabilities and allows them to be corrected during development.

Verify. As part of the final quality assurance checks, perform dynamic application security testing (DAST) with HP WebInspect. This identifies security concerns in the running software. Some organizations go so far as to integrate their QA scripts with the DAST processes.

Maintain. Throughout the lifespan of the software, perform periodic audits with SAST and DAST to catch defects introduced during maintenance and vulnerabilities discovered after release.
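
As a sketch of what the Implement-phase gate might look like in a build script: the scanner executable, its arguments, and the output format below are hypothetical placeholders, so substitute the command line of whatever SAST tool is in use.

    # Hedged sketch of a build gate: run a SAST scan and fail the build when
    # high-severity findings appear. 'sast-cli.exe', its arguments, and the
    # CSV output are hypothetical placeholders for the real scanner.
    $results = 'C:\build\artifacts\sast-results.csv'
    & 'C:\tools\sast-cli.exe' --source 'C:\build\src' --output $results

    $highs = @(Import-Csv $results | Where-Object { $_.Severity -eq 'High' })

    if ($highs.Count -gt 0) {
        Write-Error "Build blocked: $($highs.Count) high-severity findings."
        exit 1
    }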

This example approaches security as a quality management concern with routine checks and gates. With SAST and DAST, along with other activities during the development lifecycle, organizations reduce defects and manage the risk of software exploitation. The toolset for these tasks has matured to the point where program management is now possible, including scheduling, tracking, communication, metrics, and more.

Moreover, the tooling must be easy to use and fit within the development workflow. It must be stood up in a way that is usable by the development team. Good security is usable security. The success of a tool depends upon the adoption and implementation.

Managing a security program is more than tooling, of course. There are other, largely manual tasks to perform, primarily around requirements and design. These include establishing security requirements, threat modeling, security architecture, and manual application penetration testing. Mature programs will contain all these touchpoints and more. Yet significant risk reduction can be achieved simply by beginning down the path toward a full secure development program.

In sum, take any code base. How many lines of code does it have? Given the standard defect density, how many possible defects are there? Take a moment to run SAST and DAST. The results may be surprising, if not downright scary. Thankfully the solutions and processes exist to find and secure that one line of code.
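
For a quick back-of-the-envelope answer, a few lines of PowerShell will count the lines and apply the defect density cited above; the path and file extensions are examples.

    # Estimate potential defects from lines of code at 0.69 defects per KLOC.
    $loc = Get-ChildItem -Path 'C:\src\myapp' -Recurse -Include *.cs, *.js |
        Get-Content |
        Measure-Object -Line |
        Select-Object -ExpandProperty Lines

    $estimated = [math]::Round($loc * 0.69 / 1000, 1)
    "{0:N0} lines of code, roughly {1} potential defects" -f $loc, $estimated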

Attacking hypervisors without exploits


The OpenSSL website was defaced this past Sunday (a screenshot was shared by @DaveAtErrata on Twitter). On Wednesday, OpenSSL released an announcement that read: “Initial investigations show that the attack was made via hypervisor through the hosting provider and not via any vulnerability in the OS configuration.” The announcement led to speculation that a hypervisor software exploit was being used in the wild.

Exploiting hypervisors, the foundation of infrastructure cloud computing, would be a big deal. To date, most attacks in the public cloud are pretty much the same as those in the traditional data center. People make the same sorts of mistakes and missteps, regardless of hosting environment. A good place to study this is the Alert Logic State of Cloud Security Report, which concludes: “It’s not that the cloud is inherently secure or insecure. It’s really about the quality of management applied to any IT environment.”

Some quick checking showed OpenSSL to be hosted by SpaceNet AG, which runs VMware vCloud off of HP Virtual Connect with NetApp and Hitachi storage. It was not long before VMware issued a clarification.

VMware: “We have no reason to believe that the OpenSSL website defacement is a result of a security vulnerability in any VMware products and that the defacement is a result of an operational security error.” OpenSSL then clarified: “Our investigation found that the attack was made through insecure passwords at the hosting provider, leading to control of the hypervisor management console, which then was used to manipulate our virtual server.”

No hypervisor exploit, no big deal. Right? Wrong.

Our security controls are built around owning the operating system and hardware. See, for example, the classic 10 Immutable Laws of Security. “Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore. Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore.” Hypervisor access lets the bad guy do both. It was just one wrong password choice. It was just one wrong networking choice for the management console. But it was game over for OpenSSL, and potentially any other customer hosted on that vCloud.

It does not take a software exploit to lead to a breach. Moreover, the absence of exploits is not the absence of lessons to be learned. Josh Little (@zombietango), a pentester I work with, has long said “exploits are for amateurs”. When Josh carried out an assignment at a VMware shop recently, he used a situation very much like the one at SpaceNet AG: he hopped onto the hypervisor management console. The point is to get in quickly, quietly, and easily. The technique is about finding the path of least resistance.

Leveraging architectural decisions and administrative sloppiness is a valid attack technique. What changes with cloud computing is scale and automation. It is this change that magnifies otherwise small mistakes by IT operations and makes compromises like OpenSSL’s possible. Low-quality IT management becomes even more dangerous.

And cloud computing’s magnification effect on security is a big deal.

Building a better InfoSec community


How can we build a stronger community of speakers and leaders? I have a few thoughts. In some ways, this is a response to Jericho’s Building a better InfoSec conference post. I disagree with a couple of Jericho’s points. To be fair, he brings more experience in both attending conferences and reviewing CFPs. For that reason and others, I have a slightly different perspective.

Engagement should be encouraged, heckling discouraged. Hecklers and those looking to one-up the speaker should be run out of the room. But engagement, engagement is something different: sharing complementary knowledge, and pointing out ideas. Engagement is about raising everyone in the talk.

At the BSides Detroit conference, during OWASP Detroit meetings, and during MiSec talks, we get a lot of engagement. Rare is the speaker that goes ten or fifteen minutes without being interrupted. It is a good thing. If the audience has something to add, let’s get it in the discussion. If the speaker says something incorrect, let’s address it right off. In fact, many talks directly solicit feedback and ideas from the audience. Engagement, to me, is key to building a stronger local community.

Participation should be encouraged, waiting for rockstar status discouraged. I have seen people sit on the sidelines waiting until they had just enough experience, just enough content, just enough mojo to justify being a speaker. The only justification a community needs to accept a speaker is that the speaker is committed to putting the time into giving a great talk.

At local events, we have a mixed audience. I believe that every one of us has a unique perspective, a unique skill set, and unique knowledge. True, a pen-tester with 20 years of experience might not learn anything from someone with only a few years. Yet not all of our audience are pen-testers. It is the commitment to put together a good talk, practice it, research past talks of a similar nature, and solicit feedback that marks someone as a good presenter.

Let me give an example. At last week’s MiSec meeting, Nick Jacob presented on PoshSec. Nick (@mortiousprime) is interning with me this summer and has a total of ten weeks of paid InfoSec experience under his belt. Don’t get me wrong. Nick comes from EMU’s program and has done a lot of side work. But a 20 year veteran, Nick is not.

Nick’s talk was on applying PowerShell to the SANS Critical Security Controls. He structured his talk with engagement in mind. He covered a control and associated scripts for five or ten minutes, and then turned it over to the audience for feedback. What would the pen-testers in the room do to bypass these controls? What would the defenders do to counter the attacks? All in all, the presentation went over well and everyone left with new information and ideas. That is how to do it.

In sum, the better InfoSec communities remove the concerns speakers have about being heckled and being inadequate. A better community stresses engagement and participation. Such communities do so in ways that open up new opportunities for new members while strengthening the knowledge among those who have been in the profession a long time.

That is the trick to building a better InfoSec community.

Incident Management in PowerShell: Containment


Welcome to part three of our Incident Management series. On Monday, we reviewed preparing for incidents. Yesterday, we reviewed identifying indicators of compromise. Today’s article will cover containing the breach.

The PoshSec Steele release (0.1) is available for download on GitHub.

At this stage in the security incident, we have verified that a security breach has occurred. We did this by noting changes in the state and behavior of the system. Perhaps group memberships have changed, suspicious software has been installed, or unrecognized services are now listening on new ports. Fortunately, during the preparation phase we integrated the system into our Disaster Recovery plan.

 

Containment
There are two concepts behind successful containment. First, use a measured response in order to minimize the impact on the organization. Second, leverage the disaster recovery program and execute the runbook to maintain services.

When a breach is identified, kill all services and processes that are not in the baseline (Stop-Process). Attackers often employ persistence techniques, so we must also set up the computer to prevent new processes from spawning (see @obscuresec’s Invoke-ProcessLock script). This stops the breach in progress and prevents the attacker from continuing on this machine.
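
A minimal sketch of that first step, assuming a process baseline was exported with Export-Clixml during preparation (the file path is an example and PoshSec’s own functions may differ):

    # Stop processes that do not appear in the stored baseline. Review the
    # list before stopping anything in production; some drift is legitimate.
    $baseline = Import-Clixml 'C:\Baselines\server01-processes.xml'
    $running  = Get-Process

    $unexpected = Compare-Object -ReferenceObject $baseline -DifferenceObject $running `
        -Property ProcessName |
        Where-Object { $_.SideIndicator -eq '=>' }

    foreach ($proc in $unexpected) {
        Stop-Process -Name $proc.ProcessName -Force -ErrorAction SilentlyContinue
    }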

We now need to execute a disaster recovery runbook to resume services. Data files can be moved to a backup server using file replication services (New-DfsnFolderTarget). Services and software can be moved by replaying the build scripts on the backup server. The success metric here is minimizing downtime and data loss, thereby minimizing and potentially avoiding any business impact.
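
As an illustration of the fail-over, the namespace, shares, and server names below are hypothetical; the cmdlets come from the DFS Namespaces module.

    # Point the DFS namespace folder at the backup server so clients follow
    # the namespace rather than the compromised host. Names are placeholders.
    New-DfsnFolderTarget -Path '\\corp.example.com\data\finance' `
        -TargetPath '\\backupsrv01\finance' -State Online

    # Take the compromised server's target offline
    Set-DfsnFolderTarget -Path '\\corp.example.com\data\finance' `
        -TargetPath '\\server01\finance' -State Offline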

We can now move on to the network layer. If necessary, QoS and other NAC services can be set during the initial transfer. We can then move the compromised system onto a quarantine network. This VLAN should contain systems with the forensics and imaging tools necessary for the recovery process.

The switch commands for QoS, NAC, and VLAN vary by manufacturer. It is a good idea to determine what these commands are and how to execute them. A better idea is to automate these with PowerShell, leveraging the .Net Framework and libraries like SSH.Net and SharpSSH.
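
A rough sketch of driving a switch from PowerShell with the SSH.NET library follows. The DLL path, switch address, account, and command are placeholders, and multi-step configuration sessions typically need an interactive shell stream rather than one-off commands.

    # Connect to a switch over SSH and run a command via SSH.NET.
    Add-Type -Path 'C:\libs\Renci.SshNet.dll'

    $password = Read-Host 'Switch password'
    $client   = New-Object Renci.SshNet.SshClient('10.0.0.2', 'netadmin', $password)
    $client.Connect()

    # Hypothetical one-shot command; real VLAN and QoS changes vary by vendor
    $result = $client.RunCommand('show vlan brief')
    $result.Result

    $client.Disconnect()
    $client.Dispose()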

For more information about the network side of incident containment, please see Mick Douglas’s talk, Automating Incident Response. The concepts Mick discusses can be executed manually, automated with switch scripts, or automated with PowerShell and SSH libraries.

To summarize Containment: we respond in a measured way based on the value the system delivers to the organization. Containment begins with disaster recovery: fail over the services and data to minimize the business impact. We can then move the affected system to a quarantine network and move on to the next stage, Eradication. The value PowerShell delivers is in automating the Containment process. When minutes count and time is expensive, automation lowers the impact of a breach.

This article series is cross-posted on the PoshSec blog.

Incident Management in PowerShell: Preparation


We released PoshSec last Friday at BSides Detroit. We have named v0.1 the Steele release in honor of Will Steele. Will recognized PowerShell’s potential for improving an organization’s security posture early on. Last year, Matt Johnson — founder of the Michigan PowerShell User Group — joined Will and launched the PoshSec project. Sadly, Will passed away on Christmas Eve of 2011. A number of us have picked up the banner.

The Steele release team was led by Matt Johnson and included Rich Cassara (@rjcassara), Nick Jacob (@mortiousprime), Michael Ortega (@securitymoey), and J Wolfgang Goerlich (@jwgoerlich). You can download the code from GitHub. In memory of Will Steele.

This is the first of a five part series exploring PowerShell as it applies to Incident Management.

So what is Incident Management? Incident Management is a practice made up of six stages. We prepare for the incident with automation and the application of controls. We identify when an incident occurs. Believe it or not, this is where most organizations fall down. If you look at the Verizon Data Breach Investigations Report, companies can go weeks, months, sometimes even years before they identify that a breach has occurred. So we prepare for it, we identify it when it happens, we contain it so that it doesn’t spread to other systems, and then we clean up and recover. Finally, we figure out what happened and apply the lessons learned to reduce the risk of a recurrence.

Formally, IM consists of the following stages: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. We will explore these stages this week and examine the role PowerShell plays in each.

 

Preparation
The key practice in the Preparation stage is leveraging the time that you have on a project, before the system goes live. If time is money, the preparation time is the cheapest time.

Our most expensive time is later on, in the middle of a breach, or in a disaster recovery scenario. The server is in operation, the workflow is going on, and we are breaking the business by having that server asset unavailable. There is a material impact to the organization. It is very visible, from our management up to the CEO level. Downtime is our most expensive time.

The objective in Preparation is to bank as much time as possible. We want to ensure, therefore, that extra time is allocated during pre-launch for automating the system build, hardening the system, and implementing security controls. Then, when an incident does occur, we can identify and recover quickly.

System build is where PowerShell shines the brightest. As the DevOps saying goes, infrastructure is code. PowerShell was conceived as a task framework and admin automation tool, and it can be used to script the entire Windows server build process. Take the time to automate the process and, once done, place the build scripts in a CVS (code versioning software) to track changes. When an incident occurs, we can then draw on these scripts to reduce our time to recover.
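
A small, hedged fragment of what such a build script might contain; the roles, site name, and paths are examples, not a complete build.

    # Replayable build step: install server roles and recreate the site so the
    # build can be rerun from version control during recovery.
    Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools

    Import-Module WebAdministration
    New-Website -Name 'OrdersApp' -PhysicalPath 'D:\sites\orders' -Port 443 -Ssl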

Once built, we can harden the system to increase the time it will take an attacker to breach our defenses. The CIS Security Benchmarks (Center for Internet Security) provide guidance on settings and configurations. As with the build, the focus is on scripting each step of the hardening. And again, we will want to store these scripts in a CVS for ready replay during an incident.
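
A few illustrative hardening steps in the spirit of a CIS benchmark; the specific settings are examples and no substitute for the full benchmark.

    # Disable the legacy SMBv1 protocol
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

    # Ensure the Windows Firewall is enabled for all profiles
    Set-NetFirewallProfile -Profile Domain, Private, Public -Enabled True

    # Stop and disable an unneeded service
    Stop-Service -Name 'RemoteRegistry' -ErrorAction SilentlyContinue
    Set-Service  -Name 'RemoteRegistry' -StartupType Disabled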

Finally, we implement security controls to detect and correct changes that may be indicators of compromise. For a breakdown of the desired controls, we can follow the CSIS 20 Critical Security Controls matrix. The Steele release of PoshSec automates (1) Inventory of Authorized and Unauthorized Devices; (2) Inventory of Authorized and Unauthorized Software; (11) Limitation and Control of Network Ports, Protocols, and Services; (12) Controlled Use of Administrative Privileges; and (16) Account Monitoring and Control.

The bottom line is we baseline areas of the system that attackers will change, store those baselines as XML files in a CVS, and check regularly for changes against the baseline. We use the Export-Clixml and Compare-Object cmdlets to simplify the process.
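
A minimal baseline-and-compare example using those cmdlets; the path is an example, and Get-LocalGroupMember requires the newer LocalAccounts module (older systems can substitute an ADSI query or PoshSec’s own functions).

    # Baseline the local Administrators group once, then compare on a schedule.
    $baselinePath = 'C:\Baselines\administrators.xml'

    # One-time baseline (re-run after approved changes)
    Get-LocalGroupMember -Group 'Administrators' |
        Export-Clixml -Path $baselinePath

    # Scheduled check: any difference is a potential indicator of compromise
    $baseline = Import-Clixml -Path $baselinePath
    $current  = Get-LocalGroupMember -Group 'Administrators'

    Compare-Object -ReferenceObject $baseline -DifferenceObject $current -Property Name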

At this point in the process, we are treating our systems like code. The setup and securing is completed using PowerShell scripts. The final state is baselined. The baselines, along with the scripts, are stored in a CVS. We are now prepared for a security incident to occur.

 

Next step: Identification
Tomorrow, we will cover the Identification stage. What happens when something changes against the baseline? Say, a new user with a suspicious name added to a privileged group. Maybe it is a new Windows service that is looking up suspect domain names and sending traffic out over the Internet. Whatever it is, we have a change. That change is an indicator of compromise. Tomorrow, we will review finding and responding to IOCs.

This article series is cross-posted on the PoshSec blog.

Software vulnerability lifecycle


How long does it take to go from excitement to panic? Put differently, how long is the vulnerability lifecycle?

We know the hardware side of the story. Moore’s law predicts that transistor density doubles every 18 months. On the street, factoring in leasing cycles, this means computing power jumps up every 36 months.

Now let’s cover the software side of the story. It takes a couple of years for software ideas to be developed and to reach critical mass. We see a 24-month development cycle. Add another 6-12 months for the software to become prevalent and be investigated by hackers, both ethical and not.

I made a prediction this past weekend. Some at BSides Chicago were calling this Wolf’s Law. Not me. I checked the video replay. Nope. It is simply a hunch I have. Start the clock when developers get really excited about software, tools, or techniques. Stop the clock when a hacker presents an attack at a well-known conference.

Wolf’s Hunch says it takes 36 months to go from excitement to panic.

As a security industry, the trick is to get ahead of the process. How could we engage the developers at months 1-12? One way might be to attend dev conferences. Here is how I put it at BSides Chicago:

“You know what is scary? Right now, as we are all in here talking, there is a software developer conference going on. Right now. There are a whole bunch of software developer guys talking about the next biggest thing. 36 months from now, what the developers are really excited about, we will be panicking about.”

I checked the news this morning. During this past weekend, NY Disrupt was in full swing. At approximately the time I was speaking, developers were hard at it in the Hackathon. Lots of people are excited about the results, such as Jarvis:

“Jarvis works, using APIs provided by Twilio, Weather Underground and Ninja Blocks to help you control your home and check the current conditions, headlines and what’s making news, and more, all just by dialing a number from any telephone and issuing voice commands, It’s like a Siri, but housed on Windows Azure and able to plug into a lot more functionality.”

Uh huh. A Jarvis. Voice control. Public APIs. What could possibly go wrong?

Will my hunch play out? Check back here in May 2016. My money is on a story about a rising infosec star who is demonstrating how home APIs can be misused.

Out and About: Great Lakes InfraGard Conference


I am presenting a breakout session at this year’s Great Lakes InfraGard Conference. Hope to see you there.

 

Securing Financial Services Data Across The Cloud: A Case Study

We came from stock tickers, paper orders, armored vehicles, and guarded vaults. We moved to data bursts, virtual private networks, and protocols like Financial Information eXchange (FIX). While our objective remains the same, to protect the organization and its financial transactions, our methods and technologies have radically shifted. Looking back is not going to protect us.

This session presents a case study of a financial services firm that modernized its secure data exchange. The story begins with the environment that was developed in the previous decade. We then look at high-level threat modeling and the resulting architectural decisions. A security-focused architecture works at several layers, and this talk explores them in depth, including Internet connections, firewalls, perimeters, hardened operating systems, encryption, data integration, and data warehousing. The case study concludes with how the firm transformed the infrastructure, layer by layer, protocol by protocol, until it was left with a modern, efficient, and security-focused architecture. After all, nostalgia has no place in financial services data security.