Archive for the ‘Risk Management’ Category

Learning the wrong lesson from DigiNotar

DigiNotar declared bankruptcy this week, following a high profile attack that led to malicious certificates being issued. Some five hundred certificates were issued, for everything from Google, to Twitter, to Microsoft, to the entire *.com and *.org namespaces. Major browsers quickly removed DigiNotar’s root from their trust stores, thus protecting folks from these rogue certificates. And then DigiNotar was no more.

People are already saying this proves that IT security breaches put companies out of business.

I believe that is the wrong lesson.

Let’s take four companies with high profile breaches: DigiNotar, Distribute.IT, Sony, and TJX. DigiNotar went bankrupt. Distribute.IT? Shuttered. Sony is back to business (handling it with an update to their SLA). TJX is unaffected.

So why did TJX survive? At first, this does not make much sense. But consider the attack as it relates to impact to the organization’s mission.

TJX is in retail and has reasonably deep pockets. The attack did not so much as ruffle its ability to sell product. Save for a dip during the fallout from the attack, TJX did not suffer economic harm.

Sony is in the business of providing access to its services. Though the attack was not necessarily about availability, it severely affected Sony’s ability to reach the customer. They have deep pockets, however, and are making their way back. The updated service level agreement and terms and conditions are meant to minimize the cost exposure of future breaches.

Distribute.IT was in the hosting business. Their job was to keep other companies’ sites online, available, and protected. The attack was an availability attack that was made worse by the mismanagement of data backups. Distribute.IT, without the cash reserves and without any means to get back to business, was dead in the water.

The attack on DigiNotar struck right at the heart of their business. The mission of a certificate authority is to safeguard certificates and ensure issuance only to legitimate entities. We are talking about reliability and authenticity attacks against a company that markets a reliable and authentic security service. Further, due to DigiNotar’s limited reach (fewer than 2% of SSL hosts), there was little risk for the browser makers to remove DigiNotar’s root.

The lesson here is that security controls must be framed within the context of the organization’s mission. Breaches can be weathered if the impact is low or falls in an area outside the core mission. Security breaches only put companies out of business when controls are not geared to the organization’s mission and the financial impact is serious.

Impact driven risk management and business continuity

InfoSec management has hit the perfect storm. At the same time IT has exploded, budgets have imploded. Our IT environments are growing due to consumerization, cloud computing, and sprawl. Our teams and budgets are shrinking due to the economic downturn. We are in an even tougher spot today than we were in 2008, when I began talking about the importance of driving information security from risk management.

We still have the same fundamental problems. How can we pitch the idea to our executive team? How can we secure this growing IT environment, when it is next to impossible to know what any one piece of it means to the business?

I have been driving my efforts from impact. We cannot defend what we do not understand. By mapping IT to the business activities it enables, and by performing an impact analysis, we can understand what all the stuff we are responsible for actually does. We can then tune the cost of security controls around the value of the IT assets.
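As a rough illustration of what that mapping can look like, here is a minimal sketch in Python. The asset names, business activities, and dollar figures are hypothetical; the point is simply to tie each piece of IT to the business activity it enables and rank by impact.

    # Minimal sketch of impact-driven asset mapping (hypothetical names and figures).
    # Each IT asset is tied to the business activity it enables, with rough estimates
    # of the hourly revenue it enables and the hourly cost of its downtime.
    assets = [
        {"asset": "order-entry-db", "activity": "online sales",
         "revenue_per_hour": 12000, "downtime_cost_per_hour": 15000},
        {"asset": "hr-portal", "activity": "payroll processing",
         "revenue_per_hour": 0, "downtime_cost_per_hour": 800},
    ]

    def impact(asset):
        # Relative value: what the business earns when it is up plus what it pays when it is down.
        return asset["revenue_per_hour"] + asset["downtime_cost_per_hour"]

    # Spend security dollars where the impact is highest.
    for a in sorted(assets, key=impact, reverse=True):
        print(f'{a["asset"]:15} enables {a["activity"]:20} impact/hour = {impact(a)}')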

Business continuity and risk management then flow naturally from this deeper understanding of our organization and how IT enables our organization to fulfill its mission.

These are #secbiz — or security in the business — concerns. I’d argue that impact driven risk management is a #secbiz approach. I made that argument today at a conference in Grand Rapids. I posted a copy of my talk online.

How asteroids falling from the sky improve security (GrrCon 2011)
http://blip.tv/j-wolfgang-goerlich/how-asteroids-falling-from-the-sky-improve-security-grrcon-2011-5561145

The next steps that I gave in my talk are continuing the conversation on Twitter under #sira and #secbiz, as well as joining the Society of Information Risk Analysts (SIRA) and the SecBiz group on LinkedIn. Please send some feedback my way. Is this a good approach? What would make it better?

jwg

Note: This is the same talk. However, due to technical difficulties, the recording online is not the same as the one I gave in person. Same message, different outtake.

Building a vulnerability management program

Vulnerability management is one of the components of risk management. (The other two are asset management and threat management.) It is more than just keeping up with Microsoft Patch Tuesday. First, the scope should include all your applications, operating systems, networking gear, and network architecture. Second, the process should include regular vulnerability assessments. And of course vulnerability management is an ongoing concern. What is secured today will be broken tomorrow.
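To make the scope point concrete, here is a minimal sketch of how a recurring assessment cycle might track findings across applications, operating systems, and networking gear. The assets, findings, and dates are hypothetical.

    # Minimal sketch of a vulnerability tracking cycle (hypothetical data).
    # Scope covers applications, operating systems, and networking gear, not just patches.
    from datetime import date

    findings = [
        {"asset": "edge-firewall", "layer": "network", "issue": "permissive any-any rule",
         "severity": 9, "found": date(2011, 9, 1), "remediated": None},
        {"asset": "crm-app", "layer": "application", "issue": "SQL injection in search",
         "severity": 8, "found": date(2011, 8, 15), "remediated": date(2011, 9, 5)},
        {"asset": "file-server", "layer": "operating system", "issue": "missing security patch",
         "severity": 7, "found": date(2011, 9, 10), "remediated": None},
    ]

    # Each assessment cycle: review what is still open, highest severity first.
    open_findings = [f for f in findings if f["remediated"] is None]
    for f in sorted(open_findings, key=lambda item: item["severity"], reverse=True):
        print(f'{f["severity"]} {f["asset"]:15} ({f["layer"]}): {f["issue"]}')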

Where to start? Diana Kelley has an in-depth article on building a vulnerability management program on SearchSecurity.com. “We will present a framework for building a vulnerability management life-cycle. Using examples from practitioners, you will get a from-the-trenches view of what works and what doesn’t when trying to win the ongoing vulnerability management war.”

Framework for building a vulnerability management life-cycle program:
http://searchsecurity.techtarget.com/magazineContent/Framework-for-building-a-vulnerability-management-lifecycle-program

I am mentioned a few times in the article. If you are a regular reader of this blog, you will recognize my themes. Start small and bootstrap your way to success. Bake security in: during evaluation and selection, during implementation, during operation, and during retirement. Integrate IT security with IT operations to reduce costs and increase security.

Ready to build a vulnerability management program? Definitely check out Diana Kelley’s piece. She lays it all out in a logical fashion.

Out and About: GrrCon

I will be out at the GrrCON conference in Grand Rapids on Friday, September 16. I am giving a session in the InfoSec Management track. The topic is on business continuity planning and risk management. Sound boring? No worries. There is an amazing line up of speakers covering a wide range of topics. Hope to see you there.

How asteroids falling from the sky improve security
An asteroid fell from the sky and the data center is now a smoking crater. At least, that’s the scenario that launches your business continuity planning. BCP asks the questions: what do we have, what does it do, what is the risk, and what is the value? The answers to these questions are also essential building blocks of a risk management program. This presents an opportunity for the savvy information security professional. In this session, we will look at ways to co-opt business continuity to advance an organization’s information security.
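Those four BCP questions map directly onto the data a risk program needs. A minimal sketch, with hypothetical systems and figures:

    # Minimal sketch: the same inventory answers both the BCP and risk management questions.
    # Columns: what do we have, what does it do, what is the risk, what is the value ($/day).
    inventory = [
        ("mail cluster", "customer communications", "single data center", 20000),
        ("erp system",   "order fulfillment",       "no tested restore",  75000),
        ("build server", "software releases",       "aging hardware",      5000),
    ]

    # The BCP answers double as the asset and impact inputs of a risk management program.
    for system, purpose, risk, value in sorted(inventory, key=lambda row: row[3], reverse=True):
        print(f"{system:13} | {purpose:25} | {risk:20} | ${value:,}/day")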

Hello SSAE16

As mentioned in my last post on the subject, SAS70 has officially retired. SSAE16 (Statement on Standards for Attestation Engagements No. 16) has taken its place and improves upon SAS70 in several ways.

The improvements come from a shift of focus. SAS70 was about ensuring your control framework was sufficient and functional. SSAE16 is about ensuring your systems — including deployment and controls — are sufficient and functional. SAS70 was focused on the control structure around specific threats. SSAE16 incorporates risk management and ties together risks, threats, and controls. Going into this, SSAE16 looks set to provide a more complete result.

Another improvement is in the shift from audit to attestation. SOX (Sarbanes-Oxley Act of 2002) requires executive management to attest in writing to the accuracy of the financial statements. By comparison, SAS70 did not require attestation. SSAE16 supports SOX by requiring a written attestation of the audit’s accuracy from executive management.

SSAE16 should provide a holistic audit with greater executive management participation. This is the first year I have done one, and my audit period begins in a couple months.

Wish me luck!

Goodbye SAS70

SAS70 officially retires today, June 15, 2011. Taking its place is SSAE16.

SAS70 (Statement on Auditing Standards No. 70) is an audit framework that external parties follow to check the state of your controls. The audit is performed by financial services firms, and takes a top down approach. The objective is to ensure that financial results are recorded and reported accurately.

SAS70 has a few common complaints: it lacks an objective technical spec, it is carried out by CPAs at accounting firms, and it misses technical details that leave businesses open to attack.

The SAS70 process emphasizes a truism that IT security folks sometimes lose sight of: the goal is securing the business’s ability to perform in the market. Though related, this is separate from the goal of securing all IT systems.

The SAS70 audit is top down and focused primarily on what drives the financial reporting. It is about prioritization. What is the top priority to a business? Financial success backed by accurate financial reporting.

A vulnerability assessment is bottom up. A complete security audit from this perspective would focus primarily on the IT domain, emphasizing technical controls and technical implementation. An audit here would tell you about your firewall ruleset and patching state, for example. What is the top priority for an IT security team? To not get breached.

These two priorities are not the same. Financial success does not prevent security breaches. Likewise, security breaches do not preclude financial success. Therefore, it makes sense to have separate auditor teams looking at the two separately.

As to the complaints about SAS70 audits, let’s step through them with this background. First, there is no objective standard written into the SAS70 language. The result is that the applied standard is fluid and keeps up with the current standard of practice. Given SAS70 has been around for nineteen years, I think this speaks to the benefit of having an open-ended standard. Second, CPA firms rather than technology firms perform the audit. The benefit is that the resulting audit is driven from a financial perspective and scoped accordingly. The folks I have worked with are very knowledgeable and computer savvy, and often carry a CISA or CISSP along with their CPA.

I found SAS70 to be a valuable tool for a top-down control assessment. As with all these standards, pairing SAS70 with bottom-up technical assessments is necessary to truly secure an environment. SAS70 had a positive impact on the industry, and I believe SSAE16 is set to do the same.

Internet kill switches

January 27: the day the Internet died.

Protests have been ongoing this week in Egypt. There has been a significant amount of press coverage of the political situation. The riots began on January 25 and then, on January 27, President Hosni Mubarak unplugged Egypt from the Internet.

The reason given for going dark was that the rioters were using the Web and Internet to coordinate. This makes sense, as we have seen Twitter and other social media sites used in recent unrest. The main concern for InfoSec is the precedent that Mubarak has set.

Will other countries, faced with similar situations, choose to unplug? It seems likely. For example, at the same time Egypt was unplugged, the U.S. re-introduced the “Internet kill switch” bill. (Read the bill at Thomas or see Wired’s kill switch coverage.) Of course, killing the Internet will have economic repercussions.

And that’s what I am thinking about today. Should the Internet be disabled, how would my firm continue to do business? How would we send and receive communications? In terms of InfoSec and engineering, what mitigations could be deployed for this risk?

J Wolfgang Goerlich


How did they do it? Both Ars Technica and Wired have articles on the technical aspects of unplugging. How are people coping? The old standby is modem dialup, although some are phoning or faxing information out to contacts who post it to the Internet for them. Wired also posted a wiki on how to communicate if your government shuts off your Internet.

Insurance

In InfoSec risk management, one area that does not get much press is risk transference. That is, using insurance (or agreements) to transfer the risk to a third party. Brian Krebs makes the case, anecdotally, on his blog.

After an incident in which the attackers raided a company’s bank account for $750K, “The company managed to recover three of the fraudulent transactions, and its total loss now stands at just shy of $100,000. Golden State Bridge is confident that after paying its $10,000 deductible, the insurance company will cover the rest…”

http://krebsonsecurity.com/2010/06/the-case-for-cybersecurity-insurance-part-i/

How to gracefully lose control over computing assets

Cloud transition is about how to gracefully lose control over computing assets.

The post referenced below, from the Verizon security blog, is a good article. It traces the history of security from the military-minded security pros of yesterday, to the risk management security pros of today, to the great unknown of tomorrow. Given information security is about guarding information assets, InfoSec may shift toward vendor management and away from technological prowess. “For example, even in the case of stuff covered by compliance (you know, that critical Confidentiality stuff we’d never move to the Cloud), vendors will be quick to sell certified solutions (we’re already seeing this, actually).”

“Now in addition to worrying about measuring things like control effectiveness, A/V coverage, and risk, we’re going to have to understand things like: what level of Governance information are we going to require from which vendors? Once we have that Governance information, what are we going to actually do with it in order to make decisions?”


Referencing a post from https://securityblog.verizonenterprise.com/

Risk Management is prevention and Security Information Management is detection

Risk Management (RM) is comprised of asset management, threat management, and vulnerability management. Asset management includes tying IT equipment to business processes. Asset management also includes performing an impact analysis to determine the relative value of the equipment, based upon what the business would pay if the equipment were unavailable and what the business would earn if the equipment were available. Threat management includes determining threat agents (the who) and threats (the what). For example, a disgruntled employee (threat agent) performs unauthorized physical access (threat 1) to sabotage equipment (threat 2). Vulnerability management is auditing, identifying, and remediating vulnerabilities in the IT hardware, software, and architecture. Risk management then tracks assets, threats, and vulnerabilities at a high level by scoring on priority (Risk = Asset * Threat * Vulnerability) and scoring on exposure (Risk = Likelihood * Impact).
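A minimal sketch of those two scoring formulas, using made-up scores on a one-to-five scale:

    # Minimal sketch of the two risk scores described above (made-up 1-5 scores).
    def priority_score(asset, threat, vulnerability):
        # Risk = Asset * Threat * Vulnerability
        return asset * threat * vulnerability

    def exposure_score(likelihood, impact):
        # Risk = Likelihood * Impact
        return likelihood * impact

    # Example: the disgruntled employee sabotaging equipment behind an unlocked door.
    print(priority_score(asset=5, threat=3, vulnerability=4))  # 60
    print(exposure_score(likelihood=2, impact=5))              # 10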

Once prioritized, we can then move on to determining controls to reduce the risk. Controls can be divided into three broad methods: administrative (or management), operational, and technical. Preventative and detective are the two main forms of controls. Preventative controls stop the threat agent from taking advantage of the threat. In the above example, a preventative control would be a locked door. Detective controls track violations and provide a warning system. For the disgruntled employee entering an unauthorized area, a detective control would be something like a motion detector. The resulting control matrix includes management preventative controls, management detective controls, operational preventative and detective controls, and so on for technical controls.
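The matrix itself can be sketched as a simple table of methods by forms. The example controls below are illustrative, not prescriptive:

    # Illustrative control matrix: methods (rows) by forms (columns).
    control_matrix = {
        "management":  {"preventative": "hiring and termination policy",
                        "detective":    "periodic access reviews"},
        "operational": {"preventative": "locked door to the equipment room",
                        "detective":    "motion detectors and camera logs"},
        "technical":   {"preventative": "badge-controlled door system",
                        "detective":    "alerts on after-hours badge use"},
    }

    for method, forms in control_matrix.items():
        for form, example in forms.items():
            print(f"{method:11} {form:12} {example}")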

Security Information Management (SIM) is a technical detective control that is comprised of event monitoring and pattern detection. Event monitoring shows what happened, when, and where, from both the network and the computer perspectives. Pattern detection is then applied to look for known attacks or unknown anomalies. The challenge an InfoSec guy faces is that there are just too many events and too many attacks to perform this analysis manually. The purpose of a SIM is to aggregate all the detective controls from various parts of the network, automate the analysis, and roll it up into one single console.
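At its simplest, the aggregation and pattern detection a SIM automates looks something like the following. The event feed and the detection rule are hypothetical:

    # Minimal sketch of what a SIM automates: aggregate events, then flag known patterns.
    from collections import Counter

    events = [
        {"source": "badge-reader", "user": "jdoe",   "action": "entry", "hour": 2},
        {"source": "badge-reader", "user": "jdoe",   "action": "entry", "hour": 3},
        {"source": "firewall",     "user": None,     "action": "deny",  "hour": 3},
        {"source": "badge-reader", "user": "asmith", "action": "entry", "hour": 9},
    ]

    # Pattern: repeated after-hours physical entries by the same user.
    after_hours = [e for e in events if e["action"] == "entry" and not 6 <= e["hour"] <= 18]
    counts = Counter(e["user"] for e in after_hours)
    alerts = [user for user, count in counts.items() if count >= 2]
    print("after-hours entry alerts:", alerts)  # ['jdoe']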

My approach to managing security for a business network is to start top down with Risk Management. This allows me to prioritize my efforts for preventative controls. My team and I can then dig deep into the security options and system parameters offered by the IT equipment that is driving the business. For all other systems, I rely on detective controls summarized by a Security Information Management tool.

In my network architecture, RM drives preventative controls and SIM drives detective controls.