Archive for September, 2011

Private clouds, public clouds, and car repair

I am getting some work done on one of my cars. I never have any time. I rarely have any patience. And occasionally, I have car troubles. So into the dealership I go.

Every time, I hear from my car-savvy friends and coworkers. The dealership takes too long. The dealership costs too much. If there is anything custom or unique about your vehicle, it throws the dealership for a loop.

Sure. Doing it yourself can be faster and cheaper. But only if you have the time, tools, and training. Lacking any of these three, the dealership wins hands down. If you are like me, then you have no time and no tools more complex than pliers and a four-bit screwdriver set.

What does this have to do with cloud computing? Well, it provides a good metaphor for businesses and their IT.

Some businesses have built excellent IT teams. Their teams have the time to bring services online and to enable new business functionality. These are the businesses that equip their IT teams with the tools and provide the training. Hands down, no questions asked, these teams will deliver solutions with higher quality, in less time, and for less cost.

Other businesses have neglected IT. These are the teams that are told to keep the lights on and maintain dial-tone. Their IT systems are outdated. Possibly their personnel have outdated skillsets. It makes as much sense for these internal IT teams to take on infrastructure projects as it does for me to swap out my transmission. The costs, effort, and frustration will be higher. The quality? Lower.

These are two ends of the spectrum, of course. Most IT teams are a mix. They are strong in some areas, and weak in others.

I suggest we play to our strengths. Businesses look to enable new functionality. As with car repairs, we can step back and consider: does our team have the time, tools, and training in this area? What will bring higher quality at lower cost? That is the way to decide the build-versus-buy and private-cloud-versus-public-cloud questions.

Learning the wrong lesson from DigiNotar

DigiNotar declared bankruptcy this week, following a high profile attack that led to fraudulent certificates being issued. Some five hundred certificates were issued, for everything from Google and Twitter to Microsoft, and even the entire *.com and *.org namespaces. Major browsers quickly removed DigiNotar's root from their trust chains, thus protecting folks from these rogue certificates. And then DigiNotar was no more.

People are already saying this proves that IT security breaches put companies out of business.

I believe that is the wrong lesson.

Let's take four companies with high profile breaches: DigiNotar, Distribute.IT, Sony, and TJX. DigiNotar went bankrupt. Distribute.IT? Shuttered. Sony is back to business (handling it with an update to their SLA). TJX is unaffected.

So why did TJX survive? At first, this does not make much sense. But consider each attack in terms of its impact on the organization's mission.

TJX is in retail and has reasonably deep pockets. The attack did not so much as ruffle its ability to sell product. Save for a dip during the fallout from the attack, TJX did not suffer economic harm.

Sony is in the business of providing access to its services. Though the attack was not necessarily about availability, it severely affected Sony's ability to reach its customers. Sony has deep pockets, however, and is making its way back. The reasoning behind the updated service level agreement and terms and conditions is to minimize cost exposure from future breaches.

Distribute.IT was in the hosting business. Its job was to keep other companies' sites online, available, and protected. The attack was an availability attack, made worse by mismanagement of data backups. Distribute.IT, without cash reserves and without any means to get back to business, was dead in the water.

The attack on DigiNotar struck right at the heart of its business. The mission of a certificate authority is to safeguard certificates and ensure issuance only to legitimate entities. We are talking about reliability and authenticity attacks against a company that markets a reliable and authentic security service. Further, given DigiNotar's limited reach (fewer than 2% of SSL hosts), there was little risk to the browser makers in removing its root.

The lesson here is that security controls must be framed within the context of the organization's mission. Breaches can be weathered if the impact is low or falls outside the core mission. Security breaches put companies out of business only when controls are not appropriately geared to the organization's mission and the financial impact is serious.

Cost justifying 10 GbE networking for Hyper-V

SearchSMBStorage.com has an article on 10 GbE. My team gets a mention. The link is below and on my Press mentions page.

For J. Wolfgang Goerlich, an IT professional at a 200-employee financial services company, making the switch to 10 Gigabit Ethernet (10 GbE) was a straightforward process. “Like many firms, we have a three-year technology refresh cycle. And last year, with a big push for private cloud, we looked at many things and decided 10 GbE would be an important enabler for those increased bandwidth needs.”

10 Gigabit Ethernet technology: A viable option for SMBs?
http://searchsmbstorage.techtarget.com/news/2240079428/10-Gigabit-Ethernet-technology-A-viable-option-for-SMBs

My team built a Hyper-V grid in 2007-2008 that worked rather nicely at 1 Gbps speeds. We assumed 80% capacity on a network link, a density of 4:1, and an average of 20% (~200 Mbps) per VM. In operation, the spec held close. We had a "server as a Frisbee" model that meant non-redundant networking. This wasn't a concern because if a Hyper-V host failed (3% per year), it only impacted up to four VMs (2% of the environment) for about a minute.

When designing the new Hyper-V grid in 2010, we realized this bandwidth was no longer going to cut it. Our working density is 12:1 and our usable density is 40:1. That means 2.4 Gbps to 8 Gbps per node. Our 2010 model is "fewer pieces, higher reliability," and that translates into redundant network links. Redundancy matters all the more now that a single link failure would impact a good portion of our servers (10-15%).
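
For the curious, here is that sizing math as a minimal sketch, assuming only the figures quoted in this post (~200 Mbps average per VM, links planned at 80% of line rate). The helper functions are illustrative, not anything we ran in production:

```python
# Back-of-the-napkin bandwidth sizing for a Hyper-V node.
# Assumptions from this post: ~200 Mbps average per VM, links loaded to 80% capacity.
import math

AVG_MBPS_PER_VM = 200   # observed average per virtual machine
LINK_HEADROOM = 0.80    # load a link to no more than 80% of line rate

def node_bandwidth_mbps(vm_density: int) -> int:
    """Aggregate bandwidth a node must carry at a given VM density."""
    return vm_density * AVG_MBPS_PER_VM

def links_needed(vm_density: int, link_gbps: int) -> int:
    """Links of a given speed needed to carry that load at 80% capacity."""
    usable_mbps = link_gbps * 1000 * LINK_HEADROOM
    return math.ceil(node_bandwidth_mbps(vm_density) / usable_mbps)

for density in (4, 12, 40):
    print(f"{density}:1 -> {node_bandwidth_mbps(density) / 1000:.1f} Gbps/node, "
          f"{links_needed(density, 1)} x 1 GbE or {links_needed(density, 10)} x 10 GbE")
# 4:1 -> 0.8 Gbps; 12:1 -> 2.4 Gbps; 40:1 -> 8.0 Gbps (ten 1 GbE links, or one 10 GbE)
```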

Let's do a quick back-of-the-napkin sketch. Traditional 1 Gbps Ethernet would require 10 primary and 10 secondary Ethernet connections per host. That's ten dual-port 1 Gbps adapters: 10 x $250 = $2,500. That's twenty 1 Gbps switch ports: 20 x $105 = $2,100. Then there's the time and materials cost for cabling all that up; let's call that $500. By contrast, one dual-port 10 GbE adapter is $700. We need two 10 GbE switch ports: 2 x $930 = $1,860. And we need two cables ($120 each) plus installation; let's call that $400.

The total cost per Hyper-V host for 10 GbE is $2,960. Compared to the cost of 1 Gbps ($5,100), we are looking at a savings of $2,140. For higher density Hyper-V grids, 10 GbE is easily cost justified.
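
The same napkin math in code, using the unit prices quoted above (2011-era figures from this post, not current quotes):

```python
# Per-host network cost: redundant 1 GbE vs. redundant 10 GbE (2011 prices).

# 1 GbE: 10 primary + 10 secondary links per host
gbe1_adapters = 10 * 250      # ten dual-port 1 Gbps adapters
gbe1_ports    = 20 * 105      # twenty switch ports
gbe1_cabling  = 500           # time and materials estimate
gbe1_total    = gbe1_adapters + gbe1_ports + gbe1_cabling

# 10 GbE: one dual-port adapter, two links per host
gbe10_adapter = 700           # one dual-port 10 GbE adapter
gbe10_ports   = 2 * 930       # two switch ports
gbe10_cabling = 400           # two cables at $120 each, plus installation
gbe10_total   = gbe10_adapter + gbe10_ports + gbe10_cabling

print(f"1 GbE per host:  ${gbe1_total:,}")                 # $5,100
print(f"10 GbE per host: ${gbe10_total:,}")                # $2,960
print(f"Savings:         ${gbe1_total - gbe10_total:,}")   # $2,140
```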

It took some engineering and re-organizing. We have been able to squeeze quite a bit of functionality and performance from the new technology. Cost savings plus enhancements? Win.

Six notebooks, three controls, and a third of a presentation

Protecting the organization's ability to execute on its mission should be the driver for security controls. At the same time I was giving that message, a series of events reinforced the need for focus.

Here is the tale: the backstory of my GrrCon talk. It is a tale of six notebooks. It is a tale of six security pros. And it is a tale of security being out of sync with mission.

Notebook one. My help desk provided me with a travel notebook, which I loaded up with my slide deck. I also made a copy on a USB flash drive, just in case. At the last minute, I decided to leave the notebook at the hotel room. After all, I thought, this was a hacker con. Did I want to expose the notebook to that risk? No, I decided, and opted for a little physical security.

Notebook two. Notebook two turned out not to be a notebook at all. See, the conferences I had spoken at before provided a notebook at the podium, loaded with the slides. I arrived early, checked the room, tried the mic and the notebook. Looked good, I thought. I later learned that the con had not provided notebooks. Why? "Um, Wolf, this is a hacker con." Right. Physical security.

Notebook three. Turns out that what I thought was a con provided notebook was actually the speaker before me. She packed up, and I realized the mistake. Too late to return to the hotel now. I was on deck.

Notebook four. Infosec_Rogue and the #misec crew came to my rescue. Infosec_Rogue could not read my USB drive, of course, because his OS was locked down. (Good method of avoiding USB malware, btw. I lock USB down on all my Windows 7/2008 computers.) So we passed to the next notebook. OS security.
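
The post does not spell out the lock-down mechanism, but one common way to lock down USB storage on Windows 7/2008 is disabling the USB mass storage driver in the registry. Here is a minimal sketch with Python's standard winreg module, assuming that approach; in practice, Group Policy is the usual delivery vehicle:

```python
# Disable the USB mass storage driver on Windows by setting its Start value to 4
# (SERVICE_DISABLED). Run as Administrator on the target machine. A sketch only;
# Group Policy is the usual way to push this setting fleet-wide.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
SERVICE_DISABLED = 4

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, SERVICE_DISABLED)

print("USB mass storage driver disabled; devices will not mount until re-enabled.")
```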

By this point, I had started in on my presentation. I apologize for not catching the names of the other folks that pitched in.

Notebook five. This fellow could read and copy my slides. Being a reasonably paranoid security guy, however, he had locked down his OpenOffice. We have the slides! But we cannot show the slides. App security.

Notebook six. Copying the files from notebook five to a USB drive that could be read on notebook six, we were able to get the slides onto a computer with Office 2007. Bingo. We are in business. About a third of the way into my deck, my slides caught up with me. Score!

It was a funny but powerful reminder. The control environment: physical security; OS security by means of driver lock-down; application security by means of locking down OpenOffice. The impact to the mission: I gave a third of my talk with no slides. This was a talk on gearing security controls to the organization's mission. Hmm, irony much?

When I get back to the office, I am taking a hard look for security controls that get in the way of people getting their work done.

Cheers,

Wolfgang

Once again, thank you to the #misec crew for helping me out. You guys rock.

Impact driven risk management and business continuity

InfoSec management has hit the perfect storm. At the same time IT has exploded, budgets have imploded. Our IT environments are growing due to consumerization, cloud computing, and sprawl. Our teams and budgets are shrinking due to the economic downturn. We are in an even tougher spot today than we were in 2008, when I began talking about the importance of driving information security from risk management.

We still have the same fundamental problems. How can we pitch the idea to our executive team? How can we secure this growing IT environment, when it is next to impossible to know what any one piece of it means to the business?

I have been driving my efforts from impact. We cannot defend what we do not understand. By mapping IT to the business activities it enables, and by performing an impact analysis, we can understand what all the stuff we are responsible for actually does. We can then tune the cost of security controls around the value of the IT assets.
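
As a toy illustration of tuning control spend to asset value, here is a sketch. Every asset, impact figure, and control cost below is hypothetical, invented purely to show the shape of the exercise:

```python
# Toy impact-driven prioritization: map each IT asset to the business activity
# it enables, estimate the annualized impact if it is lost, and flag controls
# whose annual cost is out of proportion to the value protected. All figures invented.

assets = [
    # (asset, business activity enabled, annualized impact $, annual control cost $)
    ("order-entry app", "sales",          500_000, 40_000),
    ("intranet wiki",   "internal comms",  10_000, 25_000),
    ("backup system",   "all activities", 750_000, 30_000),
]

for name, activity, impact, control_cost in sorted(assets, key=lambda a: -a[2]):
    verdict = "re-evaluate" if control_cost > impact else "proportionate"
    print(f"{name:15} enables {activity:14} impact ${impact:>9,} "
          f"controls ${control_cost:>7,} -> {verdict}")
```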

Business continuity and risk management then flow naturally from this deeper understanding of our organization and how IT enables our organization to fulfill its mission.

These are #secbiz — or security in the business — concerns. I’d argue that impact driven risk management is a #secbiz approach. I made that argument today at a conference in Grand Rapids. I posted a copy of my talk online.

How asteroids falling from the sky improve security (GrrCon 2011)
http://blip.tv/j-wolfgang-goerlich/how-asteroids-falling-from-the-sky-improve-security-grrcon-2011-5561145

The next steps that I gave in my talk are continuing the conversation on Twitter under #sira and #secbiz, as well as joining the Society of Information Risk Analysts (SIRA) and the SecBiz group on LinkedIn. Please send some feedback my way. Is this a good approach? What would make it better?

jwg

Note: This is the same talk. However, due to technical difficulties, the online recording is not the one I gave in person. Same message, different take.

Cloud Security Alliance in SE Michigan

We kicked off a new Cloud Security Alliance (CSA) chapter in Detroit this morning. The new chapter will serve Southeastern Michigan. While some groups are geared toward socializing and networking, CSA SE MI looks to distinguish itself by actively working on projects. With my private cloud operational and my eye on public cloud offerings, I am excited to contribute to the body of knowledge.

Watch this space for more to come on securing public and private clouds.

Grand Rapids on Friday (GrrCon)

I will be in Grand Rapids this Friday, attending the information security conference GrrCon. I am speaking on business continuity and risk management in the executive management track. Meantime, there is a slate of excellent speakers covering everything from OS kernel attacks, SSL trust, and social engineering to hacking smart meters and airplanes.

Come out and see the conference. Drop me an email if you want to hook up.

Building a vulnerability management program

Vulnerability management is one of the components of risk management. (The other two are asset management and threat management.) It is more than just keeping up on Microsoft Patch Tuesday. First, the scope should include all your applications, operating systems, networking gear, and network architecture. Second, the process should include regular vulnerability assessments. And of course, vulnerability management is an ongoing concern: what is secured today will be broken tomorrow.
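
A minimal sketch of that scope and cadence as data; the asset names and the quarterly interval are assumptions for illustration, not a prescription:

```python
# Vulnerability management scope goes beyond OS patching, and assessments recur.
# Asset names and the 90-day interval here are hypothetical examples.
from datetime import date, timedelta

scope = {
    "applications":         ["erp", "crm", "intranet"],
    "operating systems":    ["windows 2008", "linux"],
    "networking gear":      ["core switches", "firewalls"],
    "network architecture": ["dmz segmentation", "vpn entry points"],
}

ASSESSMENT_INTERVAL = timedelta(days=90)  # e.g., quarterly vulnerability assessments

def next_assessment(last_run: date) -> date:
    """When the next recurring assessment is due after the last one."""
    return last_run + ASSESSMENT_INTERVAL

for category, items in scope.items():
    print(f"{category}: {', '.join(items)}")
print("next assessment due:", next_assessment(date(2011, 9, 1)))
```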

Where to start? Diana Kelley has an in-depth article on building a vulnerability management program on SearchSecurity.com. “We will present a framework for building a vulnerability management life-cycle. Using examples from practitioners, you will get a from-the-trenches view of what works and what doesn’t when trying to win the ongoing vulnerability management war.”

Framework for building a vulnerability management life-cycle program:
http://searchsecurity.techtarget.com/magazineContent/Framework-for-building-a-vulnerability-management-lifecycle-program

I am mentioned a few times in the article. If you are a regular reader of this blog, you will recognize my themes. Start small and bootstrap your way to success. Bake security in: during evaluation and selection, during implementation, during operation, and during retirement. Integrate IT security with IT operations to reduce costs and increase security.

Ready to build a vulnerability management program? Definitely check out Diana Kelley’s piece. She lays it all out in a logical fashion.