Archive for March, 2012

Cloud adoption and use

I am tremendously in favor of virtualization, a staunch proponent of cloud computing, and I’d automate my own life if I could. After all, we dedicated most of last year to investigating and piloting various cloud backup solutions. But take a peek at my infrastructure and you might be surprised.

Why is my team still running physical servers? Why are we using so few public resources? And tape, really?

I am not the only one who is a bit behind on rolling out the new technology. Check out this study that came out on Forbes this week. “The slower adoption of cloud … reflects a greater hesitancy … remain conservative about putting mission-critical and customer data on the cloud. Regulations … may explain much of this reluctance. The prevalence of long-established corporate data centers with legacy systems throughout the US and Europe … may be another factor. Accordingly, the study confirms that overcoming the fear of security risks remains the key to adopting and benefiting from cloud applications.”

I have a sense that cloud computing, in the IaaS sense, is roughly where virtualization was circa 2004. It is good for point solutions. Some firms are looking at it for development regions. Now folks are beginning to investigate cloud for disaster recovery. (See, for example, Mark Stanislav’s Cloud Disaster Recovery presentation.) These low-risk areas enable IT management to build competencies in the team. A next step would be moving out tier 3 apps. A few years after that, the mission-critical tier 1 apps will start to move. This will happen over the next five to eight years.

This logical progression gives the impression that I see everything moving to the cloud. As Ray DePena said this week, “Resist the cloud if you must, but know that it is inevitable.” I can see that. However inevitable cloud computing is, like virtualization, it does not fit all use cases.

Why are some servers still physical? In large part, it is due to legacy support. Some things cannot be virtualized and cannot be unplugged without incurring significant costs. In some cases, this choice is driven by the software vendor. Some support contracts still mandate that they cover only physical servers. Legacy and vendors aside, some servers went physical because the performance gains outweigh the drawbacks. Decisions, decisions.

The majority of my environment is virtualized and is managed as a private cloud. Even there, however, there are gaps. Some areas are not automated and fully managed due to project constraints. We simply have not gotten there yet. Other areas probably will never be automated. Given how infrequently an event occurs, and how little manual work is needed, it does not make sense at my scale to invest the time. This is a conscious decision on where it is appropriate to apply automation.

Why are we not using more public resources? Oh, I want to. Believe me. Now I am not keen on spending several weeks educating auditors until cloud reaches critical mass and the audit bodies catch up. But the real killer is cost. For stable systems, the economics do not make sense. The Forbes article points out that the drivers of public cloud are “speed and agility — not cost-cutting.” My team spent ten months in 2011 trying to make the economics work for cloud backup. Fast forward half a year, and we are still on tape. It is an informed decision based on the current pricing models.

Is cloud inevitable? The progression of the technology most surely is, as is the adoption of the technology in areas where it makes sense. The adoption curve of virtualization gives us some insight into the future. Today, there are successful firms that still run solely on physical servers with direct attached storage. Come 2020, as inevitable as cloud computing is, it is equally inevitable that there will be successful firms still running on in-house IT.

Many firms, such as mine, will continue to use a variety of approaches to meet a variety of needs. Cloud computing is simply the latest tactic. The strategy is striking the right balance between usability, flexibility, security, and economics.

Wolfgang

Side note: If you do not already follow Ray DePena, you should. He is @RayDePena on Twitter and cloudbender.com on the Web.

90% of IT managers run their team into the ground

Many IT departments have a less than sterling reputation. Outages are frequent and costly. Projects are unpredictable and over budget.

A 2011 survey of small business owners found 77% had experienced downtime in the previous year that caused productivity to suffer. A Symantec study found SMBs average five outages a year, with a median loss of $14k a day. Larger organizations have larger impacts, of course: a Ponemon Institute study put data center outages at $5k a minute, with an average of 2.5 outages a year.

It is more than outages that hit the pocketbook. IT projects, those engines of value creation, are often at risk as well. In fact, a good 73% of IT management believes projects are doomed right from the get-go, with problems from the start usually or always. From the same survey, by Geneca, 80% of the IT teams surveyed spend at least half their time in rework. Another survey, by PM Solutions, finds that the average SMB company has $74m in at-risk projects yearly.

Ouch. PM Solutions lists five top causes of project failures. Curiously, these map very closely to the top causes of data center outages. From the ZDNet article:

  • Requirements: Unclear, lack of agreement, lack of priority, contradictory, ambiguous, imprecise.
  • Resources: Lack of resources, resource conflicts, turnover of key resources, poor planning.
  • Schedules: Too tight, unrealistic, overly optimistic.
  • Planning: Based on insufficient data, missing items, insufficient details, poor estimates.
  • Risks: Unidentified or assumed, not managed.

What’s wrong with the requirements? Quite likely, the person gathering the requirements and defining the scope did not have enough experience with the problem domain or knowledge of the solution domain. Put differently, the person didn’t quite get the industry they were in or the technology they were using. Resources and schedules? Same thing: people doing work that was not scoped out sufficiently, and perhaps subsequently getting burned out and leaving the firm. Identifying risks, providing good estimates, and producing quality results all require experienced and knowledgeable professionals.

Training and tools. It is all about getting the right folks, giving them the right training, and providing the right tools. Recognizing, too, that what is right today is wrong tomorrow. This is an ongoing process.

It is more than tools. When people ask how my team is able to run a complex infrastructure with 9 folks when it used to take 26, much of the emphasis is placed on virtualization and automation. That is appropriate insofar as good tools are a vital component of the strategy. However, giving me a stone hearth deck oven along with the utensils and ingredients puts me no closer to having a tasty artisan sandwich. That requires training and experience.

It was disappointing, therefore, to see the State of IT Skills survey that hit last week. “9 in 10 business managers see gaps in workers’ skill sets, yet organizations are more likely to outsource a task or hire someone new than invest in training an existing staff.” That does not fill me with confidence. I outsource a number of commodity services today. I have to tell you, I am rarely impressed by their support or maintenance. I call outsourcing McDonald’s IT for a reason. You get what you pay for.

90% of IT managers are running their departments into the ground. That is my take on the State of IT Skills survey. Projects continue to come in late and over budget. Outages continue to occur. The studies above are talking about tens of thousands, hundreds of thousands, even millions of dollars in losses. Perhaps at one time, IT departments could get away with such things. There was not as much riding on IT then, of course, and there was not much competition.

Today, IT is dial-tone. Today, competition is a telephone call away to the nearest cloud vendor. Tomorrow, if we continue on a path of equipping our teams for failure, perhaps only 10% of internal IT departments will remain. Those are the 1 in 10 IT managers who hire and retain the right people, and who train and equip their teams.

Out and About: GrrCon 2012

September 27 and 28, I will be out in Grand Rapids for the GrrCon conference. I am working on a fun little project using the .Net Framework to create covert channels, and then using the same tools along with OS controls to block and shut down those channels. Come on out, visit with the Grand Rapids folks, and enjoy a great conference.

Punch and Counter-punch with .Net Apps
Presentation Abstract: Alice wants to send a message to Bob. Not on our network, she won’t! Who are these people? Then Alice punches a hole in the OS to send the message using some .Net code. We punch back with Windows and .Net security configurations. Punch and counter-punch, breach and block, attack and defend, the attack goes on. With this as the back story, we will walk through sample .Net apps and Windows configurations that defenders use and attackers abuse. Short on slides and long on demo, this presentation will step through the latest in .Net application security.

Disabling SMTP verbs like TURN for PCI compliance

A friend of mine contacted me for advice on passing a PCI compliance audit. Apparently the auditor’s scans had detected that TURN and ETRN were enabled on the SMTP server. The auditors referenced CVE-1999-0512 and CVE-1999-0531. And indeed, enabling TURN does pose some security risk.

His concern was, of course, not passing the audit. Research online turned up recommendations for Exchange and other mail servers. But there did not appear to be any advice for standard Windows SMTP. What to do?

SMTP in Windows 2000, Windows 2003, and Windows 2008 is a component under IIS. All SMTP configuration is stored within the IIS Metabase. The node is SmtpInboundCommandSupportOptions and the property ID is 36998. The value is a simple 32-bit flag, so some math is required. Here are the component values for the flag (from Microsoft KB 257569):

DSN = 0x40 = 64
ETRN = 0x80 = 128
TURN/ATRN = 0x400 = 1024
ENHANCEDSTATUSCODES = 0x1000 = 4096
CHUNKING = 0x100000 = 1048576
BINARYMIME = 0x200000 = 2097152
8bitmime = 0x400000 = 4194304

So let’s say you want Enhanced Status Codes, Binary MIME, and 8-bit MIME enabled. The value would be 4096 (Enhanced Status Codes) + 2097152 (Binary MIME) + 4194304 (8-bit MIME) = 6295552, or 0x601000 in hexadecimal. Simply add up the values of the verbs that will be enabled and convert to hexadecimal.
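
If you would rather not do the hex math by hand, a quick script works just as well. Below is a minimal Python sketch of the same calculation; the flag names and values simply restate the KB 257569 table above.

# Minimal sketch: compute the SmtpInboundCommandSupportOptions bitmask.
# Flag values are from Microsoft KB 257569, as listed above.
FLAGS = {
    "DSN":                 0x40,
    "ETRN":                0x80,
    "TURN/ATRN":           0x400,
    "ENHANCEDSTATUSCODES": 0x1000,
    "CHUNKING":            0x100000,
    "BINARYMIME":          0x200000,
    "8BITMIME":            0x400000,
}

# Enable only Enhanced Status Codes, Binary MIME, and 8-bit MIME.
enabled = ["ENHANCEDSTATUSCODES", "BINARYMIME", "8BITMIME"]
value = sum(FLAGS[name] for name in enabled)

print(hex(value), value)  # prints: 0x601000 6295552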

To set the value on an SMTP service that is not running Exchange, use the IIS Metabase utility (Mdutil.exe). Select the path to the SMTP service (smtpsvc/ by default), enter the property ID (prop:36998), specify that the value is a 32-bit flag (dtype:DWORD), push the value down to all child nodes (attrib:INHERIT), and set the value of the enabled verbs.

The resulting command would be:

Mdutil.exe set -path:smtpsvc/ -prop:36998 -utype:UT_SERVER -dtype:DWORD -attrib:INHERIT -value:0x601000

Run the command to update the metabase, restart IIS and the SMTP service, and then retest. Only the enabled verbs will then appear. And that, hopefully, will put you in a better place. As my friend put it, “Thanks again for your help! I passed!”
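
For the retest, any EHLO client will show which extensions the server still advertises. As a rough illustration, here is a small Python sketch; the hostname mail.example.com is a placeholder for your own server.

# Rough sketch: list the ESMTP extensions a server advertises after EHLO.
# Replace mail.example.com with your SMTP server; port 25 is assumed.
import smtplib

server = smtplib.SMTP("mail.example.com", 25, timeout=10)
try:
    server.ehlo()
    # esmtp_features holds the extensions announced in the EHLO reply.
    for feature in sorted(server.esmtp_features):
        print(feature.upper())
    # TURN, ATRN, and ETRN should no longer appear in this list.
finally:
    server.quit()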

Regards,

Wolfgang

Post script: Including SMTP verbs in a PCI test, while new to me, apparently has been going on for some time. See this post from back in 2010:

Hello, Microsoft Group,

We have a few vulnerabilities on our servers. We have a PCI audit coming up, and they are asking us to upgrade the SMTP server.

All modern SMTP servers reject the TURN command for security reasons. Upgrade to a newer SMTP server version. You should also disable the ETRN and ATRN commands unless you have a good reason for using them.

The original SMTP specification described a “TURN” command that allows the roles of server and client to be reversed in a session. When a client issues the “TURN” command, the server “turns around” and sends any queued mail for that domain to the client, essentially treating the client as an SMTP server.

The “TURN” command is obsolete and insecure. It specifies no authentication mechanism, allowing a single user from a domain to retrieve all queued mail for that domain (for all users). Modern SMTP servers reject the “TURN” command for these reasons. A replacement for the “TURN” command, called “ETRN”, rectifies some of the security problems with “TURN”. However, this proposal is not without its own security problems.

How can I disable the ETRN and ATRN commands? Please help me on this. Thanks.

A clear competitive advantage

Local hamburger shops cannot compete head-to-head with McDonald’s. Clothing stores cannot compete head-to-head with Walmart. Local book stores? Yep. They, too, cannot compete head-to-head with Barnes and Noble. Makes sense.

Why compete head-to-head with cloud (IaaS, PaaS, SaaS) providers? *

Take a look at thriving local restaurants, groceries, shops, and book stores. What do they have in common? It is a single-minded dedication to customer service. These thriving businesses do not compete head-to-head. Rather, they carve their own niche within the market and create a monopoly within that niche.

The second step in managing an IT team is to define and carve that niche. This begins by having a very clear understanding of our firm’s industry, organization, business units, and professionals. How does the firm compete in the industry? How are the goods bought and sold? Who are the people on the critical path, and what can our IT team do to improve their abilities?

When was the last time an IT team discussed these questions? For me, it was about two weeks back. During a knowledge sharing meeting, a teammate reviewed how my firm’s products are bought and sold, and how the technology he was building impacted that process. How about your team?

No one can understand an organization and its people better than the internal IT team. McDonald’s does not make a custom crafted burger for you. Walmart cannot custom stock products just for your needs. The same goes for Barnes and Noble, and the slew of cloud providers tackling the commodity IT market. No one knows you like those closest to you.

The competitive advantage the IT team must cultivate and sustain is customer closeness. Know the people, know the business, know the industry. Then leverage commodity services while outcompeting on value-add services. Be the bistro to the cloud’s McDonald’s.

Wolfgang

* Now a good argument can be made that cloud providers do not actually cut costs. In particular, this seems apparent with IaaS providers. Check this out for yourself. Price out two servers that can host 16 VMs with HA. Now take your lease rate for, say, three years. Add in your price for power and hosting. Compare that price with your IaaS vendor of choice. I find IaaS for consistent loads to cost almost four times that of a DIY infrastructure. The DIY route does require a three-year lock-in, though, and cannot scale up and down with load demands. This finding gets to the “own the base and rent the spike” strategy.
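
To make the comparison concrete, here is a back-of-the-envelope sketch. Every number in it is a hypothetical placeholder chosen only to mirror the roughly four-to-one ratio I found; plug in your own lease rate, hosting costs, and vendor pricing.

# Back-of-the-envelope sketch of the DIY vs. IaaS comparison above.
# All prices are made-up placeholders; substitute your own figures.
MONTHS = 36  # three-year term

# DIY: two HA hosts running 16 VMs, leased, plus power and hosting.
diy_lease_per_month = 900.0   # hypothetical lease for both servers
diy_power_hosting = 400.0     # hypothetical power + colocation per month
diy_total = MONTHS * (diy_lease_per_month + diy_power_hosting)

# IaaS: 16 comparable instances billed monthly (hypothetical rate).
iaas_per_vm_month = 300.0
iaas_total = MONTHS * 16 * iaas_per_vm_month

print(f"DIY:  ${diy_total:,.0f}")               # DIY:  $46,800
print(f"IaaS: ${iaas_total:,.0f}")              # IaaS: $172,800
print(f"Ratio: {iaas_total / diy_total:.1f}x")  # Ratio: 3.7x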

Out and About: Stir Trek

This coming May 4, I will be out at the Stir Trek conference in Columbus, OH. Tickets go on sale today at 1:59 pm. (3/14 1:59 for Pi day, get it?) I hear the conference sold out last year within five days, so if you are joining us, act fast. Stir Trek is a unique developer conference in that it combines technology talks with a private screening of a movie. This year, it is The Avengers. Quite the event.

I am in the Cloud computing track and will be sharing my experiences on DevOps and private/public cloud computing. Hope to see you there.

Running DevOps on a Microsoft Cloud
You have heard the rumors. DevOps is this touchy-feely culture thing where the developers run cowboy over the infrastructure using open source tools. But what if you are running a Microsoft infrastructure? What if you are in a highly regulated industry, say like finance? And what if you need to show hard dollar savings to support culture changes? Forget the rumors. We have the facts. In this session, we will present how a Midwest investment firm implemented DevOps on a cloud computing model. The tool stack is SharePoint, SQL Server Business Intelligence, and System Center. Let’s get past the rumors and see how existing organizations are getting the most from DevOps and the cloud.

A clear value proposition statement

The first step in managing an IT team is to create a clear value proposition statement. This statement aligns the needs of the organization, the needs of the team, and the needs of the individual. The statement is then used to make downstream decisions, to identify ways to drive up benefits, and identify ways to drive down costs.

The value prop statement that I use is the unity of business value, team passion and interest, and team skills and knowledge. (Wikibon has a Venn diagram that illustrates this relationship here: DevOps — One Team, One System.) The nexus of these three areas is the hotspot where we focus our time and attention.

IT work in the hotspot either drives top-line growth or reduces costs on the bottom line. Put differently, business value means activities that enable other business units to drive revenue, enable other business units to cut costs, or enable my team to reduce the IT budget. That is the business value side of the equation.

On the passion and interest side, providing engaging work is essential. Check out last year’s salary survey from InformationWeek and the question “What matters most to you about your job?” The top answers: your opinion and knowledge are valued (40%), challenge of job/responsibility (39%), recognition for work well done (31%), your work is important to the company’s success (22%), ability to work with leading-edge technology (21%), and ability to work on creating “new,” innovative IT solutions (20%). The personal side of the equation is having a fulfilling and satisfying career.

In sum, the first place to start when managing a team — and the first place to start with this series — is developing a clear value proposition. This must delineate benefits from costs while aligning the team with the business. In my way of thinking, the benefits come from working within the hotspot. The costs come from working outside, in areas that are not driving business value, are not of interest, or are outside our skill-set. In the next few articles, I will return to this concept and explore it in more detail.

Wolfgang

Side note: I picked up on this value proposition in 2001 from Jim Collins’ book Good to Great. Definitely a must-read for building teams and organizations.

An emerging movement

Klint Finley on SiliconAngle put up a good article last week on what he calls the “Emerging Anti-Stupid Movement.” He quotes Linda Musthaler as saying, “There is a shortage of companies willing to invest in the training and development of enthusiastic and committed employees.” What is anti-stupid? Investing in your team to support and enable them in producing quality work.

Check out the article. Finley has some interesting stats, such as the average IT pro putting in 71 hours a week, and about a third of IT pros dissatisfied and looking for another job.

I talk a lot about how my team works. My management style comes from years of running small consulting teams. My style is people-centric and focuses on the value proposition.

I talk a lot about it. But as was recently pointed out to me, I do not write a lot about it. Practically nothing, actually. Over the next few weeks, I am going to do a series of Management Monday posts. Perhaps the time is right to build on this “Anti-Stupid Movement.”