Archive for the ‘Virtualization’ Category

Disposable end-point model

Posted by

One project in my portfolio at the moment is building what I call a disposable end-point model. It is a low-priority but ongoing project. The goal is to deliver the best user experience at the lowest price point.

Portability is a must. Think about the concerns over swine flu and the like. What is your pandemic plan? My pandemic plan, at least from a technology standpoint, is straightforward. People work from home over the VPN and run apps from Citrix. So the end-point devices must be portable and dual-use.

Yet traditional notebooks are expensive. My firm, like most, has an inventory of aging notebooks. These older computers are costly to maintain (studies show ~$1K per device every two years) and costly to replace if lost or stolen (studies show ~$50K per incident).

The sweet spot is computers that are cheaper than supporting aging devices and that are disposable if lost or stolen. No local data means no security incident, which erases the risk exposure of a stolen device. These inexpensive computers should be lightweight and easy to carry between office and home. So I am looking at netbooks, which run around $500.

I spoke with Jeff Vance of Datamation about these ideas. He recently wrote an excellent article that summarizes the netbook market and how data center managers are looking to use the devices: Will Desktop Virtualization and the Rise of Netbooks Kill the PC?

Virtualization and BCP Webinar Today

Posted by

How My Firm Reduced Costs and Delivered Agile IT Infrastructure through Virtualization

Virtualization has become an important part of many organizations’ IT strategy for 2009 and beyond.

The availability of both data and IT systems is at risk not only from natural disasters but also from power outages, human error, and hardware failures. To ensure that your company can quickly recover systems and data in the event of such an incident, it needs a reliable and cost-effective disaster recovery plan. Combining virtualization with data protection technology helps you control costs as your company grows, which is essential in any economic climate.

My firm has deployed CA ARCserve Backup and Microsoft Hyper-V Server 2008 to create a simple and scalable disaster recovery environment. The combined solution is responsible for backing up around 36 terabytes of data every week.

Join this webcast to find out how CA ARCserve Backup and Microsoft Hyper-V 2008 work in tandem to protect many terabytes of data and deliver an agile, cost-effective IT infrastructure.

In this webcast, you’ll also hear:

  • How to utilize CA ARCserve Backup to restore single files
  • How CA ARCserve can back up your physical environment and also restore virtual instances
  • Microsoft’s Virtualization Strategy
  • The role Microsoft’s Hyper-V plays today and what you can expect in the upcoming release of Windows Server 2008 R2
  • How, in the event of a disaster, my firm is able to recover 86 physical servers onto 12 standby servers in just two hours
  • How my firm has been able to minimize not only downtime but also its disaster recovery spend by utilizing this combined solution

Eric G. Pitcher
Eric Pitcher is vice president of technology strategy at CA, responsible for setting and communicating CA’s Recovery Management plans across the business unit, throughout CA and to partners and customers. Previously, Eric served as vice president of product management at CA, responsible for defining the process, requirements and product specifications for CA’s Recovery Management product lines. Prior to that, Eric worked as assistant vice president of CA’s research and development global SWAT team—a specialized task force designed to maximize the quality, customer satisfaction, and market competitiveness of CA’s storage management solutions.

Before joining CA, Eric was network and systems administrator at Universal Studios Florida and was responsible for server and network design, installation, administration and support on a network of more than 1,000 users. Eric earned a bachelor of science degree in business administration from the University of Central Florida.

Wolfgang Goerlich
J Wolfgang Goerlich, CISSP, CISA, is an information security professional who focuses on performance and availability. Mr. Goerlich is currently … the network operations and security manager. With ten years of experience, Mr. Goerlich has a solid understanding of both the IT infrastructure and the business it enables.

Isaac Roybal
Isaac Roybal is a Product Manager in Windows Server managing Server Virtualization, including Microsoft’s Hyper-V, and has been involved with IT for over twelve years. Seven of those years have been with Microsoft. Isaac’s career started in systems and network engineering, working with VMS, Windows Server since NT 3.51, and IIS 4 in various capacities.

Virtualization Webinar Next Monday

Posted by

As I mentioned before, I have played around a bit with Hyper-V and virtualized my production and recovery systems. CA did a case study on the project. This coming Monday, April 20 at 12:00 pm Eastern, I am doing a joint webcast with CA and Microsoft. The topic is still virtualization, with the focus on disaster recovery. I doubt I will say anything new during the talk, except that it will be much briefer than some others I have given on DR. CA is going to talk a bit about their CDP, however, which is pretty cool stuff.

Virtual Server Sprawl

Posted by

http://www.networkworld.com/news/2008/120508-virtual-server.html?netht=rn_120508&nladname=120508

Good article. I keep coming back to its closing sentence: “but assuming you control sprawl, virtualization is typically worth it from an ROI perspective.”

Sprawl is much the same, whether physical or virtual. We have finite resources and definite deadlines. The bottom line is that we need effective capacity planning, configuration management, and change management. The better capacity planning meets requirements, the more closely our footprint will reflect business demand. The faster we can effect change, through configuration and change management, the better we can meet changing demands.

Personally, I am more focused on improving our capacity and flexibility. Sprawl I can manage.

 

J Wolfgang Goerlich

More on VDI

Posted by

VDI is great in theory but doesn’t price out right. Say we convert 25 desktops to VDI. That means we need 25 processors and 100 GB of memory (assuming each desktop has 4 GB). That figures out to be six quad-core servers with 18 GB of memory apiece (assuming 1 GB for the OS). The servers would cost around $7K each, so figure $42K plus licensing. Say $54K. That means I end up spending $2,160 per desktop (excluding the thin client) to have hardware that I could buy at Dell for $1K.
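Spelled out, the math looks like this as a quick sketch (the licensing number is my rough estimate, not a vendor quote):

```python
# Rough VDI sizing math from the post: 25 desktops, 4 GB each,
# six quad-core hosts at ~$7K apiece, ~$54K all-in with licensing.
desktops = 25
ram_per_desktop_gb = 4          # per-desktop memory assumption
servers = 6                     # quad-core hosts
server_cost = 7_000
total_with_licensing = 54_000   # $42K hardware plus my rough licensing estimate

ram_per_server = desktops * ram_per_desktop_gb / servers + 1   # +1 GB for the host OS -> ~18 GB
hardware_cost = servers * server_cost                          # $42K
per_desktop = total_with_licensing / desktops                  # $2,160
print(f"~{ram_per_server:.0f} GB RAM per host, ${hardware_cost:,} hardware, ${per_desktop:,.0f} per desktop")
```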

But wait, I think, there are storage savings. Figure 100 GB per machine. A desktop hard drive runs about $50, or $2/GB. A server hard drive on the SAN runs $1,500, or $15/GB. The best-case scenario would have the provisioning gold-copy/stub model sharing one image across 25 machines. Desktop cost: $1,000. VDI cost: $1,500. Nope, storage is more expensive even assuming a best case.
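A quick sketch of that comparison, using the rough 2008-era figures above (the exact drive prices are approximations):

```python
# Storage comparison using the post's rough figures.
image_size_gb = 100
san_cost_per_gb = 15            # enterprise drive on the SAN
desktop_drive_cost = 50         # commodity desktop hard drive

vdi_storage = image_size_gb * san_cost_per_gb    # one shared gold image on the SAN
print(f"Shared VDI image on SAN: ${vdi_storage:,}")
print(f"Local desktop drive:     ${desktop_drive_cost}")
# Even in the best case (one gold image shared across all 25 desktops),
# the SAN storage for that image costs more than a local drive per machine.
```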

The bottom line is that I want to take a long, hard look at Citrix’s ROI calculator. VDI does not make sense in terms of hardware alone; TCO is where the case will be made. Knowing our desktop demands would help us determine whether we need six servers or could get away with fewer. I can enable perf counters and do a study on the desktops to determine typical utilization. It could be done with WMI, scripting, and a little elbow grease.
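As a minimal sketch of that study, assuming Python with the wmi package on a management box (the machine names and sampling interval are placeholders):

```python
# Sketch: sample CPU and memory utilization from a few desktops over WMI.
# Assumes the 'wmi' package (pip install wmi) and rights to query the machines remotely.
import time
import wmi

desktops = ["desktop01", "desktop02"]   # placeholder machine names

for name in desktops:
    c = wmi.WMI(computer=name)
    for cpu in c.Win32_Processor():
        print(name, "CPU load %:", cpu.LoadPercentage)
    for opsys in c.Win32_OperatingSystem():
        free = int(opsys.FreePhysicalMemory)       # KB
        total = int(opsys.TotalVisibleMemorySize)  # KB
        print(name, "memory in use %:", round(100 * (1 - free / total)))
    time.sleep(1)   # in a real study, sample periodically and log to CSV
```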

XenDesktop and Virtual Desktop Infrastructure

Posted by

Citrix was in to present and discuss the technical merits of XenDesktop. I am considering VDI, which requires XenDesktop Enterprise and their provisioning server. Citrix’s technology sounds impressive. Still, the question looming large in my mind is what XenDesktop + Provisioning brings to the table that Hyper-V + SCCM lacks. It all sounds good, yet the proof is in the pudding. I may do a pilot in Q1 or Q2 2009.

Hyper-V Disk Issues

Posted by

I am seeing an odd issue with Hyper-V VMs on pass-through disks. Say an event occurs on the storage array that causes the disks on the Hyper-V server to go offline momentarily. They can be brought back online afterwards, but Hyper-V has lost its handle on the disk. There are four broad categories of symptoms that then occur:

  1. Very broadly speaking, if the disk contains server-specific information such as a paging file, then the server behaves erratically when the disk goes offline.
  2. If the disk in question goes offline and it contains the VM definition files (.bin, .vsv), then the VM disappears from the Hyper-V console.
  3. If the disk goes offline and it contains VM disks (.vhd), then the VM in question crashes.
  4. If the disk is directly mapped to a VM as a host resource, then the VM is shut down. Sometimes the state is saved. The settings show that the physical disk cannot be found. The VM’s saved state has to be deleted and the physical disk reselected in the VM settings dialog.

I am still troubleshooting. More details to follow.
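While I troubleshoot, here is a rough sketch of the kind of check I have in mind: poll the host’s disks and the VMs’ states so a momentary outage like this gets noticed quickly. This is Python with the wmi package; the Hyper-V class names are from the v1 WMI provider (root\virtualization) and are worth double-checking against your build.

```python
# Sketch: flag disks the host reports as unhealthy and list VM states,
# so a momentary pass-through outage is noticed. Assumes the 'wmi' package.
import wmi

host = wmi.WMI()                                   # local Hyper-V host
for disk in host.Win32_DiskDrive():
    if disk.Status != "OK":
        print("Disk issue:", disk.DeviceID, disk.Status)

virt = wmi.WMI(namespace=r"root\virtualization")   # Hyper-V v1 WMI provider
for vm in virt.Msvm_ComputerSystem():
    if vm.Caption == "Virtual Machine":            # skip the host's own entry
        print(vm.ElementName, "EnabledState:", vm.EnabledState)
```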

Huh? VMware’s ESX KO’s a roughly built Hyper-V package

Posted by

VMware’s ESX KO’s a roughly built Hyper-V package

“When the dust settled in the lab after two long months of testing Microsoft’s Hyper-V and VMware’s ESX in the areas of performance, compatibility, management, and security, it all boiled down to two issues: experience and religion.”

I spent quite a bit of time with both VMware and Hyper-V, and I agree with some of what is in this article. VMware is a more mature product, and hence its VM management tools are more robust. VMware also supports a wider array of non-Windows guest operating systems. All true. Yet all of what I am virtualizing at this point is Windows, and all of the management I need can be done through the Hyper-V UI. Hence the question, in my mind, comes down to performance per dollar. On bang for the buck, my bet is still on Hyper-V.

Virtualization for Disaster Recovery: Strategies

Posted by

Using virtualization as a disaster recovery strategy can work in one of two scenarios:

The first scenario is VM to VM. Put a hypervisor at the production site and another at the recovery site. Run the production server in a VM. Replicate the VM’s drives to the recovery site. During a disaster, boot the VM up on the recovery hypervisor.
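As a sketch of that last step, once the replicated VHDs are in place the recovery boot can itself be scripted. This assumes Python with the wmi package against the Hyper-V v1 WMI provider; the host and VM names are placeholders, and the state code is worth verifying against the provider documentation.

```python
# Sketch: start a replicated VM on the recovery Hyper-V host.
# Assumes the 'wmi' package and the Hyper-V v1 provider (root\virtualization).
import wmi

RUNNING = 2   # RequestStateChange: 2 = start/enable in the v1 provider

virt = wmi.WMI(computer="recovery-host", namespace=r"root\virtualization")
vms = virt.Msvm_ComputerSystem(ElementName="FILESRV01")   # placeholder VM name
if vms:
    result = vms[0].RequestStateChange(RequestedState=RUNNING)
    print("RequestStateChange returned:", result)
```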

The second scenario is bare metal to VM. Put a physical server running on bare metal at the production site. Stage the physical server with the necessary VM drivers (in Hyper-V, these are called the Integration Components). Put a hypervisor at the recovery site. Replicate the disks. During a disaster, boot the server up as a VM on the recovery hypervisor. This scenario requires block-level replication and a hypervisor that can read the native disks. If either requirement cannot be met, an alternative exists: restore the production server into a VM using backup software that supports P2V disaster recovery. Examples of this software include Acronis, ARCserve, and Backup Exec. The downside is that this option takes significantly longer.