Archive for the ‘Architecture’ Category

Cisco’s new business tablet

Perhaps another step toward disposable end-point tablet computing. (Wow, that was a mouthful.) I would be interested in piloting the Cisco Cius coupled with VDI.

“Cisco announces that it will be launching an Android-based tablet next year named the Cius, aimed squarely at the business market.”

A prediction on cloud computing adoption

I am making a prediction on how the small and medium business market will adopt cloud computing. That’s a risky business, predicting. But we are rolling into a new year, so the time feels right.

My premise is this: adoption of cloud computing will mirror the adoption of virtualization.

The first wave will be infrastructure: development, test environments, backup, disaster recovery. These are not your line-of-business apps. These are not tier 1 apps. These are the solutions that allow an IT team to cut their teeth and learn with minimal risk to the organization’s mission.

The second wave will be point solutions: IT solutions to business problems. Need a workflow app? Need a point-and-click reporting solution? Turn to the cloud. These can be considered tier 3, maybe even tier 2. Still, these are not line-of-business apps. These solutions allow an IT team to add value in the business with their cloud savvy.

This will inevitably lead to a wide range of technologies and vendors. Someone will call this cloud sprawl, and that will set off the third wave: consolidation of existing solutions under one cohesive framework. At this point, the bumps will have been smoothed over. The technology will be proven. IT teams and businesses will then seek to move tier 1 and line-of-business software to the cloud.

The time frame for this shift will be 3-5 years. My thought is that this will play out like other game-changing IT solutions. The pace will be set by organizations weighing cost savings against risk. The vanguard will be IT teams that progress through the first and second waves before the third wave comes and swamps the ship.

For IT teams, the trick is building in-house expertise while keeping costs competitive with public cloud solutions. For businesses, the trick is ensuring that IT solutions are pursued based on cost savings and value propositions, rather than on hype. For everyone involved, this will be an interesting 3-5 years.

Disposable end-point model

One project in my portfolio at the moment is building what I call a disposable end-point model. It is a low priority project, but an ongoing one. The goal is to deliver the best user experience at the lowest price-point.

Portability is a must. Think about the concerns over swine flu and the like. What is your pandemic plan? My pandemic plan, at least from a technology standpoint, is straightforward: people work from home over the VPN and run apps from Citrix. So the end-point devices must be portable and dual-use.

Yet traditional notebooks are expensive. My firm, like most, has an inventory of aging notebooks. These older computers are costly to maintain (studies show ~$1K per device per 2 years) and to replace if lost or stolen (studies show ~$50K per incident).

The sweet spot is computers that are cheaper than supporting aging devices and disposable if lost or stolen. No local data means no security incident, which erases the risk exposure of stolen devices. These inexpensive computers should be lightweight and easily carried between office and home. So I am looking at netbooks, which run around $500.
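A back-of-the-envelope comparison makes the case. This sketch uses the rough figures above as assumptions (fleet size and incident count are hypothetical, not my firm's numbers):

```python
# Rough 3-year cost comparison using the post's ballpark figures.
# All figures are illustrative assumptions, not audited data.
AGING_SUPPORT_PER_YEAR = 1000 / 2  # ~$1K per device per 2 years
NOTEBOOK_INCIDENT_COST = 50_000    # ~$50K per lost/stolen notebook with local data
NETBOOK_PRICE = 500                # purchase/replacement cost of a netbook

def three_year_cost(devices: int, incidents: int, disposable: bool) -> float:
    """Compare supporting an aging notebook fleet vs. a disposable netbook fleet."""
    if disposable:
        # No local data: a lost netbook is a hardware loss, not a security incident.
        return devices * NETBOOK_PRICE + incidents * NETBOOK_PRICE
    return devices * AGING_SUPPORT_PER_YEAR * 3 + incidents * NOTEBOOK_INCIDENT_COST

print(three_year_cost(100, incidents=2, disposable=False))
print(three_year_cost(100, incidents=2, disposable=True))
```

Even with the incident cost set aside, a $500 netbook undercuts the ~$500/year support burden of an aging notebook within a single year.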

I spoke with Jeff Vance of Datamation about these ideas. He recently wrote an excellent article summarizing the netbook market and how data center managers are looking to use the devices: Will Desktop Virtualization and the Rise of Netbooks Kill the PC?

Open Up and Lock Down

Today’s networks balance opening up with locking down. The perimeter model, with a single access gateway protected by a firewall, is quickly disappearing. All end-points should now run their own firewalls. All hosts (particularly high-value servers) should now be bastion hosts. Access across the network should be locked down by default, then opened up only for particular services.
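The default-deny stance boils down to a simple rule: drop everything unless an explicit allow rule matches. A minimal sketch (my illustration; the services and subnet strings are hypothetical examples, and matching is simplified to string equality):

```python
# Default-deny policy sketch: traffic passes only if an allow rule matches.
# Rules are (protocol, port, source); "any" matches all sources.
ALLOW_RULES = [
    ("tcp", 443, "any"),          # HTTPS open to everyone
    ("tcp", 22, "10.1.0.0/16"),   # SSH only from the admin network
]

def permitted(proto: str, port: int, src_net: str) -> bool:
    """Return True only when an explicit allow rule matches; deny by default."""
    for r_proto, r_port, r_src in ALLOW_RULES:
        if r_proto == proto and r_port == port and r_src in ("any", src_net):
            return True
    return False  # everything else is dropped

print(permitted("tcp", 443, "198.51.100.0/24"))  # True: HTTPS is open
print(permitted("udp", 53, "10.1.0.0/16"))       # False: not listed, so denied
```

Real firewalls evaluate ordered rules with subnet containment, but the principle is the same: the absence of a rule is a deny.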

I think we see this change reflected in several trends. The ongoing focus on detection controls over defensive controls exists because modern networks have a significantly broader attack surface. Last year’s focus on end-point security was about making computers bastion hosts. Risk management and governance are hot topics now, and they seek to understand and protect business networks in their entirety, end to end.

I can only use my own firm as an example. We have some 17 dedicated connections coming in from partners and exchanges. We have five inter-office connections. We have 6 perimeter firewalls, or 7 if you include the Microsoft ISA Server. All servers are running a host firewall and are locked down. All this so we can gain access to the resources of partners and vendors, and provide resources to partners and clients. And this is in a relatively small company with fewer than 200 employees. Imagine the complexity of mid-sized and enterprise networks.

Open Up. Collaborate and succeed. Lock Down. Secure and protect.

J Wolfgang Goerlich
The eroding enterprise boundary: Lock Down and Open Up
http://www.theregister.co.uk/2009/03/12/eroding_enterprise_boundary/

IBM Security Technology Outlook: An outlook on emerging security technology trends.
ftp://ftp.software.ibm.com/software/tivoli/whitepapers/outlook_emerging_security_technology_trends.pdf

Security is Design

Welcome to 2009, and welcome back to my blog. This year’s focus is on using network architecture to create information security.

I come to this after reading some reports from Gartner Group: Three Lenses Into Information Security; Classifying and Prioritizing Software Vulnerabilities; and Aligning Security Architecture and Enterprise Architecture: Best Practices.

The first report posits that designing or architecting security is one of three lenses through which to view InfoSec (the other two being process-focused and control-focused). Why this emphasis on architecture? The primary reason is that most vulnerabilities lie not within the software itself, but within your implementation of it.

“Gartner estimates that, today, 75% of successful attacks exploit configuration mistakes.” Furthermore, few of us have the skills, time, and license to modify software to address the remaining 25% of vulnerabilities. Thus the largest positive impact an InfoSec professional can have on security is through planning and architecting the system design.

The secondary reason is that retrofitting system architectures with security after the fact is time-intensive and service-invasive. It often requires stopping work during the change implementation. It may require altering the work after implementation. This has a tangible cost. As Gartner puts it: “The careful application of security architecture principles will ensure the optimum level of protection at the minimum cost.”

The bottom line is that emphasizing security architecture in the original design minimizes costs and vulnerabilities.

Perimeter-less Security and Clouds on the Horizon

“Cloud computing is similar to what the tech industry has been calling “on-demand” or “utility” computing, terms used to describe the ability to tap into computing power on the Web with the same ease as plugging into an electric outlet in your home. But cloud computing is also different from the older concepts in a number of ways. One is scale. Google, Yahoo!, Microsoft, and Amazon.com have vast data centers full of tens of thousands of server computers, offering computing power of a magnitude never before available. Cloud computing is also more flexible. Clouds can be used not only to perform specific computing tasks, but also to handle wide swaths of the technologies companies need to run their operations. Then there’s efficiency: The servers are hooked to each other so they operate like a single large machine, so computing tasks large and small can be performed more quickly and cheaply than ever before. A key aspect of the new cloud data centers is the concept of “multitenancy.” Computing tasks being done for different individuals or companies are all handled on the same set of computers. As a result, more of the available computing power is being used at any given time.”

Clouds are on the horizon. I know very few data centers that host everything internally. Most, including my own, deliver a mixture of desktop applications, client-server applications, and hosted (i.e., cloud) web apps. The shift has an immediate impact on security planning. Information security architectures began with terminal-server applications and focused on strong perimeters. With apps moving to the desktops, the perimeter became a little wider and a little more porous. But we could still control the information, by restricting what data was on the desktops and using technologies like end-point security. In fact, one might argue that many of our controls today are based around restricting information to the data center and keeping it off the desktops.

The next major shift, which we are already starting to see, is moving the information from data centers to third-party hosting providers. This is only going to accelerate as young people, weaned on MySpace and Gmail, join the workforce. Another accelerant, which we may see in the next few years, is another economic downturn. Both sociological and economic changes are moving the data from controlled perimeters to uncontrolled open spaces. The clouds on the horizon are coming nearer.

The open question is this: how do we build controls in an age of perimeter-less security?

Encrypting private circuits (VPN over Frame Relay and MPLS)

This is a summary of a debate I recently had with a network engineer. The question is whether or not to run a VPN over a private circuit.

Let’s start with a quick definition of terms.

Private circuits are lines the telcos provide between sites a company owns. Say you have a site in Detroit and a site in Chicago. Way back when, the way to connect the two was to run a dedicated line. A dedicated line provides dedicated bandwidth and constant latency, at a rather expensive price. For redundancy, you purchased separate lines.

Frame Relay was the telcos’ answer. You can think of Frame Relay as a virtual dedicated line. The connection relays network frames (layer 2, the data link layer) over several physical networks to create a logical end-to-end line. The Detroit-Chicago traffic crosses any number of devices and circuits, but the link is presented to your layer 2 switches like a dedicated line. This cuts down the cost. The trade-off is varying bandwidth and latency, because Frame Relay is a shared resource. As with dedicated lines, you purchase separate circuits for redundancy.

MPLS (Multi-Protocol Label Switching) aims to provide the performance of dedicated circuits at the cost of Frame Relay. MPLS adds traffic management and in-network redundancy. Detroit-Chicago over MPLS is still a shared resource, but now the link can more effectively shape the traffic and utilize the network of circuits. Redundancy is baked into the circuit.

In terms of cost, from highest to lowest: redundant dedicated lines, redundant Frame Relay, then MPLS. All three are rather expensive. Meanwhile, Internet reliability has increased and its cost has decreased. Strictly from a cost perspective, you might build the Detroit-Chicago link with a VPN (Virtual Private Network) over the Internet. Like MPLS and Frame Relay, a VPN is a logical end-to-end line built on top of a physical network.

Now to dive into the confusion.

Telcos sell dedicated lines as private circuits. Telcos sell Frame Relay and MPLS as virtual private circuits. The argument you will hear is that security is built in because these are private circuits. There is also the operational efficiency of not having to deal with routing infrastructure, and possibly not even IP infrastructure. (Novell administrators love Frame Relay because they can run IPX/SPX over it.) A private circuit is private, right? Not exactly.

First, all the telco equipment between Detroit and Chicago has visibility into the IP traffic. A truism of network security is that a network is only as secure as the people who have access to the equipment. In this case, that could be any number of telco support technicians. They can read your packets.

Second, the privacy of the private circuit depends upon the correct configuration of the equipment. Your packets and some other organization’s packets are streaming over the same routers. The only difference is the tags in the packets and how the routers treat those tagged packets. When improperly configured, the circuit fails open. I have seen situations where new IP subnets suddenly appeared on the LAN switch and were traced back to a misconfigured Frame Relay circuit. Whose packets were they? Did the owner of those packets know he was broadcasting onto our network? Probably not.

My recommendation.

Treat dedicated lines, Frame Relay, and MPLS circuits the same as you would untrusted Internet circuits. Encrypt all internal traffic that travels over these circuits.

Virtual private circuits are not the same as Virtual Private Networks. One is clear text; the other is encrypted. The first ensures availability and integrity. The second ensures confidentiality. For best results, run a VPN over a virtual private circuit.

What about bandwidth and latency? With decent gear, the hit to throughput should be no more than 10%. Look for routers or firewalls that advertise encryption at line speed.

What about packet shaping? Use the QoS pre-classify feature of your gear. This reads the DSCP/ToS marking from the unencrypted packet and writes it to the encrypted packet. The MPLS gear can then handle the encrypted packets properly.
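As a sketch of what this looks like in practice: on Cisco IOS, pre-classification is a single command on the encrypting tunnel interface. This fragment is illustrative only; the interface, addresses, and profile name are hypothetical placeholders, not a tested configuration.

```
! Hypothetical IOS fragment: an IPsec-protected tunnel from Detroit to Chicago.
! "qos pre-classify" copies the inner packet's DSCP/ToS before encryption,
! so the carrier's MPLS gear can still shape the encrypted traffic.
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source 192.0.2.1
 tunnel destination 203.0.113.1
 tunnel protection ipsec profile DET-CHI
 qos pre-classify
```

Without the pre-classify step, the encrypted packets all look alike to the carrier, and your traffic shaping is lost.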

What about complexity and cost? The trade-off is security for operational complexity. You may need to purchase more gear. You will need to implement IP routing. These add to the overall cost of the solution. Given the security gained by encrypting rather than sending clear text, the trade-off is worth it.