
Archive for the ‘Architecture’ Category

Build Roombas not Rosies – Design Mondays


The Jetsons debuted this month in 1962. The cartoon depicted a family living a hundred years in the future, in 2062. The show’s swooping architectural style, with the quite fun name of Googie, serves as the visual language of the future in shows from The Incredibles to Futurama. The everyday gadgetry in The Jetsons foreshadows today’s drones, holograms, moving walkways and stationary treadmills, flat screen televisions, tablet computers, and smart watches.

Remember, color television was on the very cutting edge of technology when The Jetsons debuted. That list is impressive. But that smart watch? The last item wasn’t there by accident.

The dominant smart watch in 2020 is the Apple Watch, designed by Marc Newson and Jony Ive. In an interview with the New York Times, Marc Newson explained that his fascination with The Jetsons led him into the world of design. “Modernism and the idea of the future were synonymous with the romance of space travel and the exotic materials and processes of space technology. Newson’s streamlined aesthetic was influenced by his Jetsonian vision of the future.” I imagine the first time Newson FaceTimed Jony Ive on an Apple Watch, they felt the future had finally arrived.

Designing the future has constraints that imagining the future lacks.

For starters, people and culture constrain innovation. Consider George and his flying car, Elroy and his jetpack, and space tourism. All of these are technically feasible in 2020. But I wouldn’t trust a young boy with a jetpack, nor would most of us have money for a trip to the moon. Another constraint is technical complexity. Sure, we have talking dogs. But the reality is much different from the Jetsons’ Astro. And yes, we have AI and robotics. But Siri is no R.U.D.I.

When designing future security capabilities and controls, we need to identify and quantify the constraints. One technique for this is the Business Transformation Readiness Assessment. Evaluate factors such as the following (a rough scoring sketch follows the list):

  • Desire, willingness, and resolve 
  • IT capacity to execute
  • IT ability to implement and operate
  • Organizational capacity to execute
  • Organizational ability to implement and operate
  • More factors here: https://pubs.opengroup.org/…/chap26.html
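To make the ranking concrete, here is a minimal sketch of scoring candidate capabilities against those readiness factors. The capabilities, the 1-5 scores, and the equal weighting are hypothetical placeholders for illustration, not part of the assessment itself.

# Score candidate security capabilities against the readiness factors above.
# Capabilities, scores, and equal weighting are hypothetical placeholders.
FACTORS = [
    "Desire, willingness, and resolve",
    "IT capacity to execute",
    "IT ability to implement and operate",
    "Organizational capacity to execute",
    "Organizational ability to implement and operate",
]

# Hypothetical 1-5 readiness scores, one per factor above.
candidates = {
    "Well-tuned SIEM (the Roomba)": [4, 4, 5, 4, 4],
    "AI/ML deep learning platform (the Rosie)": [5, 2, 2, 2, 1],
}

def readiness(scores):
    """Average the factor scores; a real assessment would weight and evidence them."""
    return sum(scores) / len(scores)

# Rank what is feasible against what is needed, highest readiness first.
for name, scores in sorted(candidates.items(), key=lambda c: readiness(c[1]), reverse=True):
    print(f"{name}: readiness {readiness(scores):.1f} of 5")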

With this evaluation, we can rank what’s feasible against what’s needed. We can act on areas with momentum (desire, willingness, resolve) and build capabilities that can be maintained. But! There’s one additional step.

We don’t need a robot to push around a vacuum when we have a robot vacuum. We don’t need a full AI/ML deep learning platform when we can have a well-tuned SIEM. Implement security in a minimum viable way.

Identify the constraints. Select the security capability the organization is most ready for. Then build Roombas, not Rosies.

Rosie the Robot, The Jetsons, Photography by Brilux.

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.

Philosophy and Methodology, the Meta-Design Approach of George Nelson – Design Monday


Artists create unique pieces for a limited audience. Designers create for scale. A tension exists between creating something that works and building something that’s repeatable.

This tension came up in conversation around the article I wrote about Kenji Kawakami and the art of Chindōgu. The principle is employing playful anarchy to bring security controls from useless to un-useless to useful. People were quick to point out that quantifiable, repeatable, scalable security is jeopardized by the ad hoc chaos of creation.

For guidance, look to George Nelson who was the Director of Design for Herman Miller from 1947 to 1972. One of the first designs George Nelson brought forward was a “sculpture-for-use” table by Isamu Noguchi. Sculpture remade as a repeatable product. Nelson also managed designers such as Charles and Ray Eames, Alexander Girard, and Robert Propst. It’s a simple comparison to draw from furniture to technology, from the difficulty of managing people like the Eames to the difficulty of managing today’s cybersecurity talent.

Here is how Nelson did it for twenty-five years:

Philosophy. Read George Nelson’s introduction to the Herman Miller catalog in light of the intrinsic motivation framework laid out in the book Drive: autonomy, mastery, purpose. Nelson’s philosophy is finely tuned for getting the best out of innovative people. An unstated undercurrent is that designs must be producible. After all, Herman Miller is a business. The trick was to protect the playful anarchy while harnessing the results for manufacturing at scale. “There is a hint of the craftsman as opposed to the industrialist.”

Methodology. In modern times, George Nelson has been described as a meta-designer. That is, he spent more time designing the furniture design process than he spent designing the actual furniture. While he retired some twenty years before the founding of IDEO, Nelson would have been right at home in the world of design thinking. He pioneered a formal way to go from a series of conversations, to a series of prototypes, to a finished product. Along the way, he captured information and provided feedback to refine not only the design but also the lifecycle itself. Nelson’s approach was showcased in the 1975 exhibit “The Design Process at Herman Miller.”

The challenge in cyber security design is taking a successful proof-of-concept and scaling from prototype to securing the overall organization. How to balance the artist with the designer? The craftsman with the industrialist? Playful anarchy to well-defined operations? Nelson held a philosophy geared to foster those intrinsic motivations of the creative mind. He created a methodology for taking ideas to market. George Nelson combined both into his meta-design approach.

For security leadership to get meta, develop a philosophy and methodology, design a way to design, and improve based on feedback.

Philosophy drives the satisfaction of our people. Methodology drives the success of our initiatives. We need both, and both need continuous improvement.

Sculpture-for-use, Noguchi table, photography by the Isamu Noguchi collection.

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.

CSO: Implementing Zero Trust


Having a vision and a specific use case helps companies get started toward a Zero Trust implementation.

Excerpt from: Zero Trust Part 2: Implementation Considerations

A piece of advice at the outset: “Don’t do too much too fast,” says Wolfgang Goerlich, CISO Advisor with Cisco. “Have specific goals, meaningful use cases, and measurable results.”

To build momentum, start with a series of small Zero Trust projects with deliverable milestones, and demonstrate success every few months by showing how risk has been reduced.

“We need to show the board progress. With specific initiatives aimed at specific use cases, we can demonstrate progress towards Zero Trust,” Goerlich says. “You build momentum and a track record for success.”

Read the full article: https://www.csoonline.com/article/3537388/zero-trust-part-2-implementation-considerations.html


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.

CSO: Demystifying Zero Trust


Despite the fact that Zero Trust has been around for a decade, there are still misconceptions about it in the marketplace.

Excerpt from: Zero Trust Part 1: Demystifying the Concept

Zero Trust is not one product or solution. Better to think of it as an approach, says Goerlich.

“Zero Trust is trusting someone to access something from somewhere,” he says. “Is it an employee, an application, a device? What is it accessing? How can we determine if we trust this request? At the end of the day, Zero Trust means providing a consistent set of controls and policies for strong authentication and contextual access.”

The term was coined by Forrester Research in 2010. It was established as an information security concept based on the principle of “never trust, always verify.” Since then, the National Institute of Standards and Technology (NIST) has produced comprehensive explanations and guidelines for implementing a Zero Trust architecture.

“NIST has a draft standard that dictates their view of Zero Trust — what the principles are, and what an architecture looks like,” Goerlich says. “The U.K. NCSC has done the same. Zero Trust has matured, and the need for it is now in sharp relief due to changes in the market and the way we use technology.”

Read the full article: https://www.csoonline.com/article/3537189/zero-trust-part-1-demystifying-the-concept.html

Wolf’s Additional Thoughts

I am leading a series of Zero Trust workshops this year. One concept I always stress: we’re applying existing technology to a new architecture. Think back to when Role Based Access Control (RBAC) was first being standardized; we used off-the-shelf X.509 directories and existing Unix/Windows groups to do it.

Now of course, better products offer better solutions. But the point remains. The application of existing standards to realize the principles of Zero Trust brings the concept beyond hype and into reality. Moreover, it makes it much easier to have confidence in Zero Trust. There’s no rip-and-replace. There’s no proprietary protocol layer. We’re simply taking authentication and access management to the next logical level.
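To make that concrete, here is a minimal sketch of a contextual access decision. The request attributes, policy thresholds, and responses are made-up illustrations of the principle, not any particular product’s API.

# A rough sketch of a Zero Trust access decision: who is asking, from what
# device, for which resource, and in what context. All attributes and policy
# rules here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    strong_auth: bool          # did the user complete strong (MFA) authentication?
    device_managed: bool       # is the request coming from a known, healthy device?
    resource_sensitivity: str  # "low", "medium", or "high"
    location: str              # "corporate", "home", or "unknown"

def evaluate(req: AccessRequest) -> str:
    """Apply a consistent set of policies to every request, regardless of source."""
    if not req.strong_auth:
        return "deny: strong authentication required"
    if req.resource_sensitivity == "high" and not req.device_managed:
        return "step-up: sensitive resources require a managed device"
    if req.location == "unknown" and req.resource_sensitivity != "low":
        return "step-up: additional verification required"
    return "allow"

print(evaluate(AccessRequest(strong_auth=True, device_managed=False,
                             resource_sensitivity="high", location="home")))
# -> step-up: sensitive resources require a managed device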

Want to know more? Watch my calendar or subscribe to my newsletter to join an upcoming workshop.


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.

Dark Reading: OS, Authentication, Browser & Cloud Trends


New research shows cloud apps are climbing, SMS authentication is falling, Chrome is the enterprise browser favorite, and Android leads outdated devices.

Excerpt from: OS, Authentication, Browser & Cloud Trends

Application integration is up across most key categories. The number of customers per cloud app is up 189% year-over-year, and the number of authentications per customer per app is up 56%.

The massive spike in cloud applications means any given employee has at least two or three cloud apps they use to do their jobs, says Wolfgang Goerlich, advisory CISO for Duo Security. “It was a big explosion of shadow IT,” he adds. “It really got away from a lot of the organizations.” People often use the same applications for personal and business purposes, driving the need for businesses to enforce their security policies for cloud-based applications and resources.

Read the full article: https://www.darkreading.com/cloud/security-snapshot-os-authentication-browser-and-cloud-trends/d/d-id/1335262

Wolf’s Additional Thoughts

IT history repeats itself.

The organization moves slowly to provide employees with tools and technology. Consumer tech fills the gap outside of the office. People get savvier and more experienced with tech. People innovate with what they know to get done what they need to get done.

The organization notices people doing things in an innovative yet ad hoc way. Work is done to standardize tech use. More work is done to secure the tech use. The wild ways of people, the wilderness of shadow IT, is tamed and brought into the light.

We’re at this point now. That’s what the numbers show. But tamed IT is slower than shadow IT. If the past has taught us anything, it is that the cycle will repeat.


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category.

Cloud adoption and use


I am tremendously in favor of virtualization, a staunch proponent for cloud computing, and I’d automate my own life if I could. After all, we dedicated most of last year to investigating and piloting various cloud backup solutions. But take a peek at my infrastructure and you might be surprised.

Why is my team still running physical servers? Why are we using so few public resources? And tape, really?

I am not the only one who is a bit behind on rolling out the new technology. Check out this study that came out on Forbes this week. “The slower adoption of cloud … reflects a greater hesitancy … remain conservative about putting mission-critical and customer data on the cloud. Regulations … may explain much of this reluctance. The prevalence of long-established corporate data centers with legacy systems throughout the US and Europe … may be another factor. Accordingly, the study confirms that overcoming the fear of security risks remains the key to adopting and benefiting from cloud applications.”

My sense is that cloud computing, at least in terms of IaaS, is roughly where virtualization was circa 2004. It is good for point solutions. Some firms are looking at it for development regions. Now folks are beginning to investigate cloud for disaster recovery. (See, for example, Mark Stanislav’s Cloud Disaster Recovery presentation.) These low risk areas enable IT management to build competencies in the team. A next step would be moving out tier 3 apps. A few years after that, the mission-critical tier 1 apps will start to move. This will happen over the next five to eight years.

This logical progression gives the impression that I see everything moving to the cloud. As Ray DePena said this week, “Resist the cloud if you must, but know that it is inevitable.” I can see that. However inevitable cloud computing is, like virtualization, it does not fit all use cases.

Why are some servers still physical? In large part, it is due to legacy support. Some things cannot be virtualized or unplugged without incurring significant costs. In some cases, this choice is driven by the software vendor. Some support contracts still mandate that they cover only physical servers. Legacy and vendors aside, some servers went physical because the performance gains outweighed the drawbacks. Decisions, decisions.

The majority of my environment is virtualized and is managed as a private cloud. Even there, however, there are gaps. Some areas are not automated and fully managed due to project constraints. We simply have not gotten there yet. Other areas probably will never be automated. With how infrequently an event occurs, and with how little manual work is needed, it does not make sense at my scale to invest the time. This is a conscious decision on where it is appropriate to apply automation.

Why are we not using more public resources? Oh, I want to. Believe me. Now, I am not keen on spending several weeks educating auditors until cloud reaches critical mass and the audit bodies catch up. But the real killer is cost. For stable systems, the economics do not make sense. The Forbes article points out that the drivers of public cloud are “speed and agility — not cost-cutting.” My team spent ten months in 2011 trying to make the economics work for cloud backup. Fast forward half a year, and we are still on tape. It is an informed decision based on the current pricing models.

Is cloud inevitable? The progression of the technology most surely is, as is the adoption of the technology in areas where it makes sense. The adoption curve of virtualization gives us some insight into the future. Today, there are successful firms that still run solely on physical servers with direct attached storage. Come 2020, as inevitable as cloud computing is, it is equally inevitable that there will be successful firms still running on in-house IT.

Many firms, such as mine, will continue to use a variety of approaches to meet a variety of needs. Cloud computing is simply the latest tactic. The strategy is striking the right balance between usability, flexibility, security, and economics.

Wolfgang

Side note: If you do not already follow Ray DePena, you should. He is @RayDePena on Twitter and cloudbender.com on the Web.

Peer Incites next week


I will be on Peer Incites next Tuesday, March 6th, for a lunchtime chat on team management. The talk is scheduled for 12-1pm ET / 9-10am PT.

DevOps — the integration of software development and IT operations — is a hot topic these days. In my current role, I took on IT operations in 2008 and software development in 2010. I have been driving the combined team using a value proposition lens: the nexus of passion, skill sets, and business value. Add to this my favorite topic, training and skill hops, and we get a winning mix for leading a productive DevOps team.

I will dig into the nuts-and-bolts next Tuesday. Details are below. Hope you can join us.

Wolfgang

 

Mar 6 Peer Incite: Achieving Hyper Productivity Through DevOps – A new Methodology for Business Technology Management

By combining IT operations management and application development disciplines with highly-motivating human capital techniques, IT organizations can achieve amazing breakthroughs in productivity, IT quality, and time to deployment. DevOps, the intersection of application development and IT operations, is delivering incredible value through collaborative techniques and new IT management principles.

 

More details at:
http://wikibon.org/wiki/v/Mar_6_Peer_Incite:_Achieving_Hyper_Productivity_Through_DevOps_-_A_new_Methodology_for_Business_Technology_Management

Comments on Cloud computing disappoints early adopters


Symantec surveyed 5,300 organizations to find out how they felt about cloud computing. The standard concerns about security were expressed. Still, there are no concrete statistics on the difference between the threat exposure of in-house IT versus the threat exposure of public cloud IT. The concern about expertise surprises me, however, as managing a cloud environment is only slightly different than managing an enterprise data center. I have a hunch that it may be IT managers protecting their turf by claiming their guys don’t have the expertise, but I may be off. So what’s going cloud? Backups, security, and other non-business apps. No surprise there. Give it a few more years yet.

“While three out of four organizations have adopted or are currently adopting cloud services such as backup, storage and security, when it comes to the wholesale outsourcing of applications there is more talk than action, Symantec found. Concerns about security and a lack of expertise among IT staff are the main factors holding companies back, according to the survey of 5,300 organizations …”

Cloud computing disappoints early adopters:
http://www.reuters.com/article/2011/10/04/us-computing-cloud-survey-idUSTRE7932G720111004

Private clouds, public clouds, and car repair


I am getting some work done on one of my cars. I never have any time. I rarely have any patience. And occasionally, I have car troubles. So into the dealership I go.

Every time, I hear from my car-savvy friends and coworkers. The dealership takes too long. The dealership costs too much. If there is anything custom or unique about your vehicle, it throws the dealership for a loop.

Sure, doing it yourself can be faster and cheaper. But if, and only if, you have the time, tools, and training. Fall short on any of these three, and the dealership wins hands down. If you are like me, then you have no time and no tools more complex than pliers and a four-bit screwdriver set.

What does this have to do with cloud computing? Well, it provides a good metaphor for businesses and their IT.

Some businesses have built excellent IT teams. Their teams have the time to bring services online, and to enable new business functionality. These are the businesses that equip their IT teams with the tools and provide the training. Hands down, no questions asked, these teams will deliver solutions with higher quality. These IT teams can do it in less time and for less cost.

Other businesses have neglected IT. These are the teams that are told to keep the lights on and maintain dial-tone. Their IT systems are outdated. Possibly, their personnel have outdated skill sets. It makes as much sense for the internal IT teams to take on infrastructure projects as it does for me to change out my transmission. The costs, efforts, and frustration will be higher. The quality? Lower.

These are two ends of the spectrum, of course. Most IT teams are a mix. They are strong in some areas, and weak in others.

I suggest we play to our strengths. Businesses look to enable new functionality. Like with car repairs, we can step back and consider. Does our team have the time, tools, and training in this area? What will bring the higher quality and lower costs? That is how to decide the build versus buy and the private cloud versus public cloud questions.

Cost justifying 10 GbE networking for Hyper-V


SearchSMBStorage.com has an article on 10 GbE. My team gets a mention. The link is below and on my Press mentions page.

For J. Wolfgang Goerlich, an IT professional at a 200-employee financial services company, making the switch to 10 Gigabit Ethernet (10 GbE) was a straightforward process. “Like many firms, we have a three-year technology refresh cycle. And last year, with a big push for private cloud, we looked at many things and decided 10 GbE would be an important enabler for those increased bandwidth needs.”

10 Gigabit Ethernet technology: A viable option for SMBs?
http://searchsmbstorage.techtarget.com/news/2240079428/10-Gigabit-Ethernet-technology-A-viable-option-for-SMBs

My team built a Hyper-V grid in 2007-2008 that worked rather nicely at 1 Gbps speeds. We assumed 80% capacity on a network link, a density of 4:1, and an average of 20% (~200 Mbps) per VM. In operation, the spec was close. We had a “server as a Frisbee” model that meant non-redundant networking. This wasn’t a concern because if a Hyper-V host failed (3% per year), it only impacted up to four virtual machines (2% of the environment) for about a minute.

When designing the new Hyper-V grid in 2010, we realized this bandwidth was no longer going to cut it. Our working density is 12:1, with a usable density of 40:1. That means 2.4 Gbps to 8 Gbps per node. Our 2010 model is “fewer pieces, higher reliability,” and that translates into redundant network links. Redundancy is more important now that a good portion of our servers (10-15%) would be impacted by a single link failure.
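For reference, here is that per-node sizing restated as a quick calculation, using the 200 Mbps per VM figure carried over from the original design:

# Per-node bandwidth sizing for the 2010 Hyper-V grid.
mbps_per_vm = 200       # ~20% of a 1 Gbps link per VM, from the 2007-2008 design
working_density = 12    # VMs per node, day to day
usable_density = 40     # VMs per node, worst case

print(f"Working: {working_density * mbps_per_vm / 1000:.1f} Gbps per node")  # 2.4 Gbps
print(f"Usable:  {usable_density * mbps_per_vm / 1000:.1f} Gbps per node")   # 8.0 Gbps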

Let’s do a quick back-of-the-napkin sketch. Traditional 1 Gbps Ethernet would require 10 primary and 10 secondary Ethernet connections. That’s ten dual-port 1 Gbps adapters: 10 x $250 = $2,500. That’s twenty 1 Gbps switch ports: 20 x $105 = $2,100. Then there’s the time and materials cost for cabling all that up. Let’s call that $500. By contrast, one dual-port 10 GbE adapter is $700. We need two 10 GbE switch ports: 2 x $930 = $1,860. We need two cables ($120 each) plus installation. Let’s call that $400.

The total cost per Hyper-V host for 10 GbE is $2,960. Compared to the cost of 1 Gbps ($5,100), we are looking at a savings of $2,140. For higher density Hyper-V grids, 10 GbE is easily cost justified.
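Here is the same napkin math as a quick script, using the per-host prices quoted above:

# Back-of-the-napkin networking cost comparison per Hyper-V host.
cost_1gbe = 10 * 250 + 20 * 105 + 500   # ten dual-port adapters, twenty ports, cabling
cost_10gbe = 700 + 2 * 930 + 400        # one dual-port adapter, two ports, cables + install

print(f"1 GbE per host:   ${cost_1gbe:,}")                 # $5,100
print(f"10 GbE per host:  ${cost_10gbe:,}")                # $2,960
print(f"Savings per host: ${cost_1gbe - cost_10gbe:,}")    # $2,140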

It took some engineering and reorganizing, but we have been able to squeeze quite a bit of functionality and performance from the new technology. Cost savings plus enhancements? Win.