Verizon’s new Business Internet Secure bundle for small businesses taps Cisco and BlackBerry security services to help protect customers’ routers and connected devices. A recent Verizon Business survey found 38% of small businesses moved to remote work because of the COVID-19 pandemic.
To support this transition, Verizon Business Internet Secure protects against threats at two points where attacks typically occur: employee devices with BlackBerry and the internet with Cisco Umbrella.
Even pre-pandemic, small businesses faced the same threats, and the same potential damages from an attack, as their larger counterparts, according to a Cisco security report based on a survey of almost 500 SMBs. The report also found that these companies take security preparedness every bit as seriously as larger enterprises. And this matters because the security industry has traditionally been biased against SMBs, perpetuating the myth that they don’t prioritize cybersecurity, the report says.
“SMB executives, IT executives, security executives in these businesses have done their best to address the problem,” said Wolfgang Goerlich, advisory CISO at Cisco Duo, in an earlier interview. What this means is that SMB IT and security leaders now have to ask themselves what’s next, he added. “Where do I go from here?”
This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.
Paul Rand is behind so many stories this series has covered. The Olivetti Valentine typewriter designed by Ettore Sottsass and used by Dieter Rams in his documentary? Paul Rand did Olivetti’s US advertising. Speaking of Dieter Rams, the Braun shavers that made Rams famous? Paul Rand bought every model. (Though Rand once said he would “buy just for their beauty and then put them in a drawer.”) IDEO, the birthplace of design thinking? Paul Rand did IDEO’s logo. He worked alongside Charles Eames on IBM’s Design Program. I like to think some of that work happened in the IBM Plaza building that Ludwig Mies van der Rohe designed. The building, by the way, sported the iconic IBM logo which was, you guessed it, designed by Paul Rand.
Paul Rand was instrumental in creating the culture and discipline of graphic design. He taught the next generation at Yale from 1956 to 1985, with a break in the 1970s. Rand was visiting professor and critic at a number of other institutions. Check out the book Paul Rand: Conversations with Students for a view into that work. “What is design?” Paul would often ask. When he wasn’t creating, Rand was instructing, and through instruction, he was creating culture.
Just as Paul Rand fostered designers who brought ideas to wider audiences, security leaders need to foster advocates who will bring security ideas to the wider workforce.
We don’t talk much about advocates. A security advocate is a member of the security team who focuses on getting practices into the hands of the workforce. It’s more common for us to talk about security champions. A security champion is a member of the business itself, who collaborates with the security team on best practices. A fully fleshed out security capability has advocates working with champions to interpret and implement security controls. In a well-run security capability, those controls will be usable and widely adopted, because of the partnership of advocates and champions.
To learn more about cyber security advocates and what they need to succeed, check out the “It’s Scary…It’s Confusing…It’s Dull” research paper. These professionals “advocate for systems and policies that are usable, minimize requisite knowledge, and compensate for the inevitability of user error.”
Here are four practices from Paul Rand that we can apply to designing a security advocacy program:
(1) Coach on tangible work, not abstract principles. Rand’s courses were practical, not theoretical, with advice given based on the student’s work. He focused stories, literature, examples, and more through the lens of the work at hand.
(2) Coach one-on-one, avoid one size fits all. Paul Rand worked individually with students, and a session on their work “went on as long as was necessary to set the student on the right track and was laced with stories from Paul’s vast career as they were appropriate to the issue at hand. When he worked with students, he poured his heart and soul into it.”
(3) Use short cycle times. Typically, the criticism on individual work in Rand’s courses came weekly. Feedback was quick, specific, and direct. Compare this to many security programs where manager feedback comes at annual reviews.
(4) Encourage personalization. Rand taught designers to build their own set of techniques, their own visual vocabulary, to solve problems. That’s not for the sake of originality. “Don’t try to be original,” Rand often said, “just try to be good.” It’s to develop a sense of the designer’s personal needs and strengths and how to mesh those with the audience’s instincts and intuitions.
When designing a cyber security program, give thought to how leadership will coach advocates. Give thought to how advocates will cultivate security champions. With a nod to Paul Rand, prompt both with a deceptively simple question: “What is security?”
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
A new technology is a new toy. “Toys are not really as innocent as they look. Toys and games are the prelude to serious ideas.”
So said Charles and Ray Eames. The Eameses ran a design studio in California (1943–1988) producing architecture, films, and furniture. Arguably their best-known piece was the Eames Lounge Chair. The chair, produced by Herman Miller, ushered in a new era of materials and is a valuable collector’s item today. It’s impossible to overstate the chair’s impact: it simply wasn’t possible to make furniture that way before the Eameses. But this story isn’t about a chair.
This story is about a toy elephant.
A decade before the Eameses molded wood for a Herman Miller chair, they were playing with molding processes in toys. The result? The Eames Elephant, a toy intricately crafted from molded plywood. The complexity of the elephant was foretold by dozens of unnamed playful experiments. The elephant itself foreshadowed the lounge chair. Without play, without toys, the Eameses would never have mastered the underlying skills that produced the later masterpiece.
Playtime is fertile ground for innovation.
The power and necessity of play are a cross-discipline truth. In music, Miles Davis once said “I’ll play it first and tell you what it is later.” In biology, Alexander Fleming often said “I like to play with microbes.” Physics? Andre Geim stated that a “playful attitude has always been the hallmark of my research.” The final word on this human condition goes, appropriately enough, to the psychologist Carl Jung. “The creation of something new is not accomplished by the intellect, but by the play instinct arising from inner necessity. The creative mind plays with the object it loves.”
A pilot is purposeful play. We need to pilot ideas and technologies as we frame up the security capability. To get the best work, the people doing the pilot must be dedicated, engaged, and enjoying themselves. As leaders, we clear calendars and make space. We also need to clear away bureaucracy and other hindrances to fun. As implementers, we need to clear our heads and reach a state of flow. The purpose of a pilot is to improve our understanding of how things work, and to build the underlying skills for what we’ll build next.
See Scale with Philosophy and Methodology for insights on managing the chaos. In that article, I compared Charles and Ray Eames to hackers. I can easily imagine them at home in hackerspaces or at hacker cons. The Eameses embodied the hacker ethic years before “hacker” was even a term. Hands-on. Learning by doing. A strong sense that work, be it design or be it computing, changes the world when we love what we are doing.
The elephant in the room is that the best pilot projects won’t look anything like work.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
Cyber security design weekly recap for October 26-31.
This week: Renzo Piano and the California Academy of Sciences. There’s a tension when designing a security architecture. The architecture must meet and mirror the culture of the organization. The design can’t run contrary to how the organization works. But at the same time, the new controls must facilitate a cultural change towards a more secure way of being. The architecture mirrors while it modifies. Principle: Design for change and stability.
Previously: Paul Hekkert and the Unified Model of Aesthetics. Most Advanced, Yet Acceptable (MAYA) is the name Hekkert has given this principle. How advanced can the design be while still remaining familiar, still being acceptable, still looking like work? The answer will vary from organization to organization due to culture. But the question must remain top of mind for security leaders pushing the envelope. Principle: Balance familiarity with novelty.
One thing more: I was asked this week: “How can companies reduce the human errors that so often lead to security breaches?” Here’s the thing. The number one cause of problems in early flight? Human error. The number one cause of manufacturing accidents? Human error. Number one cause of nuclear power plant problems? Human error. Security problems? Yep, human error. The root cause of all these issues: poor design.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
It has been said that San Francisco is forty-nine square miles surrounded by reality. Flee the Michigan snows for a week in San Francisco and you feel that otherworldliness. One flight and everything changes.
In San Francisco, underneath a series of hills reminiscent of Hobbit holes, is the California Academy of Sciences. The hills reflect the structures below, such as the planetarium. The overall field forms a living roof which keeps “interior temperatures about 10 degrees cooler than a standard roof” while “reducing low frequency noise by 40 decibels. It also decreases the urban heat island effect, staying about 40 degrees cooler than a standard roof.” That’s according to a 2007 California Academy of Sciences press release.
Renzo Piano designed the building. His starting point was a question that’s delightful in its lateral thinking: “what if we were to lift up a piece of the park and put a building underneath?” In the California Academy of Sciences building, and throughout Piano’s work, he returns again and again to themes of culture and change.
“The world keeps changing,” Renzo Piano said on the TED stage. “Changes are difficult to swallow by people. And architecture is a mirror of those changes. Architecture is the built expression of those changes. Those changes create adventure. They create adventure, and architecture is adventure.”
There’s a tension when designing a security architecture. The architecture must meet and mirror the culture of the organization. The design can’t run contrary to how the organization works. But at the same time, the new controls must facilitate a cultural change towards a more secure way of being. The architecture mirrors while it modifies.
There’s another tension when designing a security architecture. Ongoing change will impact how people perceive and experience security. But at the same time, the security principles and posture must remain unchanged in the face of far-ranging organizational change. “Architects give a shape to the change,” Piano once said. The architecture is flexible but stable.
My last trip in the US, before the pandemic, was to San Francisco. Within a month, everything had changed. We are experiencing the greatest migration in human history. A migration from the office to the home, certainly. More significantly, a migration from the physical to the digital. We now live in 1440 square pixels surrounded by reality.
Security architects must meet the wave of this change while holding steadfast to our security principles.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
“Here are the materials, ideas, and forces at work in our world. These are the tools with which the World of Tomorrow must be made.” With that, the pamphlet announced the 1939 New York World’s Fair.
Alfonso Iannelli was right at home in the World of Tomorrow. Having gotten his start designing posters for vaudeville, Iannelli was also right at home with hype. Sunbeam Products was showcasing two of Iannelli’s designs, a toaster and a coffee pot: the T-9 Toastmaster and the C-20 Coffeemaster. These hardly seem innovative to today’s audience. But toasters were still emerging tech in the 1930s. And the C-20 popularized the vacuum coffee process, which even today connoisseurs consider the superior way to make coffee.
Most importantly, the C-20 and T-9 brought the Streamline Moderne style to life. The push towards modernism was a recurring theme in Iannelli’s work. And there it was, at the World’s Fair, courtesy of Sunbeam.
Unified in style and updated in technology, these appliances have parallels in security capabilities. We’re often updating existing capabilities along with designing and implementing new ones. For example, suppose we have an existing workforce identity and access management program. Suppose we also have customer identities within the ecommerce website. A common challenge is to bring these two programs up-to-date and centralize the way identity is secured.
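To make that centralization concrete, here is a minimal sketch assuming an OIDC-style setup with the PyJWT library. The issuer URL, audiences, and key material are hypothetical placeholders, not a reference design:

```python
# Sketch of the "unify" idea: the workforce portal and the ecommerce
# site both validate tokens issued by one central identity provider,
# rather than each running its own login stack.
import jwt  # PyJWT

CENTRAL_ISSUER = "https://id.example.com"  # hypothetical central IdP
IDP_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----..."  # the IdP's published signing key

def verify_identity(token: str, audience: str) -> str:
    """Shared verification path for every application."""
    claims = jwt.decode(
        token,
        IDP_PUBLIC_KEY,
        algorithms=["RS256"],
        audience=audience,
        issuer=CENTRAL_ISSUER,
    )
    return claims["sub"]  # one canonical identity, wherever the login happens

# The two programs differ only in the audience they expect:
#   verify_identity(token, audience="workforce-portal")
#   verify_identity(token, audience="ecommerce-site")
```

One issuer, one verification path, many audiences: a unified style underneath varied experiences.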
When developing a vision for the future, we naturally look for ways to implement the latest technology. It is equally important that we look for ways to standardize and unify the design for the experience.
Find the Streamline Moderne of identity and access management. First, find your vision.
After acclaim at the New York World’s Fair, Sunbeam put the coffee maker and toaster into production. The Coffeemaster would stay on the market nearly thirty years, wrapping up its run in 1964. Meanwhile? The Toastmaster was immortalized in a slice of Americana. On the cover of the Saturday Evening Post in 1948, central to the Norman Rockwell painting, there sat Alfonso Iannelli’s toaster. Moderne had arrived.
The starting point was the World of Tomorrow. Likewise, with your vision, the starting point is showcasing a pilot. Develop a proof-of-concept. Tie it to something larger. Hype it with all the gusto of a vaudeville poster.
Showcase your vision. Take this moment to gain early support and feedback.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
It was the early nineties when I first saw the photograph of a small robot wandering the desert. I would go on to buy the Robo sapiens book, which featured photographs from the same shoot along with more from Peter Menzel. Iconic. Simple. Inspiring and, most of all, achievable.
Robots in the 1980s and 1990s were incredibly complex and costly. Significant computing power and sensor tech were needed just to move a limb. The idea of walking robots was a dream; to some, a fantasy. Rodney Brooks had made some advances with Genghis and Attila. But these were still tens of thousands of dollars. Such robots were available to grad students and researchers, but tantalizingly out of reach for the rest of us.
Enter Mark Tilden. The robot in Menzel’s photograph, and the rest of Tilden’s menagerie in the 1990s, had a price tag of a few hundred dollars. Many were built from scrap parts and recycled electronics. This allowed for rapid prototyping, which in turn facilitated rapid innovation. The end result? Simple robots that worked. Inexpensive robots that walked.
The real lesson I took from Tilden, which I applied both when I built his style of robots and when I designed IT systems, was how to copy an idea. It works like this:
Identify the features that are providing the value
Deconstruct those into underlying principles and tasks
Emulate those tasks using the people and technology you have on hand
Act on those tasks to reproduce the effect; prototype and iterate to develop your own way of providing the value
Tilden called his process biomimicry because the stated goal was to mimic biological systems. More broadly, applying Tilden’s process to my framework, you can envision the steps as follows:
Identify = Insects walk with legs controlled by a core set of neurons oscillating in a loop
Deconstruct = an oscillator with feedback
Emulate = two, four, or six inverter oscillators, or in BEAM nomenclature, Bicore, Quadcore, or Hexcore (a toy sketch follows this list)
Act = Unibug 1.0, seen in the photograph below
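To ground the Emulate step, here is a toy behavioural model of a bicore in Python. It reproduces the alternating two-phase output that drives a walker’s left and right legs; it does not model the analog RC physics of Tilden’s circuits, and the tick counts are illustrative rather than measured:

```python
# Toy behavioural model of a BEAM bicore: two coupled inverter
# "neurons" whose outputs alternate, each side driving one leg motor.
# Phase lengths stand in for RC time constants; vary them to change the gait.

def bicore(period_left=3, period_right=3, steps=18):
    """Yield (left, right) motor states; exactly one side is on at a time."""
    left = 1                            # complementary outputs: right = 1 - left
    ticks = 0
    for _ in range(steps):
        yield left, 1 - left
        ticks += 1
        if ticks >= (period_left if left else period_right):
            left, ticks = 1 - left, 0   # the pair flips: one "step" of the walk

for l, r in bicore():
    print(f"left: {'ON ' if l else 'off'}  right: {'ON ' if r else 'off'}")
```

Uneven periods produce an uneven gait, which is exactly the kind of knob a builder gets to play with.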
I wager this is the same process Tilden used to build unthinkable robots for a fraction of the cost using parts he had lying around. Meanwhile, in security, we’re challenged to build security capabilities with little budget using what we have on hand. This is where my IDEA method shines.
Implementing any capability reference model or framework is beyond the capacity of most organizations. So? Don’t.
In October 2019, I was in Haifa visiting the Technion. There I saw robots which mimicked the snakes that populate the deserts of Israel. The same movements that carry a snake across desert sand are useful for navigating the rubble of collapsed buildings and industrial accidents in order to find survivors. My mind was instantly transported back to Mark Tilden and his spare-part creatures. It struck me that Alon Wolf’s bio-inspired snakes are the technological children of Tilden’s early experiments.
By following a process that closely mirrors my IDEA model, the engineers at the Technion had created a simple, efficient, and focused device which literally saves lives. They identified an unlikely source of inspiration and deconstructed that down to its most iconic element: the serpentine wiggle. They iterated until they were able to emulate this wiggle. Then they put their invention into action: rescuing folks who would otherwise perish.
We can do the same thing in our cyber security work.
Select your reference model. (Say, for an Identity and Access Management or IAM platform.) Use the process above to see where the value is coming from. (Let’s say, on-boarding and off-boarding.) Deconstruct these down to a few core objectives. Then, see what’s available in your organization in terms of tools and techniques. Run inexpensive and quick pilots to try out the ideas and form a plan.
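As a sketch of what such a pilot might look like, consider off-boarding. The HR export filename, its column name, and the disable_account stub below are hypothetical; substitute whatever feed and directory tooling you already have on hand:

```python
# Minimal off-boarding pilot: reconcile an HR termination export
# against the accounts we know about, and disable the leavers.
import csv

def load_terminated(path="hr_terminations.csv"):
    """Read the set of terminated usernames from a (hypothetical) HR export."""
    with open(path, newline="") as f:
        return {row["username"] for row in csv.DictReader(f)}

def disable_account(username):
    """Stub: replace with your directory's disable call
    (an LDAP modify, an identity provider API, or similar)."""
    print(f"would disable: {username}")

def offboarding_pilot(active_accounts):
    leavers = load_terminated() & set(active_accounts)
    for username in sorted(leavers):
        disable_account(username)
    return leavers

# Example run against a handful of known accounts:
#   offboarding_pilot(["asmith", "bjones", "cdoe"])
```

A morning’s worth of scripting tells you where the value is before you commit to a platform.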
Don’t act on all the things. Act on the right things.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
On a winter evening in 2014, Nikki Sylianteng got a parking ticket. This was in LA, where the city collects around $140 million from parking tickets annually. Sylianteng’s $95 ticket wasn’t significant, and it wasn’t a surprise. But what happened next was.
When designing security capabilities, we have two aspects to consider:
• The paths people take to complete work – number of steps, familiarity, and friction of each step
• The choices people make during work – number of choices, predictability, and cognitive load
I argue that security can improve people’s work. Make it easier. Make it faster. I often get pushback on this argument, and for good reason. A very real problem is that security teams don’t have good visibility into the paths people take and the choices they make. Even more worrisome, we don’t get good feedback when workflows are difficult or when security controls are making them worse.
Millions live in LA. Hundreds of thousands get tickets in LA. One person gave feedback with a solution.
Why? It is the same reason the workforce tolerates bad security controls: habituation. People get used to it. They become blind to the annoyances along the path they have to take to complete their workflow. Listen for these tell-tale phrases:
• That’s just the way the world works
• We’ve always done it this way
• Things could be worse
Those phrases indicate a workflow that security may be able to improve while also increasing security. There lies habituation. There lie unnecessary steps or choices. There lies an opportunity to improve the path. But we need a partner on the inside, someone who can see beyond the habituation, someone who has what’s called beginner’s mind.
This is what drew me to the story of Sylianteng and her parking ticket. (Listen to Nikki Sylianteng tell her story herself here.) She didn’t accept the ticket. She couldn’t accept the way the parking signs were. She launched To Park or Not to Park and radically redesigned the parking signs. She has since created tools that anyone can use to create their own simplified parking signs.
Imagine our security goal is parking enforcement. Our control, the parking sign. Four million people in LA see the signs. Some follow them. Others don’t. Only one person actually says this is a problem, and takes it on themself to correct the problem. Do we embrace this person? Well. We should. According to Nikki Sylianteng, her new approach “has shown a 60% improvement in compliance and has pilots in 9 cities worldwide.”
Find those with a unique combination of beginner’s mind and desire to make a change. Embrace them. They are your security champions, and by working together, leaps in adoption and compliance are possible.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.
Security is changing rapidly, and the COVID-19 pandemic hasn’t helped. A Cisco roundtable of chief information security officer advisers plotted the course for a secure future.
It’s time for collaboration, not control. CISOs can’t simply dictate security policy and expect users to fall in line. Not only will workers not fall in line with top-down security directives, they’re also likely to intentionally subvert them to get what they want out of the tech they use at work. “The more constraints placed on users, the more creative they become,” Goerlich said. Savvy users, Goerlich said, can be an asset to a cybersecurity team, helping to secure networks by collaborating with CISOs instead of working against them.
AI and machine learning: CISOs are right to be skeptical. “Training an AI model can take months,” Goerlich said, adding that a rapid change like the kind encountered with stay-at-home orders can throw machine learning models out the window. There were countless alerts and false positives thrown by AI-powered security software at the start of the pandemic, Goerlich said.
It’s time to embrace a passwordless future. “Passwords have had their time. Nowadays attackers don’t break in, they log in,” Archdeacon said. Goerlich said the transition will be driven by two things: what users expect from consumer devices (e.g., Face ID, Windows Hello) and new security standards like FIDO2 that make passwordless security practical.
I’ve taken to calling what happened in March and April “the Spring when the AIs went insane.” Everyone shifted from working in the office to working from home, and then some shifted back as offices began reopening. All of this occurred within three months. Typical general-purpose UEBA takes six months or more to train. The result was a significant increase in false positives, as the human response to the pandemic outstripped the ability of the UEBA AI/ML to learn. Everything was unusual. Everything was a threat. Everything generated an alert. In other words, the AIs went insane.
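To illustrate the mechanics with a deliberately toy example (this is a bare z-score baseline, not any vendor’s UEBA), imagine a model trained on months of nine-o’clock office logins. When the workforce scatters to home schedules overnight, every login lands outside the learned band:

```python
# Toy behavioural baseline: months of in-office login hours, then a
# sudden shift to work-from-home schedules. All numbers are invented.
import statistics

office_logins = [9.0, 8.8, 9.2, 9.1, 8.9, 9.0, 9.3] * 10  # ~9am, months of history
mean = statistics.mean(office_logins)
stdev = statistics.stdev(office_logins)

def is_anomalous(login_hour, threshold=3.0):
    """Flag logins more than `threshold` standard deviations from baseline."""
    return abs(login_hour - mean) / stdev > threshold

wfh_logins = [7.1, 6.5, 10.8, 11.5, 7.0, 12.2]  # home schedules scatter widely
alerts = [hour for hour in wfh_logins if is_anomalous(hour)]
print(f"{len(alerts)} of {len(wfh_logins)} logins alerted")  # all six alert
```

Rebuilding the baseline takes months of representative data, which a three-month upheaval never provides. Hence the flood of false positives.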
This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.
Seeing is Forgetting the Name of the Thing One Sees. A fantastic title, right? I was having a coffee meeting with a new product designer a few months back. As can happen, I was pretty wound up, going on about the need for usability and human-centric design in cybersecurity. She told me, “you need to read Seeing is Forgetting the Name of the Thing One Sees.”
The book covers conversations Lawrence Weschler, the author, had over three time periods with Robert Irwin. It gets to the heart of Irwin’s philosophy and approach. Irwin began as an abstract painter in the 1960s. He painted lines. He painted dots. But when displaying his work, Irwin noticed that the way the art was experienced was influenced by factors outside his paintings. Any of us who have seen optical illusions with colors and lines understand this instinctively and likely think nothing of it. But to Irwin, who was obsessed with the experience to the point of banning photography, this simply wouldn’t do. Irwin took to replastering and repainting walls, sometimes whole studios, where his art was displayed.
Robert Irwin insisted on controlling the entire experience and this led to the realization that the surroundings were just as important as the artwork itself.
We’ve been slow to come to a similar realization in cybersecurity. Consider the Web application. A thousand things have to go right for it to work, and a thousand things can go wrong from a security perspective. OWASP framed these issues up into a top 10 list. This simplified the work of developing a secure Web app. However, OWASP initially focused solely on the app itself. Of the six releases since 2003, only the last two have included the walls and studios, the vulnerable server components, in the OWASP Top 10. We’re slow to recognize the importance of the surroundings.
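As a small illustration of auditing the surroundings and not just the app, here is a sketch that checks a dependency manifest against an advisory list. The component names and advisories are hypothetical stand-ins; in practice the data would come from a vulnerability feed or a software composition analysis tool:

```python
# Sketch: flag known-vulnerable server-side components (the walls
# and studios) in an application's dependency manifest.

KNOWN_VULNERABLE = {  # hypothetical advisory data
    ("example-framework", "2.4.1"): "RCE in request parsing",
    ("example-xml-parser", "1.0.0"): "XXE via external entities",
}

def audit_manifest(dependencies):
    """dependencies: iterable of (name, version) pairs."""
    return [
        (name, version, KNOWN_VULNERABLE[(name, version)])
        for name, version in dependencies
        if (name, version) in KNOWN_VULNERABLE
    ]

manifest = [("example-framework", "2.4.1"), ("safe-lib", "3.2.0")]
for name, version, advisory in audit_manifest(manifest):
    print(f"{name} {version}: {advisory}")
```

A strong control on the app with unpatched components around it is a freshly painted canvas on a crumbling wall.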
Robert Irwin’s obsession with the surroundings transformed the artist from painter to landscaper. He has gone on to produce more than fifty large scale projects since 1975.
From the perspective of a designer, we must consider how the new capability fits into the existing cybersecurity portfolio and, more broadly, into the organization. We have to replaster the walls. We must make sure it fits in the studio. From the defensive perspective, this makes a lot of sense. A criminal faced with a strong control will look at the environment for other weaknesses and take advantage of gaps. From the usability perspective, Robert Irwin reminds us that how something is seen is as much about the thing as it is about the overall experience.
Security is not the control itself. Security is the surroundings.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.