Archive for the ‘Blogs’ Category

The Imposter Syndrome Network Podcast


I’m on the Imposter Syndrome Network with Zoe Rose and Chris Grundemann this week. I’m emphasizing trust and relationships in the imposter syndrome conversation. “If they trust you, you can have a degree of freedom to interact, explore, to get it right. But if they don’t, it doesn’t matter how good you are. They are going to doubt you.”

I also cover my imposter syndrome coaching framework: good imposter syndrome, bad imposter syndrome, and systemic imposter syndrome. The good is where you feel the pressure to up your game because you’re in a room with many brilliant people. The bad is where you let imposter syndrome prevent you from taking opportunities, where it gets in the way of you going into that room. Finally, there is the systemic challenge, where the reason you feel like an imposter is that the culture, the people in the room, are actively making you feel like you don’t belong.

“It’s intrinsic, as leaders, to help people move towards good imposter syndrome and recognize and address systemic. If everyone on your team is being a jerk to a few coworkers, doesn’t matter how much you can tell them ‘be confident, you’re okay, you belong here.’ They’re not going to feel it, and it’s really on you as the manager to address that.”

This is my advice to leaders helping people through imposter syndrome. Understand which of the three — good, bad, or systemic — someone is facing, and act accordingly. There is always a reason someone is feeling the way they do, and if it’s systemic, it’s on us to address it.

Imposter Syndrome Network

Have a listen here: https://www.buzzsprout.com/2016832/11567691


To listen to other podcast interviews, click to view the Podcasts page or the Podcasts category.

Things Wolfgang Goerlich Says – Design Monday


Alright, alright. This feels a bit strange. But I’m collecting my folksy sayings on cybersecurity leadership and design in one place. I’ll update this over time.

Good Security

  • Good security is usable security.
  • Good security gets out of the way of users while getting in the way of adversaries.
    • Good security frustrates attackers, not users.
  • Good security first delivers a business outcome and then, as a result, increases security.
  • Good security supports changing maturity.
  • Good security projects leave people hungry to play again.

Cloud Security

  • Ownership is not a security control.
  • Security is not what we control, it is what they do.

Defense and Offense

  • When work looks like work, work gets done.
  • Risk isn’t the language of the business. Story is.
  • Security happens where mankind meets machine.
  • The more constraints placed on users, the more creative they become.
  • All a better mousetrap does is breed better mice.


Always remember, friends: The Cyber War will not be won with platitudes.

— Wolf

Applying Public Health Risk Management to the NIST Risk Management Framework (RMF) – Introduction


Everyone has a pandemic story. Here’s mine.

Before the lockdowns, before we were all wearing masks, before travel ground to a halt, I was in Switzerland. It was a good time: I had a presentation to give about securing DevOps, and after a couple of days at the event, I took my wife on a rail trip around Europe. We were celebrating the completion of her recent book manuscript, which she had submitted to her publisher on our way out of town. Our plan was to travel through mid-March.

Then we got the call. We were in Budapest. My employer telephoned to say that a travel ban was going into effect at midnight on March 13th. With very little notice, we returned to our hotel, threw our clothes into suitcases, rushed to the train station, and took an overnight train to Prague. By the time we got to Prague, they had an idea of how to get us as far as Paris. So we took a flight to Paris. We landed in Paris and there was bedlam. Everyone was trying to get off the continent. Somehow? We were able to get the very last seat on the very last flight to the States. We made it home two hours before the travel ban.

After that, everything shut down. We saw the risks and did our part to bend the curve. A month went by, then three months went by, then six months went by. And each time I was preparing for events, certain that things would reopen in a couple of months. Surely this was going to end. Surely this was going to wrap up.

And a weird thing happened to me. After watching the Covid numbers day in and day out, I found myself very habituated to the risk. After waiting for months, even though the numbers were frankly worse than they were in the beginning of the pandemic, I figured the risk must have subsided. Surely there was no longer a monster outside of our cave. It must have wandered away by now, right? There’s no way that we are still in danger. The caveman brain in all of us does curious things when it comes to risk management.

That sense, that nagging sense, that cognitive dissonance, that tension between logically knowing the risks but emotionally feeling everything must surely be fine, that led me to study how risk was being managed and communicated during the pandemic.

I’ve been the person providing numbers to the executive team from my security team. I’ve been the one to explain, “I know the numbers are the same and I know everything feels like it should be okay, but we really are in a bad spot.” But the pandemic gave me the experience of the other side: hearing the numbers and struggling to interpret the data to make informed decisions. There’s a great deal of overlap, I believe, in these two domains, cybersecurity and healthcare.

What can we learn from behavior science and from the psychology of our shared experience over two years? How can we take these lessons back to cybersecurity?

On the two-year anniversary of taking the last flight home from Paris, I’m going to look at risk management in a blog series. I’ll detail some of what we learned in the pandemic about how people process risk. I’m going to share it here with you in the hopes that collectively, as information security and risk management practitioners, we can learn something about the nature of human psychology and thereby do a better job of protecting our organizations.

This is part one of a nine-part series. I welcome any and all feedback. Let’s learn together.

Identify improvements as security matures – Design Monday


In writing the book Rethinking Sitting, Peter Opsvik manages to do with chairs what we should do with cyber security: study the item in the wider context of how people interact with it.

Peter Opsvik’s critique is that furniture design isn’t “particularly concerned with the needs of the sitting human body.” Many rituals, he believed, such as kneeling to pray or standing to sing, are driven by a need to relieve the body and compensate for poor seats. Opsvik considered how the positioning of a chair, say in a kitchen or dining area, can make a person feel more or less connected, more or less important. He also spent considerable time thinking about how sitting changes as children grow into adults.

Design spans time frames: an experience lasting an hour, a stage in life lasting years, a lifetime. It spans contexts: personal, communal, societal.

We struggle with this in cyber security. Take, for example, the break glass account. Right then. We set up an account with administrative-level access, seal the password in an envelope, and stuff the envelope in a vault. But what happens when most administrators are working remotely? Fair point. Let’s move the password from a physical vault to a password vault, and share the vault with our backup person. But what happens when the vault goes down? How about when the person resigns and leaves for another company? How do we handle the longer lifecycle of this seemingly simple control?

Peter Opsvik’s answer to the lifecycle question is the Tripp Trapp chair. The chair is well-made, long-lasting, and stable. Simply change the seat and footrest, and the chair accommodates the user from infancy to adult. Five sets of adjustments as they mature.

The chair reminds me of the five-stage maturity models. Security capabilities move from initial through repeatable, defined, and capable, and finally to optimized. To design a Tripp Trapp security control, think through how to reconfigure the control to support the evolving capability. Ideally, simplify these adjustments down to a small number of items.

What’s the seat and footrest in our break glass example? I suggest the credential storage and credential access. That is, how we set it up, and how the person handling the emergency breaks the glass.
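To make the adjustments concrete, here’s a minimal sketch, in Python, of how a team might write down the seat-and-footrest settings for the break glass control at each maturity stage. The stage names follow the model above; every storage and access method listed is an illustrative assumption, not a prescription.

```python
# A sketch of a break glass control designed for five maturity stages.
# The storage and access values are illustrative assumptions only.

BREAK_GLASS_STAGES = {
    "initial":    {"storage": "sealed envelope in a physical safe",
                   "access":  "retrieve the envelope, sign the logbook"},
    "repeatable": {"storage": "shared entry in a password vault",
                   "access":  "vault checkout by the on-call admin"},
    "defined":    {"storage": "vault entry with a documented runbook",
                   "access":  "checkout opens a ticket and notifies the team"},
    "capable":    {"storage": "vault entry rotated automatically after use",
                   "access":  "checkout requires a second approver"},
    "optimized":  {"storage": "just-in-time elevation, no standing account",
                   "access":  "time-boxed grant, fully audited"},
}

def break_glass_config(stage: str) -> dict:
    """Return the 'seat and footrest' settings for a given maturity stage."""
    return BREAK_GLASS_STAGES[stage]

print(break_glass_config("repeatable"))
```

Two knobs, five stages. The point is not the specific methods but that the adjustments are identified up front, so maturing the control means changing a setting rather than redesigning it.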

Tripp-Trapp-Tresko is Norwegian for Tic-Tac-Toe. In the kids’ game, like chairs and like security, you succeed by thinking ahead. “The best sitting position,” Opsvik once said, “is always the next position.” Start with minimum viable security. Plan for future stages early, and identify the adjustments we can make. Good security controls support an evolving capability maturity.


The Tripp Trapp Chair from Stokke.

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.

Adoption of hardware-based security keys


Google last week revealed that it was coordinating efforts with global partners to hand out free USB security keys to 10,000 elected officials, political campaign workers, human rights activists and journalists, and other users considered to be at high risk of getting hacked.

Excerpt from: Tech giants encouraging adoption of hardware-based auth keys

“Whenever a major organization makes a major announcement bolstering their security controls, it sparks conversation and movement in the broader industry,” agreed Wolfgang Goerlich, advisory CISO at Cisco Secure. “Google’s announcement that it is enrolling 10,000 people in authenticating with strong security keys will make it easier to explain a similar need in other organizations.”

And this isn’t the first such corporate endorsement of hardware-based authentication. Among the companies using FIDO’s standards for Universal 2nd Factor (U2F) authentication keys is Yubico, which like Google has been working with DDC to provide its hardware-based authentication keys to campaigns from both major parties.

Read the full article: https://www.scmagazine.com/analysis/physical-security/tech-giants-encouraging-adoption-of-hardware-based-auth-keys


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.

Remote Work Drives Continued 2FA Adoption


79% of people used two-factor authentication at least once in 2021, with 72% regularly using the technology, as remote work, social media, and online retail spur demand.

Excerpt from: Security Fears & Remote Work Drive Continued 2FA Adoption

SMS texts continued to be the most-used type of two-factor authentication, with 85% of people using that 2FA technology. Verification emails are the second most common type at 74%, while passcodes issued by mobile authentication apps came in third with 44%.

Companies need to educate consumers more on the pitfalls of SMS text messages as a second factor, Goerlich says. More than half of people surveyed would choose SMS as the second factor for a new account, while less than 10% would choose a mobile passcode application and 7% would use a push notification. SMS tied with security keys, such as YubiKey and other technology, for highest perceived security and topped the list for usability.

“There is a clear mismatch between what the survey respondents are using in terms of security and what researchers have found and identified in terms of security,” he says. “It makes sense that SMS is rated high in usability, and there is a really strong familiarity with the factor, but a lot of issues have been identified by researchers.”

Attempts to educate people on security problems with SMS should be careful, however, not to dissuade them from using two-factor authentication at all, Goerlich stressed.

Read the full article: https://www.darkreading.com/authentication/security-fears-remote-work-drive-continued-2fa-adoption


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.

Security Architecture Principles – Design Monday


Clack. Clack. Two hands. Hunt and peck typing. Clack. Clack. The beautiful red Valentine typewriter. Clack. Dieter Rams at his desk. This is the opening shot of the Rams documentary. What is he typing? Ten principles for good design.

An entire series of articles could be written applying Dieter Rams’ principles to cybersecurity. This is not that. Instead, let’s look to Rams as an example of creating and living with principles.

What makes a good architecture principle? It makes a statement. “Good design is honest,” Dieter Rams might type out. “Buy not build” is one I often encounter. A good architecture principle has a rationale. “It does not make a product more innovative, powerful or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.” For buy not build, our development resources are valuable and must be deployed only in areas where there is a clear advantage and where an existing solution doesn’t satisfy the majority of our needs. Finally, a good principle makes an impact. It has implications for later decisions.
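As a sketch only, and assuming a team that keeps its principles in a registry of some kind, the statement-rationale-implications structure might be captured like this in Python (the registry itself and the wording are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ArchitecturePrinciple:
    """One record in a hypothetical registry of architecture principles."""
    statement: str                      # the principle, stated plainly
    rationale: str                      # why the team holds it
    implications: list = field(default_factory=list)  # decisions it constrains

buy_not_build = ArchitecturePrinciple(
    statement="Buy not build",
    rationale=("Development resources are valuable and must be deployed only "
               "where there is a clear advantage and no existing solution "
               "satisfies the majority of our needs."),
    implications=[
        "Evaluate existing solutions before approving any build effort",
        "A build proposal must name the clear advantage that justifies it",
    ],
)
```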

“I like orderly confusion very much. But this is neither orderly nor properly confused,” Dieter Rams says about an hour into the documentary, while evaluating objects against his aesthetic and principles. “Others may like it. I do not.” A set of good architecture principles enables the team to make decisions. These decisions may be very different from other security teams, even other security teams in similar industries and at similar times. The success of a security architecture depends not upon the individual decisions. Rather, success depends on the consistency across decisions, initiatives, and capabilities. Consistency through principles.

Consistency poses a challenge. The same thing means different things to different people. For architecture principles to work, the team must debate implications and applications. An example of this comes in the documentary when Mark Adams walks Dieter Rams through the new Vitsoe headquarters. For background, Adams is the managing director of Vitsoe, the firm which produces Rams’ furniture. “I want it to be completely honest that that is a fire barrier,” Adams explains. But is it honest? And does the honesty balance against the other principles? After a moment of thought, Rams says simply: “It’s a little bit irritating.” After some back and forth, they decide to sand it and blend it in. (In the photo below, you can see the resulting gray fire panels.) The moment captures this discussion of application. Principles live through debate.

Be principled. Develop a small set of architectural principles to guide the technical design. Live with them. Argue them. Disagree and commit. Apply and iterate them. But be principled.

Vitsoe London Headquarters, Photography by Vitsoe.

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.

Pilot with security chaos engineering – Design Monday


No security capability operates as intended. Even with perfect data, perfect planning, and perfect foresight? Small differences between our assumptions and reality quickly add up to unpredictable situations. Security faces the proverbial butterfly flapping its wings in Brazil and producing a tornado in the United States.

The term “butterfly effect” was coined by Edward Lorenz, a meteorologist and father of chaos theory. It all started when the limitations of computing led to limitations in forecasting. It’s a pattern that still plays out today, leading some to point to the need for chaos engineering.

Edward Lorenz was working on one of the first desktop computers: the Royal McBee LGP-30. Desktop in the sense that the computer was, in fact, the size of a desk. It also cost nearly half a million dollars in today’s US currency. We’re talking state-of-the-art vacuum tube technology. A teletype machine, the Friden Flexowriter, provided both input and output. It printed at a glacial ten characters per second.

These constraints of his machine inspired Edward Lorenz. But I’m getting ahead of myself.

So there Lorenz was, modeling the weather. To save memory, as he ran the calculations, he printed the results to charts. At ten characters a second, this was tedious. To save time, he printed to three decimal places.

The LGP-30 would hum and pop while it calculated a value to six decimal places. The Flexowriter would bang and punch out the result to three decimal places. Calculate 0.573547 and print 0.574. Again and again, line by line, while Lorenz waited.

This shouldn’t have been a big deal. The differences between the calculated results and printed values were quite small. But when Lorenz retyped the numbers and reran the models, he noticed something extraordinary. Weather on the original chart and the new chart would track for a day or two. But pretty soon, they’d differ widely, unexpectedly. What was once a calm day suddenly turned into a tornado. All due to the tiny differences in the source data. Edward Lorenz had discovered chaos theory.
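You can reproduce the effect in a few lines. The sketch below uses the Lorenz equations he later published, not his original weather model (an assumption of convenience here), and follows two runs that differ only in rounding the starting value to three decimal places, just as the Flexowriter did:

```python
# Two runs of the Lorenz system: one from a six-decimal starting value,
# one from the same value rounded to three decimals. Watch them diverge.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler integration step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

full = (0.573547, 1.0, 1.05)              # what the LGP-30 calculated
rounded = (round(full[0], 3), 1.0, 1.05)  # what the Flexowriter printed: 0.574

for step in range(1, 3001):
    full = lorenz_step(*full)
    rounded = lorenz_step(*rounded)
    if step % 1000 == 0:
        print(f"step {step}: difference in x = {abs(full[0] - rounded[0]):.4f}")
```

The two runs track closely at first, then separate completely. A difference in the fourth decimal place grows until the charts describe entirely different weather.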

“Complexity. It’s extremely difficult to predict all the outcomes of one seemingly small change,” David Lavezzo of Capital One wrote in the book Security Chaos Engineering. “Measurement is hard.” And even when we have metrics, which we rarely do, these small changes compound and lead us into unforeseen territory.

You can’t just rely on the temperature numbers predicted at the beginning of the week. You have to actually go outside. See if you need a jacket. See if you should be wearing shorts. The same is true of security. We can’t rely on our long-range forecast. We need to check the reality on the ground. Regularly. From there, adapt according to our principles.

We future-proof our security architecture by choosing versatility. We design for adaptability by prioritizing principles over rules-based approaches. But when we get to implementation, we should expect that we’ve missed something. Expect that people and applications and devices and butterflies have behaved in ways a few decimal places beyond what we had considered.

We need some applied chaos to test and harden our implementation. The emerging domain of security chaos engineering is providing some useful techniques. Inject some evidence. Change some settings. Run some exploits. Validate that the security controls continue to operate. Security chaos engineering provides a way to explore the unexpected.
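Here is a minimal sketch of what one such experiment might look like, assuming hypothetical helper functions for your own tooling (every callable named in the example run is a stand-in, not a real API):

```python
import time

def run_chaos_experiment(name, precondition, inject, hypothesis, rollback,
                         timeout_s=300, poll_s=5):
    """Confirm the system is healthy, inject a controlled change, and check
    whether the hypothesis holds within the timeout. Always roll back."""
    assert precondition(), f"{name}: environment not healthy; aborting"
    inject()
    try:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if hypothesis():
                return True   # the control behaved as expected
            time.sleep(poll_s)
        return False          # the control missed it; investigate
    finally:
        rollback()            # restore the environment no matter what

# Hypothetical run. Hypothesis: stopping the endpoint agent on a test host
# raises an alert within five minutes. All four callables are stand-ins.
# run_chaos_experiment(
#     "agent-stop-detection",
#     precondition=lambda: agent_running("test-host-01"),
#     inject=lambda: stop_agent("test-host-01"),
#     hypothesis=lambda: alert_exists("agent-heartbeat-lost", "test-host-01"),
#     rollback=lambda: start_agent("test-host-01"),
# )
```

The pattern matters more than the particulars: state the hypothesis up front, limit the blast radius to a test host, and restore the environment whether the control passes or fails.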

But ultimately, the take-away from Edward Lorenz is one of humility. We simply don’t know what will come. With the data we have, we can’t predict what will happen. Decades of advances in computing since the Royal McBee LGP-30 haven’t changed this equation. When implementing security, pilot with chaos to prepare for the unforeseen.

Royal McBee LGP-30 replica by Jürgen Müller

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.

We got it wrong! – Great Lakes Security Conference


This session is on all the things we all say all the time, about all the things we all know. Security through obscurity is bad. Defense in depth is good. Stop clicking things. Next generation is bad, or maybe, next generation is good. The list goes on and on. The resulting rules of thumb are sometimes contradictory and often misleading. With war stories and anecdotes, we’ll explore what happens when teams run security by tribal knowledge instead of research and reason. Spoiler alert: they get pwned. Turns out, we were wrong.

Presented for Great Lakes Security Conference (GLSC) 2021.

Watch more videos on my YouTube channel.

Change the game – Design Monday


Cyber security can be thought of as a game. Offense and defense. A set of motions and movements to score points, or to prevent the other team from scoring. Red team and blue team. A series of tactics and techniques to break in, or to detect and prevent such action. This thought is a good starting point. But we shouldn’t simply work on being better at the game. We need to change it.

Take basketball. When basketball debuted at the Berlin Summer Olympics in 1936, the game looked much the same as it does today. Sure, there have been subsequent rule changes. But the ball and hoop, well, those are classic.

Except.

During the first fifteen years of basketball, no one thought beyond the basket. Peach basket, to be precise. James Naismith famously nailed a peach basket to a gymnasium wall and thus invented the game. But it was the whole basket. After points were scored, a ladder would be used to fetch the ball. Sometimes, they used a stick to push the ball out. For fifteen years.

Why?

One reason is it’s hard to see beyond things. Functional fixedness. Another reason? We’re hardwired to add rather than subtract. Given the choice between adding a fetching stick and removing the bottom of the basket, we almost always choose the stick.

This human tendency has been studied. (See: People systematically overlook changes.) There’s even a book on the topic, Subtract: The Untapped Science of Less. The book looks at it across practically every domain, from science to business to medicine and more. Except cyber security. Perhaps we can make it into a future edition.

Imagine people using IT in the organization. Imagine that’s the game we’re seeking to win. Get a sense of the players and the ball using business impact analysis. Get a sense of the movement and plays using journey mapping. Now imagine ways to secure this.

Your instinct will be to add. Pause. Look around for the peach baskets which can be replaced with hoops. Find something to subtract that improves the security.

Then change the game.

Peach baskets: the basket in basketball.

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.