
Archive for the ‘Risk Management’ Category

Empathy, kindness, and behavior economics on We Hack Purple Podcast


Tanya Janca invited me onto her We Hack Purple Podcast to discuss vulnerabilities beyond code. Along the way, we cover behavioral economics and the importance of empathy in cybersecurity design. “Kindness is the original security principle” makes an appearance, as we talk about how all this and more applies to building better products.

Our conversation was sponsored by the Diana Initiative, a conference committed to helping all those underrepresented in Information Security.

 


To listen to other podcast interviews, click to view the Podcasts page or the Podcasts category.

Applying Public Health Risk Management to the NIST Risk Management Framework (RMF) – Introduction


Everyone has a pandemic story. Here’s mine.

Before the lockdowns, before we were all wearing masks, before travel ground to a halt, I was in Switzerland. It was a good time: I had a presentation to give about securing DevOps, and after a couple of days at the event, I took my wife on a rail trip around Europe. We were celebrating the completion of her recent book manuscript, which she had submitted to her publisher on our way out of town. Our plan was to travel through mid-March.

Then we got the call. We were in Budapest. My employer telephoned to say that a travel ban was going into effect at midnight on March 13th. With very little notice, we returned to our hotel, threw our clothes into suitcases, rushed to the train station, and took an overnight train to Prague. By the time we got to Prague, they had an idea of how to get us as far as Paris. So we took a flight to Paris. We landed in Paris to find bedlam: everyone was trying to get off the continent. Somehow, we were able to get the very last seats on the very last flight to the States. We made it home two hours before the travel ban.

After that, everything shut down. We did our part. We saw the risks and did our part to bend the curve. A month went by, then three months went by, then six months went by. And each time I was preparing for events, certain that things would reopen in a couple of months. Surely this was going to end. Surely this was going to wrap up.

And a weird thing happened to me. After watching the Covid numbers day in and day out, I found myself very habituated to the risk. After waiting for months, even though the numbers were frankly worse than they were in the beginning of the pandemic, I figured the risk must have subsided. Surely there was no longer a monster outside of our cave. It must have wandered away by now, right? There’s no way that we are still in danger. The caveman brain in all of us does curious things when it comes to risk management.

That sense, that nagging sense, that cognitive dissonance, that tension between logically knowing the risks but emotionally feeling everything must surely be fine, that led me to study how risk was being managed and communicated during the pandemic.

I’ve been the person providing numbers to the executive team from my security team. I’ve been the one to explain, “I know the numbers are the same and I know everything feels like it should be okay, but we really are in a bad spot.” But the pandemic gave me the experience of the other side: hearing the numbers and struggling to interpret the data to make informed decisions. There’s a great deal of overlap, I believe, in these two domains, cybersecurity and healthcare.

What can we learn from behavior science and from the psychology of our shared experience over two years? How can we take these lessons back to cybersecurity?

On the two-year anniversary of taking the last flight home from Paris, I’m going to look at risk management in a blog series. I’ll detail some of what we learned in the pandemic about how people process risk, and share it here in the hopes that collectively, as information security and risk management practitioners, we can learn something about the nature of human psychology and thereby do a better job of protecting our organizations.

This is part one of a nine-part series. I welcome any and all feedback. Let’s learn together.

SC Magazine: Rethinking Risk


It’s time to rethink risk – both how to operationalize it and how to define it. With so many incompatible views of risk among stakeholders throughout an enterprise, it’s hardly surprising that so many organizations struggle to get beyond a checklist security mentality.

Excerpt from: Rethinking risk

“Start with a listening tour: What (those other LOB executives) care about, what their business objectives are,” says J. Wolfgang Goerlich, advisory CISO of Duo Security. “You must interpret and explain security needs as business outcomes. Security can no longer be about avoiding the bad things. It must align to the business direction.”

Read the full article here: https://www.scmagazine.com/home/security-news/features/rethinking-cyber-risk/

Wolf’s Additional Thoughts

I’ve been vocal about my disillusionment with risk management. It has its place, to be sure. It was my starting point, and I gave a number of talks advocating risk management, say 2008-2015, including one for the Society of Information Risk Analysts (SIRA). Risk management techniques are excellent at prioritizing efforts within the security function. But having built programs around risk management, I’ve come to see the limitations.

People don’t think in terms of risk. Risk treatment tables don’t resonate with our stakeholders. High or low is meaningless without context. People don’t get it.

People also don’t act on risk. Wendy Nather coined the term “cheeseburger risk management” for this, and I love it. People will eat cheeseburgers even though they know the risk. They’ll eat right up until they have a heart attack. Only then will they get serious about what they eat, and as evidence shows, that discipline lasts only a short time.

Evan Schuman’s coverage of these difficulties is a great place to begin questioning where and how we use risk in cybersecurity. I’m continuing to explore alternatives for communicating with the business, getting buy-in, and driving action in my security principles design series.


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.

Valuing Assets – Design Monday


The staring red camera and chillingly calm voice of HAL 9000 inspired and unnerved a generation of IT people. It’s well known that Arthur C. Clarke drew inspiration from IBM to name HAL. But where did the 9000 come from? This traces back to the first Italian mainframe: the Elea 9000. Look at photos of the Elea 9000 and the HAL 9000 in Discovery One, and you will see some visual similarities too. The Elea 9000 had a certain beauty, owed in part to Ettore Sottsass.

Ettore Sottsass was a design consultant for the Elea 9003 in the 1950s. In the 1960s, Sottsass would design the iconic Valentine typewriter. From the heights of technology, Sottsass turned his talent to furniture. Chairs. If you’re thinking that’s an odd choice, you’re not alone. Many asked him about this shift. “A chair must be really important as an object, because my mother always told me to offer my chair to a lady,” Sottsass reportedly said. And so he focused on chairs.

There is a lesson here for security. A fundamental practice is valuing an asset to determine what is at risk. Of the ways to determine value, the most common are what the asset generates for the organization and what it would cost the organization to replace it, both measured in dollars. That’s great for computers and typewriters, but what about chairs? Put a different way, quantitative approaches overlook the significance people place on their tools. Securing by what we can measure in dollars leads to decisions that are blind to human factors.

“I’m sorry, Dave. I’m afraid I can’t do that.” I get chills every time I hear that line. There’s something cold about mechanically making decisions based purely on numbers. When introducing human-centric design to our security programs, we must consider all the ways people determine value. Remember the subjective. Remember the chairs.

Olivetti Elea 9003. Photograph by yewknee.co.

This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.

Phone phreaking visits Apple Pay’s authentication


There is a new attack on Apple Pay involving an old phreak tactic. Read about it here:

Has Your Phone Number Been Stolen? Another Apple Pay Fraud Hits the Nation
https://www.mainstreet.com/article/has-your-phone-number-been-stolen-another-apple-pay-fraud-hits-the-nation

 

The fraud works by knowing the mobile carrier and number the target uses for device identification, contacting the carrier to port the number to a phone the criminal has, then using the number to authenticate and add the criminal’s device to the victim’s Apple Pay account. Illegally porting telephone numbers has been around for some time. Criminals are re-using the old technique to subvert Apple Pay’s device authentication mechanism.

What can consumers do to protect themselves? First, use a telephone number that is not well known for device authentication. Many people use their home landline phone number, which is often easy to discover. Second, inquire with the carrier about their policies around authorizing porting and notifying customers. Third, keep a close eye on Apple Pay for unfamiliar devices.

The ways banks can protect consumers are as old as the tactic of stealing phone numbers. It comes down to account monitoring and fraud detection. Today’s behavioral analytics are as adept at spotting misused accounts linked to Apple Pay as they are at spotting misused credit cards. Banks and other financial institutions must review their anti-fraud programs to ensure they cover emerging payment processes like Apple Pay.

All in all, this is an example of an old tactic being applied to a new payment processing system. When developing new systems, it always pays to consider how previous attacks might apply.

Starbucks gift card fraud


Starbucks is in the news as criminals abuse its online services through fraudulent gift card purchases. On the surface, the issue appears to be about consumers’ passwords and the poor practices around their use. There is more to the story, however, and I would argue two deeper concerns are the real issue. The first is in how emerging payment systems are monitored and secured. The second is in how online services are developed and maintained.

The Starbucks security hole is simple enough. The criminal breaks into the coffee-loving victim’s account by guessing their password or using the password reset features. They then load a Starbucks gift card using the victim’s stored payment information, and transfer that card to themselves. This is usually automated so that several gift cards can be filled and stolen in a short period of time. The attack normally ends only when the victim receives notices on the gift cards and resets their Starbucks password.

Starbucks reportedly processed $2 billion in mobile payments last year. That’s a serious amount of business, and it requires a re-adjustment of the company’s risk appetite to reflect the target its business has become. Moreover, as retailers and emerging payment systems develop bank-like functionality (funds transfer, cards), they need to start thinking more like banks. Anti-fraud techniques such as monitoring behavior for unusual activity are a prime example. Another is offering consumer protections such as reimbursements (at this point, Starbucks defers consumers to PayPal or their credit card company). When transactions run into the billions, it’s time for mobile payments to offer credit-card-equivalent security for consumers.

The other aspect of consumer protection is the online service itself. In a secure development lifecycle, threat modeling is one of the first steps. The goal is to look at the functionality being developed and identify ways it could be abused. With this in mind, security and privacy requirements can be defined. After building its services, Starbucks could have performed scenario-based penetration tests to ensure the controls met the requirements and that the requirements prevented the threat. Given that gift card fraud is well known and that the controls in place are lacking, it’s clear that Starbucks did not complete these steps as part of its development program.

In summary, yes, consumers need to watch their password hygiene and monitor their accounts. But there’s more to the story. As companies build online services that handle billions in payments, they must mature their processes for handling fraud and building applications. We need credit-card-equivalent security for transactions. Developers need a secure development lifecycle to prevent their services from being abused. Starbucks is today’s example of an organization falling short in both areas and leaving consumers with the tab.

Cross posted from: http://content.cbihome.com/blog/starbucks_giftcard_fraud

Action-Oriented IT Risk Management



Last week at Chicago’s Camp IT, I presented on IT risk management and concluded by focusing on the intersection of risk and action. This is a CIO-centric approach that reprioritizes risks based on an organization’s constraints and IT capabilities. My Chicago talk led to several good discussions, and this article quickly summarizes the method and how you can apply it to your risk management program.

First, let’s briefly recap risk rating by impact and likelihood. This qualitative IT risk management approach enumerates concerns and then assigns each a 1-5 score for impact to the organization and for the likelihood of the threat being realized. The practicality of such an exercise depends in large part on how the values are derived, with more mature programs using a weighted approach that factors in the organization’s mission, objectives, and mandates. Once completed, a risk rating table is generated, comparable to the one below.

[Figure: risk-effort-table]
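As a minimal sketch of this rating step, here it is in Python. Every risk name and score below is invented for illustration:

```python
# Qualitative risk rating: score each concern 1-5 for impact and for
# likelihood, then rank by the product. Every entry is hypothetical.
risks = {
    "Unpatched internet-facing servers": (5, 4),  # (impact, likelihood)
    "Lost unencrypted laptops": (3, 5),
    "Phishing of finance staff": (4, 4),
    "Insider data exfiltration": (5, 2),
}

ratings = {name: impact * likelihood
           for name, (impact, likelihood) in risks.items()}

# Highest-rated risks first: the security owner's laundry list.
for name, rating in sorted(ratings.items(), key=lambda kv: kv[1],
                           reverse=True):
    print(f"{rating:>2}  {name}")
```

Sorting by the product alone is exactly what produces the long, undifferentiated list of recommendations discussed next.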

The advantage, for a security owner, is in immediately seeing which concerns, once mitigated, would produce the largest reduction in the organization’s overall risk. We can then produce the annual audit phonebook with a long laundry list of recommendations.

The disadvantage, for the IT owner, is in not factoring in effort. For example, suppose one risk rated 15 takes 12 months to resolve and another takes 3 months. Yet both are listed side-by-side and prioritized equally by the security owner. The trouble stems from the risk rating exercise not bubbling up quick wins and prioritized actions.

Let’s revisit the risks by looking at constraints and capabilities. First, we brainstorm a list of two or three constraints that would slow the risk treatment process. The list will vary from time to time, and from organization to organization. For the purpose of this article, let’s go with:

  • Culture – the current team and organizational culture accept the change
  • Budget – the budget is available to implement the change

Next, let’s list the capabilities. Again, this list will vary. A good starting point is:

  • Available staff – the people implementing the change are available and skilled
  • Available tech – the technology needed to address the risk is available
  • Compliance – the compliance team is engaged in assisting with the change

With this list, we can now weight each constraint and capability by its impact on execution. The weighting is typically developed in a roundtable discussion with the stakeholders. For example, we may decide:

  • Culture = 20%
  • Budget = 10%
  • Available Staff = 35%
  • Available Tech = 25%
  • Compliance = 10%

With the factors and weights decided, we can talk through each risk treatment, ranking each factor at 1 (difficult), 3 (moderate), or 5 (achievable). The risk treatment score is then the weighted average, reflecting how actionable the control is. For example:

DSS05.07) Monitor infrastructure for security events

  • Culture = 20% = 3
  • Budget = 10% = 5
  • Available Staff = 35% = 3
  • Available Tech = 25% = 5
  • Compliance = 10% = 3

3.7 = (20%*3) + (10%*5) + (35%*3) + (25%*5) + (10%*3)
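That arithmetic is simple to reproduce. Here is a minimal sketch using the weights and rankings from the DSS05.07 example above:

```python
# Weighted actionability score for one risk treatment. Each factor is
# ranked 1 (difficult), 3 (moderate), or 5 (achievable), and the score
# is the weighted average across the factors.
weights = {"Culture": 0.20, "Budget": 0.10, "Available Staff": 0.35,
           "Available Tech": 0.25, "Compliance": 0.10}

# DSS05.07) Monitor infrastructure for security events
rankings = {"Culture": 3, "Budget": 5, "Available Staff": 3,
            "Available Tech": 5, "Compliance": 3}

score = sum(weights[factor] * rankings[factor] for factor in weights)
print(f"Treatment score: {score:.1f}")  # Treatment score: 3.7
```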

Having reviewed the mitigations, we can plot the risk treatment options along one axis of a chart and the previously defined risk ratings (impact * likelihood / 5) along the other. The completed table, shown below, aligns the risks the CISO is concerned about with the areas the CIO has the capabilities to address.

[Figure: risk-treatment-table-1]
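One way to pair the two axes is sketched below. The risks and scores are invented for illustration; the normalization follows the impact * likelihood / 5 formula above:

```python
# Pair each risk's rating (impact * likelihood / 5, yielding a 1-5
# scale) with the weighted actionability score of its treatment.
# A high rating plus high actionability marks the quick wins.
portfolio = [
    # (risk, impact, likelihood, treatment_score) -- all hypothetical
    ("Unpatched internet-facing servers", 5, 4, 3.7),
    ("Lost unencrypted laptops", 3, 5, 4.2),
    ("Insider data exfiltration", 5, 2, 2.1),
]

plotted = [(name, impact * likelihood / 5, action)
           for name, impact, likelihood, action in portfolio]

# Quick wins first: actionable treatments for high-rated risks.
for name, rating, action in sorted(plotted, key=lambda r: r[1] * r[2],
                                   reverse=True):
    print(f"rating={rating:.1f} action={action:.1f}  {name}")
```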

Action-oriented IT risk management is a straightforward extension to an assessment that can greatly improve the resulting mitigations. By being CIO-centric and prioritizing based on an organization’s constraints and IT capabilities, we accelerate time-to-value and risk reduction. It’s one more simple way to bridge the gap between audits and results.

Cross posted at CBI: http://content.cbihome.com/blog/cbi-action-oriented-it-risk-management

Shelfware and Constraint Analysis


Risk management and, indeed, all security activities do not happen in a vacuum. We need buy-in and time from business end-users, IT professionals, and more. Yet all too often, we plan these activities without doing a joint constraint analysis. The result is work that is understaffed and simply does not get done.

A recent survey highlights this condition. “According to Osterman Research, of the $115 per user respondents spent on security-related software in 2014, $33 was either underutilized or never used at all. In other words, in an organization of 500 users, more than $16,000 in security-related software investments was either partially or completely wasted.” IT staff “was too busy to implement the software properly, IT did not have enough time to do so, there were not enough people available to do so, or IT did not understand the software well enough,” the report states.

Personally, I am not ready to throw the IT staff under the bus. Let’s hold up a mirror. When was the last time we planned risk mitigation while taking into account IT’s time and knowledge? When was the last time we included training and staffing in our business case?

All too rarely. It is time to take constraints into account.

Upcoming keynote: CampIT


I am keynoting the upcoming Camp IT on Enterprise Risk / Security Management.

 

Donald E. Stephens Convention Center
5555 N River Rd
Rosemont, IL 60018

February 5, 2015
9:00am-5:00pm

Calculating Your Acceptable Level of Risk

With so many potential risks, it can be difficult to determine which an enterprise can live with, which it can’t, and which it can cope with once reduced to an acceptable level. Determining an acceptable level of risk should be revisited whenever there is a significant change in a business’s activities or environment, such as updating policies and training or improving security controls and contingency plans. Risks need constant monitoring to ensure the right balance between risk, security, and profit.

In this session attendees will learn how to build a framework to define an acceptable level of risk.

Risk management circa 2018


This past Tuesday, I was out at Eastern Michigan University speaking with information assurance students. The professor invited me to visit his Risk-Vulnerability Analysis class and asked that I give my Practical Risk Management talk.

Practical Risk Management was a talk I had given widely in 2007-2008, describing my efforts to stand up a risk management practice for a financial services firm. The case study covers aspects that I found went surprisingly well, and aspects that I found were surprisingly hard. Since five or six years had passed, I had expected to have to significantly revise the slide deck. Clearly, lots has changed, right?

Surprisingly, no.

The areas we wrestled with last decade remain challenging for clients and organizations today. I found little had changed. On the bright side, that fact simplified my revisions to the slide deck for Eastern Michigan University. On the down side, of course, that means we continue to struggle.

Why? In part, it is because of the seductive simplicity of the Risk = Asset * Vulnerability * Threat formula. Find the values, plug them in, multiply, and prioritize. Easy, right?
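To see the appeal, here is the formula as a toy calculation. Every system and number below is invented:

```python
# Risk = Asset value * Vulnerability * Threat: find the values, plug
# them in, multiply, and prioritize. All figures are hypothetical.
systems = [
    # (name, asset value in dollars, vulnerability 0-1, threat 0-1)
    ("Payment gateway", 1_000_000, 0.4, 0.9),
    ("HR portal", 200_000, 0.7, 0.5),
    ("Test lab", 50_000, 0.9, 0.2),
]

scored = sorted(((name, asset * vuln * threat)
                 for name, asset, vuln, threat in systems),
                key=lambda pair: pair[1], reverse=True)

for name, risk in scored:
    print(f"${risk:>10,.0f}  {name}")
```

The multiplication is the trivial part. Producing trustworthy inputs for each term is where the difficulty lies.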

Easy, except asset management and valuation is tricky. Few organizations have a reliable hardware and software inventory. Fewer still have automated audits and the ability to see, immediately, when the inventory changes. This matters as such changes are often an indicator of compromise. Few organizations, too, can tie assets to business processes and provide financial valuation on impact. The question of what we have and why it matters is elusive.

And vulnerability management? Putting the dependency on an accurate asset inventory aside, vulnerability management is not quite a slam dunk either. True, software such as Qualys takes the grunt work out of the process. Automation can also shift from annual assessments to continuous vulnerability assessments. Yet the real difficulty in vulnerability management continues to be driving the remediation efforts. Thus we see many vulnerability management programs with tens of thousands of open vulnerabilities.

Threat management has made some progress. In 2008, my chief concern was a lack of threat intelligence and information on what actual attackers were using to achieve actual objectives. Today, we have better information sharing (ISACs, CERTs). We also have services like Risk I/O that map vulnerabilities to threat intel feeds. Tighter integration goes a long way towards prioritizing on realistic risks. Nevertheless, as evidenced by penetration test results, the gaps in asset and vulnerability management, combined with control weaknesses and architectural security concerns, offer the motivated threat actor a variety of ways to compromise an organization.

Five years of time, with not much progress to show for it. This has me saving a copy of my slide deck to give again in 2018.

What changes can we make to obsolete my Practical Risk Management talk? Simple. We can beef up and automate asset management. We can shift from the technical aspects of vulnerability management to the social aspects, facilitating remediation efforts with other departments. Finally, we can more tightly integrate threat intel with vulnerability management and begin doing regular red team assessments to identify architectural and control concerns. In three broad strokes, we can make a dent in the technical aspects of risk management and get ourselves out of the weeds.

Asset management. Vulnerability management. Threat management. Three areas, three programs, three ways to make a significant difference between now and 2018. The clock is ticking. Let’s get this done.