Archive for the ‘Security’ Category

Microsoft Most Valuable Professional (MVP)

Posted by

Microsoft has recognized my work in Cloud Computing security with a 2017-2018 Microsoft Most Valuable Professional (MVP) award. I’ve long relied upon the guidance and advice from MVPs. It’s a fantastic program. I’m honored to now be included, specifically under Enterprise Security.

Hybrid cloud security: 8 key considerations

Posted by

Hybrid cloud should strengthen your organization’s security posture, not diminish it. But that doesn’t mean improved security is a default setting. While security fears are declining as cloud matures, security remains an ongoing challenge that needs to be managed in any organization. And a hybrid cloud environment comes with its own particular set of security considerations.


1. Ensure you have complete visibility.

Too often in modern IT, CIOs and other IT leaders have blind spots in their environments, or they focus too narrowly (or even exclusively) on their on-premises infrastructure, says cybersecurity veteran J. Wolfgang Goerlich, who serves as VP of strategic programs at CBI.

Now that companies and their end users can use hundreds of cloud-based apps, and multiple departments can spin up their own virtual server on an Infrastructure-as-a-Service platform, complete visibility across private cloud, public cloud, and traditional infrastructure is a must. A lack of visibility, says Goerlich, snowballs into much greater security risks than are necessary.

2. Every asset needs an owner.

If you lack 360-degree visibility, you probably lack ownership. Every piece of your hybrid cloud architecture needs an owner.

“A key tenet in IT security is having an owner identified for every asset, and having the owner responsible for least privilege and segregation of duties over the asset,” Goerlich says. “Lack of visibility results in a lack of ownership. This means, quite often, hybrid cloud environments have loosely defined access controls and often are without segregation of duties. Excessive permissions introduce risk, and unowned risk is unaddressed risk.”

Read the full article:

Hybrid cloud security: 8 key considerations

Securing Food Production

Posted by

As a rule, I like to work out an idea over a year. Explore this aspect. Explore this other aspect. Have discussions with folks in the know and folks who are learning, and come up with yet another take. And I do this, year after year, getting a firmer grasp on the theory and strategy behind a particular security problem.

This year? It’s been the operational technology behind food production. I’ve explored this three ways:

Food Fight. The first few Food Fights were interactive question-and-answer sessions at BSides events. These described the problems we see in the food production industry and explored how to assess them technically. I gave these sessions at BSides Indianapolis, BSides Chicago, BSides Cleveland, and BSides Detroit. Then, at CircleCityCon, I gave Food Fight on the main stage. To get a sense of this talk, watch BSides Cleveland’s recording.

Food for Thought. While Food Fight is more technical, Food for Thought is more governance-focused. The talk explores operational technology from the perspective of risk management. It describes shining a light on the OT risks and integrating the findings into an overall security program. I gave Food for Thought at the Central Ohio InfoSec Summit and the North American International Cyber Summit.

Guarding Dinner, or, Lunch. There are technical vulnerabilities. There are cybersecurity risks. So, now what? The Guarding talk covers several steps organizations can follow to prevent attacks on industrial controls, such as those found in food production. I use a threat model as the foundation and walk through the defense. I gave this talk at MCRCon and as the lunch talk at GrrCon. Watch the GrrCon lunch talk here.

I’m retiring the series of talks. It was a good way to have conversations around industrial control systems. And we’ve used the lessons learned, both in the original case study and in creating these slide decks, with several manufacturing clients. With that up and running and the knowledge out there, I’m moving on to my next area of interest.

Sneak peek: it’s strategically using encryption, building on past work with threat modeling and business analysis. Stay tuned.

Tower Defense

Posted by

This was originally posted on The Analogies Project and co-written by Claus Houmann. Please visit The Analogies Project for more IT security analogies and ideas.

Enterprise defense today is hard. Anyone reading the news regularly will have noticed a never-ending stream of attacks, breaches, and data lost to cyber criminals that either attack for financial gain or to cause a company harm.

The companies taking this threat seriously appoint someone to coordinate enterprise defense, and that someone usually receives a job title resembling Chief Information Security Officer, Information Security Director, or Manager. These very people then work to maximize the limited budgets companies have for security. And these very CISOs are also often the ones to take the blame when and if something happens. It is a tough position to be in, and one that warrants a new approach.

One such approach is to consider the job of the CISO analogous to playing tower defense games.

What is a tower defense game? Well, first off we have a map and a mission of protection. The attacks come in a predictable path that can be planned for, similarly to threat modelling and threat intelligence. When attacks come, in waves or over time, we have to choose among a number of different defenses to counter/shoot down these attacks.

Defenses have attributes in common with cyber security. Each defense has a cost, so we’ll have to start with cost effective defenses. Each defense has a likelihood of success or failure, so we’ll have to stack defenses to ensure success. And as the attack progresses, some defenses are successful for some tactics and ineffective for others. Careful planning, then, is needed to create an effective deployment of defenses along the path the attacks take.
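The stacking point can be made concrete with a little arithmetic. A minimal sketch, assuming defenses succeed or fail independently and using made-up success rates (neither the function nor the numbers come from the original post):

```python
# Illustrative: a single 70%-effective defense fails 30% of the time.
# Stacking three such defenses drops the failure rate to 0.3^3 = 2.7%.

def stacked_success(probabilities):
    """Chance that at least one defense in the stack stops the attack,
    assuming each defense succeeds or fails independently."""
    failure = 1.0
    for p in probabilities:
        failure *= (1.0 - p)
    return 1.0 - failure

print(round(stacked_success([0.7]), 3))            # one laser tower: 0.7
print(round(stacked_success([0.7, 0.7, 0.7]), 3))  # three stacked defenses: 0.973
```

The same arithmetic explains why no single tower wins the game: effectiveness comes from layering, at a cost per layer.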

As an example, suppose we start with the most cost-effective defense such as a laser tower. The laser tower will shoot down attackers, and as more and more attackers come, we’ll deploy more laser towers in strategic locations on the map. This resembles the CISO building an enterprise defense. However, the attackers will then evolve and start using flying attacks which your ground-facing laser tower cannot counter, at which point you’ll have to add to your laser towers or replace with anti-aircraft missile batteries. This is the CISO deploying new processes, people and tools to counter new attack vectors that were getting through in unacceptable numbers. And so it goes, with each round escalating the attacks and defenses.

In the tower defense game, you actually earn money by beating the earlier stage attacks, potentially giving you enough budget to build new defenses for the later stage attacks. For the CISO, this is analogous to using past successes and proper planning to build the business case for investing in the security program. The messaging becomes one of sustainably developing controls along established attack paths, understanding that programs must be maintained and developed to keep pace with crime.

In sum, let’s make real life a bit more like tower defense games. Let’s understand the path the criminals take, understand that no one defense is completely effective, and that no defensive strategy survives beyond a couple of rounds. We promise not to build an expense-in-depth defense (thanks again for this phrase, Rick Holland). Instead, playing tower defense is a way to build a capacity for defense proactively – and justify the security budget.

Channel 9: An Interview with Wolf Goerlich

Posted by

Join Technical Evangelist, Annie Bubinski, for an interview with Wolf Goerlich (@jwgoerlich), who presented this year at CodeMash 2016 about Security Culture in Development.

CodeMash has educated developers on current practices, methodologies, and technology trends in a variety of platforms and development languages for 10 years in a row. In honor of the 10th anniversary of CodeMash and the launch of Windows 10, Microsoft Academy College Hires teamed up to record interviews with 10 different CodeMash Speakers.


Why You Should Work in Information Security

Posted by

Rasmussen College reached out for advice on why information security is a great field to be in. My response is below. Click through to read more thoughts.


Expert Advice on Why You Should Work in Information Security … NOW


1. Working in information security is exciting, challenging and never-ending

“Information security is new unexplored territory … and this creates exciting and challenging work,” says J. Wolfgang Goerlich, vice president of consulting at VioPoint.

Information security professionals work on teams to develop tactics that will help find and solve unauthorized access as well as potential data breaches. A crucial part of the job in information security is keeping companies from having to deal with unwanted exposure.

The best information security teams, Goerlich says, are those that provide “consistent mentoring and cross-training.” He says professionals in this field must be constantly learning and sharing what they know.

“As the technology is shifting and the attacks are morphing, the career effectively is one of life-long learning,” Goerlich says.

IT Maturity: The First Ten Steps to a Secure Future

Posted by

Today’s security leaders drive change across business strategy, technology, compliance and legal, and operations. Yet even as the scope has widened, the fundamental questions remain the same: Where are we today? Where are our benchmarks and targets? How can we best close the gap?

A risk-based maturity approach is often employed to answer these questions. Such a model, when fully considered, comprises the following three components:

  • Controls Framework – this could be a top-level framework such as ISO 27001-27002 and NIST 800-53, industry frameworks such as NERC CIP and PCI DSS, or third-party frameworks such as the CIS Critical Security Controls
  • Maturity Framework – the most common is the Capability Maturity Model Integration (CMMI), however, various standards have specific maturity frameworks and some organizations have developed internal maturity models
  • Cultural Framework – the most common is the Security Culture Framework

All three frameworks together yield the deepest insights into the current state and provide the clearest answers about potential improvements. That said, an assessment can be performed using simply the controls framework to get a quick read. It is up to the organization to determine the level of effort to invest in the assessment. For the rest of this article, we will assume that all three frameworks are in play.

In a risk-based maturity approach, having determined the frameworks, the security leader and his team then complete the following ten-step process:

  1. Assess the security program’s controls and compliance to the control framework
  2. For each implemented control, assess the current people, processes, and technologies
  3. Perform both process validation (is it functioning as designed) and technical validation (is the control sufficient) to ensure the control addresses the risk
  4. For each implemented and functioning control, assess the maturity and identify improvements
  5. Document implemented controls that are not addressing the risk, as well as missing controls
  6. Analyze the organization’s capabilities and constraints for these missing controls (see our previous article on Action-Oriented IT Risk Management)
  7. Develop a project plan for immediate, short-term, mid-term, and long-term improvements in the control
  8. Create a communications plan and project metrics to ensure that these improvements change the culture as well as changing the security posture, using a cultural framework
  9. Execute the plan
  10. Re-assess the controls, maturity, and culture on a regular basis to adjust the plan
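To make the bookkeeping behind steps 1 through 5 concrete, here is a minimal sketch of scoring controls and flagging gaps. The control names, record fields, and CMMI-style 1-5 maturity scale are illustrative assumptions for the example, not a prescribed tool:

```python
# Each record captures one control from the chosen framework: is it
# implemented, did validation show it addresses the risk, and how mature
# is it (CMMI-style 1-5; 0 means not in place).
controls = [
    {"name": "Asset inventory",      "implemented": True,  "effective": True,  "maturity": 3},
    {"name": "Access reviews",       "implemented": True,  "effective": False, "maturity": 2},
    {"name": "Network segmentation", "implemented": False, "effective": False, "maturity": 0},
]

def find_gaps(controls, target_maturity=3):
    """Return controls that are missing, ineffective, or below target maturity."""
    gaps = []
    for c in controls:
        if not c["implemented"]:
            gaps.append((c["name"], "missing"))
        elif not c["effective"]:
            gaps.append((c["name"], "not addressing the risk"))
        elif c["maturity"] < target_maturity:
            gaps.append((c["name"], "below target maturity"))
    return gaps

for name, reason in find_gaps(controls):
    print(f"{name}: {reason}")
```

The resulting gap list is the raw material for steps 6 and 7: analyzing capabilities and constraints, then building the improvement plan.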

The above ten-step process establishes, maintains, and improves the quality of the risk management program and overall security posture. It baselines the current program and provides a roadmap for making process and technical improvements. Each improvement is tracked technically (does it work), procedurally (is it sustainable), and culturally (is it implicitly performed). Culture is key, turning the IT risk program into a set of behaviors adopted by the entire organization. When everyone does their part to protect the organization, without the need for excessive oversight and intervention, the security leader moves away from day-to-day supervision and toward strategy and value.

Controls, maturity, culture: three levers for advancing the security program and elevating the leader’s role.

Cross-posted at

Moving Tokens to the Point of Sale Can Slow Crooks

Posted by

Before Target, there was TJX, the major 2007 breach that impacted about 45 million credit cards. The crime and its prevention were basic, and provide a lesson for today’s retailers that are battling a new wave of data theft.

It is easy to forget, going on a decade later, how relatively simple the TJX crime actually was. TJX’s Wi-Fi was unprotected and the wireless network allowed access to the back-end IT systems that stored credit cards in the clear in centralized databases.

Several security improvements have been made since then, of course, but the most fundamental is shifting from using credit card information to tokens in those back-end databases. Using tokens as part of a process called format-preserving tokenization meant that criminals could not just walk out the front door with the database. PCI issued guidance on tokenization, many retailers adopted it, and for a while the security controls seemed to be working.
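The idea can be sketched in a few lines. In this toy example, a random token that keeps the shape of a card number (16 digits, real last four) replaces the PAN in storage; only the tokenization service's vault can map it back. Real PCI-guided tokenization uses hardened vaults or cryptographic schemes, so everything here is purely illustrative:

```python
import secrets

vault = {}  # token -> real PAN, held only by the tokenization service

def tokenize(pan):
    """Replace a 16-digit PAN with a random token that preserves its format."""
    token = "".join(str(secrets.randbelow(10)) for _ in range(12)) + pan[-4:]
    vault[token] = pan
    return token

token = tokenize("4111111111111111")
print(len(token), token[-4:])  # same shape as a card number, real last four
```

A criminal who steals the retailer's database gets only tokens; without the vault, they are worthless.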

Until, of course, Target took TJX’s place as the splashy retail breach. Approximately 40 million credit cards were stolen in November and December 2013. Target was using format-preserving tokenization. So what happened?

Unable to get readable credit card numbers from Target’s database, the criminals went after the point of sale systems. Here, the credit cards were available in the clear. It was only after reading the card information that the token was generated and passed on to the retailers’ back-end systems. On the one hand, the impact on the consumers between TJX and Target was roughly the same. On the other hand, the cost to the attacker was much higher. Rather than gaining access to one database, they had to gain access into 1,700 stores and get data back out of these secured networks.

If we want to stop attacks such as the Target breach, tokenization needs to be moved up to the point of interaction. Emerging payment methods like Apple Pay and Google Wallet do just that. The tokenization occurs when the consumer enrolls in Apple Pay or Google Wallet. The token is passed via Near Field Communication (NFC) to the point of sale and the card information is never directly exposed within the retailers’ systems. We just raised the criminal’s level of difficulty from one database to a thousand stores to millions of phones.

That is not to suggest that systems like Apple Pay and Google Wallet are the stopping point. As the ubiquity of NFC payments increases, so will the efforts to steal from consumers. Mass adoption is well underway, as demonstrated by the separate announcements late last year that McDonald’s and Subway are supporting NFC payments in over 40,000 locations. Not surprisingly, news has begun to surface about Apple Pay fraud, including attacks on the enrollment process and schemes to add wallets to stolen Apple devices.

Each action we take moves the criminals’ activities. The adoption of tokenization on back-end systems moved the criminals to the point of sale systems. The adoption of NFC moves the criminals to the consumer’s devices. New controls provide protection for a finite amount of time, but crime ultimately finds a way. Retailers who inspect the entire payment processing chain regularly, performing ethical hacking to find the cracks, are the retailers who avoid being the next splashy name in the news. Those that lag behind and only adopt the controls that fight the last breach remain criminals’ favorite marks.

Originally posted at:

Who Watches the Watchers? Firewall Monitoring

Posted by

Even in the face of being declared dead — often and repeatedly since 2004 — the firewall remains a viable security control. De-perimeterization simply leads to a specialization of controls between IT in the cloud and IT on the ground, with the firewall taking on new roles internally. Especially for payment processing, healthcare, and energy, the firewalled network is still a key element of today’s standards and regulations.

The trouble is, all firewalls share a weakness. It isn’t in the IP stack, firmware, or interfaces. No, the weakness is much more fundamental. All firewalls depend on proper configuration and are a single change away from a breach.

Barracuda Networks is well known for its Web Application Firewalls (WAF) which protect against attacks such as SQL injection and others listed in the OWASP Top 10. Back in 2011, however, a change process went awry and disabled Barracuda’s WAF protection for its own servers. Within hours, some tens of thousands of records were stolen via an injection vulnerability on a Barracuda website. All it took was a single misconfiguration.

Tools for firewall change management, such as FireMon Security Manager 8.0, have sprung up to address these concerns. Centralizing the audit log for all changes on all firewalls is great for looking back; however, as Barracuda experienced, a breach can happen within hours. IT admins require real-time detection and notification on changes, which is one of the many features FireMon offers. It can model complex changes and provide a what-if analysis cross-referencing the firewalls with an organization’s policy and compliance obligations.
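The core of change detection can be sketched simply: fingerprint the approved configuration and alert the moment the live one drifts. The rule text and the hashing choice below are illustrative assumptions for the sketch, not FireMon's actual mechanism:

```python
import hashlib

def fingerprint(config_text):
    """Stable fingerprint of a firewall configuration."""
    return hashlib.sha256(config_text.encode()).hexdigest()

# Approved baseline vs. the configuration currently running on the device.
baseline = fingerprint("permit tcp any host 10.0.0.5 eq 443\ndeny ip any any")
current  = fingerprint("permit tcp any host 10.0.0.5 eq 443\npermit ip any any")

if current != baseline:
    print("ALERT: firewall configuration changed since last approved baseline")
```

A single flipped rule, like the `deny` that became a `permit` here, is exactly the kind of one-change-away-from-a-breach event that real-time monitoring exists to catch.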

Firewalls will continue to be a foundational control for an organization’s internal IT. The control for the controller, the watcher for the watcher, is secure change management. This means change planning, detection, auditing, and alerting. Operationally, it also means tracking history and the ability to troubleshoot issues by comparing changes across time. For organizations running complex segmented networks, management tools like FireMon are invaluable for preventing breach by change.

Securing The Development Lifecycle

Posted by

One line. Ever since the Blaster worm snaked across the Internet, the security community has known that it takes but one line of vulnerable code. Heartbleed and iOS Goto Fail made the point again last year. Both were one-line mistakes. Even the Bash Shellshock vulnerability was made possible by a small number of lines of code.

To manage the risk of code-level vulnerabilities, many organizations have implemented security testing in their software development lifecycle. Such testing has touch-points in the implementation, verification, and maintenance phases. For example, an organization might …

Read the rest at