Archive for the ‘Security Information Management’ Category

Monitoring attack paths

SIEMs are used for establishing security controls and for responding to attacks. From my SimWitty days to my new role managing VioPoint’s SOC, we have drawn a distinction between the two. For controls-based activities, we think in terms of use cases: a SIEM use case defines a particular way the SIEM gathers and reports on data. For threat-based activities, we think in terms of abuse cases: an abuse case defines an attacker’s activity and how the organization would detect that activity. The use case drives value, and the abuse case protects against value loss.

Abuse Cases Map Possible Paths

An abuse case begins by describing the attacker and their objectives. Who are they? What are they after? What tactics and techniques are these attackers likely to use? From there, the abuse case defines the path the attacker would take to achieve their objectives. For example, a typical abuse case may include the following phases:

(1) External reconnaissance
(2) Initial breach
(3) Escalate privileges
(4) Persistence
(5) Internal reconnaissance
(6) Lateral breach
(7) Maintain presence
(8) Achieve objective

The result is a model of the modus operandi for a particular threat.
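
To make this concrete, here is a minimal sketch of an abuse case as a data structure, in Python. The actor, objective, and phase names are illustrative only and not drawn from any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class AbuseCase:
    """A threat actor, their objective, and the attack path they would follow."""
    actor: str                                  # who the attacker is
    objective: str                              # what they are after
    path: list = field(default_factory=list)    # ordered attack phases

# Illustrative example: a financially motivated external attacker.
card_theft = AbuseCase(
    actor="organized crime group",
    objective="payment card data",
    path=[
        "external reconnaissance",
        "initial breach",
        "escalate privileges",
        "persistence",
        "internal reconnaissance",
        "lateral breach",
        "maintain presence",
        "achieve objective",
    ],
)
```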

The Final Step In Monitoring

The final step in using SIEM to respond to attacks is to overlay the abuse case with the technical controls. How would we detect and prevent a particular tactic used in persistence, for example? What about the lateral breach phase in an attack path? Thinking through these controls lets us give ourselves credit where we are doing well, and identify opportunities to enhance the controls.
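
As a sketch of that overlay, the mapping below pairs each attack-path phase with the detective controls that cover it. The control entries are invented for the example; empty lists mark the monitoring gaps worth prioritizing.

```python
# Overlay the abuse case with technical controls: map each phase to the
# detective controls that cover it. All entries are illustrative.
coverage = {
    "external reconnaissance": ["perimeter IDS signatures"],
    "initial breach":          ["IDS alerts", "web server logs"],
    "escalate privileges":     ["Windows security event logs"],
    "persistence":             [],   # gap: no service/autorun monitoring yet
    "internal reconnaissance": ["NetFlow anomaly reports"],
    "lateral breach":          ["authentication log correlation"],
    "maintain presence":       [],   # gap
    "achieve objective":       ["database audit logs"],
}

gaps = [phase for phase, controls in coverage.items() if not controls]
print("Phases with no detective coverage:", ", ".join(gaps))
```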

To get the most out of a SIEM from a threat perspective, we create a set of high-level threat models and set up monitoring along the identified attack paths. A well-defined abuse case does just that.

Bypassing IDS/NSM detection

There are a number of ways an attacker can circumvent the protection of network security monitoring: evasion techniques to avoid detection, or diversion techniques to distract the defender. Here are a few such methods.

Protocol misuse. NetFlow and layer 1/2/3 statistics track hardware addresses, IP addresses, and TCP/UDP ports. Application layer detail is generally not analyzed or tracked: any packet sent over port 80 is assumed to be HTTP, anything over port 53 DNS, and so on. An attacker can send information over alternate ports to mask their activities. Alternatively, some protocols can be directly misused to carry out an attacker’s aims; for example, see the OzymanDNS app, which tunnels SSH and transfers files over the standard DNS protocol. When application layer tracking is not enabled, an attacker has a blind spot they can use.

Kaminsky, D. (2004, July 29). Release!, from Dan Kaminsky’s Blog: http://dankaminsky.com/2004/07/29/51/
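
One defensive counter is to sanity-check that traffic on a well-known port actually speaks the expected protocol. Here is a minimal sketch in Python: given the raw UDP payload of a packet captured on port 53, it tests whether the bytes plausibly form a DNS message. The thresholds are rough heuristics, not a full parser.

```python
import struct

def looks_like_dns(payload: bytes) -> bool:
    """Rough plausibility check against the fixed 12-byte DNS header."""
    if len(payload) < 12:
        return False
    # Header fields: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT (16 bits each).
    _, flags, qd, an, ns, ar = struct.unpack("!6H", payload[:12])
    opcode = (flags >> 11) & 0xF
    # Standard DNS uses opcodes 0-2; absurd section counts suggest non-DNS data.
    return opcode <= 2 and qd < 16 and an < 256 and ns < 256 and ar < 256

# A chunk of tunneled SSH traffic fails the check:
print(looks_like_dns(b"SSH-2.0-OpenSSH_5.1 over port 53"))  # False
```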

Payload obfuscation. An attacker can also create a blind spot by obfuscating (or disguising) their application layer traffic. Where application layer analysis is enabled, it typically relies on pattern matching, so the attacker only has to modify the packet or its payload enough that it no longer matches the pattern. Perhaps the simplest method is fragmentation, where the IP packet is broken into fragments. No single fragment matches the detection pattern, yet when the fragments reach the host computer, the host reassembles the packet and the attacker’s payload is delivered undetected.

Schiffman, M. (2010, February 15). A Brief History of Malware Obfuscation, from Cisco: http://blogs.cisco.com/security/a_brief_history_of_malware_obfuscation_part_1_of_2/

Timm, K. (2002, May 05). IDS Evasion Techniques and Tactics, from Symantec: http://www.symantec.com/connect/articles/ids-evasion-techniques-and-tactics
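
For lab use, fragmentation is easy to demonstrate with Scapy’s fragment() helper. This is a sketch only: the destination address and payload are placeholders, and sending the fragments requires root privileges on an isolated test network.

```python
# Split a payload across IP fragments so no single fragment contains the
# full pattern a signature would match on. Lab use only; address is a placeholder.
from scapy.all import IP, UDP, Raw, fragment, send

packet = IP(dst="203.0.113.10") / UDP(dport=8080) / Raw(load=b"GET /etc/passwd HTTP/1.0\r\n\r\n")
frags = fragment(packet, fragsize=8)   # break the IP payload into 8-byte fragments

for frag in frags:
    print(frag.summary())
# send(frags)  # uncomment only inside an isolated lab network
```

An NSM platform that reassembles fragments before pattern matching defeats this particular trick, which is why reassembly is part of a solid NSM solution, as noted below.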

Denial of Service. A solid NSM solution is one that performs application layer analysis, checks for fragmentation, and negates common obfuscation techniques. Even then, an attacker has options. Think of smash-and-grab crimes, where the criminal gets in, grabs what they can, and gets out quickly. The network equivalent is an attacker who triggers the NSM in one area to create a distraction while attacking in another. For example, an attacker launches a Denial of Service attack on a network link unrelated to the real target. Alternatively, the DoS targets the NSM infrastructure itself. If the attack is a quick raid on the victim’s network, such methods may pay off.

In sum, attackers can hide in the blind spots, cover their tracks, or create diversions.

Penetration testing lab

Security Information Management systems are meant to catch and report anything suspicious, right? So how do we test them? By creating a vulnerable network and exploiting it. The following tools can be used to build a testing lab for validating network security and web application security controls.

Attack systems:

Back|Track — The most widely used and most fully developed penetration testing distro. The main disadvantages are bloat and the lack of Hyper-V support. (Live disc; Slax; netsec)
http://www.backtrack-linux.org/

Matriux — The new kid on the block, with a faster and leaner distro than Back|Track and native Hyper-V support. (Live disc, Hyper-V; Kubuntu; netsec)
http://www.matriux.com/

Neopwn — A penetration testing distro created for smartphones. (Debian; netsec)
http://www.neopwn.com/

Pentoo — Gentoo meets pentesting. (Live disc; Gentoo; netsec)
http://pentoo.ch/

Samurai Web Testing Framework — Specifically targeted toward web application security testing. (Live disc; Ubuntu; appsec)
http://samurai.inguardians.com/

Target systems:

Damn Vulnerable Linux (DVL) — The classic vulnerable Linux environment. (Live disc; netsec)

De-ICE — A series of systems to provide real-world security challenges, used in training sessions. (Live disc; netsec)

Metasploitable — Metasploit’s answer to the question: now that I have Metasploit installed, what can I attack? (VMware; Ubuntu; netsec)

Damn Vulnerable Web App (DVWA) — A preconfigured web server hosting a LAMP stack (Linux, Apache, MySQL, PHP) with a series of common vulnerabilities. (Live disc; Ubuntu; appsec)
http://www.dvwa.co.uk/

Moth — From the people that brought you w3af, Moth is a preconfigured web server with vulnerable PHP scripts and PHP-IDS. (VMware; Ubuntu; appsec)
http://www.bonsai-sec.com/en/research/moth.php

Mutillidae — An insecure PHP web app that implements the OWASP Top 10. (Installer; appsec)
http://www.irongeek.com/i.php?page=mutillidae/mutillidae-deliberately-vulnerable-php-owasp-top-10

WebGoat — An insecure J2EE web app that OWASP uses for security training. (Installer; appsec)
http://www.owasp.org/index.php/Category:OWASP_WebGoat_Project

Nessus Tip: auditing services on non-standard ports

One security trick is to host network services on non-standard ports. For example, a web server may listen on 8080 or a database server on 3333, instead of TCP 80 and 3306 respectively. This is also an operations trick for scenarios with potential port conflicts, like clustering and NAT.

Non-standard TCP ports can cause vulnerabilities to be missed when scanning with Nessus. Nessus, by default, only checks known ports.

The workaround is to preload the plugins (for example, Apache and MySQL) and to set Nessus to check all ports. Under the scan policy preferences section, check “Probe services on every port” and “Thorough tests”. That will give you a more complete picture of the target’s security posture.
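
As a quick cross-check outside of Nessus, a simple banner grab can confirm what is actually listening on a non-standard port. The host and port below are placeholders; the probe assumes an HTTP service.

```python
import socket

def grab_banner(host: str, port: int, probe: bytes = b"HEAD / HTTP/1.0\r\n\r\n") -> bytes:
    """Connect, send a minimal HTTP probe, and return the first response bytes."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(probe)
        return sock.recv(1024)

# Confirm the service on TCP 8080 really is a web server (placeholder host).
print(grab_banner("192.0.2.10", 8080).decode(errors="replace"))
```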

For more information, see:

Using Nessus Thorough Checks for In-depth Audits
http://blog.tenablesecurity.com/2010/03/using-nessus-thorough-checks-for-indepth-audits.html

Risk Management is prevention and Security Information Management is detection

Risk Management (RM) comprises asset management, threat management, and vulnerability management. Asset management includes tying IT equipment to business processes, and performing an impact analysis to determine the relative value of the equipment based on what the business would pay if the equipment were unavailable and what it would earn if the equipment were available. Threat management includes determining threat agents (the who) and threats (the what). For example, a disgruntled employee (threat agent) performs unauthorized physical access (threat 1) to sabotage equipment (threat 2). Vulnerability management is auditing, identifying, and remediating vulnerabilities in the IT hardware, software, and architecture. Risk management tracks assets, threats, and vulnerabilities at a high level by scoring on priority (Risk = Asset * Threat * Vulnerability) and on exposure (Risk = Likelihood * Impact).
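
As a worked example of the two scoring formulas, here is a small sketch. The 1-to-5 scales and the specific values are invented for illustration.

```python
def priority(asset: int, threat: int, vulnerability: int) -> int:
    """Priority score: Risk = Asset * Threat * Vulnerability."""
    return asset * threat * vulnerability

def exposure(likelihood: int, impact: int) -> int:
    """Exposure score: Risk = Likelihood * Impact."""
    return likelihood * impact

# Disgruntled-employee example on 1-5 scales: high-value equipment,
# credible threat agent, known physical-access weakness.
print(priority(asset=5, threat=4, vulnerability=3))  # 60
print(exposure(likelihood=3, impact=5))              # 15
```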

Once prioritized, we can then move on to determining controls to reduce the risk. Controls can be divided into three broad methods: administrative (or management), operational, and technical. The two main forms of controls are preventative and detective. Preventative controls stop the threat agent from taking advantage of the threat; in the example above, a preventative control would be a locked door. Detective controls track violations and provide a warning system; for the disgruntled employee entering an unauthorized area, a detective control would be a motion detector. The resulting control matrix includes management preventative controls, management detective controls, operational preventative and detective controls, and so on for the technical controls.

Security Information Management (SIM) is a technical detective control comprising event monitoring and pattern detection. Event monitoring shows what happened, when, and where, from both the network and the computer perspectives. Pattern detection is then applied to look for known attacks or unknown anomalies. The challenge an InfoSec team faces is that there are simply too many events and too many attacks to perform this analysis manually. The purpose of a SIM is to aggregate the detective controls from various parts of the network, automate the analysis, and roll everything up into a single console.
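
The kind of analysis a SIM automates can be sketched in a few lines. The event format and alert threshold below are invented for the example; a real SIM applies this across millions of events from many sources.

```python
from collections import Counter

# Toy event stream: (source_ip, event_type) tuples aggregated from log sources.
events = [
    ("10.0.0.5", "logon_failure"), ("10.0.0.5", "logon_failure"),
    ("10.0.0.5", "logon_failure"), ("10.0.0.5", "logon_failure"),
    ("10.0.0.9", "logon_failure"), ("10.0.0.5", "logon_success"),
]

failures = Counter(src for src, kind in events if kind == "logon_failure")
THRESHOLD = 3  # alert when one source exceeds this many failures

for src, count in failures.items():
    if count > THRESHOLD:
        print(f"ALERT: {count} failed logons from {src} (possible brute force)")
```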

My approach to managing security for a business network is to use Risk Management top-down. This allows me to prioritize my efforts on preventative controls. My team and I can then dig deep into the security options and system parameters offered by the IT equipment that is driving the business. For all other systems, I rely on detective controls summarized by a Security Information Management tool.

In my network architecture, RM drives preventative controls and SIM drives detective controls.

Nmap output to XML and SQL

The Nmap port scanner has a handful of output options. It has its own human-readable format (-oN). If you want to work with the data, you can use XML output (-oX) or greppable text (-oG). The -oA option exports all three formats at once.

Why export to XML or grepable text? Typically, because you want to audit several IP hosts and store the results in a database.

A quicker method than writing your own parser is to use the Nmap::Parser module with a Perl script. This method comes courtesy of Anthony Persaud; his Nmap-Parser automates reading the XML output and writing to SQL tables. Both MySQL and SQLite are supported. Nmap-Parser is now up to version 1.19.
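
Nmap::Parser handles the parsing in Perl. For illustration only, here is a roughly equivalent sketch using Python’s standard library, assuming a scan saved with -oX scan.xml; the table schema is invented.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Load an Nmap XML report (nmap -oX scan.xml ...) into a SQLite table.
conn = sqlite3.connect("scans.db")
conn.execute("""CREATE TABLE IF NOT EXISTS open_ports
                (address TEXT, port INTEGER, protocol TEXT, service TEXT)""")

root = ET.parse("scan.xml").getroot()
for host in root.iter("host"):
    address = host.find("address").get("addr")
    for port in host.iter("port"):
        if port.find("state").get("state") == "open":
            service = port.find("service")
            conn.execute("INSERT INTO open_ports VALUES (?, ?, ?, ?)",
                         (address, int(port.get("portid")), port.get("protocol"),
                          service.get("name") if service is not None else None))
conn.commit()
```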

Use case: nightly IP scans of a subnet along with TCP scans of select hosts, as part of a security information management process.

LinkedIn Security Information Management Group

I have been working on a Security Information Management (SIM) system for many years, off and on. It started as a collection of WMI scripts that gathered information into flat files, initially only for system logs. More recently, I have moved to a SQL back-end and added network traffic capture and analysis. A few people have joined the effort, and we hope to have a software release within a year.

The SimWitty project has a website and a LinkedIn group. I hope you will come join us. We could use the help, particularly in C# development and SQL Server 2005 optimizations.

A look at Q1 Labs’ QRadar

Information security can be fundamentally described in terms of protection, detection, and response. One can say a system is secure if it takes an attacker a very long time to break the protection. For example, in encryption, cryptanalysts claim it will take thousands of years to break certain ciphers with large keys. By the time the protection is cracked, the information is no longer relevant or worthwhile. Time is an important benchmark. InfoSec professionals spend a lot of time bolstering protection mechanisms.

For an attack to be prevented, the time the protection buys has to outlast the value of the information, or exceed the time it takes to detect and respond to the attack. Take this month’s indictment of the computer criminals who stole some 41 million debit and credit card numbers from computer systems at TJX, OfficeMax, Boston Market, Barnes & Noble, and others. Had the retailers detected and responded to the attack sooner, the loss of information would have been much smaller.

In fact, detecting the attacker within 24 hours of their initial reconnaissance, and responding appropriately, can stop information from being stolen at all. Thus we should spend as much time on detection as we spend on prevention. That is often not the case, however, because detecting means watching what is going on in the system, and for any sizeable network today there is always more going on than one person (or even a team) can watch.

This is the need that security information management (SIM) consoles fill. They watch the network and boil the information down into the key statistics and events. Source data comes from event logs, network flow, and sensors. Performance is important here, as networks get rather busy (a typical 100-computer network sees about 20 events and 200 packets per second). SIM consoles then correlate these events and report on suspicious and irregular activities. Hence the criteria for SIMs are ease of use, log and network performance, correlation and detection abilities, and reporting depth and clarity. I recently had an opportunity to evaluate a product in this space: Q1 Labs’ QRadar.

The breadth of their offering immediately got my attention. QRadar provides all the detection of my personal patchwork of tools. I use a C# app with a SQL database for Windows log management. There is a Syslog system for the Unix/Linux logs. On the network side, Compuware’s NetworkVantage is running for top-level reports. Yet that does not allow me to drill down into the details, which is important for forensics, so I have another system that captures network traffic and dumps it into Wireshark for analysis. None of these provide real-time alerting; for that, I have deployed Snort and another off-brand intrusion detection system (the name escapes me at the moment). During investigations, I have to manually pull information out of all these systems and correlate it with pencil and Excel.

QRadar does this all automatically. The time savings are a real boost in productivity. Yet for all the functionality packed into the product, Q1 Labs has somehow managed to keep the interface clean and uncluttered. The main page is a dashboard I can customize with the feeds that matter to me: hosts at risk, number of attacks, top talkers, et cetera. The UI is very straightforward.

Performance is also up to snuff. QRadar’s pedigree includes Q1 Labs’ earlier network anomaly detection and monitoring tools, so that technology is rather mature. There are two collection options: NetFlow (switch taps) and QFlow (software sniffers). If your equipment supports NetFlow, use it, because that option provides the best performance. Both options perform within the 200 packets per second range and scale up to thousands of packets per second.

QRadar’s correlation engine is equally well developed. Forget doing analysis with a stack of printed logs and a sharpened pencil. This tool identifies known attacks quickly and generates few false positives on regular network traffic. There is also an ad hoc capability in the interface: I can specify content to look for, like somebody’s name, or a regular expression to match, like a credit card number, then tell QRadar to look for events and packets that match and pull back a report. QRadar can also return a packet capture that I can view in tools like Wireshark. This is handy for forensics after the attack has been detected and contained.
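
Q1 Labs has not published how QRadar’s matching works internally; as an illustration of the general technique, a content match for card-number-like strings might look like the sketch below. A production rule would add a Luhn checksum to cut false positives.

```python
import re

# Rough pattern for 16-digit card numbers with optional separators.
# Illustrative only; not QRadar's implementation.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

payload = "order ref 4111 1111 1111 1111 confirmed"
match = CARD_PATTERN.search(payload)
if match:
    print("possible card number in payload:", match.group())
```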

Of course, sometimes it is quicker to use the built-in reports. There are dozens to choose from. Each report can be run on demand or scheduled, and the output can be sent to the dashboard, saved as a file, or emailed out. This is very flexible and another time saver. Imagine, for example, running a report on failed logons every morning; the report then appears in your inbox, or can be sent to a ticket tracking system for auditing purposes. It is very straightforward.

QRadar still has some rough spots. The product has a chimera feel, the result of integrating log management and network management. The UI is inconsistent: some objects require single clicks, some double-clicks, and others right-clicks, and you often have to try all three to get the right result. The flexibility in reporting also leads to some odd results, as it is easy to set up circular loops as you click through reports for details. Yet these are minor issues that will surely be worked out as the product evolves.

With TJX and company in the news, most security vendors that come calling this month will speak of how their solutions could have curbed the damage. The real acid test is time. How much longer will the information be protected? Alternatively, how much quicker will an attack be detected? Protection mechanisms can only provide partial security, and once breached, the exposure grows dramatically with the amount of time an attacker has on your systems. Detection tools are required to compensate for chinks in the armor and to contain attacks. So ask the vendor the question, and weigh the response.

The best response I have heard comes from Q1 Labs: if there is an attack occurring on your systems, it will show up in QRadar first. Detection time drops significantly when network and host-based information is consolidated and correlated. Combining the top-level overview necessary for day-to-day management with the deep-dive details necessary for incident response and forensics puts QRadar ahead of the pack. QRadar is an excellent tool, and its reporting and digital forensics capabilities will definitely improve an organization’s security posture.

Rolling your own SIM

I have been looking at pay-to-play security information management tools; I reviewed Q1 Labs’ QRadar, Cisco MARS, and Novell’s offering. The costs are a tad high, particularly when I can do a lot of the basic collection with WMI scripts and C# code.

OSSIM (Open Source Security Information Management) is another option that I am looking into. Or maybe I will roll my own. Here are the key tools, with a sketch of the collection layer after the list:

Hosts:

Log monitoring: Kiwi syslog, Snare
Signature-based analysis: Nagios, OSSEC
Vulnerability assessments: Nessus

Networks:

Local monitoring: Arpwatch
Signature-based analysis: Snort
Statistical-based analysis: Spade

Correlation:

Splunk
SQL Server 2005 SSRS and SSAS
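
As a sketch of what the home-grown collection layer might look like, here is a minimal UDP syslog listener feeding a SQL back-end. The port and schema are illustrative; standard syslog uses UDP 514, which requires root to bind.

```python
import socket
import sqlite3

# Minimal syslog collector: receive UDP datagrams and store them in SQLite.
conn = sqlite3.connect("events.db")
conn.execute("""CREATE TABLE IF NOT EXISTS syslog
                (received TEXT DEFAULT CURRENT_TIMESTAMP, source TEXT, message TEXT)""")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5514))  # placeholder port; real syslog is UDP 514

while True:
    data, (source, _port) = sock.recvfrom(4096)
    conn.execute("INSERT INTO syslog (source, message) VALUES (?, ?)",
                 (source, data.decode(errors="replace")))
    conn.commit()
```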

Code or configure? Where is the best return on my time? I wager rolling my own will be a good learning experience, and the money saved can be invested in training materials and resources. Further, any analysis and cleanup will not go to waste if I change course: an off-the-shelf SIM tool will plug into a cleaned-up network just as easily as into an unmonitored one, if not more easily. I am going to keep tinkering for the time being.

That sums up my thinking at the moment.

Perimeter-less Security and Clouds on the Horizon

“Cloud computing is similar to what the tech industry has been calling “on-demand” or “utility” computing, terms used to describe the ability to tap into computing power on the Web with the same ease as plugging into an electric outlet in your home. But cloud computing is also different from the older concepts in a number of ways. One is scale. Google, Yahoo!, Microsoft, and Amazon.com have vast data centers full of tens of thousands of server computers, offering computing power of a magnitude never before available. Cloud computing is also more flexible. Clouds can be used not only to perform specific computing tasks, but also to handle wide swaths of the technologies companies need to run their operations. Then there’s efficiency: The servers are hooked to each other so they operate like a single large machine, so computing tasks large and small can be performed more quickly and cheaply than ever before. A key aspect of the new cloud data centers is the concept of “multitenancy.” Computing tasks being done for different individuals or companies are all handled on the same set of computers. As a result, more of the available computing power is being used at any given time.”

Clouds are on the horizon. I know very few data centers that host everything internally. Most, including my own, deliver a mixture of desktop applications, client-server applications, and hosted (e.g., cloud) web apps. The shift has an immediate impact on security planning. Information security architectures began with terminal-server applications and focused on strong perimeters. As apps moved to the desktops, the perimeter became a little wider and a little more porous. But we could still control the information by restricting what data was on the desktops and using technologies like end-point security. In fact, one might argue that many of our controls today are based on restricting information to the data center and keeping it off the desktops. The next major shift, which we are already starting to see, is moving the information from data centers to third-party hosting providers. This will only accelerate as young people, weaned on MySpace and Gmail, join the workforce. Another accelerant, which we may see in the next few years, is an economic downturn. Both sociological and economic changes are moving data from controlled perimeters to uncontrolled open spaces. The clouds on the horizon are coming nearer.

The open question is this: how do we build controls in an age of perimeter-less security?