
Archive for the ‘Systems Engineering’ Category

BizTech: Securing Remote Work in a Transformed World


“Now that everyone has shifted to work from home, it’s as if we’ve got 10,000 branches,” Goerlich said. “So the techniques we use aren’t scaling, the approaches we use aren’t scaling, we don’t have the manpower, the technology to possibly secure 10,000 branches.”

Excerpt from: Securing Remote Work in a Transformed World

That added complexity means security approaches that defined work styles for decades now have to be reconsidered or retired. The moat, in other words, needs a rethink.

“We start to talk about traditional IT as being this environment that had a hard-candy shell around it, or a castle with a moat,” said Kevin Swanson, a Microsoft Surface Specialist. “And you protected all of these outside threats from the things that were important to your business on the inside.

“That dynamic is changing.”

Read the full article: https://biztechmagazine.com/article/2020/08/cdw-tech-talk-securing-remote-work-transformed-world


This post is an excerpt from a press article. To see other media mentions and press coverage, click to view the Media page or the News category. Do you want to interview Wolf for a similar article? Contact Wolf through his media request form.

Get over being afraid of our shadow


This week, I shared an article (Cloud-Based DevOps) and it generated a healthy debate about shadow IT. The article explains the rationale behind one manager taking on the expense and the risk of providing IT for his business unit. The resulting debate highlights what I see as the changing role of in-house IT.

Ask any IT leader what their top concerns are. I guarantee staffing and budgets are right up there. We all have too much to do, and too little to do it with. IT teams have been in this mode for some years now.

See, the traditional IT function was one based on command-and-control. It had to be. IT systems were complex, they were high maintenance, and they were new. I do not mean “new” like “new iPhone”. I mean new like how the automobile was new when it was introduced as a horse-and-carriage replacement. IT was involved because we had to be. We knew how to make it work and, if it was not implemented correctly, we felt the failure in terms of increased support tickets and higher maintenance costs.

Management strategies are very much based on this old-school model. The right way to do things, many will advocate, is for the business to come to IT. IT then designs, develops, and deploys the solution. Bonus points if there is a budgeting charge-back system in place.

Ask any IT leader what top trends are concerning them. Several are likely to be mentioned. Consider IT consumerization, shadow IT, bring-your-own-device, cloud computing, and so on. Each of these is a risk because it involves the end-users making choices for themselves, without the need for IT to be involved at all.

In the Cloud-Based DevOps article, the author is spending tens of thousands of dollars per month. He and his team are provisioning and managing much of the IT they need.

Let’s put these side-by-side. IT has more work than the team can accomplish. The business is saying IT is too bogged down to handle the request. IT is saying that they lack the budget. The business is saying that they can afford it.

Why not simply enable the business to take on non-core IT? As counter-intuitive as it may seem, embracing the do-it-yourself trends is vital to modern IT management.

Today’s IT systems are “new” like the “new iPhone”. And today’s IT end-users often know as much, if not more, about their IT needs and their IT systems than the IT team. With an entire generation having grown up with the technology, today’s employees are not looking for approval. They are looking for guidance.

Embracing shadow IT is a tactic that can allow for better service. As that frees up time, the IT department can focus ever more attention on the core business systems. Let’s make it easier, not harder, for non-IT departments to provision IT. Let’s also extend IT governance, through consultative education, to the business unit making the purchasing decisions. Embrace and extend.

IT consumerization can be securable and supportable. The IT department, however, will need to adjust to a post command-and-control world. We can choose to fight the end-users. Of course, such moves are exactly what drive people like the article’s author to proceed without notifying IT. Or we can choose to make the business a smarter consumer. This latter choice is the one, in my opinion, that enables smarter uses of IT and begins to truly shift the IT department to a service organization.

The choice is ours. I say, let’s get over our fear of shadow IT.

Sticky PowerShell and Command Prompt settings


I have been doing more with PowerShell recently. One thing that troubled me was that my color, font, and size settings were not sticky. I would get PowerShell configured just right. But when I would launch PowerShell from the command line or use it to debug in Visual Studio, my personal settings would be lost.

The cause is that the settings in a console session are applied to the shortcut link. The settings are not applied universally to the user profile. Suppose you start a console and click the console icon to set the font, layout, and color. These settings are saved in the shortcut that was used to launch that particular console.

To persist across sessions, you need to update the default values in your user profile. This is stored in the registry under HKEY_CURRENT_USER\Console.

The command prompt is under: [HKEY_CURRENT_USER\Console\%SystemRoot%_system32_cmd.exe]

The PowerShell prompt is under: [HKEY_CURRENT_USER\Console\%SystemRoot%_system32_WindowsPowerShell_v1.0_powershell.exe]

Here are the values:

ColorTable00 == dword storing the background color in blue, green, red.
ColorTable07 == dword storing the foreground color in blue, green, red.
FaceName == string storing the name of the font, such as “Lucida Console”.
FontFamily == dword set to 0x36 for fixed width fonts.
FontSize == dword set to 00, then the font size in hex, then 0000. For example, 10-point (0xA) font is 000A0000.
FontWeight == a dword defaulting to 0x190.
WindowSize == a dword in coord format: 2-byte height and 2-byte width. For example, 0028 0078 is 40 characters tall by 120 characters wide.
ScreenBufferSize == dword in coord format. Note the WindowSize cannot exceed the ScreenBufferSize.

I prefer a retro green screen console with large fonts and a large buffer. Here are the values from my registry:


Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Console\%SystemRoot%_system32_WindowsPowerShell_v1.0_powershell.exe]
"ColorTable00"=dword:000f2000
"ColorTable07"=dword:0000f000
"FaceName"="Lucida Console"
"FontFamily"=dword:00000036
"FontSize"=dword:00100000
"FontWeight"=dword:00000190
"WindowSize"=dword:00280078
"ScreenBufferSize"=dword:012c0078


Grep versus Select-String Speedtest


How fast is grep? Reasonably fast. Over the weekend, we were discussing on Twitter a post from Mike Haertel, the original developer of GNU grep. In the post titled “why GNU grep is fast”, Mike described the algorithm grep uses. He also provided this excellent advice: “#1 trick: GNU grep is fast because it AVOIDS LOOKING AT EVERY INPUT BYTE. #2 trick: GNU grep is fast because it EXECUTES VERY FEW INSTRUCTIONS FOR EACH BYTE that it *does* look at.” “The key to making programs fast is to make them do practically nothing.”
This had me wondering about how PowerShell’s Select-String stacks up. Richard Minerich (@rickasaurus) brought up a good point: compiled C code is generally faster than C# code. As PowerShell rests on .NET, we can make an assumption that grep should be faster than Select-String. Mark Boltz (@mtezna) suggested running several tests of both and taking an average to get a sense of how Select-String stacks up.

If Select-String was significantly slower, then a good weekend project might be to write a faster parser. I do have the occasional free weekend and I was very curious. Today, I performed such a test. Read on to find out what I learned.
Test Parameters

I generated sample files using a sample dictionary file. Each file contained sentences of random length (5-25 random words). One in ten sentences contained the word “key” at a random location within the sentence. There were eleven sample files: 1,000 sentences, 10,000 sentences, 20,000 sentences, and so on to 100,000 sentences. (You can download the resulting test files here: grep-select-string-test.zip).

Each search was performed seven times. System.Diagnostics.Stopwatch was used as the time source. The total milliseconds elapsed was used as the time measure. The minimum time and the maximum time were dropped. The time recorded was the average of the remaining five tests.
I used the latest GNU grep for Windows, version 4.2.1, released 2012-12-18. The command executed for grepping the file was: grep "key" "file1000.txt"
For PowerShell, I used version 3 (build 6.2.9200.16398). The PowerShell equivalent of the grep command was: Select-String -Pattern "key" -Path .\file1000.txt

The host operating system is Windows Server 2008 R2 SP1 with the latest hotfixes.
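
For the curious, the timing harness looked roughly like the sketch below. This is a simplified reconstruction rather than the exact script I ran; the pattern and file name match the test above.

# Run a command seven times, drop the fastest and slowest runs,
# and average the remaining five (times in milliseconds).
function Measure-AverageTime {
    param([scriptblock]$Command, [int]$Runs = 7)
    $times = foreach ($i in 1..$Runs) {
        $sw = [System.Diagnostics.Stopwatch]::StartNew()
        & $Command | Out-Null
        $sw.Stop()
        $sw.Elapsed.TotalMilliseconds
    }
    $sorted = @($times | Sort-Object)
    ($sorted[1..($sorted.Count - 2)] | Measure-Object -Average).Average
}

Measure-AverageTime { Select-String -Pattern "key" -Path .\file1000.txt }
Measure-AverageTime { grep "key" "file1000.txt" }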


Results

In the following data, the number of lines in each sample file is listed first. The total time to search the sample file, in milliseconds, follows for each tool.

Lines — Grep — Select-String
1,000 — 248.2245 — 29.8712
10,000 — 1,907.8156 — 299.4792
20,000 — 4,013.5332 — 643.2678
30,000 — 6,689.0545 — 1,036.1867
40,000 — 8,419.1654 — 1,319.9755
50,000 — 10,870.3179 — 1,662.6931
60,000 — 12,487.7127 — 1,955.2525
70,000 — 15,048.1311 — 2,344.9599
80,000 — 16,623.6946 — 2,594.3496
90,000 — 16,775.1033 — 2,995.7644
100,000 — 18,697.6675 — 3,303.2918
The bottom line? Select-String is significantly faster than GNU grep on Windows Server 2008 R2. PowerShell is closing the gap between Linux and Windows shell environments.

Privilege management at CSO


Least Privilege Management (LPM) is in the news …

The concept has been around for decades. J. Wolfgang Goerlich, information systems and information security manager for a Michigan-based financial services firm, said it was, “first explicitly called out as a design goal in the Multics operating system, in a paper by Jerome Saltzer in 1974.”

But, it appears that so far, it has still not gone mainstream. Verizon’s 2012 Data Breach Investigations Report found that, of the breaches it surveyed, 96% were not highly difficult for attackers and 97% could have been avoided through simple or intermediate controls.

“In an ideal world, the employee’s job description, system privileges, and available applications all match,” Goerlich said. “The person has the right tools and right permissions to complete a well-defined business process.”

“The real world is messy. Employees often have flexible job descriptions. The applications require more privileges than the business process requires,” he said. “[That means] trade-offs to ensure people can do their jobs, which invariably means elevating the privileges on the system to a point where the necessary applications function. But no further.”

Read the full article at CSO: Privilege management could cut breaches — if it were used

Considerations when testing Denial of Service


Stress-testing has long been a part of every IT Operations toolkit. When a new system goes in, we want to know where the weaknesses and bottlenecks are. Stress-testing is the only way.

Now, hacktivists have been providing stress-tests for years in the form of distributed denial of service attacks. Such DDoS attacks now accompany just about any newsworthy event. As moves are underway to make DDoS a form of free speech, we can expect more in the future.

With that as a background, I have been asked recently for advice on how to test for a DDoS. Here are some considerations.

First, test at the farthest router that you own. The “you own” part is essential. Let’s not run a DDoS across the public Internet or even across your hosting provider’s network. That is a quick way to run afoul of terms of service and, potentially, the law. Moreover, it is not a good test. A DDoS from, say, home will be bottlenecked by your ISP and the Internet backbone (1-10 Mbps). A better test is off the router interface (100-1000 Mbps).

Second, use a distributed test. A distributed test is a common practice when stress-testing. It is, after all, required to put the D in DDoS. Alright, that was a bad joke. The point is that you want to prevent individual device differences from affecting the test, such as a bottleneck within the OS or the testing application. My rule of thumb is 5:1. So if you are testing one router interface at 1 Gbps, you would want to send 5 Gbps of data via five separate computers.

Third, use a combination of traditional administration tools and the tools in use for DDoS. Stress-test both the network layer and the HTTP layer of the application. If I were to launch a DDoS test, I would likely go with iperf, loic, and hoic. Check also for tools specific to the web server, such as ab for Apache. Put together a test plan with test scripts and repeat this plan in a consistent fashion.

Fourth, test with disposable systems. The best test machine is one with a basic installation of the OS, the test tools, and the test scripts. This minimizes variables in the test. Also, while rare, it is not unheard of for tools like loic and hoic to be bundled with malicious software. Once the test is complete, the systems used for testing should be re-imaged before being returned to service.

Let’s summarize by looking at a hypothetical example. Assume we have two Internet routers, two core routers, two firewalls, and then two front-end web servers. All are on 1 Gbps network connections. I would re-image five notebooks with a base OS and the DDoS tools. With all five plugged into the network switch on the Internet routers, I would execute the DDoS test and collect the results. Then repeat the exact same test (via script) on the core routers network, on the firewall network, and on the web server network. The last step is to review the entire data set to identify bottlenecks and make recommendations for securing the network against DDoS.
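
In PowerShell, the kick-off script for such a run might look roughly like the following. The host names and target address are hypothetical, and it assumes iperf is installed on each test notebook, with PowerShell remoting enabled and an iperf server listening behind the interface under test.

# Launch a coordinated network-layer stress-test from five clients,
# per the 5:1 rule of thumb for a 1 Gbps interface.
$testClients = 'test01','test02','test03','test04','test05'
foreach ($client in $testClients) {
    # 60-second TCP flood with 8 parallel streams from each client
    Invoke-Command -ComputerName $client -AsJob -ScriptBlock {
        & iperf.exe -c 192.0.2.1 -t 60 -P 8
    }
}
# Wait for all five clients to finish, then collect the output
Get-Job | Wait-Job | Receive-Job | Out-File ddos-test-results.txt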

That’s it. These are simple considerations that reduce the risk and increase the effectiveness of DDoS testing.

Write-up of the 29c3 CTF “What’s This” Challenge


Subtitled: “How to capture a flag in twelve easy days”

The 29th Chaos Communication Congress (29C3) held an online capture the flag (CTF) event this year. There were several challenges, which you can see at the CTF Time page for the 29c3 CTF. I spent most of the time on the “What’s This” challenge. The clue was a USB packet capture file named what_this.pcap.

The first thing we did was run strings what_this.pcap and look at the ASCII and Unicode strings in the capture. ASCII: CASIO DIGITAL_CAMERA 1.00, FAT16, ACR122U103h. Unicode: CASIO QV DIGITAL, CASIO COMPUTER, CCID USB Reader.

The second thing we did was to open the capture in Wireshark 1.8.4. (Using the latest version of Wireshark is important as the USB packet parser is still being implemented.) We knew Philip Polstra had covered USB forensics in the past, such as at GrrCon, and Philip pointed us to http://www.linux-usb.org/usb.ids for identifying devices. We see a Genesys Logic USB-2.0 4-Port HUB (frame.number==2), a Linux Foundation USB 2.0 root hub (frame.number==4), Holtek Semiconductor Shortboard Lefty (frame.number==32, 42), a Casio Computer device (frame.number==96, 106), and another Casio Computer device (frame.number==1790).

Supposition? The person is running Linux with a keyboard, Casio camera, and smart card (CCID) reader attached over USB. A Mifare Classic card (ACR122U103) is swiped on the CCID reader. The camera is mounted (FAT16) and a file or files are read from the device.

Next, we extracted the keystrokes. I had previously written a simple keystroke analyzer for the CSAW CTF Qualification Round 2012. This simply took the second byte in the USB keyboard packets (URB_INTERRUPT) and added 94. This meant the alphabetical characters were correct, however, all special characters and linefeeds were lost. The #misec 29c3 CTF captain, j3remy, passed along a lookup table. Using this lookup table, we found the following keystrokes:

dmeesg
mmouunt t vfaat //ddev/ssdb1 /usb
ccd usb
llss —l
fiille laagg
dmmeessg
nfc-lsisst
ccat flaag \ aespipe -p 3 -d 3,,, nffs-llisst \c-llisst \ grrep uuid \ cut =-d -f 10\ dd sbbs=113 couunnt=2

There are a number of problems with this method of analyzing keystrokes. First, when a key is held down too long, we get multiple letters (dmeesg). Second, special keys like shift and backspace are ignored. I redid my parser to read bytes 1, 2, and 3. The first byte in a keyboard packet indicates whether or not shift is depressed. The second byte is the character (including keys like enter and backspace). The third byte is the error code for repeating characters. Using this information, I mapped the HID usage tables to Microsoft’s SendKeys documentation and replayed the packet file into Notepad.
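
For illustration, here is a simplified sketch of that byte mapping in PowerShell. It covers only a fragment of the HID usage table and skips the SendKeys replay; the actual parser handled more keys.

# Byte 0 of a HID keyboard report is the modifier bitmask (0x02 and 0x20
# are the shift keys); byte 2 is the usage ID of the key pressed.
$usage = @{ 40 = "`n"; 42 = '<BACKSPACE>'; 44 = ' ' }
4..29  | ForEach-Object { $usage[$_] = [char](97 + ($_ - 4)) }   # 0x04-0x1d: a-z
30..38 | ForEach-Object { $usage[$_] = [char](49 + ($_ - 30)) }  # 0x1e-0x26: 1-9
$usage[39] = '0'                                                 # 0x27: 0

function Convert-HidReport {
    param([byte[]]$Report)
    $shift = ($Report[0] -band 0x22) -ne 0   # left or right shift held down
    $key = $usage[[int]$Report[2]]
    if ($shift -and $key) { "$key".ToUpper() } else { $key }
}

Replaying the full capture produced the following keystrokes: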

dmesg
mount -t vfat /dev/sdb1 usb
cd usb
ls -l
file  lag
dmesg
nfc-list
cat flag | aespipe -p 3 -d 3<<< "`nfc-list | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"

Supposition? The person at the keyboard plugged in the Casio camera and mounted it to usb. He listed the folder contents, then scanned for the Mifare Card (nfc-list lists near field communications devices via ISO14443A). Once confirmed, he read the flag from the camera and decrypted it via AES 128-bit encryption in CBC mode (man aespipe). The passcode was the UID of the Mifare Card in bytes (nfc-list | grep | cut | dd). To find the flag, we need both the UID and the flag file.

The hard work of finding the UID was done by j3remy. He followed the ACR122U API guide and traced the calls/responses. For example, frame.number==1954 reads Data: ff 00 00 00 02 d4 02, or get (ff) the firmware (d4 02). The response comes in frame.number==1961, Data: 61 08, meaning 8 bytes coming with the firmware. Then frame.number==1966, Data: ff c0 00 00 00 08, get (ff) read (c0) the 8 bytes (08). And the card reader returns the firmware d5 03 32 01 06 07 90 00 in frame.number==1973. j3remy likewise parsed the communications and found frame.number==3427, which reads: d54b010100440007049748ba3423809000

d5 4b == pre-amble
01 == number of tag found
01 == target number
00 44 == SENS_RES
07 == Length of UID
04 97 48 ba 34 23 80 == UID
90 00 == Operation finished

The next step was to properly format the UID as the nfc-list command would display it. This took some doing. Effectively, there are 4 blank spaces before ATQA, 7 blank spaces before UID, and 6 spaces before SAK. There is one space after : and before the hexadecimal value. Each hexadecimal value is double-spaced. With that in mind, we created an nfc-list.txt file:

    ATQA (SENS_RES): 00  44
       UID (NFCID1): 04  97  48  ba  34  23  80
      SAK (SEL_RES): 00

Determining the spacing took some time. Once we had it, we could run the cat | grep | dd command and correctly return 26 bytes of ASCII characters.

$ echo "`cat nfc-list.txt | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"
2+0 records in
2+0 records out
26 bytes (26 B) copied, 6.2772e-05 s, 414 kB/s
04  97  48  ba  34  23  80

To recap: we have the UID, we have correctly converted the UID to an AES 128-bit decryption key, and we are now ready to decrypt the file. How to find the file, though? Rather than reverse engineering FAT16 over USB, we took a brute force approach. We exported all USB packets with data into files named flagnnnn (where nnnn was the frame.number). We then ran the following script:

FILES=flag*
for f in $FILES
do
    echo -e "\nProcessing $f file...\n"
    cat $f | aespipe -p 3 -d 3<<< "`cat nfc-list.txt | grep UID | cut -d ' ' -f 10- | dd bs=13 count=2`"
done

There it was. In the file flag1746, in the frame.number==1746, from the Casio camera (device 26.1) to the computer (host), we found a byte array that decrypted properly to:

29C3_ISO1443A_what_else?

What else, indeed? Well played.

Special thanks to Friedrich (aka airmack) and the 29c3 CTF organizers for an enjoyable challenge. Thanks to j3remy for captaining the #misec team and helping make sense of the ACR122 API. Helping us solve this were Philip Polstra and PhreakingGeek. It took a while, but we got it!

Disabling SMTP verbs like Turn for PCI compliance


A friend of mine contacted me for advice on passing a PCI compliance audit. Apparently the auditor’s scans had detected TURN and ETRN were enabled on the SMTP server. The auditors referenced CVE-1999-0512 and CVE-1999-0531. Moreover, enabling TURN does pose some security risk.

His concern was, of course, not passing the audit. Research online turned up recommendations for Exchange and other mail servers. But there did not appear to be any advice for standard Windows SMTP. What to do?

SMTP in Windows 2000, Windows 2003, and Windows 2008 is a component under IIS. All SMTP configuration is stored within the IIS Metabase. The node is SmtpInboundCommandSupportOptions and the property ID is 36998. The value is a simple 32-bit flag and so some math is required. Here are the component values for the flag (from Microsoft KB 257569):

DSN = 0x40 = 64
ETRN = 0x80 = 128
TURN/ATRN = 0x400 = 1024
ENHANCEDSTATUSCODES = 0x1000 = 4096
CHUNKING = 0x100000 = 1048576
BINARYMIME = 0x200000 = 2097152
8bitmime = 0x400000 = 4194304

So let’s say you want Enhanced Status Codes, Binary Mime, and 8-bit Mime enabled. The value would be 0x601000 or 6295552 or 4096 (Enhanced Status Codes) + 2097152 (Binary Mime) + 4194304 (8-bit Mime). Simply add up the values of the verbs that will be enabled and convert to hexadecimal.
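
PowerShell makes a handy calculator for double-checking the math:

# ENHANCEDSTATUSCODES + BINARYMIME + 8bitmime
$value = 0x1000 + 0x200000 + 0x400000
'{0} = 0x{0:X}' -f $value    # prints: 6295552 = 0x601000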

To set the value on an SMTP service that is not running Exchange, use the IIS Metabase utility (Mdutil.exe). Select the path to the SMTP service (smtpsvc/ by default), enter the property ID (prop:36998), specify the value is a 32-bit flag (dtype:DWORD), push the value down to all child nodes (attrib:INHERIT), and set the value of the enabled verbs.

The resulting command would be:

Mdutil.exe set -path:smtpsvc/ -prop:36998 -utype:UT_SERVER -dtype:DWORD -attrib:INHERIT -value:0x601000

Run the command to update the metabase, restart IIS and the SMTP service, and then retest. Only the enabled verbs will then appear. And that, hopefully, will put you in a better place. As my friend put it, “Thanks again for your help! I passed!”

Regards,

Wolfgang

Post script: Including SMTP verbs in a PCI test, while new to me, apparently has been going on for some time. See this post back from 2010:

Hello, Microsoft Group,

We have a few vulnerabilities on our servers. We have a PCI audit coming up and they are asking us to upgrade the SMTP server.

All modern SMTP servers reject the TURN command for security reasons. Upgrade to a newer SMTP server version. You should also disable the ETRN and ATRN commands unless you have a good reason for using them.

The original SMTP specification described a “TURN” command that allows the roles of server and client to be reversed in a session. When a client issues the “TURN” command, the server “turns around” and sends any queued mail for that domain to the client, essentially treating the client as an SMTP server.

The “TURN” command is obsolete and insecure. It specifies no authentication mechanism, allowing a single user from a domain to retrieve all queued mail for that domain (for all users). Modern SMTP servers reject the “TURN” command for these reasons. A replacement for “TURN” command, called “ETRN”, has been able to rectify some of the security problems with “TURN”. However, this proposal is not without its own security problems.

How can I disable the ETRN and ATRN commands? Please help me on this. Thanks.

Software support for password strength


Xkcd is the QED of our industry. Want proof? Check out Randall Munroe’s comic on Password Strength.

Longer phrases trump mixed-up passwords every time. “Correcthorsebatterystaple” will take significantly longer to crack than, say, “p@ssw0rd”. Given this, you might wonder why the industry has not changed to longer phrases. I blame the vendors. There are a number of apps that I support and websites that I visit that still limit passwords to 14 characters. Moreover, many explicitly prevent special characters. Software support is a problem.
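
To put rough numbers on the claim, compare the brute-force search spaces. Assuming an attacker tries 26 lowercase letters per position for the phrase and all 95 printable ASCII characters for the short password, a quick PowerShell calculation shows the gap:

# 25 lowercase characters versus 8 characters from the full keyboard
$phraseBits   = 25 * [math]::Log(26, 2)   # "correcthorsebatterystaple"
$passwordBits =  8 * [math]::Log(95, 2)   # "p@ssw0rd"
'{0:N0} bits vs {1:N0} bits' -f $phraseBits, $passwordBits   # about 118 vs 53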

There are other problems, too. See today’s Dark Reading article for the pros and cons of using phrases.

Dark Reading: Passphrases A Viable Alternative To Passwords?

There is a wish among some enterprise users that they could institute phrases, but they’re experiencing a technology lag within the software and identity management worlds that stymies the urge.

“One reason (organizations don’t use passphrases) is the number of software applications that do not support long or complex passphrases,” says J. Wolfgang Goerlich, Network Operations and Security Manager for a midwest financial services firm. “Length and special characters seem to be a challenge for some vendors. Sometimes referred to as technological debt, many IT departments must maintain a suite of apps that have not been updated with modern security recommendations.”

The team, the tools, and the time


“Want to find out how good someone is? Take away all their tools and say, ‘Now do it.'” — @SecShoggoth

Have you heard of Thomas Thwaites? He took a maker’s approach to toasters. By reverse engineering a £3.99 Argos toaster, Thwaites was able to build his own model. He smelted the iron. He melted the plastics. He may have argued with a volleyball named Wilson. I am not sure on that last point. But after nine months and £1187.54, Thwaites had himself a toaster.

A tweet by Tyler Hudak (@SecShoggoth) had me comparing toasters to information technology. Just what is a tool? Is it that application you are using? Fine. Let’s rewrite the app to show how good we are. But wait … what about the IDE? Is that a tool? No worries. We will use cat and bang the C code out straight. What about the compiler? What about the language itself? The OS? The computer itself? How about the motherboard and daughter cards? What about ICs? The transistor?

“If you want to make an apple pie from scratch, you must first create the universe.” Carl Sagan sums up the slippery slope we ride.

We live in a remix society. We — in the IT and InfoSec industry — work on the largest hackable platform in human history. Everything we do depends upon the work of others. Everything we make builds upon the tools of others. Every day we take from and give back to this hackable platform we call modern IT.

Some dismiss the new generation’s approach to IT as the Nintendo generation. Heck, they just download an app, point-and-click, and done. That’s not IT.

I recall folks lambasting my generation because we had a GUI. Heck, we had keyboards and mice. All we had to do was boot up, point-and-click, and done. That’s not IT. That’s not real computing.

I wager the generation before were heckled because they did not have to use punch cards. And don’t get me started about slackers who use transistors instead of vacuum tubes.

There is a certain rugged nostalgia for folks like Thomas Thwaites. People who toss aside the benefits of society to forge their own way are admirable. Equally admirable, in my opinion, are those who save time and money with clever hacks to the platform. These are folks who excel through expert use of modern tools.

See, IT has become a team sport. The one-man toaster and the lone sysadmin are throwbacks. The way forward is mastery of your specific tool-set combined with a team of folks equally skilled in complementary tools. Give me a team, tools, £1187.54, and nine months. We will change the world.

Wolfgang


Note this article comes from a discussion on Twitter between @SecShoggoth, @RogueClown, and @LenIsham. @SecShoggoth blogged on expanding your skill sets beyond the tools you are comfortable with here: Tools and News.