Archive for the ‘Systems Engineering’ Category

DNS covert channels

I am having some fun with DNS and covert channels over the holidays.

At its simplest, DNS can be used as a text-based covert channel. The DNS client sends a message via a CNAME lookup. The DNS server responds with a message embedded in a CNAME record. By co-opting this process, any character sequence can be sent back and forth.

What if we need to do more? Say, transfer a file? Or even browse the web? The answer here is text encoding.

Most IT folks would jump to the conclusion that the traffic simply needs to be Base64 encoded. There is a slight wrinkle. DNS labels are limited to 63 characters each, drawn from a narrow set: lowercase letters, uppercase letters, digits, and the dash (-). Base64's +, /, and = characters are not permitted, and DNS names are case-insensitive besides, which would mangle Base64's mixed-case alphabet. Base64 encoding is out.

The next possibility is Base32 encoding. While not often used, it fits within the DNS RFC and therefore works out of the box.

The disadvantages of Base32 over DNS are packet payload size and transmission reliability. DNS typically runs over UDP and, therefore, may suffer from dropped packets. Further, the packets can only be so long. A full DNS host name is limited to 255 characters.
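As a sketch, here is what Base32 chunking into DNS-safe labels looks like in Python. The tunnel domain is a made-up placeholder, not from any real tool:

```python
import base64

def encode_for_dns(data: bytes, domain: str = "tunnel.example.com") -> str:
    """Base32-encode a payload and split it into DNS-safe labels."""
    # Base32 uses only A-Z and 2-7 (plus '=' padding, which we strip),
    # so every character is legal in a hostname.
    text = base64.b32encode(data).decode("ascii").rstrip("=")
    # Each DNS label may be at most 63 characters.
    labels = [text[i:i + 63] for i in range(0, len(text), 63)]
    name = ".".join(labels + [domain])
    # The full name must stay within the 255-character limit.
    assert len(name) <= 255, "payload too large for a single query"
    return name

def decode_from_dns(name: str, domain: str = "tunnel.example.com") -> bytes:
    """Reverse the encoding: strip the suffix, rejoin labels, re-pad."""
    text = name[: -len(domain) - 1].replace(".", "")
    text += "=" * (-len(text) % 8)  # restore Base32 padding
    return base64.b32decode(text)
```

A real tunnel would send each encoded name as a query and carry the return channel in the responses; this only shows the round trip of the encoding itself.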

Dan Kaminsky came up with an interesting solution to these problems. He essentially tunneled IP over DNS using Base32 encoding. Such protocol layering handles the limitations of UDP. To increase the size, Kaminsky relied on the EDNS0 extension specified in RFC 2671. He released a proof of concept in the form of the OzymanDNS Perl scripts.

As a side note, the name OzymanDNS had me curious. I did some digging. It is a Watchmen comics reference which, in turn, traces back to an Egyptian pharaoh. Nothing says secret writings like comic books and pharaohs.

Anyway, in sum, covert channels over DNS are practical. With some clever protocol manipulation, binary files and even web browsing can be tunneled over DNS.

Malware Removal Guide for Windows

I was at a family event this past weekend. As so often happens at these events, the conversation goes something like:

Them: “Oh, you are in computer security? I got this virus. What should I do?”

Me: “Uhh … Well, that’s not really what I handle.”

Malware infections in the corporate world are easy to handle. First, we keep up on the patches. That prevents a lot of infections. Second, we have anti-virus software with updated signatures. This catches much of what gets through. Finally, if computers do get infected, we have a silver bullet. A simple reimaging gets everything back in shape.

People at home are not so fortunate. Reimaging is not a fix for them because that often means losing valuable data and applications.

Until recently, my only advice was to reload. Then Brian @ Select Real Security put up an in-depth guide on removing malware. Now I have a better answer. “I got this virus. What should I do?” Check out this guide.

Malware Removal Guide for Windows
http://www.selectrealsecurity.com/malware-removal-guide

“This guide will help you clean your computer of malware. If you think your computer is infected with a virus or some other malicious software, you may want to use this guide. It contains instructions that, if done correctly and in order, will remove most malware infections on a Windows operating system. It highlights the tools and resources that are necessary to clean your system.”

Cost justifying 10 GbE networking for Hyper-V

SearchSMBStorage.com has an article on 10 GbE. My team gets a mention. The link is below and on my Press mentions page.

For J. Wolfgang Goerlich, an IT professional at a 200-employee financial services company, making the switch to 10 Gigabit Ethernet (10 GbE) was a straightforward process. “Like many firms, we have a three-year technology refresh cycle. And last year, with a big push for private cloud, we looked at many things and decided 10 GbE would be an important enabler for those increased bandwidth needs.”

10 Gigabit Ethernet technology: A viable option for SMBs?
http://searchsmbstorage.techtarget.com/news/2240079428/10-Gigabit-Ethernet-technology-A-viable-option-for-SMBs

My team built a Hyper-V grid in 2007-2008 that worked rather nicely at 1 Gbps speeds. We assumed 80% capacity on a network link, a density of 4:1, and an average of 20% (~200 Mbps) per VM. In operation, the spec was close. We had a “server as a Frisbee” model that meant non-redundant networking. This wasn’t a concern because if a Hyper-V host failed (3% per year), it only impacted up to four virtual machines (2% of the environment) for about a minute.

When designing the new Hyper-V grid in 2010, we realized this bandwidth was no longer going to cut it. Our working density is 12:1, with a usable density of 40:1. That means 2.4 Gbps to 8 Gbps per node. Our 2010 model is “fewer pieces, higher reliability,” and that translates into redundant network links. Redundancy matters more now that a good portion of our servers (10-15%) would be impacted by a link failure.

Let’s do a quick back-of-the-napkin sketch. Traditional 1 Gbps Ethernet would require 10 primary and 10 secondary Ethernet connections. That’s ten dual-port 1 Gbps adapters: 10 x $250 = $2,500. That’s twenty 1 Gbps switch ports: 20 x $105 = $2,100. Then there’s the time and materials cost for cabling all that up. Let’s call that $500. By contrast, one dual-port 10 GbE adapter is $700. We need two 10 GbE switch ports: 2 x $930 = $1,860. We need two cables ($120 each) plus installation. Let’s call that $400.
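For the skeptical, the napkin math can be double-checked in a few lines of Python. All prices are the estimates from this post:

```python
# Per-host cost of 1 Gbps networking: 10 primary + 10 secondary links.
adapters_1g = 10 * 250   # ten dual-port 1 Gbps adapters
ports_1g = 20 * 105      # twenty switch ports
cabling_1g = 500         # time and materials for cabling
cost_1g = adapters_1g + ports_1g + cabling_1g

# Per-host cost of 10 GbE networking: one dual-port adapter, two ports.
adapter_10g = 700        # one dual-port 10 GbE adapter
ports_10g = 2 * 930      # two 10 GbE switch ports
cabling_10g = 400        # two cables at $120 each, plus installation

cost_10g = adapter_10g + ports_10g + cabling_10g

print(cost_1g, cost_10g, cost_1g - cost_10g)  # 5100 2960 2140
```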

The total cost per Hyper-V host for 10 GbE is $2,960. Compared to the cost of 1 Gbps ($5,100), we are looking at a savings of $2,140. For higher density Hyper-V grids, 10 GbE is easily cost justified.

It took some engineering and re-organizing, but we have been able to squeeze quite a bit of functionality and performance from the new technology. Cost savings plus enhancements? Win.

Unified threat management – multi-function firewalls

You bought an all-in-one printer. It seemed like a good deal, right? All that multi-function goodness for only a few dollars more than the ink for your current laser printer. Bet it didn’t take long for the good feeling to sour. Jammed paper, smeared faxes, and the like.

Printers gave multi-function a bad name. But firewalls may bring multi-function back in vogue. Specifically, I am looking at the Fortinet FortiGate products. Fortinet has cornered the market on unified threat management (i.e., multi-function firewalls). These devices ship with built-in firewalls, routers, VPNs, intrusion detection, WiFi, and more.

Consider:

Use case 1: the novice who needs to get up and running quickly. The unified threat management gateway answers that need. The device comes preconfigured and integrated. There are options to set, of course, but the time to get the system online is hours rather than weeks.

Use case 2: the dyed-in-the-wool security people. These folks have the time, budget, and knowledge to continue building dedicated security appliances. Such people have an edge in defending their networks against all these threats. But do the cost-benefit analysis: for someone in a mixed role like mine, covering both security operations and network operations, I wonder if it’s worth it.

Use case 3: the pragmatic security people. Compared to dedicated point solutions, the unified threat management gateway provides a majority of the security feature-set at a fraction of the cost. Pragmatic security folks can then redeploy their resources to addressing more pressing security concerns.

Needless to say, I am sold on Fortinet’s approach. Consider that every 18 months, silicon pushes more bytes. We can either get better performance from a piece of hardware, or more functionality from the same hardware. FortiGate simply means doing more with less.

Tip: Google a Domain for Hosts using Python

I wrote about using dig to perform a DNS zone transfer earlier this year. Such a transfer returns a complete list of hosts that can be targeted. This is generally used as a sanity check because any DNS administrator worth their salt disables such transfers.

Another option is using Google. While not a complete listing, Google will return a well known listing of hosts. The only downside is that it takes some time.

Well, not any more.

Tim Tomes (LaNMaSteR53) released a tool this month called GXFR. GXFR is a Python script that is available for download on Google Code. “The technique involves making search engine requests which restrict the url and site to the target domain. Then, based on the results of the search, excluding the sub-domains that are returned. Repeat until the search engine returns 0 results. The final search query excludes all of the public facing sub-domains that the search engine is aware of. Conduct a dns look-up of each of the identified sub-domains, and you’ve got yourself a dns zone transfer of all the sub-domains with public facing web servers.”
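The loop Tim describes can be sketched in Python. The `search()` callable here is a hypothetical stand-in for the actual search-engine request GXFR makes; only the query construction and the exclusion loop are shown:

```python
def next_query(domain: str, found_subdomains: set) -> str:
    """Build the next search query, excluding hosts already discovered."""
    query = f"site:{domain}"
    for sub in sorted(found_subdomains):
        query += f" -site:{sub}.{domain}"
    return query

def enumerate_hosts(domain: str, search) -> set:
    """Repeat searches until the engine returns no new sub-domains.

    `search(query)` is a hypothetical callable returning a list of
    hostnames; in GXFR this is a real search-engine request.
    """
    found = set()
    while True:
        results = search(next_query(domain, found))
        # Keep only hosts under the target domain, stripped to the sub-domain.
        new = {h[: -len(domain) - 1] for h in results if h.endswith("." + domain)}
        new -= found
        if not new:
            return found
        found |= new
```

A final pass would then resolve each discovered sub-domain with a DNS lookup, as the quote describes.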

Check it out on Tim’s site. Quite a nifty script.

Everything includes training

True story. I worked with a guy maybe a decade ago. We’ve kept in touch. He sees an article on Slashdot and thinks, “wow, that sounds like Wolfgang. I should send him the link.” He clicks the link, only to find that I am in the piece. The guy called me laughing this morning and said he can’t get away from my ideas on training.

Anyway, if you have worked with me, worked for me, or worked within earshot of me, you’ve heard me say one or more of the following many, many times:

  • In IT, you don’t hire people for what they know. You hire people for what they can learn and what they do.
  • Everything includes a training component. Train during every initiative, every implementation, and every project.
  • Technology is like sports: most of the work is training before the game. High performing teams and high performing techies spend 20% of the time training.
  • Skimping on spending for training because of retention concerns is like saying: “I’m concerned that if people know what they’re doing, they’ll leave. And if they don’t know what they’re doing, they’ll stay.”
  • IT management is a Chinese finger trap. Pull too hard, and you can’t get out. Put in too many hours, and you get diminishing returns.

Lisa Vaas at Software Quality Connection puts it all into perspective in “I Like My IT Budget Tight and My Developers Stupid”.

DNS Intel with Dig in Cygwin

Dig, short for domain information groper, is a simple command line utility often used for network reconnaissance.

Dig can be installed in Cygwin under Net -> bind (update: bind-utils). Dig will use the default DNS settings (check ipconfig /all). Once installed, if you want to hardcode dig to a specific DNS server, launch Cygwin and create a resolv.conf file.

$ cat > /etc/resolv.conf
nameserver <DNS server IPv4 address here>

Press Ctrl-D to end the input, and you are good to go. Dig can then be used for intel on a particular domain. For example, the website, mail servers, and DNS name servers.

$ dig www.jwgoerlich.us
$ dig jwgoerlich.us MX
$ dig jwgoerlich.us NS

Another option is attempting to do a zone transfer, either full (AXFR) or incremental (IXFR). These run against the zone itself rather than a host name:

$ dig jwgoerlich.us AXFR
$ dig jwgoerlich.us IXFR=<zone serial number>

A successful transfer returns a full copy of all the records in the DNS zone. Typically, this command is used simply to validate that zone transfers have been disabled.
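That validation can be scripted by wrapping dig in Python. The output markers checked here (“Transfer failed.”, “XFR size”) are heuristics based on dig’s usual output and may vary by version:

```python
import subprocess

def transfer_allowed(dig_output: str) -> bool:
    """Heuristic: dig reports a refused AXFR with 'Transfer failed.'
    and a successful one with an 'XFR size' summary line."""
    return "Transfer failed." not in dig_output and "XFR size" in dig_output

def check_zone(domain: str) -> dict:
    """Attempt an AXFR against each of the zone's name servers."""
    ns_list = subprocess.run(
        ["dig", "+short", domain, "NS"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    results = {}
    for ns in ns_list:
        out = subprocess.run(
            ["dig", f"@{ns}", domain, "AXFR"],
            capture_output=True, text=True,
        ).stdout
        results[ns] = transfer_allowed(out)
    return results
```

Any name server reporting True deserves a closer look.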

That is dig in Cygwin, in a nutshell.

Internet kill switches

January 27: the day the Internet died.

Protests have been ongoing this week in Egypt. There has been a significant amount of press coverage on the political situation. The riots began on January 25 and then, on January 27, President Hosni Mubarak unplugged Egypt from the Internet.

The reason given for going dark is that the rioters were using the Web and Internet to coordinate. This makes sense, as we have seen Twitter and other social media sites used in recent unrests. The main concern for InfoSec is the precedent that Mubarak has set.

Will other countries, faced with similar situations, choose to unplug? It seems likely. For example, at the same time Egypt was unplugged, the U.S. re-introduced the “Internet kill switch” bill. (Read the bill at Thomas or see Wired’s kill switch coverage.) Of course, killing the Internet will have economic repercussions.

And that’s what I am thinking about today. Should the Internet be disabled, how would my firm continue to do business? How would we send and receive communications? In terms of InfoSec and engineering, what mitigations could be deployed for this risk?

J Wolfgang Goerlich

How did they do it? Both Ars Technica and Wired have articles on the technical aspects of unplugging. How are people coping? The old standby is modem dialup, although some are calling others to post information, or faxing information out to the Internet. Wired also posted a Wiki on how to communicate if your government shuts off your Internet.

Tools for converting files to ePub format

Wired has a how-to for rolling your own e-books. The following tools are covered for converting digital files to ePub. This could be quite handy for zapping study materials into e-books.

The tools are:

  • ePubBud.com – Find out more online in their FAQ.
  • eCub – Cross-platform tool.
  • eScape ePub Creator – Converts OpenOffice documents to ePub format.
  • ODFtoEpub – Converts OpenOffice files to ePub format.
  • BookGlutton – Converts HTML web pages to ePub format.
  • EasyEPub – Converts Adobe InDesign or Quark files to ePub format.

Browse the Web over command line with Ncat

Ncat is the updated version of Netcat that ships with Nmap. You can use it to connect over TCP ports and send/receive ASCII data. One fun thing to try is testing your knowledge of the HTTP RFC by browsing over a command line. How far can you GET, PUT, and POST your way through a website? Bonus points for acting as an HTTP server.

Browse a website over HTTP:

C:\Program Files (x86)\Nmap>ncat www.jwgoerlich.us 80
GET / HTTP/1.1
Host: www.jwgoerlich.us

Browse a website over HTTPS:

C:\Program Files (x86)\Nmap>ncat www.jwgoerlich.us 443 --ssl
GET / HTTP/1.1
Host: www.jwgoerlich.us

Create a webserver:

C:\Program Files (x86)\Nmap>ncat -l 127.0.0.1 80

Traditional telnet can be used for browsing over HTTP, but telnet cannot connect to HTTPS or act as a webserver.
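For comparison, the same hand-typed request can be assembled and sent from Python over a raw socket. This is a sketch of the technique, not part of Nmap or Ncat:

```python
import socket

def build_get_request(host: str, path: str = "/") -> bytes:
    """Assemble a minimal HTTP/1.1 GET request by hand."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"  # the blank line that ends the header block
    ).encode("ascii")

def fetch(host: str, port: int = 80) -> bytes:
    """Send the request over a raw TCP socket, as ncat does."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_get_request(host))
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)
```

The blank line after the Host header is what tells the server the request is complete; it is the same double Enter you press in the ncat session above.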