25 January, 2008

Installing VMware on Slackware

Like many Unix users, I have tried many flavors, but these days there are only a small number I actually use daily. The Linux distribution I used to really get my feet wet when I first started was Slackware. Back when I first discovered Unix-like operating systems, I researched quite a few Linux distributions. One of the things that appealed to me about Slackware was that, even back then, it was considered old-school. It seemed like a good choice if I wanted to learn as much as possible about everything under the hood, so to speak.

I still use it on a daily basis. Despite that, for some reason I had never installed VMware on a Slackware system. I don't believe VMware officially supports Slackware, and this can be seen pretty easily when you install it. The second question the installer asks is:

What is the directory that contains the init directories (rc0.d/ to rc6.d/)?

As many will know, on Slackware there is no such directory. Slackware does not use SysV-style init by default, instead using a BSD-style layout for its init scripts. The question above assumes a system with a directory for each runlevel, which isn't the case on Slackware.

I assumed I could just create the empty directories and point the installer there without problems. A quick search confirmed my assumption. Using the simple for loop from that page:

$ sudo mkdir /etc/init.d
$ cd /etc/init.d
$ for i in {0,1,2,3,4,5,6}; do sudo mkdir rc$i.d; done
$ ls
rc0.d rc1.d rc2.d rc3.d rc4.d rc5.d rc6.d

Once the install was done, I removed the init.d directory and everything below it. One other note about the linked page: I don't think you need to use the --compile switch with recent versions of VMware. The installer seemed to recognize that my Slackware generic kernel could not use the pre-built VMware modules. The only advantage to the --compile switch seems to be that it will skip the confirmation requests prior to compiling the modules, whereas running vmware-config.pl without the switch will ask for confirmation before compiling each module.

For the record, I was using slackware-current, kernel-generic-, and VMware Workstation 6.0.2.

On a side note, I think my Slackware experience definitely translated well when I first tried FreeBSD. There are certainly plenty of differences, but it seems to me that going from Slackware to FreeBSD was a smaller learning curve than someone would have coming from a distribution like Ubuntu.

15 January, 2008

SANS Institute Security Menaces of 2008

SANS has a list of their "Top 10 Cyber Security Menaces of 2008". Their list includes descriptions and explanations, but here are the 10 headings.

  1. Increasingly Sophisticated Web Site Attacks That Exploit Browser Vulnerabilities - Especially On Trusted Web Sites
  2. Increasing Sophistication And Effectiveness In Botnets
  3. Cyber Espionage Efforts By Well Resourced Organizations Looking To Extract Large Amounts Of Data - Particularly Using Targeted Phishing
  4. Mobile Phone Threats, Especially Against iPhones And Android-Based Phones; Plus VOIP
  5. Insider Attacks
  6. Advanced Identity Theft from Persistent Bots
  7. Increasingly Malicious Spyware
  8. Web Application Security Exploits
  9. Increasingly Sophisticated Social Engineering Including Blending Phishing with VOIP and Event Phishing
  10. Supply Chain Attacks Infecting Consumer Devices (USB Thumb Drives, GPS Systems, Photo Frames, etc.) Distributed by Trusted Organizations

Call me crazy, but doesn't this list look like more of the same? It is basically a continuation of trends from 2007. The people involved with the list are smart and right in the thick of things, but even the descriptions of each item constantly refer to related events that happened in 2007.

On the other hand, it makes complete sense that the most successful and damaging attacks from 2007 would continue in 2008. The attacks that can't get the job done will fade away in favor of successful methods that will continue and evolve once those on defense adjust to current trends.

I guess I just wish they had gone out on a limb with their look forward to 2008 rather than taking the safe bets of the current trends continuing. The only one where I think they really took a stab is blended phishing from number nine.

I don't have much to say about the individual items on the list. Most of them seem on the money. The only one I might question is number five, insider attacks. They state that "insider risk has sky-rocketed", but is the risk really that much higher than it used to be? Insider attacks may be a problem and they may be costly, but I'm not sure the relative risk has sky-rocketed, particularly if you compare the successful insider attacks to successful attacks from outsiders.

Anecdotally, accidental compromises by insiders seem to have gotten as much or more coverage recently than purposeful insider attacks. Among other things, telecommuting, portable storage, and the proliferation of hand-held devices tied to the enterprise don't just make insider attacks easier, but also accidental compromises resulting from insider carelessness. We've all seen the stories of laptops with tens of thousands of personnel records being lost or stolen as a result of poor security practices.

One last note is that I'm curious who they see as their target audience for the Menaces of 2008.

14 January, 2008

JavaScript decoding and more

JavaScript obfuscation is pretty common. There are plenty of places to find out about how to reverse it along with basic malware analysis tips. Here is an example of obfuscated JavaScript I've seen. I will be posting a few malicious code examples in this entry, so caution is advised with any of the code or URLs. If you can avoid it, I would also suggest not downloading malicious content on your production network.


-- snipped --

How do I figure out what this exploit attempt is doing? As pointed out on ISC, there are a number of ways to decode JavaScript. Remember the following caveat from the first link above:
For the first two methods mentioned, be mindful that you are actually running hostile code inside a potentially vulnerable web browser. Make sure to apply the usual precautions (VMWare or the like, deployed far away from any production network you might have, and keeping a keen eye on the firewall log, etc).
I chose the lazy method in this case. First, I downloaded the JavaScript file using wget. Then I made a copy, changing the file extension from .js to .html, added the script tag, and changed "eval" to "alert".
<script language=JavaScript>

-- snipped --

Now opening the file with a browser will show the decoded JavaScript. Please remember that the links and code in the below image are malicious and you visit them or run the code at your own risk.
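The copy-and-substitute steps above can be sketched in shell. This is only a sketch: the printf line is a harmless stand-in for the downloaded file, since I am not going to reproduce the real payload here, and the sed call does the eval-to-alert swap I described.

```shell
# Stand-in for the downloaded file; in practice this came from wget
printf 'eval(unescape("%%70%%61%%79%%6c%%6f%%61%%64"));\n' > evil.js

# Wrap the script in a tag and swap eval for alert so the browser
# displays the decoded payload instead of executing it
{ echo '<script language=JavaScript>'
  sed 's/eval(/alert(/g' evil.js
  echo '</script>'
} > decoded.html
```

Opening decoded.html in a sandboxed browser then pops the decoded string in an alert box instead of running it.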

There are a number of references to other scripts and files in the above code. There is also further obfuscation in the form of hexadecimal code. There are a number of quick ways to convert the hexadecimal to ASCII, either online or with your programming language of choice. As examples, the hexadecimal of the "Rising" variable above translates to "classid", the "Kaspersky" variable represents a specific CLSID, and the "KV2008" variable translates to "Adodb.Stream". We also see a reference to MS06-014, and more.

If you're using NSM with session data, you can see whether any systems that were subjected to the initial JavaScript exploit code then connected to any related sites after the exploit attempt, which could indicate the exploit succeeded. If you have full content packet captures, you can even see the data in the activity that followed.

I also decided to download some of the files to see what I was dealing with. The .cab file in particular looked interesting. I downloaded it using wget and then unpacked it using cabextract. This revealed an executable.
$ file cabfile.exe
cabfile.exe: MS-DOS executable, MZ for MS-DOS
I also took a quick look with the strings command, which had a few interesting lines.
$ strings -n 3 -a cabfile.exe | less

-- snipped --

The executable definitely doesn't look like a friendly file. Finally, the results from VirusTotal show that only 18 of the 32 engines detected it as malicious. I would say 18 out of 32 is ineffective at best, especially considering that one large vendor's product did not detect the file as malicious.

This all goes to show that you can get a lot of information with fairly basic procedures. If anyone has a critique or interesting information to add, please post a comment.

07 January, 2008

Black Hat DC and Shmoocon 2008

Since Richard is a corporate schmuck now, I see the only training he is offering for the year is at Black Hat DC in February. I have heard him speak at Shmoocon and in some smaller groups. I seem to recall that he presented a snippet of TCP/IP Weapons School at one local meet and would definitely recommend it. I think he will be ready to tweak the class to the appropriate skill level of his audience.

The Black Hat training classes seem decent if unsurprising. Metasploit 3.0, Reverse Engineering with IDA Pro, ROOTKIT, TCP/IP Weapons School, and Web Application (In)Security probably would interest me the most. Instructor-led training is expensive!

Speaking of Shmoocon, I plan to be there for all three days. It's an enjoyable event.

06 January, 2008

IDS/IPS placement on home network

A coworker was asking me about setting up Snort at home so he could get some experience breaking things.

I put together some very rough diagrams with Dia. These are common, inexpensive solutions for running Snort at home, either in passive (IDS) or active (inline) mode. At most, they require an extra hub or switch; the only one that needs nothing beyond the network cards on the sensor is the external inline sensor.

The first is a common home network configuration. This is basically how mine was before I installed Snort.

The second diagram shows an external sensor. An advantage to this is that you see everything. A disadvantage is that you see everything. The management interface is inside the firewall while the bridging interfaces are outside the firewall. Seeing all the traffic isn't the only disadvantage. Some other disadvantages:

  • You can't see internal addresses to identify individual systems.
  • You need a higher performance system. This is not usually a problem on a residential connection, but a lot more traffic will pass through the system since it's not behind the firewall. You will see a ton of automated scanning and exploit attempts even if the traffic won't make it past the firewall.
Placing the sensor outside the firewall is reasonable if you want to find out just how much activity is happening on the external segment, but it can be really noisy and you lose inside visibility.

The third image is using an extra switch. In this example, you will see all traffic going through the firewall, as well as all broadcast traffic on the private network. You won't see unicast traffic between internal hosts, but you will be able to identify which host is associated with any given traffic that is seen. Bridging is enabled on the sensor and you can run Snort inline. This is my preferred configuration unless you have a lot of wireless traffic.
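For reference, the bridging setup on the sensor boils down to a few lines, for instance in /etc/rc.d/rc.local. This is only a sketch; the interface names are assumptions and will vary with your hardware.

```shell
# Bridge the two sensor interfaces so traffic passes through transparently
brctl addbr br0
brctl addif br0 eth1      # side facing the firewall
brctl addif br0 eth2      # side facing the internal switch
ifconfig eth1 0.0.0.0 up
ifconfig eth2 0.0.0.0 up
ifconfig br0 up
```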

The fourth diagram shows how to run Snort passively. There are two basic options. The first is a hub that will broadcast all traffic to all ports. This can hurt performance depending on how busy the internal network is. The second is with an inexpensive switch that supports port mirroring. I haven't used it, but I've seen an inexpensive Dell switch referenced that supports port mirroring. Note that it only supports monitoring four ports (PDF) at a time. In this configuration, you can see all traffic on the local segment in addition to Internet traffic. If using a switch with a mirror, you will probably need a separate management interface. If using a hub, the management interface can also do the sniffing.

You could also use a hub between the modem and the firewall if you wanted to run an external passive sensor.

EDIT: Based on Victor's comment, I added one other diagram. Diagram five shows the firewall and Snort inline on the same system. Victor uses iptables to filter the traffic first, then traffic that passes through the firewall goes to Snort running inline. He has separate Snort processes for the DMZ and the LAN.

This configuration is slightly more complicated. There are exceptions, but the places I've worked in the past would not have considered using this type of configuration mainly because they were quite large and wanted off-the-shelf networking products rather than rolling their own firewalls or routers. It is still a useful and usable configuration to learn, and setting it up would provide a lot of valuable experience.

03 January, 2008

Puppet and Cfengine for management

Recently I have been reading more about managing multiple systems. One project I've heard good things about is Puppet.

Puppet lets you centrally manage every important aspect of your system using a cross-platform specification language that manages all the separate elements normally aggregated in different files, like users, cron jobs, and hosts, along with obviously discrete elements like packages, services, and files.
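To give a taste of what that looks like, here is a minimal Puppet manifest sketch. The class name, package name, and file path are made up for illustration:

```
# Keep ssh installed, configured, and running on every managed node
class ssh {
  package { 'openssh':
    ensure => installed,
  }
  file { '/etc/ssh/sshd_config':
    source  => 'puppet:///modules/ssh/sshd_config',
    require => Package['openssh'],
  }
  service { 'sshd':
    ensure    => running,
    subscribe => File['/etc/ssh/sshd_config'],
  }
}
```

The require and subscribe parameters are what make this attractive: the config file is only placed after the package exists, and the service restarts when the file changes.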
I also know another common choice is Cfengine.
Cfengine is an automated suite of programs for configuring and maintaining Unix-like computers.
After reading about each project, I am leaning towards trying Puppet.

Keeping Snort configuration updated

One thing that needs attention when keeping Snort tuned is updating the configuration variables in snort.conf when changes are made to the network. I recently noticed an alert was firing on legitimate DNS traffic because a new mail server was not in $SMTP_SERVERS. It's easy enough to add the IP to the SMTP_SERVERS variable.
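In snort.conf that is a one-line change; the addresses here are invented for illustration:

```
var SMTP_SERVERS [10.1.1.25,10.1.1.26]
```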

In some cases, you may want that variable to contain other variables. For instance, a big company like GE might have many different sites or logical networks. It may be useful to separate the SMTP servers logically in snort.conf if the sensor is going to see traffic from more than one site:

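A snort.conf fragment along those lines might look like the following (the site names and addresses are all made up):

```
var EAST_COAST_TV_SMTP [10.1.1.25,10.1.1.26]
var MICROWAVE_PROGRAMMING_SMTP [10.2.2.25]
var SMTP_SERVERS [$EAST_COAST_TV_SMTP,$MICROWAVE_PROGRAMMING_SMTP]
```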
Note that you need the EAST_COAST_TV_SMTP and MICROWAVE_PROGRAMMING_SMTP variables set before they are used in the SMTP_SERVERS variable.