26 December, 2007

Quicktime exploit, infection, and NSM

I've recently seen malware activity that will grab data from HTTP POST requests and send the data to a web server. Reliable signatures for this malware have been part of the Bleeding rules for many months.

I would hope that up-to-date anti-virus will catch this malware, and there is really no excuse for missing it, but it doesn't have to be the only malicious result of a successful exploit. It's always a good idea to remember that whatever method was used to compromise a system could have been used to do more than what is easily observable.

Using network security monitoring (NSM), I've worked backwards from the first check-in and found that the infection vector was the latest Quicktime exploits. Specifically, I've seen malicious QTL files in web traffic prior to the infection symptoms. I'm not positive that the most recent vulnerabilities are being exploited rather than one of the numerous older Quicktime vulnerabilities, but the timing of the activity suggests the more recent ones.

Even with a reliable IDS signature, you still have to ask whether it is worth risking the loss of the data in those POST requests. I don't think it is, particularly if there is a common external web server that can be blocked to prevent the data extrusion.

There are a few ways to block data in a case like this. An inline IDS is one, but may not catch all outbound data. In this case, the signatures have an extremely low risk of blocking non-malicious traffic, but that doesn't mean they'll catch all the malicious traffic either. The signatures will accurately point out any infected systems even if configured to drop the traffic.

Blocking the IP address at a firewall would work well to prevent data loss, but has two main drawbacks. First, you won't detect infected systems with your IDS, at least not based on current signatures. The TCP handshake won't even be completed, let alone the GET request that triggers the alert. Second, but much less problematic, is that you could be blocking many websites on that particular IP address. I say less problematic because it may be the case that one compromised virtual host on an IP address means you should treat the server as completely compromised, so it is best to block the IP rather than just a FQDN.
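For reference, the actual block can be a one-line firewall rule. The address below is just a placeholder for the check-in server, and the optional LOG rule is one way to keep some visibility once the IDS no longer sees completed connections:
iptables -A FORWARD -d 192.0.2.80 -j LOG --log-prefix "checkin-block: "
iptables -A FORWARD -d 192.0.2.80 -j DROP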

Another way to block the server that is receiving the stolen POST data is with an appliance or software for filtering web traffic. The advantage here is that connection attempts may still generate IDS alerts even though the connection actually stops at the web filter rather than the external server. You can potentially block all the traffic while still generating IDS alerts to detect infected systems, depending on your web filter.

If you block connections to a web server that is a known check-in location, NSM is very useful. Put together a script or database query that looks at your session data for repeated connection attempts to the blocked site and you'll find out whether any systems may be infected and trying to check in. Session data will show each connection attempt even though there will be no responses.
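As a rough sketch, assuming a Sguil sancp table in a database named sguildb and using 192.0.2.80 as a placeholder for the blocked check-in address, the query might look something like this:
mysql -u sguil -p sguildb -e "
SELECT INET_NTOA(src_ip) AS source, COUNT(*) AS attempts
FROM sancp
WHERE dst_ip = INET_ATON('192.0.2.80')
AND start_time >= DATE_SUB(UTC_DATE(), INTERVAL 24 HOUR)
GROUP BY src_ip
ORDER BY attempts DESC;"
Any internal host that shows up with a pile of attempts and no business talking to that address is worth a closer look.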

02 December, 2007

What I read

One of the things I've noticed about a lot of successful information security professionals is the large amount of reading they do. The pace and scope of the field make it very important to read.

As I find one site to read, it often leads to one, two, or many other interesting sites. Reading all these sites has taught me a huge amount about a wide variety of topics. Even when I'm just reading opinions rather than technical content, it is often food for thought.

The sites I have listed on the right represent just a portion of the regular reading I do. Some of those sites will also mention or have links to other good reading. I particularly like technical content, explanations or HOWTOs that can be used in the real world.

While it's probably impossible to read too much in this field, it's definitely possible to read too little.

Managing multiple systems

This post is really just my thoughts and notes on managing multiple Sguil sensors. A lot of the thoughts could be applied to managing multiple UNIX-like systems in general, whether or not they are NSM systems. I would love any feedback on what other people recommend for managing Unix deployments.

In the past, most Linux or FreeBSD systems I've managed have been single systems, not part of a larger group of systems. Currently, I'm actually well past the point where I should have come up with a more elegant and automated solution to managing multiple systems of similar configurations.

In my current environment, I have multiple sensors with almost identical configurations. The configuration differences are minor and mainly because of different sensor locations.

Most of my management amounts to patching operating systems, patching software, and updating Snort signatures. To update Snort signatures, I generally run Oinkmaster on a designated system, review rule changes, disable or enable new rules as needed, then run a script to push out the updated rules and restart Snort with the new rules on each sensor.
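For reference, the push script is nothing fancy. A stripped-down sketch might look like the following, where the sensor list file, rule path, and init script name are placeholders for my real setup and SSH keys are assumed:
#!/bin/sh
# Push the reviewed rules to each sensor and restart Snort.
# sensors.txt holds one sensor hostname per line.
for sensor in $(cat /home/nr/sensors.txt); do
    scp /etc/snort/rules/*.rules $sensor:/etc/snort/rules/
    ssh $sensor '/etc/init.d/snortd restart'
done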

When I build new sensors, I manually install the operating system. With RHEL systems, I should really be using Kickstart to help automate the installation and configuration of new systems.

After I install the operating system, I have a set of scripts I use to automate configuration and installation of software. These scripts are similar to David Bianco's InstantNSM except mine are much more specifically tailored for my environment and include configuration of the operating system, not just installing NSM software.

User account management is another facet of system management. For NSM sensors, since access to the systems is limited to an extremely small number of people, account management is not a huge issue and I won't be getting into that aspect of system management.

One site I've seen mentioned as a reference for managing numerous systems is Infrastructures.Org. The first thing I noticed on the Infrastructures site was the recommendation to use version control to manage configuration files. This is probably obvious to a lot of people, but I had always thought of version control in terms of software projects, not system management. Subversion, CVS, or your version control system of choice can be useful for managing systems, not just software. Another option would be something like rdist, a program for maintaining identical copies of files across multiple hosts.
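As a sketch of how that might look with Subversion (the repository host and paths here are made up):
# One-time import of the Snort configuration into the repository
svn import /etc/snort svn+ssh://configbox/var/svn/configs/snort -m "initial import"
# On each sensor, replace /etc/snort with a working copy of the repository
mv /etc/snort /etc/snort.orig
svn checkout svn+ssh://configbox/var/svn/configs/snort /etc/snort
# After committing a change centrally, each sensor only needs an update
svn update /etc/snort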

One other thing some people do is maintain local software repositories. On RHEL systems this might mean a local RPM repository for software that has been customized or is not available in the official Red Hat repositories. It could also mean considering something like Red Hat Satellite Server, depending on exactly what you want out of it. In my case, I think Satellite Server may be overkill, but it certainly has some interesting things to offer.
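A local repository doesn't have to be complicated. Something like the following sketch would work on RHEL5 with yum, with the host name and paths being examples only:
# Drop custom RPMs into a directory served by a web server and index it
mkdir -p /var/www/html/localrpms
cp /usr/src/redhat/RPMS/*/*.rpm /var/www/html/localrpms/
createrepo /var/www/html/localrpms
# Point each sensor at the repository
cat > /etc/yum.repos.d/local-nsm.repo <<'EOF'
[local-nsm]
name=Local NSM packages
baseurl=http://repohost/localrpms
enabled=1
gpgcheck=0
EOF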

There are several things I see myself needing to do to make management of my systems better. These are just places to start, and hopefully once I explore the options I will be posting my experiences.

  1. Kickstart for OS installation, which would also probably replace some of my scripts used to configure new systems.
  2. Modify my scripts for NSM setup to work with Kickstart if necessary.
  3. Version control and automating the process of systems synchronizing files.
  4. Local software repositories for customized software, including making RPMs of the modified software instead of compiling the software and then pushing it out.

30 November, 2007

A "for" loop with tcpdump

This is the kind of post to which most people with Linux or Unix experience will say "Duh!", or even offer better ways to do what I'm doing. It's just a short shell script I use to loop through all the packet capture files on a Sguil sensor looking for specific host(s) and/or port(s). Sometimes it is easier to run a script like this and then SCP the files from the sensor to a local system instead of querying and opening the packet captures one by one using the Sguil client. This also allows more leisurely analysis without worrying about the log_packets script deleting any of the data I want to see.

Nigel Houghton mentioned that any shell script of more than 10 or 20 lines should be rewritten in Perl. Despite his advice, I do plan to switch some of my shell scripts to Perl when I get the chance. ;)

The sensorname variable works for me because my sensor uses the hostname as the sensor name. If my sensor name didn't match the actual hostname or I had multiple sensors on one host then I would have to change the variable.

The BPF expression can be changed as needed. Maybe a better way would be to set it to a command line argument so you don't have to edit the file every time you run the script. The following example uses a Google IP address and port 80 for the BPF. I most often use this script to simply grab all traffic associated with one or more IP addresses.

It may also be worth noting that I run the script as a user that has permissions to the packet captures.

#!/bin/sh

sensorname="`hostname`"
bpfexpression="host 64.233.167.99 and port 80"
outdir=/home/nr/scriptdump

if [ ! -d "$outdir" ]; then
mkdir $outdir
fi

# For each dir in the dailylogs dir
for i in $( ls /nsm/$sensorname/dailylogs/ ); do
# For each file in dailylogs/$i dir
for j in $( ls /nsm/$sensorname/dailylogs/$i ); do
# Run tcpdump and look for the host
tcpdump -r /nsm/$sensorname/dailylogs/$i/$j -w $outdir/$j.pcap $bpfexpression
done
done

# For each pcap
cd $outdir
for file in $( ls *.pcap ); do
# If file has size of 24, it has no data so rm file
if [ "`ls -l $file | awk '{print $5}'`" = "24" ]; then
rm -f "$file"
fi
done
One of the things I like about posting even a simple script like this is that it makes me really think about how it could be improved. For instance, it might be nice to take the BPF expression and the write directory from the command line so the script doesn't have to be edited every time.
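For example, the two variables could keep their current values as defaults but accept overrides from the command line, something like:
# Take the BPF expression and output directory as optional arguments,
# falling back to the old hard-coded values.
bpfexpression="${1:-host 64.233.167.99 and port 80}"
outdir="${2:-/home/nr/scriptdump}"
Then running the script with something like ./grabpcaps.sh "host 10.1.1.1" /tmp/pcaps (the script name is arbitrary) would override both without editing the file.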

Also, in Sguil the full content packet captures use the Unix time in the file names. If you ever had a copy of the file where the timestamp wasn't preserved, you could still find it by looking at the epoch time in the file name. For instance, with a file named "snort.log.1194141602", the following would convert the epoch time to local time:
$ date -d @1194141602
Sat Nov 3 22:00:02 EDT 2007

29 November, 2007

Origin of Eating Security

When I started this blog, I couldn't really think of a name for the blog. I figured that maybe someone else could do the job for me, so I searched the Internet for "blog name generator". I didn't really find anything useful, so then I tried searching for a band name generator. I plugged the word "security" into the first hit, and that's how I ended up Eating Security.

16 November, 2007

Bleeding Edge Founder Steps Down

The founder of Bleeding Edge Threats, Matt Jonkman, has stepped down. I don't think this could be considered anything but bad news for the project. Apparently the project now belongs to Sensory Networks, the main sponsor of the last 12 months. I believe the rules are BSD licensed, so while they are currently free, who knows what will happen in the future.

I'm sure Matt Jonkman will continue to do some interesting things in the future, and it sounds like he hopes to share with the rest of us again.

01 November, 2007

Snort Performance and Memory Map Pcap on RHEL

I previously wrote about installing Phil Wood's memory map enabled libpcap as an academic exercise on my home network. As Victor Julien pointed out to me, Snort in inline mode will be using ip_queue rather than the libpcap interface to the kernel. However, I have actually been using mmap libpcap with Snort in IDS mode for quite a while to help reduce packet loss while monitoring fairly high-bandwidth connections. In testing, I have been able to consistently run Snort in IDS mode with a flow_depth of zero and no packet loss on a link doing up to 80Mb/s. Admittedly, this is on pretty high-end hardware, but it is much better performance than Snort with the regular libpcap. By tuning Snort, like setting flow_depth to something more sensible than zero and disabling rules that are bad for performance, it is easy to see how Snort could be used to monitor even busier links.

There are other ways to increase Snort performance, too. One is using PF_RING. Richard Fifarek has a good write-up about setting up PF_RING with Red Hat Enterprise Linux 4. Maybe I'll get around to testing PF_RING under the same circumstances as I use mmap libpcap, but I'd love to see comments from those that have done it already. I have seen a few comparisons, but not recently. The documentation for PACKET_MMAP, which can be found in the Linux kernel source, has more to say about packet capture performance.

It's fine to use PACKET_MMAP to improve the performance of the capture process, but it isn't everything. At least, if you are capturing at high speeds (this is relative to the cpu speed), you should check if the device driver of your network interface card supports some sort of interrupt load mitigation or (even better) if it supports NAPI, also make sure it is enabled.
I haven't had a chance to play with NAPI yet, but for anyone that is interested there is a NAPI_HOWTO.txt, also in the documentation section of the Linux kernel source.

One last way to improve performance is to use an architecture-specific compiler when building Snort. Although I'm not sure about other architectures, when someone asked a Sourcefire employee at ShmooCon a couple of years ago about improving Snort performance, using the Intel compiler on Intel hardware was mentioned as one of the best ways to do it.

Installing the modified libpcap and reinstalling all the software that depends on it is fairly simple on RHEL. In this case, I'm using RHEL5. Before installing the new libpcap, I stopped any processes that depended on Red Hat's official libpcap package. I removed the official Red Hat libpcap package and any other RPMs that depended on it.
rpm -e libpcap tcpdump
Then I installed the new libpcap.
tar xvzf libpcap-0.9x.20070528.tar.gz
cd libpcap-0.9x.20070528
./configure
make
make install
Now I can go ahead and reinstall the other software using the new libpcap. Just for kicks, the following installs libnet so I can enable flex-resp support even though I don't consider flex-resp very useful in production. Better to just put the box inline if you really need to stop traffic. TCP resets or ICMP unreachable messages leave something to be desired.
cd /usr/src/Libnet-1.0.2a
./configure --prefix=/usr/local/libnet
make
make install

cd /usr/src/tcpdump-3.9.8/
make clean
./configure
make
make install
The following configure statement for Snort should work with 2.6.1.5 or 2.8.0, but I think the --enable-stream4udp is actually now a default for 2.8.0 while it was not for 2.6.x.x.
cd /usr/src/snort-2.8.0
make clean
./configure --without-mysql --without-postgresql --without-oracle \
--without-odbc --enable-dynamicplugin --enable-flexresp --enable-stream4udp \
--enable-perfprofiling --with-libnet-libraries=/usr/local/libnet/lib \
--with-libnet-includes=/usr/local/libnet/include
make
make install
I'm using sancp for session data:
cd /usr/src/sancp-1.6.1/
make clean
make
./install.sh
Finally, I need to set PCAP_FRAMES to a value that the system can use. Finding the maximum can be trial and error, but here are some examples of how to set it. One way is to set it for each individual application at startup. For instance, sancp doesn't seem to like to use PCAP_FRAMES, so I put the following in the sancpd init script:
export PCAP_FRAMES=0
On the other hand, Snort in IDS mode or packet logging mode can benefit a lot from setting PCAP_FRAMES. For instance,
export PCAP_FRAMES=65535
As I said, it may take experimentation to find the maximum value or the value that results in the best performance. For something like tcpdump, you might want to create an alias something like:
alias tcpdump='PCAP_FRAMES=65535 tcpdump'

30 October, 2007

Better Work Means More Work

Security is one of those fields where, the better a job you do, the more work you have to do. To some extent, all jobs are like this. If you do a terrible job, nobody is going to want to give you more work. If you do a great job, people will give you as much or more work than you can handle. However, this is not really the type of extra work I am referring to.

One example of what I'm talking about is intrusion detection. The better you get at intrusion detection, the more incident response you will end up doing as a result. Getting better at security operations in general will often lead to the discovery of more intrusions as your knowledge increases, new systems are implemented, and security systems are improved. Someone who is good at penetration testing or application fuzzing may be able to find and exploit more vulnerabilities, and in the end do extra work because of that. I'm sure there are many more examples.

On the other hand, better work also means you can streamline processes or reduce the number of security incidents. Increasing the depth of your defense to reduce security incidents, automating processes, more clearly mapping processes, and more efficiently achieving an objective are all possibilities to reduce the amount of work. Being better at penetration testing may mean finding more useful information about the security of your target, but you may also perform the actual penetration test more quickly.

Richard Bejtlich likes to point out that prevention eventually fails. I agree but would like to add that I think there will always be security incidents that are missed. Getting better at detection means more time spent on incidents, which is a good thing by the way. However, no matter how good you get at detection, I firmly believe that nobody will catch everything worth catching. There are probably exceptions, but in the type of enterprise network I'm used to dealing with, catching every single noteworthy security incident seems unlikely.

Someone in operational security can also improve prevention. Just because prevention eventually fails doesn't mean it never works or should be ignored. You might think that getting better at prevention means you will have less work to do when it comes to detection and response. But better prevention generally means more work on design, testing, configuration, maintenance, documentation, and more.

Anyone that does a good job will be in demand, leading to more work. With security, I also think doing a good job means you may discover more work to do along every step of the way.

23 October, 2007

Upgrading to Snort 2.8.0

I finally upgraded my test sensor from Snort 2.6.1.5 to Snort 2.8.0. David Bianco mentioned some of the new features in his blog a couple of months ago, so I won't get into the differences. I am mainly documenting the things I had to do for the upgrade so I have a reference if needed at a future date.

The first thing I did was look through the documentation in the "docs" directory, reading some of the README files for the preprocessors. The README.variables file was of particular interest since Snort 2.8 allows port lists. I also looked at the snort.conf file in the "etc" directory of the source tree to see how it differed from my current configuration file.

Next, I made a copy of my snort.conf from 2.6.1.5 and edited it for the changes in 2.8.0. I changed the HTTP_PORTS variable to list a few other ports besides 80, including 8080 and 8000. The example snort.conf defines multiple HTTP_PORTS with the new portvar keyword.

portvar HTTP_PORTS [80,8000,8080,8888]
Although I only run HTTP on port 80, the Snort web-client ruleset and a number of Bleeding rules use the $HTTP_PORTS variable to detect attacks against web clients like Internet Explorer, Mozilla Firefox, media players, and more. After that simple change, I configured and installed Snort.
./configure --enable-dynamicplugin --enable-inline --enable-perfprofiling
make
make install
After installing, I tried to start Snort. The first problem I encountered was with my stream5 configuration. I had previously been using the stream4 and the flow preprocessors, but when changing the configuration to use stream5 I had not removed the flow preprocessor configuration. Stream5 handles everything that used to be handled by the combination of flow and stream4, so I removed the flow configuration. I also had to add the stream5_udp and stream5_icmp options.
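For reference, a minimal stream5 section looks roughly like the following. This is only a sketch based on the example snort.conf; the exact option values will depend on your environment:
preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp yes
preprocessor stream5_tcp: policy first, use_static_footprint_sizes
preprocessor stream5_udp: timeout 180
preprocessor stream5_icmp: timeout 30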

A check of the configure help will also show that --enable-dynamicplugin is the default with 2.8.0, so it should not actually be needed in the configuration command.

After fixing stream5, I tried again and had one more problem. I was getting the following errors:
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(206) unknown dynamic preprocessor "ftp_telnet"
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(210) unknown dynamic preprocessor "ftp_telnet_protocol"
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(221) unknown dynamic preprocessor "ftp_telnet_protocol"
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(226) unknown dynamic preprocessor "ftp_telnet_protocol"
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(238) unknown dynamic preprocessor "smtp"
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(307) unknown dynamic preprocessor "dcerpc"
Oct 23 19:18:52 sensor snort[3117]: /etc/snort/snort.conf(313) unknown dynamic preprocessor "dns"
Oct 23 19:18:52 sensor snort[3117]: FATAL ERROR: Misconfigured dynamic preprocessor(s)
This was pretty easy to fix. I just needed the proper path to the dynamic preprocessors.
dynamicpreprocessor directory /usr/local/lib/snort_dynamicpreprocessor/
After I fixed the snort.conf, the next try to start Snort 2.8.0 was successful. Now that I have it installed and running with very similar settings to 2.6.1.5, it's time to dig deeper into the differences and possibly test other configuration changes.

16 October, 2007

Sguil 0.7.0 Client and NetBIOS Names

Previous versions of the Sguil client used a different method for reverse DNS lookups. That method must have fallen back to the operating system's name resolution at some point, because I used to get NetBIOS names for systems that had them when no DNS name was returned. The Sguil 0.7.0 client uses tcllib's DNS client to resolve names, which allows DNS proxying through OpenDNS or other DNS servers. However, this method is purely DNS, so any Windows system without a DNS entry returns "Unknown" as the hostname. I decided to play with the client and add NetBIOS name queries in the event the reverse DNS comes up empty.

The file that contains the GetHostbyAddr proc in Sguil is extdata.tcl and lives in the lib directory. From the description in the file:

#
# GetHostbyAddr: uses extended tcl (wishx) to get an ips hostname
# May move to a server func in the future
#

The proc does a few things. First, it checks whether external DNS is enabled and whether an external server has been configured in sguil.conf. If an external DNS server is set, the proc then checks whether a HOME_NET is set. If HOME_NET is set, it is compared to the IP being resolved; a match means the local nameserver is used rather than the external one. If HOME_NET is not set or the IP does not match HOME_NET, then the external nameserver is used. If external DNS is not selected in the client, the local nameserver is always used.

If the name resolution returns no value, then the client displays "Unknown" as a result. Just prior to that is where I added the NetBIOS name lookup. Here is the whole GetHostbyAddr proc after I modified it, with a few comments to follow:

proc GetHostbyAddr { ip } {

global EXT_DNS EXT_DNS_SERVER HOME_NET NETBIOS_NET

if { $EXT_DNS } {

if { ![info exists EXT_DNS_SERVER] } {

ErrorMessage "An external name server has not been configured in sguil.conf. Resolution aborted."
return

} else {

set nameserver $EXT_DNS_SERVER

if { [info exists HOME_NET] } {

# Loop thru HOME_NET. If ip matches any networks then use the locally configured
# name server
foreach homeNet $HOME_NET {

set netMask [ip::mask $homeNet]
if { [ip::equal ${ip}/${netMask} $homeNet] } { set nameserver local }

}

}

}

} else {

set nameserver local

}

if { $nameserver == "local" } {

set tok [dns::resolve $ip]

} else {

set tok [dns::resolve $ip -nameserver $nameserver]

}

set hostname [dns::name $tok]
dns::cleanup $tok

# Added hack to use NetBIOS name lookups if no DNS entry
if { $hostname == "" } {

# Only check NETBIOS_NET addresses for NetBIOS names
if { [info exists NETBIOS_NET] } {

# Loop thru NETBIOS_NET. If ip matches then do NetBIOS lookup
foreach netbiosNet $NETBIOS_NET {

set netMask [ip::mask $netbiosNet]
if { [ip::equal ${ip}/${netMask} $netbiosNet] } {

# NetBIOS for Windows client
if { $::tcl_platform(platform) == "windows" } {

regexp {.+?(.{15})<00>} [ exec nbtstat -a $ip ] \
dummyvar hostname }

# NetBIOS for Unix client but you need samba client tools
if { $::tcl_platform(platform) == "unix" } {

# Match 16 chars because of trailing space w/nmblookup
regexp {.+?(.{16})<00>} [ exec nmblookup -A $ip ] \
dummyvar hostname }

}

}

}

}

if { $hostname == "" } { set hostname "Unknown" }
return $hostname

}

I took Bamm's check for HOME_NET and changed it to a check for NETBIOS_NET. I think people will definitely not want every IP to get a NetBIOS query if it has no DNS entry, nor will they want every IP in HOME_NET to be checked. This change means you need to add a NETBIOS_NET to sguil.conf if you want to resolve NetBIOS names.

Next, I check for the platform and perform either an nbtstat from a Windows client or an nmblookup from a Unix client.

The regular expression I used to grab the result of the query is sort of weak, but functional. It simply finds the first instance of "<00>" and grabs the preceding 15 characters on Windows or 16 characters when doing a nmblookup on Unix. This is because the output of nmblookup has an extra space after the NetBIOS name, which can be up to 15 characters, while a 15 character name on Windows will have no trailing space. I would like to cut out any trailing whitespace using the regular expression, but I'm not sure about the best way to do it. A word boundary won't work because NetBIOS names can have characters like dashes and dots.

Suggestions and feedback welcome. Is something like this useful? How can it be improved? Is there some much easier way to do this? I can barely spell TCL let alone code it, so be gentle!

Edit: giovani from #snort-gui gave me this regexp that should grab the NetBIOS name without including trailing whitespace. He also pointed out that having spaces between the braces and the regexp, as I did, can alter the regexp.
{([^\s\n\t]+)\s*<00>}

30 September, 2007

Running IDS Inline

When do you run IDS inline?

Although the idea of running an IDS inline to prevent attacks rather than just detect them is appealing, reality is harsh. There are a lot of problems running an IDS inline.

The first and most significant problem with running IDS inline is that most people don't adequately tune their IDS. This is a problem whether you run inline or passive IDS. I get the feeling that the general management view of IDS, perpetuated by marketing, is that you should be able to drop an IDS on the network and it is ready to go. This is far from the truth. Network monitoring requires tuning all the systems involved, and it requires humans to analyze the information those systems provide.

When I set up my first Snort deployment, I spent countless hours tuning and tweaking the configuration, rules, performance and more. Along the way, I added Sguil, which was useful both for more thorough analysis and more effective tuning based on those analyses. Even after all that, tuning is a daily requirement and never should stop.

Every time you update rules, you should tune. Every time the network changes, you may need to tune. Every time your IDS itself is updated, you may need to tune. Every time a new attack is discovered, you may need to tune. Tuning does not and should not end unless you want your network security monitoring to become less effective over time. I would be willing to bet that a majority of inline IDS don't come close to taking full advantage of blocking features because of a lack of tuning. Passive IDS suffers from the same problem, but the results can be even worse if an inline IDS starts blocking legitimate traffic.

Another problem with running IDS inline, somewhat related to tuning, is that rules are never perfect. Do you trust your enabled rules to be 100% perfect? If not, what percentage of alerts from a given rule must be accurate to make the rule acceptable in blocking mode? Even good rules will sometimes or often trigger on benign traffic.

For Snort, some of the most reliable rules are the policy-violation rules. Both VRT rules and Bleeding rules are extremely reliable when detecting policy violations, with a minimal number of alerts triggered on unrelated network traffic. Spyware rules are similarly accurate.

Rules designed to detect more serious malicious activity are much less consistent. Most are simply not reliable enough to use in a mode that blocks the traffic the rule is designed to detect. That does not mean the rules are necessarily bad! Good rules still aren't perfect. This is one of many reasons why there will always be work for security analysts. IDS or other NSM systems are no substitute for the analysis that an experienced human can provide. They are simply tools to be used by the analyst.

Last, don't forget that running anything inline can seriously impact the network. If my Snort process at home dies for some reason, I can survive without Internet access until I get it restarted. This isn't always the case for businesses, so consider whether your traffic should be going through a system that fails open if there are problems. Even at home I don't queue port 22 to snort_inline because I want to be able to SSH into the box from the Internet if there are problems.

The real question that has to be answered is whether the benefits of dropping traffic are worth the risks of running inline.

21 September, 2007

First Impressions of Sguil 0.7.0

Since first using Sguil on version 0.5.3, I have definitely become a believer in Sguil and network security monitoring. Sguil is much more than just a front-end for looking at Snort alerts. It integrates other features, such as session data and full content packet captures, that make security monitoring much more effective than just looking at IDS alerts.

I recently upgraded my version of Sguil from the release version, 0.6.1, to the new beta version, 0.7.0. I have not even been using it for a week yet, but I do have a few first impressions.

Before talking about Sguil itself, I would like to say thanks to Bamm Visscher, the original author of Sguil, Steve Halligan, a major contributor, and all the others who have contributed. I can say that it is easy to tell that Sguil is written and developed by security analysts, and it is bound to make you look smart to your bosses. Sguil has a thriving community of users and contributors that have been very active in improvements and documentation.

Upgrading Sguil from 0.6.1 to 0.7.0 was less difficult than I had anticipated. Reading the Sguil on RedHat HOWTO is a good idea, even if using a different operating system or distribution. It is not necessary to follow the HOWTO but it does provide a lot of useful information to ease the installation or upgrade process.

The basic procedure for upgrading was to stop sguild and sensor_agent, run the provided script to update the database, upgrade the components, and add PADS. Of course, if you are doing this in a production environment then you probably would want to backup the database, sguild files on the server, and sguil files on the sensors. Since I was upgrading at home, I didn't bother backing up my database.

Under the hood, communications between sensors and the Sguil server has changed. The network and data flow diagrams I contributed to the NSMWiki show the change from one sensor agent to multiple agents. One reason for this change is that it will make it easier to distribute sensor services across multiple systems, allowing people to run packet logging, Snort in IDS mode, or sancp on separate systems. I can see this being extremely useful if you are performance-limited because of old hardware, or if you monitor high-traffic environments.

Another reason for the change in agents was to make it easier to add other data sources. An example of this, the only one I know of so far, is Victor Julien's modsec2sguil. It will be interesting to see whether the changes to Sguil lead to agents for other data sources. I know that David Bianco has discussed writing an OSSEC agent to add a host-based IDS as a Sguil data source.

The changes to the client seem relatively minor but are useful. David Bianco already has written about added support for cloaking investigative activities in Sguil. Sguil 0.7.0 has added support for proxying DNS requests through a third party like OpenDNS. In fact, this feature was enabled by default when I installed my new client.

Another change is PADS, which will display real-time alerts to the client as new assets are detected on the network. An example of a PADS alert is:

PADS New Asset - smtp Microsoft Exchange SMTP
Though I like the idea of PADS a lot, it currently has some issues that definitely limit its usefulness. In its current form, Sguil's implementation of PADS has a bug where it generates alerts for external services, for instance when my home systems connect to external web or SMTP servers. The goal of PADS in the context of Sguil is really internal service detection so you can see any unusual or non-standard services on systems you are monitoring.

Even once the bug is fixed, I can see it being extremely noisy on a large, diverse network. I may make a separate post in the future with ideas regarding PADS. Defining profiles for groups of systems and their services is definitely one idea that might be useful, but I'm not sure how hard it would be to implement. PADS has potential, but it will take some time.

Another nice change in the Sguil client is the ability to click the reference URLs or SID when displaying Snort rules, opening your browser to the relevant page. This was a feature that was sorely needed and it will be nice not to need copy and paste for Snort rule references.

I'm looking forward to further testing and the future release of 0.7.0.

14 September, 2007

Pulling IP Addresses and Ranges From a Snort Rule

In my previous post, I included a short Perl script to pull IP addresses from a Snort rule file. The problem with the script was that it simply stripped CIDR notation rather than expanding to all the included IP addresses. For instance, 10.0.0.0/28 would become 10.0.0.0. After a little searching, I found the Perl module NetAddr::IP that can be used to more easily manage and manipulate IP addresses and subnets. I also found an example of how to use the module for subnet expansion, among other things.

The following modification of my previous script will not only grab all the IP addresses from the snort rule, but also expand all CIDR addresses to the individual IP addresses in the CIDR range.

Edit: If you grabbed the script before September 16, then grab it again. While I was trying to make it look pretty, I must have inadvertently altered the substitution to remove the trailing CIDR notation. The current version is corrected.

#!/usr/bin/perl
#
# Script to pull IP address from Snort rules file
# by nr
# 2007-08-30
# 2007-09-14 Added CIDR expansion

# Set filenames
$rulefile = "/home/nr/bleeding-compromised.rules";
$ip_outfile = "iplist.txt";

# Open file to read
die "Can't open rulefile.\n" unless open RULEFILE, "<", "$rulefile";
# Open file to write
die "Can't open outfile.\n" unless open IPLIST, ">", "$ip_outfile";

# Put each rule from rules file into array
chomp(@rule = <RULEFILE>);

# For each rule
foreach $rule (@rule) {
# Match only rules with IP addresses so we don't get comments etc
# This string match does not check for validity of IP addresses
if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
# Remove everything before [ character
$rule =~ s/.*\[//g;
# Remove everything after ] character
$rule =~ s/\].*//g;
# Split the remaining data using the commas
# and put it into ip_address array
@ip_address = split /\,/, $rule;

# For each IP address in array
foreach $ip_address (@ip_address) {

# Match on slash
if ( $ip_address =~ /.*\/.*/ ) {

# Expand CIDR to all IP addresses in range modified from
# http://www.perlmonks.org/?displaytype=print;node_id=190497
use NetAddr::IP;
my $newip = new NetAddr::IP($ip_address);

# While less than broadcast address
while ( $newip < $newip->broadcast) {
# Strip trailing slash and netmask from IP
$temp_ip = $newip;
$temp_ip =~ s/\/.*//g;
print IPLIST "$temp_ip\n";
# Increment to next IP
$newip ++;
}
}
# For non-CIDR, simply print IP
else {
print IPLIST "$ip_address\n";
}
}
}
}

# Close filehandles
close RULEFILE;
close IPLIST;

One last note. If you prefer to pass the rule file name and output file to the script at the command line every time you run it, change the $rulefile and $ip_outfile variables to equal $ARGV[0] and $ARGV[1].

10 September, 2007

Querying Session Data Based on Snort Rule IPs

Sometimes Snort rules can contain useful information but are not practical to use in production. The Bleeding Snort rules recently added a set of rules to detect connection attempts to known compromised hosts. If you take a look at the rules, you'll see that it is essentially a large list of IP addresses that are known to be compromised.

When I first ran these rules on a Snort sensor, they definitely gave me some alerts requiring action. However, the performance of the sensor really suffered, particularly as the set of IP addresses grew. Since the rules were causing packet loss, I wanted to disable them.

I decided to write a script to grab the IP addresses from the Bleeding rule, then load the addresses into my database to compare with stored sancp data. If you are monitoring your network but not storing session data acquired with tools like sancp or argus, you should be. In my case, I am running Sguil, which uses sancp.

First, I had to write the script to grab the IP addresses. Since I just finished an introduction to Perl class in school and had many projects that required string matching, I figured that was the way to go. Since I'm a beginner, I would be more than happy if anyone can offer improvements to the following script. In particular, one thing I have not worked on yet is a method for expanding CIDR notation to individual IP addresses. The bleeding-compromised.rules do contain some networks in CIDR notation rather than just individual IP addresses, and for now the script simply strips the notation resulting in an incomplete IP list.

Edit: I posted an updated script that replaces the one in this post. I would suggest using the updated version rather than this one.

#!/usr/bin/perl
#
# Script to pull IP address from Snort rules file
# by nr
# 2007-08-30

# Set filenames
$rulefile = "/nsm/rules/bleeding-compromised.rules";
$ip_outfile = "iplist.txt";

# Print error unless successful open of file to read
die "Can't open rulefile.\n" unless open RULEFILE, "<", "$rulefile"; # Open file to write open IPLIST, ">", "$ip_outfile";

# Put each rule from rules file into array
chomp(@rule = <RULEFILE>);

# For each rule
foreach $rule (@rule) {
# Match only rules with IP addresses so we don't get comments etc
# This string match does not check for validity of IP addresses
if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
# Remove everything before [ character
$rule =~ s/.*\[//g;
# Remove everything after ] character
$rule =~ s/\].*//g;
# Split the remaining data using the commas
# and put it into ip_address array
@ip_address = split /\,/, $rule;

# For each IP address in array
foreach $ip_address (@ip_address) {
# Remove CIDR notation (means those IP ranges are missed - need to fix)
$ip_address =~ s/\/.*//g;
# Print to output file one IP per line
print IPLIST "$ip_address\n";
}
}
}

# Close filehandles
close RULEFILE;
close IPLIST;

Now I have a file called "iplist.txt" with the list of IP addresses, one per line. Next, I log into MySQL and load the list into a temporary database and table. The table really only needs one column of the CHAR or VARCHAR data type. (See the MySQL documentation for creating tables or databases).
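As a sketch, creating both tables might look like this, with the column types being just one reasonable choice:
mysql -e "CREATE DATABASE IF NOT EXISTS temp;
CREATE TABLE temp.ipaddr (new_ip VARCHAR(15));
CREATE TABLE IF NOT EXISTS sguildb.ipaddresses (dst_ip INT UNSIGNED);"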

LOAD DATA LOCAL INFILE '/home/nr/iplist.txt' INTO TABLE temp.ipaddr;

Then I have to convert the IP addresses to the INT data type using the INET_ATON function so they can be matched against my sancp session data. I created the table "sguildb.ipaddresses" for cases like this where I want to load external IP address and then run a query. The "temp.ipaddr" table has one column called "new_ip". The "sguildb.ipaddresses" table also has one column, but called "dst_ip".

INSERT INTO sguildb.ipaddresses SELECT INET_ATON(new_ip) FROM temp.ipaddr;

In MySQL 5.x, you can combine the previous two steps to add and convert the data in one step. I'm currently running 4.1, so I have not investigated the exact syntax.

Finally, I can query my sancp session data for connections going to any of the IP addresses in sguildb.ipaddresses, which are the IP addresses from the Snort rule.

SELECT sancp.sid,INET_NTOA(sancp.src_ip),sancp.src_port,INET_NTOA(sancp.dst_ip),
sancp.dst_port,sancp.start_time FROM sancp INNER JOIN ipaddresses ON
(sancp.dst_ip = ipaddresses.dst_ip) WHERE sancp.start_time >= DATE_SUB(UTC_DATE(),
INTERVAL 24 HOUR) AND sancp.dst_port = '80';

This query will return the sensor ID, source IP, source port, destination IP, destination port, and session start time from the sancp table wherever the sancp.dst_ip matches the ipaddresses.dst_ip where I stored the IP addresses from the Snort rule. Notice that it will query the last 24 hours for port 80 connections only. Depending on the type of activity you are looking for, you could change ports or remove the port match entirely.

This whole process could be automated further by running the mysql commands directly from the Perl script so the compromised IP addresses are updated in the database when the script is run.

The final SQL query to match the compromised IP addresses with sancp destination IP addresses can easily be turned into a cronjob. For instance, if querying for the past 24 hours then run the cronjob once a day. Once the results are returned, if running Sguil with full content logging it is easy to then query for the individual connections you're interested in and view the ASCII transcripts or the packet captures.
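A rough sketch of such a cron job, where the script name, credentials, and mail recipient are placeholders and load_and_query.sql simply contains the LOAD DATA, INSERT, and SELECT statements shown above (plus any cleanup of the temporary tables):
#!/bin/sh
# Rebuild the IP list from the current rule file, reload it into MySQL,
# and mail the matching sancp sessions from the last 24 hours.
/usr/local/bin/pull_rule_ips.pl
mysql --local-infile -u sguil -p'secret' sguildb < /home/nr/load_and_query.sql \
| mail -s "Check-ins to compromised hosts" analyst@example.com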

08 September, 2007

Transparent Bridging, MMAP pcap, and Snort Inline

I use Snort and the Sguil Analyst Console for NSM, but there is always room to experiment and/or improve. Up to this point, I have used Snort either on mirrored (SPAN) ports or with a network TAP, both common configurations. After finally getting a second-hand CPU and motherboard to replace a dead CPU, I had a spare system to set up Snort inline at home.

The upgrade from Snort 2.4.x to 2.6.x.x was quite taxing on performance, so I decided it was also time to play with Phil Wood's MMAPped libpcap. The modified libpcap will make drastically fewer system calls when compared to the standard libpcap sniffing on a busy network. Although my home network certainly isn't high bandwidth, I wanted the experience of setting up Snort with the Phil Wood's modified version of libpcap. Since I actually did all this many months ago and am just now posting about it, I can say that I have seen a huge performance improvement when going from the standard libpcap to Phil Wood's libpcap in high bandwidth environments.

How would I implement Snort Inline on a home network? The two choices were to replace one of my routers with a BSD or Linux system configured as a router, or to set up Linux as a transparent bridge. For those that prefer FreeBSD, you would have to configure it as a router since the bridge code in FreeBSD does not support the ipfw divert socket. I am not familiar enough with other BSD versions to say whether their bridge code is the same or not.

I much preferred a bridge rather than a router since it would avoid the time-consuming process of reconfiguring my network topology. This meant that I had to use Linux, and my distribution in this case is Slackware-current. The process should not be much different for any distribution.

Installing Software

Because plugging in an untested bridge between my LAN and the Internet could interrupt my connection, I decided it would be easiest to get and install all the software prior to configuring the bridge and putting it inline.

My first step was to install the modified libpcap, which needs either flex and bison, or yacc. This was essentially a freshly built Slackware system and I didn't have them installed, so I used swaret to install the packages.

swaret --install flex
swaret --install bison
I was now ready to install libpcap.
cd /usr/src/
wget http://public.lanl.gov/cpw/libpcap-current.tar.gz
tar xvzf libpcap-current.tar.gz
ln -s libpcap-0.9.20060417 libpcap
cd libpcap
./configure
make
make install
Snort will need the header files from libpcap and the install did not copy them anywhere, so I manually copied the files to /usr/include/. Another option would be to create a link to the files in the include directory.
cp /usr/src/libpcap/pcap.h /usr/include/
cp /usr/src/libpcap/pcap-bpf.h /usr/include/
cp /usr/src/libpcap/pcap-namedb.h /usr/include/
Because this is a modified libpcap, all the software that depends on libpcap must also be compiled against the version I just installed. I will definitely be using tcpdump when I test the bridge.
wget http://www.tcpdump.org/daily/tcpdump-current.tar.gz
tar xvzf tcpdump-current.tar.gz
ln -s tcpdump-2007.01.07 tcpdump
cd tcpdump
./configure
make
make install
Now I could test that tcpdump worked with the PCAP_FRAMES option available because of the modified libpcap. For some reason, perhaps because of my kernel version, PCAP_FRAMES=max did not work but I was able to use it by manually setting the value. I was able to bump the value of PCAP_FRAMES quite high, above 300000, before it resulted in errors. I have yet to determine what that really means for performance. Here are two commands I used to show that the newly installed tcpdump worked with the modified libpcap.
PCAP_FRAMES=65535 PCAP_VERBOSE=1 PCAP_TO_MS=0 PCAP_PERIOD=10000 /usr/local/sbin/tcpdump \
-i eth0 -s 1514 -w /dev/null -c 100

PCAP_FRAMES=65535 /usr/local/sbin/tcpdump -v -i eth0
Snort needs libnet-1.0.2a when configured with --enable-inline, so I had to install libnet first.
tar xvzf libnet-1.0.2a.tar.gz
cd Libnet-1.0.2a
./configure
make
make install
Finally, install Snort 2.6.x.x. (Note: Since this document was written quite a while ago, there are quite a few newer versions of Snort available). Another alternative to using the --enable-inline option with mainline Snort is to download snort_inline, which is maintained by William Metcalf and Victor Julien. There are a number of added features and conveniences when using snort_inline, as highlighted on Victor's blog. However, I used mainline Snort in the following example.
tar xvzf snort-2.6.1.2.tar.gz
cd snort-2.6.1.2
./configure --enable-dynamicplugin --enable-inline
make
make install
I tested snort with -V to check that it would start and was compiled to work inline. It shows that Snort was configured with the inline option.
snort -V

,,_ -*> Snort! <*-
o" )~ Version 2.6.1.2 (Build 34) inline
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
(C) Copyright 1998-2006 Sourcefire Inc., et al.

Configuring Linux for Transparent Bridging

Configuring the bridging is fairly simple. The only module I had to manually insert was ip_queue. Other modules that may be needed are ip_tables, iptable_filter and bridge.
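Loading them manually is just a matter of modprobe:
modprobe ip_queue
modprobe ip_tables
modprobe iptable_filter
modprobe bridge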

In this case, eth0 is my separate management interface, I named the bridge interface bridge0, and the physical interfaces joining bridge0 were eth1 and eth2. The bridge device can be configured with an IP address if you do not want to use a separate NIC for management. In either case, make sure to secure the management NIC on your Snort box, for example limiting connections to the management IP so only source IP addresses in your private IP space are allowed to connect. Here, I created the bridge interface, added eth1 and eth2 to it, and brought them up:
/sbin/brctl addbr bridge0
/sbin/brctl addif bridge0 eth1
/sbin/brctl addif bridge0 eth2
/sbin/ifconfig eth1 up
/sbin/ifconfig eth2 up
/sbin/ifconfig bridge0 up
After some iptables configuration, bridge0 should work. Assuming the system is dedicated to being a bridge running Snort Inline, the only addition necessary to make bridging work is the following:
iptables -I FORWARD -o bridge0 -j ACCEPT
Now bridge0 should be ready to pass packets.

Once the bridge is connected, iptables can be used to show packet statistics and confirm that the bridge is forwarding. You can also use tcpdump -v -i bridge0 to confirm that traffic is passing if you skip ahead to installing libpcap and tcpdump prior to plugging in the bridge.
iptables -vL
--snip--
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
527 168K ACCEPT all -- any bridge0 anywhere anywhere
--snip--
The bridge is seeing packets! tcpdump showed that they were more than just broadcast packets as I accessed the Internet through the bridge.
PCAP_FRAMES=65535 /usr/local/sbin/tcpdump -v -i bridge0
tcpdump: WARNING: bridge0: no IPv4 address assigned
tcpdump: listening on bridge0, link-type EN10MB (Ethernet), capture
--snip--
57 packets captured
57 packets received by filter
0 packets dropped by kernel

Testing Snort Inline

Finally, I'm ready to configure and test Snort Inline. I am not going to cover Snort configuration, but I did write one test rule to put in local.rules and disabled all the other rule sets in snort.conf. The following rule should drop any outbound connection on the HTTP_PORT set in snort.conf.
drop tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"Test rule outbound HTTP"; \
classtype:misc-activity; sid: 3000000;)
Now I run Snort. The -v will print packets to the console so I can confirm that snort is seeing the traffic and the -Q tells snort to accept input from the iptables queue.
/usr/local/bin/snort -Qvc /etc/snort/snort.conf -l /data
Note that I forgot to add the PCAP_FRAMES value. In production on a busy network, I would add it permanently to my environment variables and/or to my init scripts for Snort so it would always take advantage of Phil Wood's libpcap.

I add the necessary rule to iptables. I don't want to risk losing all connections by queueing everything, so I just queue port 80. This is what the line in my iptables-save file looks like:
-A FORWARD -i bridge0 -p tcp -m tcp --dport 80 -m state --state NEW -j QUEUE
I try to browse to a web site and get the following alert in /data/alert:
[**] [1:300000:0] Test rule outbound HTTP [**]
[Classification: Misc activity] [Priority: 3]
01/11-22:22:24.458394 xx.xx.xx.xx:60813 -> 66.35.250.209:80
TCP TTL:63 TOS:0x0 ID:52831 IpLen:20 DgmLen:60 DF
******S* Seq: 0xE79DA525 Ack: 0x0 Win: 0x16D0 TcpLen: 40
TCP Options (5) => MSS: 1460 SackOK TS: 604992578 0 NOP WS: 2
Everything is working. Now I have to get everything ready for production by fully configuring and tuning Snort. Once that is done, I will queue all traffic except port 22. Queuing port 22 could result in not being able to connect using SSH if the Snort process were to die or had to be restarted. If snort-inline is not running, all queued traffic is effectively dropped since Snort is required to pass the traffic from the queue back to iptables.
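In iptables-save format, queueing everything except SSH might look roughly like the following sketch, assuming these lines end up ahead of any blanket ACCEPT rule in the FORWARD chain:
-A FORWARD -i bridge0 -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -i bridge0 -p tcp -m tcp --sport 22 -j ACCEPT
-A FORWARD -i bridge0 -j QUEUE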

06 September, 2007

The Purpose of This Blog

Here goes nothing.

I have been thinking of creating a blog for quite a while, primarily to store and share small tidbits of information I come across as I muddle my way through the world of information security. Most of what I do is on the operational side of the security house. As I experiment with and work in security, I often find myself wishing I could share some of the information and processes I have used.

Most of the information I am sharing is not unique. I anticipate that many of my posts will aggregate information from a number of sources to help me document what, why, and how I did something. Don't forget the 'why', because that is important!

Two examples of posts I have planned:

  • Building and configuring a Snort IDS to run inline as a transparent bridge.
  • Pulling IP addresses from Bleeding Snort rules and then querying sancp (session data) for matches.
Neither of the planned posts will be groundbreaking, but the experiences were both practical and useful for me.