30 September, 2007

Running IDS Inline

When do you run IDS inline?

Although the idea of running an IDS inline to prevent attacks rather than just detect them is appealing, reality is harsh. There are a lot of problems with running an IDS inline.

The first and most significant problem with running IDS inline is that most people don't adequately tune their IDS. This is a problem whether you run an inline or a passive IDS. I get the feeling that the general management view of IDS, perpetuated by marketing, is that you should be able to drop an IDS on the network and it is ready to go. This is far from the truth. Network monitoring requires tuning both the systems involved and the humans who analyze the information those systems provide.

When I set up my first Snort deployment, I spent countless hours tuning and tweaking the configuration, rules, performance and more. Along the way, I added Sguil, which was useful both for more thorough analysis and more effective tuning based on those analyses. Even after all that, tuning is a daily requirement and should never stop.

Every time you update rules, you should tune. Every time the network changes, you may need to tune. Every time your IDS itself is updated, you may need to tune. Every time a new attack is discovered, you may need to tune. Tuning does not and should not end unless you want your network security monitoring to become less effective over time. I would be willing to bet that a majority of inline IDS deployments don't come close to taking full advantage of blocking features because of a lack of tuning. Passive IDS suffers from the same lack of tuning, but the consequences are worse inline, where a poorly tuned IDS can start blocking legitimate traffic.
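As a concrete example of what tuning looks like in practice, Snort's threshold.conf can suppress or rate-limit a noisy rule for known-benign sources instead of disabling it entirely. The SID and address below are made up for illustration:

```
# Never alert on SID 2003 for the known-benign scanner at 10.1.1.54
suppress gen_id 1, sig_id 2003, track by_src, ip 10.1.1.54

# Limit SID 2003 to one alert per source address every 60 seconds
threshold gen_id 1, sig_id 2003, type limit, track by_src, count 1, seconds 60
```

Suppressing by source keeps the rule active for everything else, which is usually better than commenting it out.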

Another problem with running IDS inline, somewhat related to tuning, is that rules are never perfect. Do you trust your enabled rules to be 100% perfect? If not, what percentage of alerts from a given rule must be accurate to make the rule acceptable in blocking mode? Even good rules will sometimes, or even often, trigger on benign traffic.

For Snort, one of the most reliable rule sets is the policy violation set. Both the VRT rules and the Bleeding rules are extremely reliable at detecting policy violations, with a minimal number of alerts triggered on unrelated network traffic. Spyware rules are similarly accurate.

Rules designed to detect more serious malicious activity are much less consistent. Most are simply not reliable enough to use in a mode that blocks the traffic the rule is designed to detect. That does not mean the rules are necessarily bad! Good rules still aren't perfect. This is one of many reasons why there will always be work for security analysts. IDS and other NSM systems are no substitute for the analysis that an experienced human can provide. They are simply tools to be used by the analyst.

Last, don't forget that running anything inline can seriously impact the network. If my Snort process at home dies for some reason, I can survive without Internet access until I get it restarted. This isn't always the case for businesses, so consider whether the system your traffic passes through needs to fail open when there are problems. Even at home, I don't queue port 22 to snort_inline because I want to be able to SSH into the box from the Internet if there are problems.

The real question that has to be answered is whether the benefits of dropping traffic are worth the risks of running inline.

21 September, 2007

First Impressions of Sguil 0.7.0

Since first using Sguil on version 0.5.3, I have definitely become a believer in Sguil and network security monitoring. Sguil is much more than just a front-end for looking at Snort alerts. It integrates other features, such as session data and full content packet captures, that make security monitoring much more effective than just looking at IDS alerts.

I recently upgraded my version of Sguil from the release version, 0.6.1, to the new beta version, 0.7.0. I have not even been using it for a week yet, but I do have a few first impressions.

Before talking about Sguil itself, I would like to say thanks to Bamm Visscher, the original author of Sguil; Steve Halligan, a major contributor; and all the others who have contributed. It is easy to tell that Sguil is written and developed by security analysts, and it is bound to make you look smart to your bosses. Sguil has a thriving community of users and contributors who have been very active in improvements and documentation.

Upgrading Sguil from 0.6.1 to 0.7.0 was less difficult than I had anticipated. Reading the Sguil on RedHat HOWTO is a good idea, even if you are using a different operating system or distribution. It is not necessary to follow the HOWTO exactly, but it does provide a lot of useful information to ease the installation or upgrade process.

The basic procedure for upgrading was to stop sguild and sensor_agent, run the provided script to update the database, upgrade the components, and add PADS. Of course, if you are doing this in a production environment, you would probably want to back up the database, the sguild files on the server, and the sguil files on the sensors. Since I was upgrading at home, I didn't bother backing up my database.

Under the hood, communication between the sensors and the Sguil server has changed. The network and data flow diagrams I contributed to the NSMWiki show the change from one sensor agent to multiple agents. One reason for this change is that it makes it easier to distribute sensor services across multiple systems, allowing people to run packet logging, Snort in IDS mode, or sancp on separate systems. I can see this being extremely useful if you are performance-limited because of old hardware, or if you monitor high-traffic environments.

Another reason for the change in agents was to make it easier to add other data sources. An example of this, the only one I know of so far, is Victor Julien's modsec2sguil. It will be interesting to see whether the changes to Sguil lead to agents for other data sources. I know that David Bianco has discussed writing an OSSEC agent to add a host-based IDS as a Sguil data source.

The changes to the client seem relatively minor but are useful. David Bianco has already written about the added support for cloaking investigative activities in Sguil. Sguil 0.7.0 also adds support for proxying DNS requests through a third party like OpenDNS. In fact, this feature was enabled by default when I installed my new client.

Another change is PADS, which will display real-time alerts to the client as new assets are detected on the network. An example of a PADS alert is:

PADS New Asset - smtp Microsoft Exchange SMTP
Though I like the idea of PADS a lot, it has some issues that definitely limit its usefulness. Sguil's current implementation of PADS has a bug where it generates alerts for external services, for instance when my home systems connect to external web or SMTP servers. The goal of PADS in the context of Sguil is really internal service detection, so you can see any unusual or non-standard services on the systems you are monitoring.

Even once the bug is fixed, I can see it being extremely noisy on a large, diverse network. I may make a separate post in the future with ideas regarding PADS. Profiles for groups of systems and their services are definitely one idea that might be useful, but I'm not sure how hard that would be to implement. PADS has potential, but it will take some time.

Another nice change in the Sguil client is the ability to click the reference URLs or SID when displaying Snort rules, opening your browser to the relevant page. This feature was sorely needed, and it will be nice not to have to copy and paste Snort rule references.

I'm looking forward to further testing and the future release of 0.7.0.

14 September, 2007

Pulling IP Addresses and Ranges From a Snort Rule

In my previous post, I included a short Perl script to pull IP addresses from a Snort rule file. The problem with the script was that it simply stripped CIDR notation rather than expanding to all the included IP addresses. For instance, 10.0.0.0/28 would become 10.0.0.0. After a little searching, I found the Perl module NetAddr::IP that can be used to more easily manage and manipulate IP addresses and subnets. I also found an example of how to use the module for subnet expansion, among other things.

The following modification of my previous script will not only grab all the IP addresses from the Snort rules file, but also expand all CIDR blocks to the individual IP addresses in each range.

Edit: If you grabbed the script before September 16, then grab it again. While I was trying to make it look pretty, I must have inadvertently altered the substitution to remove the trailing CIDR notation. The current version is corrected.

#!/usr/bin/perl
#
# Script to pull IP address from Snort rules file
# by nr
# 2007-08-30
# 2007-09-14 Added CIDR expansion

# Used below to expand CIDR blocks to individual addresses, modified from
# http://www.perlmonks.org/?displaytype=print;node_id=190497
use NetAddr::IP;

# Set filenames
$rulefile = "/home/nr/bleeding-compromised.rules";
$ip_outfile = "iplist.txt";

# Open file to read
die "Can't open rulefile.\n" unless open RULEFILE, "<", "$rulefile";
# Open file to write
die "Can't open outfile.\n" unless open IPLIST, ">", "$ip_outfile";

# Put each rule from rules file into array
chomp(@rule = <RULEFILE>);

# For each rule
foreach $rule (@rule) {
    # Match only rules with IP addresses so we don't get comments etc
    # This string match does not check for validity of IP addresses
    if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
        # Remove everything before [ character
        $rule =~ s/.*\[//g;
        # Remove everything after ] character
        $rule =~ s/\].*//g;
        # Split the remaining data using the commas
        # and put it into ip_address array
        @ip_address = split /\,/, $rule;

        # For each IP address in array
        foreach $ip_address (@ip_address) {
            # CIDR entries contain a slash
            if ( $ip_address =~ /\// ) {
                # Expand CIDR to all IP addresses in range
                my $newip = new NetAddr::IP($ip_address);
                # While less than broadcast address
                while ( $newip < $newip->broadcast ) {
                    # Strip trailing slash and netmask from IP
                    $temp_ip = $newip;
                    $temp_ip =~ s/\/.*//g;
                    print IPLIST "$temp_ip\n";
                    # Increment to next IP
                    $newip++;
                }
            }
            # For non-CIDR, simply print IP
            else {
                print IPLIST "$ip_address\n";
            }
        }
    }
}

# Close filehandles
close RULEFILE;
close IPLIST;

One last note. If you prefer to pass the rule file name and output file to the script at the command line every time you run it, change the $rulefile and $ip_outfile variables to equal $ARGV[0] and $ARGV[1].
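For comparison, the same extract-and-expand logic can be sketched in Python with the standard ipaddress module. The sample rule here is made up, and note one behavioral difference: this expansion yields every address in a block, network and broadcast included.

```python
import re
import ipaddress

def extract_ips(rule_lines):
    """Pull IPs from Snort rule lines, expanding CIDR blocks to individual hosts."""
    ips = []
    for rule in rule_lines:
        # Only consider lines containing something that looks like an IP,
        # so comments and blank lines are skipped
        if not re.search(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', rule):
            continue
        # Keep only the bracketed [ip,ip,cidr,...] list from the rule
        match = re.search(r'\[([^\]]*)\]', rule)
        if not match:
            continue
        for entry in match.group(1).split(','):
            if '/' in entry:
                # Expand the CIDR block; strict=False tolerates host bits set
                net = ipaddress.ip_network(entry, strict=False)
                ips.extend(str(host) for host in net)
            else:
                ips.append(entry)
    return ips

sample = ('alert ip $HOME_NET any -> [10.0.0.5,192.168.1.0/30] any '
          '(msg:"known compromised host"; sid:1000001;)')
for ip in extract_ips([sample]):
    print(ip)  # 10.0.0.5, then 192.168.1.0 through 192.168.1.3
```

Reading the rule file and writing the output list works the same way as in the Perl version.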

10 September, 2007

Querying Session Data Based on Snort Rule IPs

Sometimes Snort rules can contain useful information but are not practical to use in production. The Bleeding Snort rules recently added a set of rules to detect connection attempts to known compromised hosts. If you take a look at the rules, you'll see that it is essentially a large list of IP addresses that are known to be compromised.

When I first ran these rules on a Snort sensor, they definitely gave me some alerts requiring action. However, the performance of the sensor really suffered, particularly as the set of IP addresses grew. Since the rules were causing packet loss, I wanted to disable them.

I decided to write a script to grab the IP addresses from the Bleeding rule, then load the addresses into my database to compare with stored sancp data. If you are monitoring your network but not storing session data acquired with tools like sancp or argus, you should be. In my case, I am running Sguil, which uses sancp.

First, I had to write the script to grab the IP addresses. Since I just finished an introduction to Perl class in school and had many projects that required string matching, I figured that was the way to go. Since I'm a beginner, I would be more than happy if anyone can offer improvements to the following script. In particular, one thing I have not worked on yet is a method for expanding CIDR notation to individual IP addresses. The bleeding-compromised.rules do contain some networks in CIDR notation rather than just individual IP addresses, and for now the script simply strips the notation resulting in an incomplete IP list.
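To make the gap concrete, compare stripping with genuine expansion. A quick illustration (in Python, with an example CIDR block) shows how much is lost:

```python
import re
import ipaddress

cidr = '10.0.0.0/28'

# What stripping does: remove everything from the slash onward,
# leaving only the network address
stripped = re.sub(r'/.*', '', cidr)
print(stripped)  # 10.0.0.0 -- a single address

# What a full expansion would produce: every address in the /28
expanded = [str(ip) for ip in ipaddress.ip_network(cidr)]
print(len(expanded))  # 16
```

So each stripped /28 in the rule file silently drops fifteen of its sixteen addresses from the list.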

Edit: I posted an updated script that replaces the one in this post. I would suggest using the updated version rather than this one.

#!/usr/bin/perl
#
# Script to pull IP address from Snort rules file
# by nr
# 2007-08-30

# Set filenames
$rulefile = "/nsm/rules/bleeding-compromised.rules";
$ip_outfile = "iplist.txt";

# Print error unless successful open of file to read
die "Can't open rulefile.\n" unless open RULEFILE, "<", "$rulefile";
# Open file to write
open IPLIST, ">", "$ip_outfile";

# Put each rule from rules file into array
chomp(@rule = <RULEFILE>);

# For each rule
foreach $rule (@rule) {
    # Match only rules with IP addresses so we don't get comments etc
    # This string match does not check for validity of IP addresses
    if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
        # Remove everything before [ character
        $rule =~ s/.*\[//g;
        # Remove everything after ] character
        $rule =~ s/\].*//g;
        # Split the remaining data using the commas
        # and put it into ip_address array
        @ip_address = split /\,/, $rule;

        # For each IP address in array
        foreach $ip_address (@ip_address) {
            # Remove CIDR notation (means those IP ranges are missed - need to fix)
            $ip_address =~ s/\/.*//g;
            # Print to output file one IP per line
            print IPLIST "$ip_address\n";
        }
    }
}

# Close filehandles
close RULEFILE;
close IPLIST;

Now I have a file called "iplist.txt" with the list of IP addresses, one per line. Next, I log into MySQL and load the list into a temporary database and table. The table really only needs one column of the CHAR or VARCHAR data type. (See the MySQL documentation for creating tables or databases).

LOAD DATA LOCAL INFILE '/home/nr/iplist.txt' INTO TABLE temp.ipaddr;

Then I have to convert the IP addresses to the INT data type using the INET_ATON function so they can be matched against my sancp session data. I created the table "sguildb.ipaddresses" for cases like this, where I want to load external IP addresses and then run a query. The "temp.ipaddr" table has one column called "new_ip". The "sguildb.ipaddresses" table also has one column, called "dst_ip".

INSERT INTO sguildb.ipaddresses SELECT INET_ATON(new_ip) FROM temp.ipaddr;

In MySQL 5.x, you can combine the previous two steps to add and convert the data in one step. I'm currently running 4.1, so I have not investigated the exact syntax.
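For reference, INET_ATON simply maps a dotted quad to its 32-bit integer form, the same representation the sancp IP columns use, which is what makes the join efficient. The mapping is easy to verify outside MySQL, for example in Python:

```python
import ipaddress

def inet_aton(addr):
    """Equivalent of MySQL's INET_ATON: dotted quad -> 32-bit integer."""
    return int(ipaddress.IPv4Address(addr))

def inet_ntoa(number):
    """Equivalent of MySQL's INET_NTOA: 32-bit integer -> dotted quad."""
    return str(ipaddress.IPv4Address(number))

print(inet_aton('10.0.0.1'))   # 167772161
print(inet_ntoa(167772161))    # 10.0.0.1
```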

Finally, I can query my sancp session data for connections going to any of the IP addresses in sguildb.ipaddresses, which are the IP addresses from the Snort rule.

SELECT sancp.sid,INET_NTOA(sancp.src_ip),sancp.src_port,INET_NTOA(sancp.dst_ip),
sancp.dst_port,sancp.start_time FROM sancp INNER JOIN ipaddresses ON
(sancp.dst_ip = ipaddresses.dst_ip) WHERE sancp.start_time >= DATE_SUB(UTC_DATE(),
INTERVAL 24 HOUR) AND sancp.dst_port = '80';

This query will return the sensor ID, source IP, source port, destination IP, destination port, and session start time from the sancp table wherever the sancp.dst_ip matches the ipaddresses.dst_ip where I stored the IP addresses from the Snort rule. Notice that it will query the last 24 hours for port 80 connections only. Depending on the type of activity you are looking for, you could change ports or remove the port match entirely.

This whole process could be automated further by running the mysql commands directly from the Perl script so the compromised IP addresses are updated in the database when the script is run.

The final SQL query to match the compromised IP addresses with sancp destination IP addresses can easily be turned into a cronjob. For instance, if querying for the past 24 hours, run the cronjob once a day. Once the results are returned, if you are running Sguil with full content logging it is easy to query for the individual connections you're interested in and view the ASCII transcripts or packet captures.
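As a sketch, a crontab entry running the query each morning and saving any hits might look like the following (the paths, schedule, and credentials are hypothetical):

```
# Run the compromised-host query daily at 06:10 and save the results
10 6 * * * mysql -u sguil sguildb < /home/nr/compromised_query.sql > /home/nr/sancp_hits.txt
```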

08 September, 2007

Transparent Bridging, MMAP pcap, and Snort Inline

I use Snort and the Sguil Analyst Console for NSM, but there is always room to experiment and/or improve. Up to this point, I have used Snort either on mirrored (SPAN) ports or with a network TAP, both common configurations. After finally getting a second-hand CPU and motherboard to replace a dead CPU, I had a spare system to set up Snort inline at home.

The upgrade from Snort 2.4.x to 2.6.x.x was quite taxing on performance, so I decided it was also time to play with Phil Wood's MMAPped libpcap. The modified libpcap makes drastically fewer system calls than the standard libpcap when sniffing on a busy network. Although my home network certainly isn't high bandwidth, I wanted the experience of setting up Snort with Phil Wood's modified version of libpcap. Since I actually did all this many months ago and am just now posting about it, I can say that I have seen a huge performance improvement when going from the standard libpcap to Phil Wood's libpcap in high-bandwidth environments.

How would I implement Snort Inline on a home network? The two choices were to replace one of my routers with a BSD or Linux system configured as a router, or to set up Linux as a transparent bridge. For those who prefer FreeBSD, you would have to configure it as a router, since the bridge code in FreeBSD does not support the ipfw divert socket. I am not familiar enough with other BSD versions to say whether their bridge code is the same or not.

I much preferred a bridge rather than a router since it would avoid the time-consuming process of reconfiguring my network topology. This meant that I had to use Linux, and my distribution in this case is Slackware-current. The process should not be much different for any distribution.

Installing Software

Because plugging in an untested bridge between my LAN and the Internet could interrupt my connection, I decided it would be easiest to get and install all the software prior to configuring the bridge and putting it inline.

My first step was to install the modified libpcap, which needs either flex and bison, or yacc. This was essentially a freshly built Slackware system and I didn't have them installed, so I used swaret to install the packages.

swaret --install flex
swaret --install bison
I was now ready to install libpcap.
cd /usr/src/
wget http://public.lanl.gov/cpw/libpcap-current.tar.gz
tar xvzf libpcap-current.tar.gz
ln -s libpcap-0.9.20060417 libpcap
cd libpcap
./configure
make
make install
Snort will need the header files from libpcap and the install did not copy them anywhere, so I manually copied the files to /usr/include/. Another option would be to create a link to the files in the include directory.
cp /usr/src/libpcap/pcap.h /usr/include/
cp /usr/src/libpcap/pcap-bpf.h /usr/include/
cp /usr/src/libpcap/pcap-namedb.h /usr/include/
Because this is a modified libpcap, all the software that depends on libpcap must also be compiled against the version I just installed. I will definitely be using tcpdump when I test the bridge.
wget http://www.tcpdump.org/daily/tcpdump-current.tar.gz
tar xvzf tcpdump-current.tar.gz
ln -s tcpdump-2007.01.07 tcpdump
cd tcpdump
./configure
make
make install
Now I could test that tcpdump worked with the PCAP_FRAMES option available because of the modified libpcap. For some reason, perhaps because of my kernel version, PCAP_FRAMES=max did not work but I was able to use it by manually setting the value. I was able to bump the value of PCAP_FRAMES quite high, above 300000, before it resulted in errors. I have yet to determine what that really means for performance. Here are two commands I used to show that the newly installed tcpdump worked with the modified libpcap.
PCAP_FRAMES=65535 PCAP_VERBOSE=1 PCAP_TO_MS=0 PCAP_PERIOD=10000 /usr/local/sbin/tcpdump \
-i eth0 -s 1514 -w /dev/null -c 100

PCAP_FRAMES=65535 /usr/local/sbin/tcpdump -v -i eth0
Snort needs libnet-1.0.2a when configured with --enable-inline, so I had to install libnet first.
tar xvzf libnet-1.0.2a.tar.gz
cd libnet-1.0.2a
./configure
make
make install
Finally, install Snort 2.6.x.x. (Note: Since this document was written quite a while ago, there are quite a few newer versions of Snort available). Another alternative to using the --enable-inline option with mainline Snort is to download snort_inline, which is maintained by William Metcalf and Victor Julien. There are a number of added features and conveniences when using snort_inline, as highlighted on Victor's blog. However, I used mainline Snort in the following example.
tar xvzf snort-2.6.1.2.tar.gz
cd snort-2.6.1.2
./configure --enable-dynamicplugin --enable-inline
make
make install
I tested snort with -V to check that it would start and that it was compiled to work inline. The output shows that Snort was built with the inline option.
snort -V

,,_ -*> Snort! <*-
o" )~ Version 2.6.1.2 (Build 34) inline
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
(C) Copyright 1998-2006 Sourcefire Inc., et al.

Configuring Linux for Transparent Bridging

Configuring the bridging is fairly simple. The only module I had to manually insert was ip_queue. Other modules that may be needed are ip_tables, iptable_filter and bridge.

In this case, eth0 is my separate management interface, I named the bridge interface bridge0, and the physical interfaces joining bridge0 were eth1 and eth2. The bridge device can be configured with an IP address if you do not want to use a separate NIC for management. In either case, make sure to secure the management NIC on your Snort box, for example limiting connections to the management IP so only source IP addresses in your private IP space are allowed to connect. Here, I created the bridge interface, added eth1 and eth2 to it, and brought them up:
/sbin/brctl addbr bridge0
/sbin/brctl addif bridge0 eth1
/sbin/brctl addif bridge0 eth2
/sbin/ifconfig eth1 up
/sbin/ifconfig eth2 up
/sbin/ifconfig bridge0 up
After some iptables configuration, bridge0 should work. Assuming the system is dedicated to being a bridge running Snort Inline, the only addition necessary to make bridging work is the following:
iptables -I FORWARD -o bridge0 -j ACCEPT
Now bridge0 should be ready to pass packets.

Once the bridge is connected, iptables can be used to show packet statistics and confirm that the bridge is forwarding. You can also use tcpdump -v -i bridge0 to confirm that traffic is passing, provided you installed libpcap and tcpdump before plugging in the bridge.
iptables -vL
--snip--
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
527 168K ACCEPT all -- any bridge0 anywhere anywhere
--snip--
The bridge is seeing packets! tcpdump showed that they were more than just broadcast packets as I accessed the Internet through the bridge.
PCAP_FRAMES=65535 /usr/local/sbin/tcpdump -v -i bridge0
tcpdump: WARNING: bridge0: no IPv4 address assigned
tcpdump: listening on bridge0, link-type EN10MB (Ethernet), capture
--snip--
57 packets captured
57 packets received by filter
0 packets dropped by kernel

Testing Snort Inline

Finally, I'm ready to configure and test Snort Inline. I am not going to cover Snort configuration, but I did write one test rule to put in local.rules and disabled all the other rule sets in snort.conf. The following rule should drop any outbound connection on the HTTP_PORTS set in snort.conf.
drop tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"Test rule outbound HTTP"; \
classtype:misc-activity; sid: 3000000;)
Now I run Snort. The -v will print packets to the console so I can confirm that snort is seeing the traffic and the -Q tells snort to accept input from the iptables queue.
/usr/local/bin/snort -Qvc /etc/snort/snort.conf -l /data
Note that I forgot to add the PCAP_FRAMES value. In production on a busy network, I would add it permanently to my environment variables and/or to my init scripts for Snort so it would always take advantage of Phil Wood's libpcap.

I add the necessary rule to iptables. I don't want to risk losing all connections by queueing everything, so I just queue port 80. This is what the line in my iptables-save file looks like:
-A FORWARD -i bridge0 -p tcp -m tcp --dport 80 -m state --state NEW -j QUEUE
I try to browse to a web site and get the following alert in /data/alert:
[**] [1:3000000:0] Test rule outbound HTTP [**]
[Classification: Misc activity] [Priority: 3]
01/11-22:22:24.458394 xx.xx.xx.xx:60813 -> 66.35.250.209:80
TCP TTL:63 TOS:0x0 ID:52831 IpLen:20 DgmLen:60 DF
******S* Seq: 0xE79DA525 Ack: 0x0 Win: 0x16D0 TcpLen: 40
TCP Options (5) => MSS: 1460 SackOK TS: 604992578 0 NOP WS: 2
Everything is working. Now I have to get everything ready for production by fully configuring and tuning Snort. Once that is done, I will queue all traffic except port 22. Queuing port 22 could result in not being able to connect using SSH if the Snort process were to die or had to be restarted. If Snort is not running, all queued traffic is effectively dropped, since Snort is required to pass the traffic from the queue back to iptables.
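A sketch of what that final configuration might look like in iptables-save format, untested here and with rule order mattering since the first match wins: port 22 is exempted in both directions, and the NEW-state match mirrors the port 80 rule above.

```
-A FORWARD -i bridge0 -p tcp --dport 22 -j ACCEPT
-A FORWARD -i bridge0 -p tcp --sport 22 -j ACCEPT
-A FORWARD -i bridge0 -m state --state NEW -j QUEUE
-A FORWARD -o bridge0 -j ACCEPT
```

The final ACCEPT lets the rest of each connection through once the initial packets have been handed to the queue.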

06 September, 2007

The Purpose of This Blog

Here goes nothing.

I have been thinking of creating a blog for quite a while, primarily to store and share small tidbits of information I come across as I muddle my way through the world of information security. Most of what I do is on the operational side of the security house. As I experiment with and work in security, I often find myself wishing I could share some of the information and processes I have used.

Most of the information I am sharing is not unique. I anticipate that many of my posts will aggregate information from a number of sources to help me document what, why, and how I did something. Don't forget the 'why', because that is important!

Two examples of posts I have planned:

  • Building and configuring a Snort IDS to run inline as a transparent bridge.
  • Pulling IP addresses from Bleeding Snort rules and then querying sancp (session data) for matches.
Neither of the planned posts will be groundbreaking, but the experiences were both practical and useful for me.