
01 May, 2013

Installing OSSEC agent

With the recent news about the latest Apache backdoor on systems using cPanel, I thought it would be pertinent to show the process of adding an OSSEC agent that connects to a Security Onion server. Why is this relevant? Because OSSEC and other file integrity checkers can detect changes to binaries like Apache's httpd.

"OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response."
Many systems include integrity checking programs in their default installs these days, for instance Red Hat with AIDE. AIDE is also available in repositories for a number of other Linux distributions plus FreeBSD.
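For comparison, AIDE's basic baseline-and-check cycle looks something like this (database names and paths vary by distribution, so check aide.conf first):
$ sudo aide --init
$ sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
$ sudo aide --check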

This case in particular would require using something other than the default options for integrity checking because cPanel installs Apache httpd in /usr/local/apache/bin, a non-standard directory that may not be automatically included when computing file hashes and doing subsequent integrity checks.

The reason I'm demonstrating OSSEC here is that it easily integrates with the Sguil console, and in Security Onion the sensors and server already have OSSEC configured to send alerts to Sguild. OSSEC also has additional functionality compared to AIDE. In this case, I'm installing the agent on a Slackware server.
$ wget http://www.ossec.net/files/ossec-hids-2.7.tar.gz

---snipped---

 $ openssl sha1 ossec-hids-2.7.tar.gz
SHA1(ossec-hids-2.7.tar.gz)= 721aa7649d5c1e37007b95a89e685af41a39da43
 $ tar xvzf ossec-hids-2.7.tar.gz

---snipped---

 $ sudo ./install.sh

  OSSEC HIDS v2.7 Installation Script - http://www.ossec.net

 You are about to start the installation process of the OSSEC HIDS.
 You must have a C compiler pre-installed in your system.
 If you have any questions or comments, please send an e-mail
 to dcid@ossec.net (or daniel.cid@gmail.com).

  - System: Linux webserver 3.8.4
  - User: root
  - Host: webserver
 
   -- Press ENTER to continue or Ctrl-C to abort. --

 
 1- What kind of installation do you want (server, agent, local, hybrid or help)? agent
 
  - Agent(client) installation chosen.

 2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:
 
 3- Configuring the OSSEC HIDS.

   3.1- What's the IP Address or hostname of the OSSEC HIDS server?: 192.168.1.20
 
  - Adding Server IP 192.168.1.20

   3.2- Do you want to run the integrity check daemon? (y/n) [y]:

    - Running syscheck (integrity check daemon).

   3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
 
 - Running rootcheck (rootkit detection).

   3.4 - Do you want to enable active response? (y/n) [y]:

3.5- Setting the configuration to analyze the following logs:
    -- /var/log/messages
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/adm/syslog
    -- /var/adm/auth.log
    -- /var/adm/messages
    -- /var/log/xferlog
    -- /var/log/proftpd.log
    -- /var/log/apache/error_log (apache log)
    -- /var/log/apache/access_log (apache log)
    -- /var/log/httpd/error_log (apache log)
    -- /var/log/httpd/access_log (apache log)

  - If you want to monitor any other file, just change
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .

   --- Press ENTER to continue ---

---snip---

- Init script modified to start OSSEC HIDS during boot.
 - Configuration finished properly.
 - To start OSSEC HIDS:
                /var/ossec/bin/ossec-control start
 - To stop OSSEC HIDS:
                /var/ossec/bin/ossec-control stop
 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

    Thanks for using the OSSEC HIDS.
    If you have any question, suggestion or if you find any bug,
    contact us at contact@ossec.net or using our public maillist at

    ossec-list@ossec.net
    ( http://www.ossec.net/main/support/ ).

    More information can be found at http://www.ossec.net

    ---  Press ENTER to finish (maybe more information below). ---

 - You first need to add this agent to the server so they 
   can communicate with each other. When you have done so,
   you can run the 'manage_agents' tool to import the 
   authentication key from the server.

   /var/ossec/bin/manage_agents

   More information at: 
   http://www.ossec.net/en/manual.html#ma

Next, I add the agent to my Security Onion server.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: A

- Adding a new agent (use '\q' to return to the main menu).

  Please provide the following:
   * A name for the new agent: webserver
   * The IP Address of the new agent: 192.168.1.5
   * An ID for the new agent[001]: 

Agent information:

   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: e

Available agents: 

   ID: 001, Name: webserver, IP: 192.168.1.5

Provide the ID of the agent to extract the key (or '\q' to quit): 001

Agent key information for '001' is: 

---snip---

** Press ENTER to return to the main menu.

Now copy the key, go back to the web server, paste and import the key.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.7 Agent manager.     *
* The following options are available: *
****************************************

   (I)mport key from the server (I).
   (Q)uit.

Choose your action: I or Q: i

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit): ---snip---

Agent information:
   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

If I were running a system with cPanel that was vulnerable to Cdorked.A, I would want to make sure OSSEC is monitoring the directories containing the Apache httpd files. The default configuration from my recent install is /var/ossec/etc/ossec.conf, and the relevant lines are below:
<syscheck>
    <!-- Frequency that syscheck is executed - default to every 22 hours -->
    <frequency>79200</frequency>
    
    <!-- Directories to check  (perform all possible verifications) -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
    <directories check_all="yes">/bin,/sbin</directories>

So by default OSSEC would apparently not be checking the integrity of cPanel's Apache installation, and I would need to add /usr/local/apache to the directory checks.
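A minimal addition inside the <syscheck> section might look like the following, given that cPanel puts httpd under /usr/local/apache:
    <directories check_all="yes">/usr/local/apache</directories>
After making any changes for my particular system, I check the status of OSSEC, and it is not yet running.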
$ sudo /etc/rc.d/rc.ossec status
ossec-logcollector not running...
ossec-syscheckd not running...
ossec-agentd not running...
ossec-execd not running...
$ sudo /etc/rc.d/rc.ossec start 
Starting OSSEC HIDS v2.7 (by Trend Micro Inc.)...
Started ossec-execd...
Started ossec-agentd...
Started ossec-logcollector...
Started ossec-syscheckd...
Completed.

Note that after installing the OSSEC agent on the remote system and adding it on the OSSEC server, you must restart ossec-hids-server in order for the ossec-remoted process to start listening on 1514/udp for remote agents.
$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted not running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
$ sudo /etc/init.d/ossec-hids-server restart
Killing ossec-monitord .. 
Killing ossec-logcollector .. 
ossec-remoted not running ..
Killing ossec-syscheckd .. 
Killing ossec-analysisd .. 
ossec-maild not running ..
Killing ossec-execd .. 
OSSEC HIDS v2.6 Stopped
Starting OSSEC HIDS v2.6 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
2013/04/30 23:13:59 ossec-maild: INFO: E-Mail notification disabled. Clean Exit.
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-remoted...
Started ossec-syscheckd...
Started ossec-monitord...
Completed.

$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...

$ netstat -l | grep 1514
udp        0      0 *:1514                  *:*    

Note the "Testing rules failed" error, which corresponds to the FAQ entry about getting an error when starting OSSEC. However, since I'm running OSSEC 2.7, that entry did not seem to apply. Poking around, I realized the ossec-logtest executable had not been copied to /var/ossec/bin when I ran the install script. After I manually copied it to that directory, restarting OSSEC no longer caused the "Testing rules failed" error.
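In case it helps anyone else, copying the binary from the extracted source tree looked something like this (the location of the built ossec-logtest within the source tree may differ depending on how the build went):
$ cd ossec-hids-2.7
$ sudo cp src/analysisd/ossec-logtest /var/ossec/bin/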

Once you have installed OSSEC on the system to be monitored, added the agent on the server, imported the key on the system to be monitored, restarted the server process, and started the client process, you will start getting alerts from the newly added system in Sguil. For example, the content of Sguil alerts will look like this after updating gcc:
Integrity checksum changed for: '/usr/bin/gcc'
Old md5sum was: '764a405824275d806ab5c441516b2d79'
New md5sum is : '6ab74628cd8a0cdf84bb3329333d936e'
Old sha1sum was: '230a4c09010f9527f2b3d6e25968d5c7c735eb4e'
New sha1sum is : 'b931ceb76570a9ac26f86c12168b109becee038b'

In the Sguil console, if I wanted to view all the recent OSSEC alerts, I could perform a query as pictured below. Note that you need to escape the brackets or remove them in favor of the MySQL '%%' wildcards.


Finally, to show an example of the various types of alerting that OSSEC can do in addition to checksum changes, here is a query and output directly from the MySQL console.
mysql> SELECT count(signature),signature FROM event WHERE signature LIKE '%%OSSEC%%' GROUP BY signature ORDER BY count(signature) DESC;
+------------------+---------------------------------------------------------------------------------------+
| count(signature) | signature                                                                             |
+------------------+---------------------------------------------------------------------------------------+
|              388 | [OSSEC] Integrity checksum changed.                                                   |
|              149 | [OSSEC] Host-based anomaly detection event (rootcheck).                               |
|               46 | [OSSEC] Integrity checksum changed again (2nd time).                                  |
|               39 | [OSSEC] IP Address black-listed by anti-spam (blocked).                               |
|               12 | [OSSEC] Integrity checksum changed again (3rd time).                                  |
|                4 | [OSSEC] Web server 400 error code.                                                    |
|                3 | [OSSEC] Receipent address must contain FQDN (504: Command parameter not implemented). |
+------------------+---------------------------------------------------------------------------------------+
7 rows in set (0.00 sec)
The highest-count alert, along with the "2nd time" and "3rd time" alerts, is the basic functionality needed to detect changes to a file, my original use case. The "rootcheck" alerts flag files owned by root but writable by everyone. The balance of the alerts come from reading the system logs and detecting the system rejecting emails (anti-spam, 504) or web server error codes.

Back to the original problem of Cdorked.A: the blog posts on the subject also indicate that NSM could detect unusually long HTTP sessions, and there are no doubt other network behaviors that could be used to create signatures or network analytics resulting in detection. File integrity checks are just one possible way to detect a compromised server. Remember that you need known-good checksums for this to work! Ideally, you install something like OSSEC before the system goes live on the network, or at the very least before it runs any listening services that could be compromised, so the initial checksums can be trusted.

13 March, 2010

Customizing Slackware Tcl Package for Sguil

Most distributions these days are configuring their Tcl packages with --enable-threads as a default. Slackware-current switched some months back with the following in the ChangeLog.txt.

+--------------------------+
Mon Dec  7 02:13:13 UTC 2009
d/ruby-1.9.1_p243-i486-3.txz:  Rebuilt.
  Added an explicit --enable-pthread.  This is mostly to make sure that we get
  the expected option set from future releases of Ruby -- it appears that not
  only is --enable-pthread the default in ruby-1.9.1, but trying to use
  --disable-pthread doesn't work.  Furthermore, Ruby and Tcl/Tk no longer work
  together unless both Ruby and Tcl/Tk are compiled with thread support.
  Compiling Tcl/Tk with thread support has caused some problems in the past.
  If a threaded Tcl app tries to fork(), it will hang, but by now most affected
  Tcl apps (such as eggdrop) should have patches available.
  Anyway, this should fix the issues with Ruby and Tk.  Please test it, and
  report any other problems that arise.
tcl/tcl-8.5.8-i486-1.txz:  Upgraded.
  Compiled using --enable-threads, since Ruby requires it to work with Tk.
tcl/tclx-8.4-i486-3.txz:  Rebuilt.
  Recompiled using --enable-threads.
tcl/tix-8.4.3-i486-2.txz:  Rebuilt.
  Recompiled using --enable-threads.
tcl/tk-8.5.8-i486-1.txz:  Upgraded.
  Compiled using --enable-threads, since Ruby requires it to work with Tk.
+--------------------------+ 
The Sguil daemon will not work with threaded Tcl, so to fix this you need to build a package for the distribution of your choice with the --disable-threads configure option. In Slackware and most other distributions, it is fairly simple to customize a package.
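To check whether the Tcl you currently have installed was built with thread support, you can query tcl_platform from tclsh; a threaded build prints "threaded 1", while a non-threaded build prints nothing:
$ echo 'puts [array get tcl_platform threaded]' | tclsh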

Download Tcl from the source directory on the Slackware mirror of your choice. It should include a slack-desc file, a tcl.SlackBuild file, and the Tcl source. Modify the tcl.SlackBuild file to replace --enable-threads with --disable-threads.
./configure \
  --prefix=/usr \
  --libdir=/usr/lib${LIBDIRSUFFIX} \
  --enable-shared \
  --disable-threads \
  --enable-man-symlinks \
  --enable-man-compression=gzip \
  ${CONFARGS} \
  --build=$ARCH-slackware-linux
You may also want to modify the slack-desc to note that this is a non-threaded version. Then build the new package.
$ sh tcl.SlackBuild
---snip--- 
Slackware package /tmp/tcl-8.5.8-i486-1.txz created.
As you can see, the package gets written to /tmp by default. Now replace the threaded version with the new non-threaded version.
$ sudo upgradepkg --reinstall /tmp/tcl-8.5.8-i486-1.txz
+==============================================================================
| Upgrading tcl-8.5.8-i486-1 package using /tmp/tcl-8.5.8-i486-1.txz
+==============================================================================

Pre-installing package tcl-8.5.8-i486-1...

Removing package /var/log/packages/tcl-8.5.8-i486-1-upgraded-2010-03-13,20:03:22...

Verifying package tcl-8.5.8-i486-1.txz.
Installing package tcl-8.5.8-i486-1.txz:
PACKAGE DESCRIPTION:
# tcl (Tool Command Language)
#
# Tcl, developed by Dr. John Ousterhout, is a simple to use text-based
# script language with many built-in features which make it especially
# nice for writing interactive scripts.
#
# This is a version customized by nr that uses --disable-threads.
#
Executing install script for tcl-8.5.8-i486-1.txz.
Package tcl-8.5.8-i486-1.txz installed.

Package tcl-8.5.8-i486-1 upgraded with new package /tmp/tcl-8.5.8-i486-1.txz.

12 October, 2009

Adding GeoIP to the Sguil Client

This is a post I meant to publish months ago, but for some reason slipped off my radar. I was reading the Sguil wishlist on NSMWiki and saw something that looked simple to implement. Here are a couple diffs I created after adding a menu item for GeoIP in sguil.tk and a proc for it in lib/extdata.tcl. All I did was copy the existing DShield proc and menu items, then edit as needed to change the URL and menu listings.

I think it should work, and I downloaded a pristine copy of the files before running diff since I've hacked Sguil files previously, but no warranty is assumed or implied, et cetera.

Ideally, I would love to help out and tackle some of the other items on the wishlist. My time constraints make it hard, but at least I now have a Tcl book.

sguil.tk

2865a2866
> .ipQueryMenu add cascade -label "GeoIP Lookup" -menu $ipQueryMenu.geoIPMenu
2873a2875,2876
> menu $ipQueryMenu.geoIPMenu -tearoff 0 -background $SELECTBACKGROUND -foreground $SELECTFOREGROUND \
> -activeforeground $SELECTBACKGROUND -activebackground $SELECTFOREGROUND
2917a2921,2922
> $ipQueryMenu.geoIPMenu add command -label "SrcIP" -command "GetGeoIP srcip"
> $ipQueryMenu.geoIPMenu add command -label "DstIP" -command "GetGeoIP dstip"
lib/extdata.tcl
211a212,243
> proc GetGeoIP { arg } {
>
>     global DEBUG BROWSER_PATH CUR_SEL_PANE ACTIVE_EVENT MULTI_SELECT
>
>     if { $ACTIVE_EVENT && !$MULTI_SELECT} {
>
>         set selectedIndex [$CUR_SEL_PANE(name) curselection]
>
>         if { $arg == "srcip" } {
>             set ipAddr [$CUR_SEL_PANE(name) getcells $selectedIndex,srcip]
>         } else {
>             set ipAddr [$CUR_SEL_PANE(name) getcells $selectedIndex,dstip]
>         }
>
>         if {[file exists $BROWSER_PATH] && [file executable $BROWSER_PATH]} {
>
>             # Launch browser
>             exec $BROWSER_PATH http://www.geoiptool.com/?IP=$ipAddr &
>
>         } else {
>
>             tk_messageBox -type ok -icon warning -message\
>               "$BROWSER_PATH does not exist or is not executable. Please update the BROWSER_PATH variable\
>               to point your favorite browser."
>             puts "Error: $BROWSER_PATH does not exist or is not executable."
>
>         }
>
>     }
>
> }
>

09 May, 2009

Extracting emails from archived Sguil transcripts

Here is a Perl script I wrote to extract emails and attachments from archived Sguil transcripts. It's useful for grabbing suspicious attachments for analysis.

In Sguil, whenever you view a transcript it will archive the packet capture on the Sguil server. You can then easily use that packet capture to pull out data with tools like tcpxtract or tcpflow along with Perl's MIME::Parser in this case. The MIME::Parser code is modified from David Bianco's blog.

As always with Perl or other scripts, I welcome constructive feedback. The first regular expression is fairly long and may scroll off the page, so make sure you get it all if you copy it.

#!/usr/bin/perl

# by nr
# 2009-05-04
# A perl script to read tcpflow output files of SMTP traffic.
# Written to run against a pcap archived by Sguil after viewing the transcript.
# 2009-05-07
# Updated to use David Bianco's code with MIME::Parser.
# http://blog.vorant.com/2006/06/extracting-email-attachements-from.html

use strict;
use MIME::Parser;

my $fileName; # var for tcpflow output file that we need to read
my $outputDir = "/var/tmp"; # directory for email+attachments output

if (@ARGV != 1) {
    print "\nOnly one argument allowed. Usage:\n";
    die "./emailDecode.pl /path/archive/192.168.1.13\:62313_192.168.1.8\:25-6.raw\n\n";
}

$ARGV[0] =~ m
/.+\/(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(\d{1,5})_(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(25)-\d{1,3}\.raw/
or die "\nIncorrect file name format or dst port is not equal to 25. Try again.\n\n";

system("tcpflow -r $ARGV[0]"); # run tcpflow w/argument for path to sguil pcap

my $srcPort = sprintf("%05d", $2); # pad srcPort with zeros
my $dstPort = sprintf("%05d", $4); # pad dstPort with zeros

# Put the octets and ports into an array to manipulate into the tcpflow fileName
my @octet = split(/\./, "$1\." . "$srcPort\." . "$3\." . "$dstPort");

foreach my $octet (@octet) {
    my $octetLength = length($octet); # get string length
    if ($octetLength < 5) { # if not a port number
        $octet = sprintf("%03d", $octet); # pad with zeros
    }
    $fileName = $fileName . "$octet\."; # concatenate into fileName
}

$fileName =~ s/(.+\d{5})\.(.+\d{5})\./$1-$2/; # replace middle dot with hyphen
my $unusedFile = "$2-$1"; # this is the other tcpflow output file

# open the file and put it in array
open INFILE, "<$fileName" or die "Unable to open $fileName $!\n";
my @email = <INFILE>;
close INFILE;

my $count = 0;
# skip extra data at beginning
foreach my $email (@email) {
    if ($email =~ m/^Received:/i) {
        last;
    }
    else {
        delete @email[$count];
        $count++;
    }
}

my $parser = new MIME::Parser;
$parser->output_under("$outputDir");
my $entity = $parser->parse_data(\@email); # parse the tcpflow data
$entity->dump_skeleton; # be verbose when dumping

unlink($fileName, $unusedFile); # delete tcpflow output files
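Usage mirrors the die() message in the script: pass the single path to the pcap that Sguil archived when the transcript was viewed (the path below is illustrative). MIME::Parser then writes the parsed message and any attachments into a msg-* subdirectory under /var/tmp.
$ ./emailDecode.pl /path/archive/192.168.1.13:62313_192.168.1.8:25-6.raw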

17 November, 2008

'Tis the season for E-card Trojans

IP addresses and hostnames have been changed. Anyway, this is from a few days ago and it looks like the malware is no longer on the server. A user received an email with a link to a supposed holiday card...

Src IP:         10.1.1.18       (Unknown)
Dst IP: 192.168.31.250 (Unknown)
Src Port: 1461
Dst Port: 80
OS Fingerprint: 10.1.1.18:1461 - Windows XP SP1+, 2000 SP3 (2)
OS Fingerprint: -> 192.168.31.250:80 (distance 7, link: ethernet/modem)

SRC: GET /ecard.exe HTTP/1.0
SRC: Accept: */*
SRC: Accept-Language: en-us
SRC: User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; InfoPath.1; .NET CLR 1.
1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)
SRC: Host: fakeurl.info
SRC: Connection: Keep-Alive
SRC:
SRC:
DST: HTTP/1.1 200 OK
DST: Date: Wed, 12 Nov 2008 11:41:08 GMT
DST: Server: Apache/1.3.36 (Unix) mod_jk/1.2.14 mod_auth_passthrough/1.8 mod_log_bytes/1.2 mod_b
wlimited/1.4 PHP/4.3.9 FrontPage/5.0.2.2635.SR1.2 mod_ssl/2.8.27 OpenSSL/0.9.7a
DST: Last-Modified: Wed, 12 Nov 2008 10:14:10 GMT
DST: ETag: "944cd-8688-491aac72"
DST: Accept-Ranges: bytes
DST: Content-Length: 34440
DST: Content-Type: application/octet-stream
DST: Connection: Keep-Alive
DST:
DST:
DST: MZ..............@.......@........L.!........................@...PE..L...UB.I...............
..........p................@.......................... .........................................
................................................................................................
.............................UPX0.....p..............................UPX1.......................
.........@...UPX2................................@..............................................
3.03.UPX!
The above is part of a Sguil transcript. I downloaded the file, took a brief look, then submitted it to VirusTotal.
$ cd malware
$ wget http://fakeurl.info/ecard.exe
---snip---
$ strings -n 3 -a ecard.exe | less
UPX0
UPX1
UPX2
3.03
UPX!
---snip---
XPTPSW
KERNEL32.DLL
LoadLibraryA
GetProcAddress
VirtualProtect
VirtualAlloc
VirtualFree
ExitProcess
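The UPX0/UPX1 section names and the "UPX!" marker indicate the executable is packed with UPX. Assuming the upx utility is installed, a copy can often be unpacked for closer inspection:
$ upx -d -o ecard-unpacked.exe ecard.exe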
This is pretty run-of-the-mill malware that will get detected by Emerging Threats SID 2006434 when the executable is downloaded, but I guess people still fall for it. As Shirkdog so eloquently stated:
Do not click on unsolicited URLs, including those received in email, instant messages, web forums, or internet relay chat (IRC) channels.

People will never get it through their skulls that they SHOULD NOT click links. It is the reason we are all employed, because the user will always be there.
It's always amazing how few anti-virus engines will catch known malware. A system compromised this way also brings to my mind a comparison between common malware and novel or custom exploits that are not widely available. I plan to flesh out thoughts comparing the two at a later date.

29 October, 2008

Snort shared object rules with Sguil

More and more Sourcefire VRT Snort rules are being released in the shared object (SO) rules format. As examples, Sourcefire released rules on 14 October that were designed to detect attacks targeting MS08-057, MS08-058, MS08-059, MS08-060, MS08-062, MS08-063 and MS08-065 vulnerabilities. On 23 October, Sourcefire released rules related to MS08-067, a vulnerability that has garnered a lot of attention.

It looks like all these recently released rules are SO rules. It is easy to tell an SO rule from a traditional Snort rule using the Generator ID: SO rules use GID 3, while the old rule format uses GID 1.

Because Sourcefire is issuing all these rules in SO format, if you don't want to miss rules for recent vulnerabilities it is definitely important to test and implement the new format and keep the rules updated. When I implemented the rules in production, I used How to use shared object rules in Snort by Richard Bejtlich as a guide for adding shared object rules to Snort. It seems fairly complete and I was able to get the precompiled rules working on RHEL/CentOS.

Unfortunately, getting the rules working and keeping them updated is not as simple as updating rules in the old format, and certainly not identical to it. Oinkmaster will edit the stub files to enable and disable the SO rules, but it requires running Oinkmaster a second time with a separate configuration file. From the Oinkmaster FAQ:

Q34: Can Oinkmaster update the shared object rules (so_rules)?

A34: Yes, but you have to run Oinkmaster separately with its own
configuration file. Copy your regular oinkmaster.conf file
to oinkmaster-so-rules.conf (or create a new one) and set
"rules_dir = so_rules". Then run Oinkmaster with
-C <path to oinkmaster-so-rules.conf> and use an output directory
(-o <dir>) different than your regular rules directory. This is
important as the "rules" and "so_rules" directories contains
files with identical filenames. See the Snort documentation on how
to use shared object rules. The shared object rules are currently
disabled by default so you have to use "enablesid" or "modifysid"
to activate the ones you want to use.
To update my rules, I first download the rules and run Oinkmaster to update my GID 1 rules. Then I extract the so_rules so I can run Snort with the '--dump-dynamic-rules' option. This will generate the required stub files, but they will not contain any changes you made to enable or disable specific rules. To change which are enabled or disabled, I run Oinkmaster again with the oinkmaster-so-rules.conf.
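Putting those steps together, an update run looks roughly like this; the paths and configuration file names here are assumptions, so adjust them for your own layout:
oinkmaster.pl -C /etc/oinkmaster.conf -o /etc/snort/rules
snort -c /etc/snort/snort.conf --dump-dynamic-rules=/etc/snort/so_rules
oinkmaster.pl -C /etc/oinkmaster-so-rules.conf -o /etc/snort/so_rules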

For Sguil to properly show the rule msg in the 'Event Message' column on the client, you must generate a new gen-msg.map file. David Bianco gave me a couple shell commands, one of which uses 'create-sidmap.pl' from the Oinkmaster contrib directory, to update the gen-msg.map file.
create-sidmap.pl ruledir/so_rules | perl -ne 'm/(\d+) \|\| ([^\|]+) \|\|/g; print "3 || $1 || $2\n";' > ruledir/so_rules/gen-msg.map

cat ruledir/gen-msg.map ruledir/so_rules/gen-msg.map | sort -n | uniq > ruledir/so_rules/gen-msg-merged.map
After you check that the new file is correct, it can be moved to overwrite the old gen-msg.map. The alert name will be found based on the GID and SID, so without the update you would only see the numerical GID and SID when viewing alerts in Sguil instead of the actual text of the alert message.

One last thing to do is get the "Show Rule" checkbox in Sguil to show the rule stub when viewing SO rules. A quick temporary fix is fairly simple. Either rename and place all the rule stub files in the directory on your sguild server with all the other rules, or use a symbolic link. Either way, you have to use alternate names since the SO rules stub files use the same naming scheme as the standard rule files.

Once that is done, it just requires a simple change to 'extdata.tcl' in the 'lib' directory of the Sguil client. Change the following line:
 if { $genID != "1" } {
to
 if { $genID != "1" && $genID != "3" } {
David Bianco pointed out that there is nothing to prevent SO rules and standard rules from sharing a SID since they have a different GID, so the above solution could get the wrong rule message if there are identical SIDs. He said he will probably add code to look in the ruledir for GID 1, and look in ruledir/so_rules for GID 3. This is definitely the better way and not too complicated, so hopefully Sguil CVS will have the code added in the near future.

05 September, 2008

Modified: Pulling IP addresses from Snort rules

This is a modification of another script I wrote to pull IP addresses from Snort rules. Since the last time I posted it, I added support for multiple rules files and also automatically insert the IP addresses into the database. After that, I can run a query to compare the IP addresses with sancp data to see what systems connected or attempted connections to the suspicious hosts.

I think this version will only work with MySQL 5.x, not 4.x, because of the section that adds the IP addresses to the database. In the older versions, the IP addresses had to be added manually. Note that the table used for the $tablename variable is not part of the Sguil database schema and must be created before running the script. Make sure not to use a table that is part of the Sguil DB schema because this script deletes all data from the table!
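For reference, a minimal table definition consistent with the INET_ATON() inserts in the script and the sancp join below might be:
CREATE TABLE suspicious_hosts (
  dst_ip INT UNSIGNED NOT NULL,
  KEY dst_ip (dst_ip)
);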

The script is fairly slow since it expands CIDR blocks to individual IP addresses and then puts them in the database one by one. If anyone has ideas for a more efficient method, I'd love to hear it, but since I'm using Sguil I need to be able to compare the suspicious IP addresses with individual IP addresses in the Sguil sancp table.

#!/usr/bin/perl
#
# Script to pull IP address from Snort rules file
# by nr
# 2007-08-30
# 2007-09-14 Added CIDR expansion
# 2007-12-19 Added support for multiple rule files
# 2008-05-22 Added MySQL support to insert and convert IPs

use strict;
use Mysql;
use NetAddr::IP; # used below to expand CIDR blocks

my $dir = "/nsm/rules"; # rules directory
my @rulefile = qw( emerging-compromised.rules emerging-rbn.rules );
my @rule; # unprocessed rules

foreach my $rulefile (@rulefile) {
    # Open file to read
    die "Can't open rulefile.\n" unless open RULEFILE, "<", "$dir/$rulefile";
    chomp(@rule = (@rule, <RULEFILE>)); # append the rules to the array
    close RULEFILE; # close current rule file
}

# MySQL connection settings
my $host = "localhost";
my $database = "sguildb";
my $tablename = "suspicious_hosts";
my $colname = "dst_ip";
my $user = "sguil";
my $pw = "PASSWORD";

# perl mysql connect()
my $sql_connect = Mysql->connect($host, $database, $user, $pw);

# clear out old IP addresses first
my $sql_delete = "DELETE FROM $tablename";
my $execute = $sql_connect->query($sql_delete) or die "$!";

# For each rule
foreach my $rule (@rule) {
    # Match only rules with IP addresses so we don't get comments etc.
    # This string match does not check for validity of IP addresses
    if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
        $rule =~ s/.*\[//g; # Remove [ character and before
        $rule =~ s/\].*//g; # Remove ] character and after
        # Split the remaining data at the commas
        # and put it into the ip_address array
        my @ip_address = split /\,/, $rule;

        # For each IP address in array
        foreach my $ip_address (@ip_address) {

            # A slash means this entry is a CIDR block
            if ( $ip_address =~ /.*\/.*/ ) {

                # Expand CIDR to all IP addresses in range, modified from
                # http://www.perlmonks.org/?displaytype=print;node_id=190497
                my $newip = new NetAddr::IP($ip_address);

                # While less than broadcast address
                while ( $newip < $newip->broadcast ) {
                    # Strip trailing slash and netmask from IP
                    my $temp_ip = $newip;
                    $temp_ip =~ s/\/.*//g;
                    # sql statement to insert IP
                    my $sql_statement = "INSERT INTO $tablename SET $colname = INET_ATON('$temp_ip')";
                    my $execute = $sql_connect->query($sql_statement); # Run statement
                    $newip++; # Increment to next IP
                }
            }
            # For non-CIDR entries, simply insert the IP
            else {
                # sql statement to insert IP; could be made a function
                # since it is repeated inside the if
                my $sql_statement = "INSERT INTO $tablename SET $colname = INET_ATON('$ip_address')";
                my $execute = $sql_connect->query($sql_statement); # Run statement
            }
        }
    }
}
To search for connections or attempted connections to the suspicious IP addresses, you could use a query like the following. Obviously, the query should change based on criteria like the ports that interest you, time of activity, protocol, etc.
SELECT sancp.sid,INET_NTOA(sancp.src_ip),sancp.src_port,INET_NTOA(sancp.dst_ip),sancp.dst_port, \
sancp.start_time FROM sancp INNER JOIN suspicious_hosts ON (sancp.dst_ip = suspicious_hosts.dst_ip) WHERE \
sancp.start_time >= DATE_SUB(UTC_DATE(),INTERVAL 24 HOUR) AND sancp.dst_port = '80';

16 June, 2008

Using afterglow to make pretty pictures

I recently had a need to visualize some network connections, and since Sguil can export query results to a CSV file, I figured there were probably plenty of existing tools that could draw me a picture from the data. MySQL can also output to a CSV file, so the following could be scripted even more easily on a sguild server than on a client. The client requires more manual steps, but I decided to try it that way first.

C.S. Lee recommended trying afterglow. Looking at his blog, I saw that he had a short write-up on afterglow. I followed similar steps to install everything that was needed.

$ sudo apt-get install libgraphviz-perl libtext-csv-perl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
graphviz libio-pty-perl libipc-run-perl
libparse-recdescent-perl
libversion-perl libxml-twig-perl
Suggested packages:
graphviz-doc msttcorefonts libunicode-map8-perl
libunicode-string-perl
xml-twig-tools
Recommended packages:
libtie-ixhash-perl libxml-xpath-perl
The following NEW packages will be installed:
graphviz libgraphviz-perl libio-pty-perl libipc-run-perl
libparse-recdescent-perl libversion-perl libxml-twig-perl
libtext-csv-perl
Next, I downloaded the afterglow source and extracted it.
$ tar xvzf afterglow-1.5.9.tar.gz
For these examples, I connected to the Sguil demo server and exported one sancp query result. Afterglow expects three columns, which include the source IP, destination IP, and destination port. This is where running queries on the sguild server could make more sense since I could just select src_ip, dst_ip and dst_port and write the results to a CSV file. With results from the Sguil client, I have to take the CSV and remove everything but the desired columns.
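On the server side, a sketch using MySQL's INTO OUTFILE would produce the three columns directly (the output file must not already exist, and the MySQL server needs write access to the path):
SELECT INET_NTOA(src_ip), INET_NTOA(dst_ip), dst_port
INTO OUTFILE '/tmp/sancp_3col.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
FROM sancp
WHERE start_time >= DATE_SUB(UTC_DATE(), INTERVAL 24 HOUR);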

To remove the other columns, I used sed. Perl or awk would work fine, too, and it is pretty easy to script. Here is an example I used without scripting. I am writing to a new file so I keep the original file intact. If you exported from Sguil and included the column names on the first line, the following command will delete the first and last lines to clean up the data before removing the extra columns.
$ sed '1d' sancpquery_1.csv | sed 's/^\([^,]*,[^,]*,[^,]*,[^,]*,\)\([^,]*,\)\([^,]*,\)\([^,]*,[^,]*\)\(,[^,]*,[^,]*,[^,]*,[^,]*\)/\2\4/' > sancpquery_1_3col.csv
With a CSV exported from the results of a sancp query, that will leave you with the required three columns. Next, from the directory where I extracted afterglow, I can feed it the CSV. The results are using different values for "-p", zero being the default. Values of "-p 1" and "-p 3" made identical images.
$ cat /home/nr/sancpquery_1_3col.csv | src/perl/graph/afterglow.pl -v -p 0 -e 1.5
-c src/perl/parsers/color.properties | neato -Tgif -o ~/sancpquery_p0.gif
Afterglow is an interesting tool. I can definitely see how it could help when looking at data in a spreadsheet isn't enough to visualize which systems were connecting where, and on which ports. I can think of some more features that might be useful, like showing timestamps or time ranges on connections, or even animating the images to show a sequence of events, but adding features could also make it unnecessarily complex.

16 May, 2008

Query sguildb for alert name then do name lookup

I wrote a Perl script to query for a specific alert name and then get the NetBIOS or DNS name of the systems that triggered the alert. This script is useful to me mainly as an automated reporting tool. An example would be finding a list of systems associated with a specific spyware alert that is auto-categorized within Sguil. The alert will never show up in the RealTime alerts but I use the script to get a daily email on the systems triggering the alert.

Some alerts aren't important enough to require a real-time response but may need remediation or at least to be included in statistics. I don't know if this would be useful for others as it is written, but parts of it might be.

As always with Perl, I welcome suggestions for improvement since I'm by no means an expert. (By the way, does anyone else have problems with Blogger formatting when using <pre> tags?)

#!/usr/bin/perl
#
# by nr
# 2008-05-11
# Script to query db for a specific alert
# and do name lookup based on source IP address in results

use strict;

# Requires MySQL, netbios and DNS modules
use Mysql;
use Net::NBName;
use Net::DNS;

my $host = "localhost";
my $database = "sguildb";
my $tablename = "event";
my $user = "sguil";
my $pw = "PASSWORD";
my $alert = 'ALERT NAME Here';

# Set the query
my $sql_query = "SELECT INET_NTOA(src_ip) as src_ip,count(signature) as count FROM $tablename \
WHERE $tablename.timestamp > DATE_SUB(UTC_DATE(),INTERVAL 24 HOUR) AND $tablename.signature \
= '$alert' GROUP BY src_ip";

# perl mysql connect()
my $sql_connect = Mysql->connect($host, $database, $user, $pw);

print "\"$alert\" alerts in the past 24 hours: \n\n";
print "Count IP Address Hostname\n\n";

my $execute = $sql_connect->query($sql_query); # Run query

# Fetch query results and loop
while (my @result = $execute->fetchrow) {
    my @hostname; # 1st element of this array is used later for name queries
    my $ipresult = $result[0]; # Set IP using query result
    my $count = $result[1]; # Set alert count using query result
    my $nb_query = Net::NBName->new; # Setup new netbios query
    my $nb = $nb_query->node_status($result[0]);
    # If there is an answer to netbios query
    if ($nb) {
        my $nbresults = $nb->as_string; # Get query result
        # Split at < will make $hostname[0] the netbios name
        # Is there a better way to do this using a substitution?
        @hostname = split /</, $nbresults;
    } else {
        # Do a reverse DNS lookup if no netbios response
        # May want to add checks to make sure external IPs are ignored
        my $res = Net::DNS::Resolver->new;
        my $namequery = $res->query("$result[0]","PTR");
        if ($namequery) {
            my $dnsname = ($namequery->answer)[0];
            $hostname[0] = $dnsname->rdatastr;
        } else {
            $hostname[0] = "UNKNOWN"; # If no reverse DNS result
        }
    }
    format STDOUT =
@>>> @<<<<<<<<<<<<<< @*
$count, $ipresult, $hostname[0]
.
    write;
}
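Since I use the script mainly for automated daily reporting, a crontab entry along these lines handles the email; the script path and recipient here are hypothetical:
# run daily at 06:00 and mail the report
0 6 * * * /usr/local/bin/alert_report.pl | mail -s "Daily alert report" analyst@example.com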

27 March, 2008

Upgrading from Sguil 0.7.0 CVS to RELEASE

Sguil 0.7.0 was released this week. The upgrade from 0.6.1 takes a little more effort because of the change to multiple agents, but since I was already running 0.7.0 from CVS, upgrading was fairly easy.

The Sguil overview page at NSMWiki notes some of the differences between 0.6.1 and 0.7.0. It also has some nifty diagrams someone (***cough***me***cough***) contributed that may help people visualize the data flow in Sguil 0.7.0.

Here is how I upgraded from CVS to the release version.

First, I pre-staged the systems by copying the appropriate Sguil components to each system. Then I shut down the agents on all sensors and stop sguild on the server.

Looking in my sguild directory, there is not that much that will actually need to be replaced.

$ ls /etc/sguild/
CVS/ certs/ sguild* sguild.email sguild.users
archive_sguildb.tcl* contrib/ sguild.access sguild.queries sql_scripts/
autocat.conf lib/ sguild.conf sguild.reports xscriptd*
I start by making a backup of this whole directory. The files or directories that I don't want to lose are:

sguild.conf: sguild configuration file
sguild.users: sguild's user and password hash file
sguild.reports: I have some custom reports, including some based on the reporting and data mining page of NSMWiki.
autocat.conf: used to automatically categorize alerts based on specific criteria; most people who have done any tuning will hopefully have taken advantage of autocat.conf
certs/: sguild cert directory

Some people may also have added standard global queries in sguild.queries, or access controls in sguild.access. These are all basically configuration files, so if you have changed them you may want to keep them or include the changes in the new files.

After deciding what I need to keep, I upgrade the server.
$ mv -v /etc/sguild/server ~/sguild-old
$ cp -R ~/src/sguil-0.7.0/server /etc/sguild/
$ cp -R ~/sguild-old/certs /etc/sguild/server/
$ cp ~/sguild-old/sguild.users /etc/sguild/server/
$ cp ~/sguild-old/sguild.conf /etc/sguild/server/
$ cp ~/sguild-old/sguild.reports /etc/sguild/server/
$ cp ~/sguild-old/autocat.conf /etc/sguild/server/
Then I edit my sguild init script to remove "-o" and "-s" since encryption is now required instead of optional. The new version of sguild and the agents will give errors if you start them without removing the switches.

I start sguild and see that it is working, so next is the sensor. On the sensor, I backed up the conf files first.
$ cp -v /etc/sguil-0.7.0/sensor/*.conf ~/
$ rm -Rf /etc/sguil-0.7.0/sensor/
$ cp -R ~/src/sguil-0.7.0/sensor /etc/sguil-0.7.0/
$ cp ~/*.conf /etc/sguil-0.7.0/sensor/
Then, I edited all the agent init scripts to remove the "-o" switch. The agents are pads, pcap, sancp and snort. Now I can reconnect the agents to the server and the only thing left to do is upgrade my client. For the client upgrade, I replace everything except the sguil.conf file. If I made any modifications to my client, I would also need to incorporate those into the new client.

21 March, 2008

Passive Tools

I love passive tools, what I like to think of as the "M" in NSM.

I recently posted about PADS. Sguil also uses p0f for operating system fingerprinting, and sancp for session-logging.

Even the IDS and packet-logging components of Sguil are passive. There are plenty of other good passive tools available.

You can learn a lot just by listening.

You can also run Snort inline and active, which goes a little beyond monitoring, for better or worse.

18 March, 2008

Using DJ Bernstein's daemontools

I use DJ Bernstein's daemontools to monitor Barnyard, making sure the barnyard process will restart if it dies for any reason. Barnyard is an output spooler for Snort and is probably the least stable of all the software that is used when running Sguil. When Barnyard encounters errors and exits, it needs to be restarted.

Daemontools is useful because it will watch a process and restart it when needed. For anyone that has used other DJ Bernstein software like djbdns or qmail, you may also have used daemontools. I think daemontools has a reputation as difficult to install and configure, but I've used it on a number of systems with barnyard or djbdns without any major issues. (As for qmail, I prefer postfix).

Here is how I installed it, which only has one small change from the install instructions.

mkdir -p /package
chmod 1755 /package
tar xzvpf install/daemontools-0.76.tar.gz -C /package/
cd /package/admin/daemontools-0.76/
Before running the install script, note the "errno" section on DJ Bernstein's Unix portability notes. On Linux, since I'm installing from source I need to replace one line in the src/error.h file, as shown in this patch snippet.
-extern int errno;
+#include <errno.h>
After changing error.h, I can run the installer.
./package/install
I configure daemontools to work with barnyard.
mkdir /etc/barnyard
vim /etc/barnyard/run
The "run" file simply is a script that runs barnyard. For example, the contents of mine:
#!/bin/sh
exec /bin/barnyard -c /etc/snort/barnyard.conf -d /nsm -f unified.log \
-w /nsm/waldo.file -a /nsm/by_archive
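One detail that is easy to miss: supervise will only start the service if the run script is executable, so set the mode after creating it.
chmod 755 /etc/barnyard/run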
Next, I link the new barnyard directory to make it a subdirectory of daemontool's service directory.
ln -s /etc/barnyard /service/barnyard
When installing, daemontools automatically adds this entry to /etc/inittab:
SV:123456:respawn:/command/svscanboot
svscanboot starts a svscan process that then starts a supervise process for each subdirectory, which in this case would only be the barnyard directory. I can have the inittab file re-parsed with telinit q after daemontools is installed rather than rebooting.

If the barnyard process dies, daemontools will automatically try and restart it based on the contents of the "run" file.

Now, even if I kill the barnyard process on purpose, it will be restarted automatically. If I need to manage the process, I can use the svc command. For instance, to send barnyard a HUP or a KILL:
svc -h /service/barnyard
svc -k /service/barnyard
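To check that supervise is watching the process, svstat reports its state and uptime (the output shown is illustrative):
svstat /service/barnyard
/service/barnyard: up (pid 12345) 86400 seconds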
To add another process for daemontools to manage, just create a directory, create a run file, then link the new directory to daemontools' service directory.
mkdir /etc/someprocess
vim /etc/someprocess/run
ln -s /etc/someprocess /service/someprocess

30 November, 2007

A "for" loop with tcpdump

This is the kind of post to which most people with Linux or Unix experience will say "Duh!", or even offer better ways to do what I'm doing. It's just a short shell script I use to loop through all the packet capture files on a Sguil sensor looking for specific host(s) and/or port(s). Sometimes it is easier to run a script like this and then SCP the files from the sensor to a local system instead of querying and opening the packet captures one by one using the Sguil client. This also allows more leisurely analysis without worrying about the log_packets script deleting any of the data I want to see.

Nigel Houghton mentioned that any shell script of more than 10 or 20 lines should be rewritten in Perl. Despite his advice, I do plan to switch some of my shell scripts to Perl when I get the chance. ;)

The sensorname variable works for me because my sensor uses the hostname as the sensor name. If my sensor name didn't match the actual hostname or I had multiple sensors on one host then I would have to change the variable.

The BPF expression can be changed as needed. Maybe a better way would be to set it from a command-line argument so you don't have to edit the file every time you run the script; a sketch of that follows the script. The following example uses a Google IP address and port 80 for the BPF. I most often use this script to simply grab all traffic associated with one or more IP addresses.

It may also be worth noting that I run the script as a user that has permissions to the packet captures.

#!/bin/sh

sensorname="`hostname`"
bpfexpression="host 64.233.167.99 and port 80"
outdir=/home/nr/scriptdump

if [ ! -d $outdir ]; then
    mkdir $outdir
fi

# For each dir in the dailylogs dir
for i in $( ls /nsm/$sensorname/dailylogs/ ); do
    # For each file in dailylogs/$i dir
    for j in $( ls /nsm/$sensorname/dailylogs/$i ); do
        # Run tcpdump and look for the host
        tcpdump -r /nsm/$sensorname/dailylogs/$i/$j -w $outdir/$j.pcap $bpfexpression
    done
done

# For each pcap
cd $outdir
for file in $( ls *.pcap ); do
    # A file size of 24 bytes means only the pcap header and no packets, so rm the file
    if [ "`ls -l $file | awk '{print $5}'`" = "24" ]; then
        rm -f "$file"
    fi
done
One of the things I like about posting even a simple script like this is that it makes me really think about how it could be improved. For instance, it might be nice to add a variable for the write directory so it is easier to change where the output files go.
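As a sketch of the command-line argument idea mentioned earlier, the BPF variable could be set from the script's arguments, falling back to the original default when none are given:
# use any command-line arguments as the BPF expression, else a default
bpfexpression="${*:-host 64.233.167.99 and port 80}"
The script (with a made-up name) could then be invoked as "./pcaploop.sh host 10.1.1.18 and port 25"; since tcpdump accepts the expression as separate words, the unquoted expansion on the tcpdump line still works.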

Also, in Sguil the full content packet captures use the Unix epoch time in the file names. If you ever had a copy of a file where the timestamp wasn't preserved, you could still tell when logging to it began by looking at the epoch time in the file name. For instance, with a file named "snort.log.1194141602", the following would convert the epoch time to local time:
$ date -d @1194141602
Sat Nov 3 22:00:02 EDT 2007

16 October, 2007

Sguil 0.7.0 Client and NetBIOS Names

The previous versions of Sguil clients used a different method when doing a reverse DNS lookup. This method must have fallen back to the operating system's name lookup methods at some point, because I used to get NetBIOS names on systems that had them when there was no DNS name returned. The Sguil 0.7.0 client uses tcllib's DNS client to resolve names, which is used to allow DNS proxying through OpenDNS or other DNS servers. However, this method is purely DNS, so any Windows system without a DNS entry would return "Unknown" as the hostname. I decided to play with the client and add NetBIOS name queries in the event the reverse DNS came up empty.

The file that contains the GetHostbyAddr proc in Sguil is extdata.tcl and lives in the lib directory. From the description in the file:

#
# GetHostbyAddr: uses extended tcl (wishx) to get an ips hostname
# May move to a server func in the future
#

The proc does a few things. First, it checks whether external DNS is enabled and whether an external server has been configured in sguil.conf. If an external DNS server is set, the proc then checks whether a HOME_NET is set. If HOME_NET is set, it is compared to the IP being resolved; a match means the local nameserver is used rather than the external one. If HOME_NET is not set, or the IP does not match HOME_NET, the external nameserver is used. If external DNS is not selected in the client, the local nameserver is used.

If the name resolution returns no value, then the client displays "Unknown" as a result. Just prior to that is where I added the NetBIOS name lookup. Here is the whole GetHostbyAddr proc after I modified it, with a few comments to follow:

proc GetHostbyAddr { ip } {

global EXT_DNS EXT_DNS_SERVER HOME_NET NETBIOS_NET

if { $EXT_DNS } {

if { ![info exists EXT_DNS_SERVER] } {

ErrorMessage "An external name server has not been configured in sguil.conf. Resolution aborted."
return

} else {

set nameserver $EXT_DNS_SERVER

if { [info exists HOME_NET] } {

# Loop thru HOME_NET. If the ip matches any of the networks, use the locally
# configured name server
foreach homeNet $HOME_NET {

set netMask [ip::mask $homeNet]
if { [ip::equal ${ip}/${netMask} $homeNet] } { set nameserver local }

}

}

}

} else {

set nameserver local

}

if { $nameserver == "local" } {

set tok [dns::resolve $ip]

} else {

set tok [dns::resolve $ip -nameserver $nameserver]

}

set hostname [dns::name $tok]
dns::cleanup $tok

# Added hack to use NetBIOS name lookups if no DNS entry
if { $hostname == "" } {

# Only check NETBIOS_NET addresses for NetBIOS names
if { [info exists NETBIOS_NET] } {

# Loop thru NETBIOS_NET. If ip matches then do NetBIOS lookup
foreach netbiosNet $NETBIOS_NET {

set netMask [ip::mask $netbiosNet]
if { [ip::equal ${ip}/${netMask} $netbiosNet] } {

# NetBIOS for Windows client
if { $::tcl_platform(platform) == "windows" } {

regexp {.+?(.{15})<00>} [ exec nbtstat -a $ip ] \
dummyvar hostname }

# NetBIOS for Unix client but you need samba client tools
if { $::tcl_platform(platform) == "unix" } {

# Match 16 chars because of trailing space w/nmblookup
regexp {.+?(.{16})<00>} [ exec nmblookup -A $ip ] \
dummyvar hostname }

}

}

}

}

if { $hostname == "" } { set hostname "Unknown" }
return $hostname

}

I took Bamm's check for HOME_NET and changed it to a check for NETBIOS_NET. I think people will definitely not want every IP to get a NetBIOS query if it has no DNS entry, nor will they want every IP in HOME_NET to be checked. This change means you need to add a NETBIOS_NET to sguil.conf if you want to resolve NetBIOS names.

Next, I checked for the platform and either perform an nbtstat from a Windows client or an nmblookup from a Unix client.

The regular expression I used to grab the result of the query is sort of weak, but functional. It simply finds the first instance of "<00>" and grabs the preceding 15 characters on Windows or 16 characters when doing a nmblookup on Unix. This is because the output of nmblookup has an extra space after the NetBIOS name, which can be up to 15 characters, while a 15 character name on Windows will have no trailing space. I would like to cut out any trailing whitespace using the regular expression, but I'm not sure about the best way to do it. A word boundary won't work because NetBIOS names can have characters like dashes and dots.

Suggestions and feedback welcome. Is something like this useful? How can it be improved? Is there some much easier way to do this? I can barely spell TCL let alone code it, so be gentle!

Edit: giovani from #snort-gui gave me this regexp that should grab the NetBIOS name without including trailing whitespace. He also pointed out that having spaces between the braces and the regexp, as I did, can alter the regexp.
{([^\s\n\t]+)\s*<00>}

30 September, 2007

Running IDS Inline

When do you run IDS inline?

Although the idea of running an IDS inline to prevent attacks rather than just detect them is appealing, reality is harsh. There are a lot of problems running an IDS inline.

The first and most significant problem with running IDS inline is that most people don't adequately tune IDS. This is a problem whether you run inline or passive IDS. I get the feeling that the general management view towards IDS, perpetuated by marketing, is that you should be able to drop an IDS on the network and it is ready to go. This is far from the truth. Network monitoring requires tuning all involved systems and humans to analyze the information provided by the systems.

When I set up my first Snort deployment, I spent countless hours tuning and tweaking the configuration, rules, performance and more. Along the way, I added Sguil, which was useful both for more thorough analysis and more effective tuning based on those analyses. Even after all that, tuning is a daily requirement and never should stop.

Every time you update rules, you should tune. Every time the network changes, you may need to tune. Every time your IDS itself is updated, you may need to tune. Every time a new attack is discovered, you may need to tune. Tuning does not and should not end unless you want your network security monitoring to become less effective over time. I would be willing to bet that a majority of inline IDS don't come close to taking full advantage of blocking features because of a lack of tuning. Passive IDS suffers from the same problem, but the results can be even worse if an inline IDS starts blocking legitimate traffic.

Another problem, somewhat related to tuning, with running IDS inline is that rules are never perfect. Do you trust your enabled rules to be 100% perfect? If not, what percentage of alerts from a given rule must be accurate to make the rule acceptable in blocking mode? Even good rules will sometimes or often trigger on benign traffic.

For Snort, one of the most reliable rule sets is the policy violation set. Both VRT rules and Bleeding rules are extremely reliable when detecting policy violations, with a minimal number of alerts triggered on unrelated network traffic. Spyware rules are similarly accurate.

Rules designed to detect more serious malicious activity are much less consistent. Most are simply not reliable enough to use in a mode that blocks the traffic the rule is designed to detect. That does not mean the rules are necessarily bad! Good rules still aren't perfect. This is one of many reasons why there will always be work for security analysts. IDS or other NSM systems are no substitute for the analysis that an experienced human can provide. They are simply tools to be used by the analyst.

Last, don't forget that running anything inline can seriously impact the network. If my Snort process at home dies for some reason, I can survive without Internet access until I get it restarted. This isn't always the case for businesses, so consider whether your traffic should be going through a system that fails open if there are problems. Even at home I don't queue port 22 to snort_inline because I want to be able to SSH into the box from the Internet if there are problems.

The real question that has to be answered is whether the benefits of dropping traffic are worth the risks of running inline.

21 September, 2007

First Impressions of Sguil 0.7.0

Since first using Sguil on version 0.5.3, I have definitely become a believer in Sguil and network security monitoring. Sguil is much more than just a front-end for looking at Snort alerts. It integrates other features, such as session data and full content packet captures, that make security monitoring much more effective than just looking at IDS alerts.

I recently upgraded my version of Sguil from the release version, 0.6.1, to the new beta version, 0.7.0. I have not even been using it for a week yet, but I do have a few first impressions.

Before talking about Sguil itself, I would like to say thanks to Bamm Visscher, the original author of Sguil; Steve Halligan, a major contributor; and all the others who have contributed. It is easy to tell that Sguil is written and developed by security analysts, and it is bound to make you look smart to your bosses. Sguil has a thriving community of users and contributors who have been very active in improving the project and its documentation.

Upgrading Sguil from 0.6.1 to 0.7.0 was less difficult than I had anticipated. Reading the Sguil on RedHat HOWTO is a good idea, even if you are using a different operating system or distribution. It is not necessary to follow the HOWTO, but it does provide a lot of useful information to ease the installation or upgrade process.

The basic procedure for upgrading was to stop sguild and sensor_agent, run the provided script to update the database, upgrade the components, and add PADS. Of course, if you are doing this in a production environment, you would probably want to back up the database, the sguild files on the server, and the sguil files on the sensors. Since I was upgrading at home, I didn't bother backing up my database.

Under the hood, communication between the sensors and the Sguil server has changed. The network and data flow diagrams I contributed to the NSMWiki show the change from one sensor agent to multiple agents. One reason for this change is that it makes it easier to distribute sensor services across multiple systems, allowing people to run packet logging, Snort in IDS mode, or sancp on separate systems. I can see this being extremely useful if you are performance-limited because of old hardware, or if you monitor high-traffic environments.

Another reason for the change in agents was to make it easier to add other data sources. An example of this, the only one I know of so far, is Victor Julien's modsec2sguil. It will be interesting to see whether the changes to Sguil lead to agents for additional data sources. I know that David Bianco has discussed writing an OSSEC agent to add a host-based IDS as a Sguil data source.

The changes to the client seem relatively minor but are useful. David Bianco has already written about the added support for cloaking investigative activities in Sguil. Sguil 0.7.0 has added support for proxying DNS requests through a third party like OpenDNS. In fact, this feature was enabled by default when I installed my new client.

Another change is PADS, which will display real-time alerts to the client as new assets are detected on the network. An example of a PADS alert is:

PADS New Asset - smtp Microsoft Exchange SMTP

Though I like the idea of PADS a lot, it currently has some issues that definitely limit its usefulness. In its current form, Sguil's implementation of PADS has a bug where it generates alerts for external services, for instance when my home systems connect to external web or SMTP servers. The goal of PADS in the context of Sguil is really internal service detection, so you can see any unusual or non-standard services on systems you are monitoring.

Even once the bug is fixed, I can see it being extremely noisy on a large, diverse network. I may make a separate post in the future with ideas regarding PADS. Profiles for groups of systems and their services are definitely one idea that might be useful, but I'm not sure how hard that would be to implement. PADS has potential, but it will take some time.

Another nice change in the Sguil client is the ability to click the reference URLs or SID when displaying Snort rules, opening your browser to the relevant page. This was a sorely needed feature, and it will be nice not to have to copy and paste Snort rule references.

I'm looking forward to further testing and the future release of 0.7.0.

10 September, 2007

Querying Session Data Based on Snort Rule IPs

Sometimes Snort rules can contain useful information but are not practical to use in production. The Bleeding Snort rules recently added a set of rules to detect connection attempts to known compromised hosts. If you take a look at the rules, you'll see that the set is essentially a large list of IP addresses known to be compromised.

When I first ran these rules on a Snort sensor, they definitely gave me some alerts requiring action. However, the performance of the sensor really suffered, particularly as the set of IP addresses grew. Since the rules were causing packet loss, I wanted to disable them.

I decided to write a script to grab the IP addresses from the Bleeding rules, then load the addresses into my database to compare with stored sancp data. If you are monitoring your network but not storing session data acquired with tools like sancp or argus, you should be. In my case, I am running Sguil, which uses sancp.

First, I had to write the script to grab the IP addresses. Since I just finished an introduction to Perl class in school and had many projects that required string matching, I figured Perl was the way to go. As a beginner, I would be more than happy if anyone can offer improvements to the following script. In particular, one thing I have not worked on yet is a method for expanding CIDR notation to individual IP addresses. The bleeding-compromised.rules file does contain some networks in CIDR notation rather than just individual IP addresses, and for now the script simply strips the notation, resulting in an incomplete IP list (see the sketch after the script for one possible fix).

Edit: I posted an updated script that replaces the one in this post. I would suggest using the updated version rather than this one.

#!/usr/bin/perl
#
# Script to pull IP addresses from a Snort rules file
# by nr
# 2007-08-30

use strict;
use warnings;

# Set filenames
my $rulefile = "/nsm/rules/bleeding-compromised.rules";
my $ip_outfile = "iplist.txt";

# Open the rules file to read and the IP list to write,
# dying with an error if either open fails
open RULEFILE, "<", $rulefile or die "Can't open $rulefile: $!\n";
open IPLIST, ">", $ip_outfile or die "Can't open $ip_outfile: $!\n";

# Put each rule from the rules file into an array
chomp(my @rules = <RULEFILE>);

# For each rule
foreach my $rule (@rules) {
    # Match only rules containing an IP address so we skip comments etc.
    # This match does not check that the addresses are valid
    if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
        # Remove everything before the [ character
        $rule =~ s/.*\[//;
        # Remove everything after the ] character
        $rule =~ s/\].*//;
        # Split the remaining data on commas and put the
        # pieces into an array of IP addresses
        my @ip_addresses = split /,/, $rule;

        # For each IP address in the array
        foreach my $ip_address (@ip_addresses) {
            # Strip CIDR notation (means those IP ranges are missed - needs fixing)
            $ip_address =~ s/\/.*//;
            # Print to the output file, one IP per line
            print IPLIST "$ip_address\n";
        }
    }
}

# Close filehandles
close RULEFILE;
close IPLIST;
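For the CIDR limitation mentioned above, Net::Netmask from CPAN looks like one possible fix. Here is a rough sketch, which I have not tested against the full rule set, that could replace the stripping substitution and print inside the inner loop. Net::Netmask treats a bare address as a /32, so plain IPs should still come through:

use Net::Netmask;

# Expand a CIDR block such as 10.0.0.0/30 into its individual
# addresses instead of discarding everything after the slash
my $block = Net::Netmask->new($ip_address);
foreach my $ip ($block->enumerate) {
    print IPLIST "$ip\n";
}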

Now I have a file called "iplist.txt" with the list of IP addresses, one per line. Next, I log into MySQL and load the list into a temporary database and table. The table really only needs one column of the CHAR or VARCHAR data type. (See the MySQL documentation for creating tables or databases).
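If it helps, here is a minimal sketch of what I mean; the names just need to match the LOAD DATA and INSERT statements that follow, and VARCHAR(15) is simply wide enough to hold a dotted-quad address:

CREATE DATABASE temp;
CREATE TABLE temp.ipaddr (new_ip VARCHAR(15));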

LOAD DATA LOCAL INFILE '/home/nr/iplist.txt' INTO TABLE temp.ipaddr;

Then I have to convert the IP addresses to the INT data type using the INET_ATON function so they can be matched against my sancp session data. I created the table "sguildb.ipaddresses" for cases like this, where I want to load external IP addresses and then run a query. The "temp.ipaddr" table has one column, called "new_ip". The "sguildb.ipaddresses" table also has one column, but called "dst_ip".

INSERT INTO sguildb.ipaddresses SELECT INET_ATON(new_ip) FROM temp.ipaddr;

In MySQL 5.x, you should be able to combine the previous two steps, loading and converting the data in one statement. I'm currently running 4.1, so I have not been able to test the exact syntax.
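Based on my reading of the 5.x documentation, the combined statement would look something like the following, though again I have not tested it:

LOAD DATA LOCAL INFILE '/home/nr/iplist.txt' INTO TABLE sguildb.ipaddresses
(@ip) SET dst_ip = INET_ATON(@ip);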

Finally, I can query my sancp session data for connections going to any of the IP addresses in sguildb.ipaddresses, which are the IP addresses from the Snort rule.

SELECT sancp.sid,INET_NTOA(sancp.src_ip),sancp.src_port,INET_NTOA(sancp.dst_ip),
sancp.dst_port,sancp.start_time FROM sancp INNER JOIN ipaddresses ON
(sancp.dst_ip = ipaddresses.dst_ip) WHERE sancp.start_time >= DATE_SUB(UTC_DATE(),
INTERVAL 24 HOUR) AND sancp.dst_port = '80';

This query will return the sensor ID, source IP, source port, destination IP, destination port, and session start time from the sancp table for every row where sancp.dst_ip matches an entry in ipaddresses.dst_ip, which holds the IP addresses from the Snort rule. Notice that it queries only the last 24 hours and only port 80 connections. Depending on the type of activity you are looking for, you could change ports or remove the port match entirely.

This whole process could be automated further by running the MySQL commands directly from the Perl script, so the compromised IP addresses are updated in the database each time the script runs.
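Here is a minimal sketch of that approach using the DBI module. The connection details and credentials are placeholders, and depending on your DBD::mysql build, the mysql_local_infile option may be needed for LOAD DATA LOCAL to work:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection details and credentials - adjust as needed
my $dbh = DBI->connect(
    "DBI:mysql:database=sguildb;host=localhost;mysql_local_infile=1",
    "sguil_user", "password", { RaiseError => 1 });

# Empty the staging table and reload the fresh IP list
$dbh->do("DELETE FROM temp.ipaddr");
$dbh->do("LOAD DATA LOCAL INFILE '/home/nr/iplist.txt' INTO TABLE temp.ipaddr");

# Convert the addresses and copy them over for matching against sancp
$dbh->do("DELETE FROM sguildb.ipaddresses");
$dbh->do("INSERT INTO sguildb.ipaddresses SELECT INET_ATON(new_ip) FROM temp.ipaddr");

$dbh->disconnect;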

The final SQL query to match the compromised IP addresses against sancp destination IP addresses can easily be turned into a cronjob; for instance, if querying for the past 24 hours, run the cronjob once a day. Once the results are returned, if you are running Sguil with full content logging, it is easy to query for the individual connections you're interested in and view the ASCII transcripts or packet captures.
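For example, a hypothetical crontab entry, assuming the query above is saved to a file and the MySQL credentials are handled appropriately for your environment:

# Run the compromised-host query every morning at 06:00
0 6 * * * mysql -u sguil -pPASSWORD sguildb < /home/nr/compromised_query.sql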