27 March, 2008

Upgrading from Sguil 0.7.0 CVS to RELEASE

Sguil 0.7.0 was released this week. The upgrade from 0.6.1 takes a little more effort because of the change to multiple agents, but since I was already running 0.7.0 from CVS, upgrading was fairly easy.

The Sguil overview page at NSMWiki notes some of the differences between 0.6.1 and 0.7.0. It also has some nifty diagrams someone (***cough***me***cough***) contributed that may help people visualize the data flow in Sguil 0.7.0.

Here is how I upgraded from CVS to the release version.

First, I pre-stage the systems by copying the appropriate Sguil components to each one. Then I shut down the agents on all sensors and stop sguild on the server.

Looking in my sguild directory, there is not much that I actually need to carry over to the new version.

$ ls /etc/sguild/server/
CVS/ certs/ sguild* sguild.email sguild.users
archive_sguildb.tcl* contrib/ sguild.access sguild.queries sql_scripts/
autocat.conf lib/ sguild.conf sguild.reports xscriptd*
I start by making a backup of this whole directory. The files or directories that I don't want to lose are:

sguild.conf: sguild configuration file
sguild.users: sguild's user and password hash file
sguild.reports: I have some custom reports, including some based on the reporting and data mining page of NSMWiki.
autocat.conf: used to automatically categorize alerts based on specific criteria; most people who have done any tuning will have taken advantage of it
certs/: sguild cert directory

Some people may also have added standard global queries in sguild.queries, or access controls in sguild.access. These are essentially configuration files, so if you have changed them, you will want to keep your copies or merge the changes into the new files (see the diff example after the copy commands below).

After deciding what I need to keep, I upgrade the server.
$ mv -v /etc/sguild/server ~/sguild-old
$ cp -R ~/src/sguil-0.7.0/server /etc/sguild/
$ cp -R ~/sguild-old/certs /etc/sguild/server/
$ cp ~/sguild-old/sguild.users /etc/sguild/server/
$ cp ~/sguild-old/sguild.conf /etc/sguild/server/
$ cp ~/sguild-old/sguild.reports /etc/sguild/server/
$ cp ~/sguild-old/autocat.conf /etc/sguild/server/
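I did not carry over sguild.queries or sguild.access, but if you have local changes in them, a quick diff of the backup against the new defaults shows what needs to be merged:
$ diff -u ~/sguild-old/sguild.queries /etc/sguild/server/sguild.queries
$ diff -u ~/sguild-old/sguild.access /etc/sguild/server/sguild.access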
Then I edit my sguild init script to remove "-o" and "-s" since encryption is now required instead of optional. The new version of sguild and the agents will give errors if you start them without removing the switches.
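As an illustration, the change is simply dropping the two switches; the other options shown here are just an example invocation, not necessarily what your init script uses:
# before (CVS version, encryption optional):
sguild -D -o -s -c /etc/sguild/server/sguild.conf
# after (0.7.0 release, SSL always on):
sguild -D -c /etc/sguild/server/sguild.conf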

I start sguild and see that it is working, so next is the sensor. On the sensor, I back up the conf files first.
$ cp -v /etc/sguil-0.7.0/sensor/*.conf ~/
$ rm -Rf /etc/sguil-0.7.0/sensor/
$ cp -R ~/src/sguil-0.7.0/sensor /etc/sguil-0.7.0/
$ cp ~/*.conf /etc/sguil-0.7.0/sensor/
Then I edit all the agent init scripts to remove the "-o" switch. The agents are pads, pcap, sancp and snort. Now I can reconnect the agents to the server, and the only thing left is to upgrade my client. For the client upgrade, I replace everything except the sguil.conf file. If I had made any modifications to the client itself, I would also need to incorporate those into the new client.
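For the record, the client upgrade amounts to something like the following; the locations are just where I happen to keep the client and the unpacked source, so adjust for your own setup:
$ cp -v ~/sguil-client/sguil.conf ~/sguil.conf.keep
$ rm -Rf ~/sguil-client
$ cp -R ~/src/sguil-0.7.0/client ~/sguil-client
$ cp -v ~/sguil.conf.keep ~/sguil-client/sguil.conf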

21 March, 2008

Passive Tools

I love passive tools, what I like to think of as the "M" in NSM.

I recently posted about PADS. Sguil also uses p0f for operating system fingerprinting and sancp for session logging.

Even the IDS and packet-logging components of Sguil are passive. There are plenty of other good passive tools available.

You can learn a lot just by listening.

You can also run Snort inline and active, which goes a little beyond monitoring, for better or worse.

18 March, 2008

Using DJ Bernstein's daemontools

I use DJ Bernstein's daemontools to monitor Barnyard, making sure the barnyard process will restart if it dies for any reason. Barnyard is an output spooler for Snort and is probably the least stable of all the software used when running Sguil. When Barnyard encounters errors and exits, it needs to be restarted.

Daemontools is useful because it will watch a process and restart it when needed. Anyone who has used other DJ Bernstein software like djbdns or qmail may already have used daemontools. I think daemontools has a reputation for being difficult to install and configure, but I've used it on a number of systems with barnyard or djbdns without any major issues. (As for qmail, I prefer Postfix.)

Here is how I installed it; there is only one small change from the standard install instructions.

mkdir -p /package
chmod 1755 /package
tar xzvpf install/daemontools-0.76.tar.gz -C /package/
cd /package/admin/daemontools-0.76/
Before running the install script, note the "errno" section in DJ Bernstein's Unix portability notes. On Linux, since I'm installing from source, I need to replace one line in the src/error.h file, as shown in this patch snippet.
-extern int errno;
+#include <errno.h>
After changing error.h, I can run the installer.
./package/install
I configure daemontools to work with barnyard.
mkdir /etc/barnyard
vim /etc/barnyard/run
The "run" file simply is a script that runs barnyard. For example, the contents of mine:
#!/bin/sh
exec /bin/barnyard -c /etc/snort/barnyard.conf -d /nsm -f unified.log \
-w /nsm/waldo.file -a /nsm/by_archive
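One thing to remember is that supervise executes the run file directly, so it needs to be executable:
chmod 755 /etc/barnyard/run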
Next, I symlink the new barnyard directory into daemontools' service directory.
ln -s /etc/barnyard /service/barnyard
When installing, daemontools automatically adds this entry to /etc/inittab:
SV:123456:respawn:/command/svscanboot
svscanboot starts an svscan process, which in turn starts a supervise process for each subdirectory of the service directory; in this case that is only the barnyard directory. After daemontools is installed, I can have the inittab file re-parsed with telinit q rather than rebooting.
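Once svscan is running, I can confirm that the barnyard service was picked up with svstat; the pid and uptime will obviously be different on your system:
svstat /service/barnyard
/service/barnyard: up (pid 12345) 120 seconds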

If the barnyard process dies, daemontools will automatically try to restart it based on the contents of the "run" file.

Now, even if I kill the barnyard process on purpose, it will be restarted automatically. If I need to manage the process, I can use the svc command. For instance, to send barnyard a HUP or a KILL:
svc -h /service/barnyard
svc -k /service/barnyard
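If I want barnyard to stay down for maintenance instead of being restarted, svc can take the service down and bring it back up later:
svc -d /service/barnyard    # down: send TERM and do not restart it
svc -u /service/barnyard    # up: start it, and restart it if it dies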
To add another process for daemontools to manage, just create a directory, create a run file, then link the new directory to daemontools' service directory.
mkdir /etc/someprocess
vim /etc/someprocess/run
ln -s /etc/someprocess /service/someprocess

08 March, 2008

Testing PADS

Before putting PADS into production in a new environment, here is how I tested it.

First, I installed the version needed for integration with Sguil by applying the pads.patch. Note that there is also a PADS VLAN patch. The patching and installation are described in the NSMWiki, but I didn't need to change LDFLAGS or CFLAGS for my installation.

$ patch -p0 < ../patches/pads.patch
$ ./configure
$ make
$ sudo make install
Now I can test it.
# pads -i bridge0 -n 192.168.1.0/24
pads - Passive Asset Detection System
v1.2 - 06/17/05
Matt Shelton

[-] Processing Existing assets.csv
[-] WARNING: pcap_lookupnet (bridge0: no IPv4 address assigned)
[-] Filter: (null)
[-] Listening on interface bridge0

[*] Asset Found: Port - 80 / Host - 192.168.1.3 / Service - www / Application - Apache 2.2.8 (Unix)
[*] Asset Found: Port - 25 / Host - 192.168.1.3 / Service - smtp / Application - Generic SMTP - Possible Postfix (localhost.localdomain)
Now I try without defining a network.
# pads -i bridge0 -c /usr/local/etc/pads.conf
pads - Passive Asset Detection System
v1.2 - 06/17/05
Matt Shelton

[-] WARNING: pcap_lookupnet (bridge0: no IPv4 address assigned)
[-] Filter: (null)
[-] Listening on interface bridge0

[*] Asset Found: Port - 80 / Host - 64.233.179.191 / Service - www / Application - GFE/1.3
The Google IP address pops up while I'm logged in and editing this post. When you run PADS, you don't want to monitor all traffic, or you'll end up detecting services on systems outside your network.
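One way to keep the scope narrow is to combine the config file with the -n switch from the first test; as far as I can tell, -n accepts a comma-separated list of networks, but check the PADS documentation for your version to be sure:
# pads -i bridge0 -c /usr/local/etc/pads.conf -n 192.168.1.0/24,10.0.0.0/8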

Once I began testing PADS, I realized that I needed to add some signatures because some services were showing up as unknown. This signature, which comes with PADS, is the one that detected the SMTP service in the first test.
smtp,v/Generic SMTP - Possible Postfix//$1/,220 ([-.\w]+) ESMTP\r\n
PADS uses PCRE to test matches. In this signature, the match inside the parentheses is the host and domain name, and it gets printed by using $1. If there were a second parenthesized match, it could be printed with $2. The pattern itself is everything after the second comma.
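Just as an illustration of the format (PADS may already ship with something similar), a signature for OpenSSH banners could use two parenthesized matches, with $1 pulling out the protocol and $2 the version:
ssh,v/OpenSSH/$2/Protocol $1/,SSH-([\d.]+)-OpenSSH[_-]([\w.]+)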

I've already played around with adding or modifying some signatures, and I'll probably post those once I'm done testing in some different environments. There is also an option to dump banner data to a pcap file, which is useful when writing new signatures.
pads -i bridge0 -c /usr/local/etc/pads.conf -d bannerdump.pcap
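The dump is a normal pcap file, so tcpdump can be used to page through the captured banners while working on signatures:
# tcpdump -nn -A -r bannerdump.pcap | less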

05 March, 2008

make buildworld from 6.3-STABLE to 7.0-RELEASE

$ uname -r
6.3-STABLE
I am updating from FreeBSD 6.3-STABLE to 7.0-RELEASE. First I ran cvsup to synchronize my source with RELENG_7_0. When I started working through the steps of rebuilding world, I ran into problems with make buildworld.
cc1: out of memory allocating 97582896 bytes

Stop in /usr/src/gnu/usr.bin/cc/cc_int.
*** Error code 1

Stop in /usr/src/gnu/usr.bin/cc.
*** Error code 1

Stop in /usr/src.
*** Error code 1

Stop in /usr/src.
*** Error code 1

Stop in /usr/src.
I fixed it by installing ccache and editing /etc/make.conf (see ccache-howto-freebsd.txt).
cd /usr/ports/devel/ccache
make install clean

vim /etc/make.conf
.if !defined(NOCCACHE)
CC=/usr/local/libexec/ccache/world-cc
CXX=/usr/local/libexec/ccache/world-c++
.endif
:wq
Now I can run make cleandir and then make buildworld in /usr/src without errors. After that, I finish up the remaining steps for rebuilding world.
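For reference, the rest of the sequence after buildworld follows the Handbook and /usr/src/UPDATING, roughly:
cd /usr/src
make buildkernel KERNCONF=GENERIC    # or the name of your custom kernel config
make installkernel KERNCONF=GENERIC
shutdown -r now                      # boot the new kernel; single-user mode is safest
mergemaster -p
cd /usr/src
make installworld
mergemaster
shutdown -r now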
$ uname -r
7.0-RELEASE

04 March, 2008

Using parted and LVM2 for large partitions

I wanted to spread one filesystem across two RAID cards, with one partition on each card's array. Here is the server's physical drive configuration.

|-RAID0-|-RAID5---------|
| 00 01 | 02 03 04 05 06|

|-RAID5------------------------------------|
| 00 01 02 03 04 05 06 07 08 09 10 11 12 13|
The second RAID5 is a few TB, and fdisk won't work on partitions larger than 2TB, so I use parted to create a GPT partition that fills the free space.
# parted /dev/sdc
GNU Parted 1.8.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print

Model: DELL PERC 5/E Adapter (scsi)
Disk /dev/sdc: 3893GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

(parted) mklabel gpt

(parted) mkpart primary 0 3893G
(parted) print

Model: DELL PERC 5/E Adapter (scsi)
Disk /dev/sdc: 3893GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 3893GB 3893GB primary

(parted) quit
Then I use LVM2 to combine the two RAID partitions, sdb1 and sdc1, into one logical volume.
# pvcreate /dev/sdb1 /dev/sdc1
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
# vgcreate nsm_vg /dev/sdb1 /dev/sdc1
Volume group "nsm_vg" successfully created
# pvscan
PV /dev/sdb1 VG nsm_vg lvm2 [272.25 GB / 632.00 GB free]
PV /dev/sdc1 VG nsm_vg lvm2 [3.54 TB / 3.54 TB free]
Total: 2 [1.94 TB] / in use: 2 [1.94 TB] / in no VG: 0 [0 ]

# lvcreate -L 3897G -n nsm_lv nsm_vg
Logical volume "nsm_lv" created

# mkfs.ext3 -m 1 /dev/nsm_vg/nsm_lv
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
528482304 inodes, 1056964608 blocks
10569646 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
32256 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

# mount /dev/mapper/nsm_vg-nsm_lv /nsm

$ df -h
Filesystem Size Used Avail Use% Mounted on
---snip---
/dev/mapper/nsm_vg-nsm_lv
3.8T 196M 3.8T 1% /nsm
Now I can put the entry for mounting /nsm into /etc/fstab.
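The fstab entry looks something like this (adjust the mount options and fsck pass number to taste):
/dev/mapper/nsm_vg-nsm_lv    /nsm    ext3    defaults    1 2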