
01 May, 2013

Installing OSSEC agent

With the recent news about the latest Apache backdoor on systems using cPanel, I thought it would be pertinent to show the process of adding an OSSEC agent that connects to a Security Onion server. Why is this relevant? Because OSSEC and other file integrity checkers can detect changes to binaries like Apache's httpd.

"OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response."
Many systems include integrity checking programs in their default installs these days, for instance Red Hat with AIDE. AIDE is also available in repositories for a number of other Linux distributions, as well as FreeBSD.

This case in particular would require using something other than the default options for integrity checking because cPanel installs Apache httpd in /usr/local/apache/bin, a non-standard directory that may not be automatically included when computing file hashes and doing subsequent integrity checks.
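
Sticking with AIDE for a moment, a sketch of what covering that directory might look like in aide.conf is below. The NORMAL rule group comes from Red Hat's default configuration; group names vary by distribution, so treat this as a hypothetical example.

# hypothetical aide.conf entries; adjust the rule group for your distribution
/usr/local/apache/bin NORMAL
/usr/local/apache/conf NORMAL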

The reason I'm demonstrating OSSEC here is that it easily integrates with the Sguil console, and in Security Onion the sensors and server already have OSSEC configured to send alerts to Sguild. OSSEC also has additional functionality compared to AIDE. In this case, I'm installing the agent on a Slackware server.
$ wget http://www.ossec.net/files/ossec-hids-2.7.tar.gz

---snipped---

 $ openssl sha1 ossec-hids-2.7.tar.gz
SHA1(ossec-hids-2.7.tar.gz)= 721aa7649d5c1e37007b95a89e685af41a39da43
 $ tar xvzf ossec-hids-2.7.tar.gz

---snipped---

 $ sudo ./install.sh

  OSSEC HIDS v2.7 Installation Script - http://www.ossec.net

 You are about to start the installation process of the OSSEC HIDS.
 You must have a C compiler pre-installed in your system.
 If you have any questions or comments, please send an e-mail
 to dcid@ossec.net (or daniel.cid@gmail.com).

  - System: Linux webserver 3.8.4
  - User: root
  - Host: webserver
 
   -- Press ENTER to continue or Ctrl-C to abort. --

 
 1- What kind of installation do you want (server, agent, local, hybrid or help)? agent
 
  - Agent(client) installation chosen.

 2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:
 
 3- Configuring the OSSEC HIDS.

   3.1- What's the IP Address or hostname of the OSSEC HIDS server?: 192.168.1.20
 
  - Adding Server IP 192.168.1.20

   3.2- Do you want to run the integrity check daemon? (y/n) [y]:

    - Running syscheck (integrity check daemon).

   3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
 
 - Running rootcheck (rootkit detection).

   3.4 - Do you want to enable active response? (y/n) [y]:

3.5- Setting the configuration to analyze the following logs:
    -- /var/log/messages
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/adm/syslog
    -- /var/adm/auth.log
    -- /var/adm/messages
    -- /var/log/xferlog
    -- /var/log/proftpd.log
    -- /var/log/apache/error_log (apache log)
    -- /var/log/apache/access_log (apache log)
    -- /var/log/httpd/error_log (apache log)
    -- /var/log/httpd/access_log (apache log)

  - If you want to monitor any other file, just change
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .

   --- Press ENTER to continue ---

---snip---

- Init script modified to start OSSEC HIDS during boot.
 - Configuration finished properly.
 - To start OSSEC HIDS:
                /var/ossec/bin/ossec-control start
 - To stop OSSEC HIDS:
                /var/ossec/bin/ossec-control stop
 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

    Thanks for using the OSSEC HIDS.
    If you have any question, suggestion or if you find any bug,
    contact us at contact@ossec.net or using our public maillist at

    ossec-list@ossec.net
    ( http://www.ossec.net/main/support/ ).

    More information can be found at http://www.ossec.net

    ---  Press ENTER to finish (maybe more information below). ---

 - You first need to add this agent to the server so they 
   can communicate with each other. When you have done so,
   you can run the 'manage_agents' tool to import the 
   authentication key from the server.

   /var/ossec/bin/manage_agents

   More information at: 
   http://www.ossec.net/en/manual.html#ma

Next, I add the agent to my Security Onion server.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: A

- Adding a new agent (use '\q' to return to the main menu).

  Please provide the following:
   * A name for the new agent: webserver
   * The IP Address of the new agent: 192.168.1.5
   * An ID for the new agent[001]: 

Agent information:

   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: e

Available agents: 

   ID: 001, Name: webserver, IP: 192.168.1.5

Provide the ID of the agent to extract the key (or '\q' to quit): 001

Agent key information for '001' is: 

---snip---

** Press ENTER to return to the main menu.

Now I copy the key, go back to the web server, and paste it in to import the key.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.7 Agent manager.     *
* The following options are available: *
****************************************

   (I)mport key from the server (I).
   (Q)uit.

Choose your action: I or Q: i

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit): ---snip---

Agent information:
   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

If I were running a system with cPanel that was vulnerable to Cdorked.A, I would want to make sure OSSEC is monitoring the directories containing the Apache httpd files. The OSSEC default configuration from my recent install is at /var/ossec/etc/ossec.conf, and the relevant lines are below:
<syscheck>
    <!-- Frequency that syscheck is executed - default to every 22 hours -->
    <frequency>79200</frequency>
    
    <!-- Directories to check  (perform all possible verifications) -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
    <directories check_all="yes">/bin,/sbin</directories>

So by default OSSEC would not be checking the integrity of cPanel's Apache installation, and I would need to add /usr/local/apache to the directory checks, along the lines of the sketch below.
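
Here is a hypothetical entry added inside the <syscheck> block; the paths assume cPanel's default /usr/local/apache layout and should be adjusted for the actual system.

    <!-- example addition for the cPanel Apache installation -->
    <directories check_all="yes">/usr/local/apache/bin,/usr/local/apache/conf</directories>

After making any changes for my particular system, I check the status of OSSEC, and it is not yet running.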
$ sudo /etc/rc.d/rc.ossec status
ossec-logcollector not running...
ossec-syscheckd not running...
ossec-agentd not running...
ossec-execd not running...
$ sudo /etc/rc.d/rc.ossec start 
Starting OSSEC HIDS v2.7 (by Trend Micro Inc.)...
Started ossec-execd...
Started ossec-agentd...
Started ossec-logcollector...
Started ossec-syscheckd...
Completed.

Note that after adding the OSSEC agent on the remote system and then adding it on the OSSEC server, you must restart ossec-hids-server in order for the ossec-remoted process to start listening on 1514/udp for remote agents.
$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted not running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
$ sudo /etc/init.d/ossec-hids-server restart
Killing ossec-monitord .. 
Killing ossec-logcollector .. 
ossec-remoted not running ..
Killing ossec-syscheckd .. 
Killing ossec-analysisd .. 
ossec-maild not running ..
Killing ossec-execd .. 
OSSEC HIDS v2.6 Stopped
Starting OSSEC HIDS v2.6 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
2013/04/30 23:13:59 ossec-maild: INFO: E-Mail notification disabled. Clean Exit.
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-remoted...
Started ossec-syscheckd...
Started ossec-monitord...
Completed.

$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...

$ netstat -l | grep 1514
udp        0      0 *:1514                  *:*    

Note the error, which corresponds to the FAQ entry about getting an error when starting OSSEC. However, since I'm running OSSEC 2.7, that entry did not seem to apply. Poking around, I realized the ossec-logtest executable had not been copied to /var/ossec/bin when I ran the install script. After I manually copied it to that directory, restarting OSSEC no longer produced the "Testing rules failed" error.
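
For reference, a sketch of the manual fix, assuming the 2.7 tarball was extracted to ~/ossec-hids-2.7 and that the binary was built under src/analysisd/ (both locations are assumptions from memory):

 $ sudo cp ~/ossec-hids-2.7/src/analysisd/ossec-logtest /var/ossec/bin/
 $ ls -l /var/ossec/bin/ossec-logtest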

Once you have installed OSSEC on the system to be monitored, added the agent on the server, imported the key on the monitored system, restarted the server process, and started the client process, you will start getting alerts from the newly added system in Sguil. For example, the content of a Sguil alert will look like this after updating gcc:
Integrity checksum changed for: '/usr/bin/gcc'
Old md5sum was: '764a405824275d806ab5c441516b2d79'
New md5sum is : '6ab74628cd8a0cdf84bb3329333d936e'
Old sha1sum was: '230a4c09010f9527f2b3d6e25968d5c7c735eb4e'
New sha1sum is : 'b931ceb76570a9ac26f86c12168b109becee038b'

In the Sguil console, if I wanted to view all the recent OSSEC alerts, I could perform a query as pictured below. Note that you need to escape the brackets or drop them in favor of the MySQL wildcards '%%'.
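
For example, the query criteria could be written either of these ways, the first escaping the brackets and the second dropping them in favor of wildcards (a sketch; adjust for your console):

WHERE event.signature LIKE '\[OSSEC\]%'
WHERE event.signature LIKE '%%OSSEC%%'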


Finally, to show an example of the various types of alerting that OSSEC can do in addition to checksum changes, here is a query and output directly from the MySQL console.
mysql> SELECT count(signature),signature FROM event WHERE signature LIKE '%%OSSEC%%' GROUP BY signature ORDER BY count(signature) DESC;
+------------------+---------------------------------------------------------------------------------------+
| count(signature) | signature                                                                             |
+------------------+---------------------------------------------------------------------------------------+
|              388 | [OSSEC] Integrity checksum changed.                                                   |
|              149 | [OSSEC] Host-based anomaly detection event (rootcheck).                               |
|               46 | [OSSEC] Integrity checksum changed again (2nd time).                                  |
|               39 | [OSSEC] IP Address black-listed by anti-spam (blocked).                               |
|               12 | [OSSEC] Integrity checksum changed again (3rd time).                                  |
|                4 | [OSSEC] Web server 400 error code.                                                    |
|                3 | [OSSEC] Receipent address must contain FQDN (504: Command parameter not implemented). |
+------------------+---------------------------------------------------------------------------------------+
7 rows in set (0.00 sec)
The highest-count alert, plus the alerts indicating "2nd time" and "3rd time", are the basic functionality needed to detect changes to a file, my original use case. The "rootcheck" alerts flag files owned by root but writable by everyone. The rest of the alerts come from reading the system logs and detecting the system rejecting emails (anti-spam, 504) or web server error codes.

Back to the original problem of Cdorked.A: the blog posts on the subject also indicate that NSM could detect unusually long HTTP sessions, and there are no doubt other network behaviors that could be used to create signatures or network analytics resulting in detection. File integrity checks are just one possible way to detect a compromised server. Remember that you need known-good checksums for this to work! Ideally, you install something like OSSEC before the system is live on the network, or at the very least before it runs any listening services that could be compromised prior to the checksums being computed.

26 March, 2012

Updating to Snort 2.9.2 and Barnyard2

After fixing hardware problems that had my home network sensor out of commission for the better part of a year, I recently got the system inline again. Because the sensor had been down for so long, I was running a fairly old version of Snort, 2.9.0.3, along with barnyard 0.2.0. I decided the first thing I should do after updating the OS itself was update Snort and Barnyard.

I won't go through the process in detail since there are many resources online for installing and configuring Snort. The main thing I will point out is that you should always look in the docs/ directory for information on installing and upgrading. If you're updating from a previous version, pay particular attention to changes and new features. Another important thing to do is look closely at the snort.conf provided with a given version in etc/ since it will have a lot of information on defaults and configuration directives that may be required. These won't always be the same as previous versions. It's also important to update to the latest rule sets, check for new rules files, and do all the other normal tuning to make sure certain rules are turned off or on.

I had two main problems when I updated, one with Snort and one with Barnyard2. Since Snort is the main piece of the puzzle here, I updated it prior to Barnyard. After updating to Snort-2.9.2.1 and fixing the configuration, I was able to run Snort successfully using the options I normally had previously. However, as soon as I put the sensor back inline and Snort started processing packets, Snort would exit with an error.

Can't acquire (-1) - ipq_daq_acquire: ipq_read=-1 error Failed to receive netlink message!

A quick search revealed that I had to remove the ip_queue module. JJ Cummings on the #snort channel pointed out to me that NFQ is the more current option than IPQ. I am using Slackware-current, so even though it is a maintained distribution, it is not surprising that I was using an older option. Slackware also did not have a couple of the libraries required to compile DAQ with NFQ support, so I went to SlackBuilds.org to get the files for creating Slackware packages for libnetfilter_queue and libnfnetlink.

Once I got the new packages installed, made sure the ip_queue module wasn't loaded, recompiled DAQ to support NFQ, and changed my Snort init to use --daq nfq, my inline Snort was working once again.
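
Roughly, the sequence looked something like the sketch below; file names, paths, and the queue number are illustrative assumptions rather than exact commands from my history.

# build and install the SlackBuilds.org packages for the NFQ dependencies
$ sudo sh libnfnetlink.SlackBuild && sudo installpkg /tmp/libnfnetlink-*.tgz
$ sudo sh libnetfilter_queue.SlackBuild && sudo installpkg /tmp/libnetfilter_queue-*.tgz
# in the DAQ source tree, rebuild so the nfq module is available
$ ./configure && make && sudo make install
# make sure the legacy ip_queue module is no longer loaded
$ sudo rmmod ip_queue
# confirm nfq is listed, then run Snort inline with the nfq DAQ
$ snort --daq-list
$ snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort.conf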

Next, I updated from Barnyard-0.2.0.

$ barnyard2 -V

  ______   -*> Barnyard2 <*-
 / ,,_  \  Version 2.1.10-beta2 (Build 266) TCL
 |o"  )~|  By Ian Firns (SecurixLive): http://www.securixlive.com/
 + '''' +  (C) Copyright 2008-2011 Ian Firns


Barnyard2 is needed to process Snort's newer output mode, unified2. My snort.conf changed from:

output log_unified: filename unified.log, limit 128

to:

output unified2: filename unified.log, limit 128

When I got Barnyard2 up and running, it was obviously not successfully processing the unified2 files from Snort. Barnyard2 kept repeating the following error as it tried to process the files.

WARNING: No function defined to read header.

I found a thread on the snort-users list that indicated Barnyard2 was getting a file type it wasn't expecting, which made sense considering the warning message. This issue gave me more problems than it should have and I eventually realized it was because of an error in my barnyard.conf file. The input is supposed to read "input unified2" but I had somehow managed to include a colon after "input". Once I fixed that line, Barnyard2 started working, with alerts being properly processed and showing up in Sguil once again.
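
In other words, the difference in barnyard2.conf was as small as this:

# broken - the stray colon after "input" kept Barnyard2 from reading the spool
input: unified2
# fixed
input unified2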

The next update will be to go from Sguil-0.7.0 to Sguil-0.8.0.

10 April, 2009

Upgrading to Snort 2.8.4

Hopefully, everyone running Snort has been paying attention and noticed that Snort 2.8.4 was released on 07 April and the corresponding new rule sets were released on 08 April. If you pay for Snort rules, the new netbios.rules will not work with Snort versions prior to 2.8.4, so you need to upgrade.

Upgrading was mostly painless just to get Snort running, though the dcerpc2 preprocessor settings definitely may require tweaking beyond the defaults. Generally, when I'm upgrading and there is something like a new preprocessor, I will start by getting Snort upgraded and successfully running with the defaults for the preprocessor, then will tune the settings as needed after I'm sure the new version is working.

The first thing to do after downloading and extracting is read the RELEASENOTES and also any applicable READMEs. Since the dcerpc2 preprocessor is new, the README.dcerpc2 may be of particular interest. A lot of people don't realize how much documentation is included.

$ wget http://www.snort.org/dl/snort-2.8.4.tar.gz
$ tar xzf snort-2.8.4.tar.gz
$ cd snort-2.8.4/doc
$ ls
AUTHORS README.alert_order README.ppm
BUGS README.asn1 README.sfportscan
CREDITS README.csv README.ssh
INSTALL README.database README.ssl
Makefile.am README.dcerpc README.stream5
Makefile.in README.dcerpc2 README.tag
NEWS README.decode* README.thresholding
PROBLEMS README.decoder_preproc_rules README.variables
README README.dns README.wireless
README.ARUBA README.event_queue TODO
README.FLEXRESP README.flowbits USAGE
README.FLEXRESP2 README.frag3 WISHLIST
README.INLINE README.ftptelnet faq.pdf
README.PLUGINS README.gre faq.tex
README.PerfProfiling README.http_inspect generators
README.SMTP README.ipip snort_manual.pdf
README.UNSOCK README.ipv6 snort_manual.tex
README.WIN32 README.pcap_readmode snort_schema_v106.pdf
The following is the default configuration as listed in the README.dcerpc2.
preprocessor dcerpc2_server: default, policy WinXP, \
detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
smb_max_chain 3
For some environments, it may be useful to turn off the preprocessor alerts or turn them off for specific systems.
preprocessor dcerpc2: memcap 102400, events none
Or:
preprocessor dcerpc2_server: default, policy WinXP, \
detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
smb_max_chain 3
preprocessor dcerpc2_server: net $SCANNERS, detect none
No matter what, the dcerpc configuration should be removed from the snort.conf and replaced with a dcerpc2 configuration for Snort 2.8.4.

After that, I'll configure and install Snort.
$  ./configure --enable-dynamicplugin --enable-inline --enable-perfprofiling
$ make
$ sudo /etc/rc.d/rc.snort-inline stop
$ sudo make install
$ which snort
/usr/local/bin/snort
$ snort -V

,,_ -*> Snort! <*-
o" )~ Version 2.8.4 (Build 26) inline
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
Copyright (C) 1998-2009 Sourcefire, Inc., et al.
Using PCRE version: 7.7 2008-05-07

$ sudo /etc/rc.d/rc.snort-inline start
Starting snort-inline
Initializing Inline mode
building cached socket reset packets
Although the old netbios.rules will still work with the new version, it's also time to update to the latest rules to take advantage of the smaller number of netbios.rules. Note that this only applies for subscription rules since the free ones won't reflect the changes for 30 days.

Some of the rules in the new netbios.rules file have SIDs that were unchanged, so I decided to re-enable any of those that I had previously disabled in my oinkmaster.conf. I can always disable them again, but with the new preprocessor and possible changes to the remaining SIDs, I decided it was best to reevaluate them.
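
For anyone not familiar with Oinkmaster, disabling and re-enabling a rule comes down to a disablesid line in oinkmaster.conf; the SID below is a placeholder, not one of the rules I actually had disabled.

# hypothetical oinkmaster.conf entry; delete or comment it to re-enable the rule
disablesid 2514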

Since I'm also using precompiled shared object rules, I also update from the ones compiled for 2.8.3.2 to the ones compiled for 2.8.4. I keep the rule stubs in a directory along with symlinks to the corresponding SO files, so I simply remove the symlink and recreate it to the new SO rules.
$ rm /etc/snort/so_rules/*so
$ ln -s /etc/snort/so_rules/precompiled/CentOS-5.0/i386/2.8.4/*so /etc/snort/so_rules/

23 May, 2008

Snort 2.8.1 changes and upgrading

Snort 2.8.1 has been out since April, so this post is a little late. I wanted to upgrade some Red Hat/CentOS systems from Snort 2.8.0.2 to 2.8.1. When I write RHEL or Red Hat, it includes CentOS, since what applies to one should apply to the other.

I quickly found that the upgrade on RHEL4 was not exactly straightforward because the version of pcre used on RHEL4 is pcre 4.5, which is years old. Snort 2.8.1 requires at least pcre 6.0. For reference, RHEL5 is using pcre 6.6.

PCRE Changes

I had assumed an upgrade from 2.8.0.2 to 2.8.1 was minor, but the pcre change indicated there were definitely significant changes between versions. A few of the pcre changes made it into the Snort ChangeLog, and there is more detail in the Snort manual. I also had a brief discussion about the pcre changes with a few SourceFire developers who work on Snort and some other highly technical Snort users.

The most significant change seems to be adding limits to pcre matching to help prevent performance problems and denial of service through pcre overload. At regular-expression compile time, a maximum number of state changes is set, and on evaluation libpcre will only change states that many times. There is a global config option that sets the maximum number of state changes, and the limit can also be disabled per regular expression if needed.

The related configuration options for snort.conf are pcre_match_limit:

Restricts the amount of backtracking a given PCRE option. For example, it will limit the number of nested repeats within a pattern. A value of -1 allows for unlimited PCRE, up to the PCRE library compiled limit (around 10 million). A value of 0 results in no PCRE evaluation. The snort default value is 1500.
and pcre_match_limit_recursion:
Restricts the amount of stack used by a given PCRE option. A value of -1 allows for unlimited PCRE, up to the PCRE library compiled limit (around 10 million). A value of 0 results in no PCRE evaluation. The snort default value is 1500. This option is only useful if the value is less than the pcre_match_limit.
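
In snort.conf these are set with config directives; a sketch using the stated defaults, plus the -1 value that lifts a limit, would be:

config pcre_match_limit: 1500
config pcre_match_limit_recursion: 1500
# or, to remove a limit entirely:
# config pcre_match_limit: -1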
The main discussion between the SourceFire folks and everyone else was whether it was wise to have the limits turned on by default and where the default of 1500 came from. I think leaving the pcre limits on by default makes sense, because those who don't fiddle much with the Snort configuration probably need the protection of the limits more than someone who would notice when Snort performance was suffering.

The argument against having the limits on by default is that it could make certain rules ineffective. By cutting short the pcre, the rule may not trigger on traffic that should cause a Snort alert. I suspect that not many rules will hit the default pcre limit.

Other Changes Included in Snort 2.8.1

From the release notes:
[*] New Additions
* Target-Based support to allow rules to use an attribute table
describing services running on various hosts on the network.
Eliminates reliance on port-based rules.

* Support for GRE encapsulation for both IPv4 & IPv6.

* Support for IP over IP tunneling for both IPv4 & IPv6.

* SSL preprocessor to allow ability to not inspect encrypted traffic.

* Ability to read mulitple PCAPs from the command line.
The SSL/TLS preprocessor helps performance by allowing Snort to ignore encrypted traffic rather than inspecting it. I haven't looked at the target-based support yet, but it definitely sounds interesting.

I also noticed something from 2007-12-10 in the ChangeLog:
 * configure.in:
Add check for Phil Woods pcap so that pcap stats are computed
correctly. Thanks to John Hally for bringing this to our
attention.
That should be good for those running Phil Wood's pcap who may not have been seeing accurate statistics about packet loss.

A last note of interest about Snort 2.8.1 is that it fixes a vulnerability related to improper reassembly of fragmented IP packets.

Upgrading

Upgrading on RHEL5 was pretty simple, but upgrading on RHEL4 required downloading the source RPM for the pcre included with RHEL5 and building a RPM for RHEL4.

First, I installed compilers and the rpm-build package and the RPM source package for pcre-6.6. (Note that I'm using CentOS4 in the example, but the procedures for Red Hat should be nearly identical except for URLs). Next I built the pcre-6.6 RPMs and installed both pcre and pcre-devel.
# up2date -i cpp gcc gcc-c++ rpm-build
# rpm -Uvh http://isoredirect.centos.org/centos/5/updates/SRPMS/pcre-6.6-2.el5_1.7.src.rpm
$ rpmbuild -bb /usr/src/redhat/SPECS/pcre.spec
# rpm -U /usr/src/redhat/RPMS/i386/pcre-*rpm
# rpm -q pcre pcre-devel
pcre-6.6-2.7
pcre-devel-6.6-2.7
Now I can configure, make and make install Snort as usual. The snort.conf file will also need to be updated to reflect new options like the SSL/TLS preprocessor and non-default pcre checks. Note that the README.ssl also recommends changes to the stream5 preprocessor based on the SSL configuration.
$ ./configure --enable-dynamicplugin --enable-stream4udp --enable-perfprofiling
$ make
# make install
If you are building on a production box instead of a development box, some would recommend removing the compilers afterwards.

Here is an example of an SSL/TLS and stream5 configuration using default ports. By adding the default SSL/TLS ports to the stream5_tcp settings, you also have to enumerate the default stream5_tcp ports or they will not be included. The default SSL/TLS ports are 443, 465, 563, 636, 989, 992, 993, 994, and 995; the rest are default stream5_tcp ports:
preprocessor stream5_global: track_tcp yes, track_udp yes, track_icmp yes
preprocessor stream5_tcp: policy first, ports both 21 23 25 42 53 80 \
110 111 135 136 137 139 143 443 445 465 513 563 636 \
989 992 993 994 995 1433 1521 3306
preprocessor stream5_udp
preprocessor stream5_icmp
preprocessor ssl: \
noinspect_encrypted
I have tested to make sure Snort runs with this configuration for these preprocessors, but don't just copy it without checking my work; it is simply an example put together after reading the related README documentation. Note that the Stream 5 TCP Policy Reassembly Ports output in your message log when starting Snort will be truncated after the first 20 ports, but all the ports listed will be included by Stream 5.

21 March, 2008

Passive Tools

I love passive tools, what I like to think of as the "M" in NSM.

I recently posted about PADS. Sguil also uses p0f for operating system fingerprinting, and sancp for session-logging.

Even the IDS and packet-logging components of Sguil are passive. There are plenty of other good passive tools available.

You can learn a lot just by listening.

You can also run Snort inline and active, which goes a little beyond monitoring, for better or worse.

06 January, 2008

IDS/IPS placement on home network

A coworker was asking me about setting up Snort at home so he could get some experience breaking things.

I put together some very rough diagrams with Dia. These are common, inexpensive solutions for running Snort at home, in either passive (IDS) or active (inline) mode. At most, you need an extra hub or switch; the only one that doesn't require anything beyond the network cards on the sensor is the external inline sensor.

The first is a common home network configuration. This is basically how mine was before I installed Snort.

The second diagram shows an external sensor. An advantage to this is that you see everything. A disadvantage is that you see everything. The management interface is inside the firewall while the bridging interfaces are outside the firewall. Seeing all the traffic isn't the only disadvantage. Some other disadvantages:

  • You can't see internal addresses to identify individual systems.
  • You need a higher-performance system. This is not usually a problem on residential service, but it should be noted that a lot more traffic will pass through the system since it's not behind the firewall. You will see a ton of automated scanning and exploit attempts even if the traffic won't make it past the firewall.
Placing the sensor outside the firewall is reasonable if you want to find out just how much activity is happening on the external segment, but it can be really noisy and you lose inside visibility.

The third image is using an extra switch. In this example, you will see all traffic going through the firewall, as well as all broadcast traffic on the private network. You won't see unicast traffic between internal hosts, but you will be able to identify which host is associated with any given traffic that is seen. Bridging is enabled on the sensor and you can run Snort inline. This is my preferred configuration unless you have a lot of wireless traffic.
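
For reference, the bridge in that third setup can be created with the bridge-utils tools. This is a rough sketch with example interface names, and the iptables line assumes bridge-netfilter is enabled in the kernel so forwarded traffic reaches ip_queue for Snort inline:

$ sudo brctl addbr br0
$ sudo brctl addif br0 eth1
$ sudo brctl addif br0 eth2
$ sudo ifconfig eth1 0.0.0.0 up
$ sudo ifconfig eth2 0.0.0.0 up
$ sudo ifconfig br0 up
# send forwarded traffic to the queue so inline Snort can inspect it
$ sudo iptables -A FORWARD -j QUEUE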

The fourth diagram shows how to run Snort passively. There are two basic options. The first is a hub that will broadcast all traffic to all ports; this can hurt performance depending on how busy the internal network is. The second is an inexpensive switch that supports port mirroring. I haven't used it, but I've seen an inexpensive Dell switch referenced that supports port mirroring, though it only supports mirroring four ports at a time. In this configuration, you can see all traffic on the local segment in addition to Internet traffic. If using a switch with a mirror, you will probably need a separate management interface. If using a hub, the management interface can also do the sniffing.

You could also use a hub between the modem and the firewall if you wanted to run an external passive sensor.

EDIT: Based on Victor's comment, I added one other diagram. Diagram five shows the firewall and Snort inline on the same system. Victor uses iptables to filter the traffic first, then traffic that passes through the firewall goes to Snort running inline. He has separate Snort processes for the DMZ and the LAN.

This configuration is slightly more complicated. There are exceptions, but the places I've worked in the past would not have considered using this type of configuration mainly because they were quite large and wanted off-the-shelf networking products rather than rolling their own firewalls or routers. It is still a useful and usable configuration to learn, and setting it up would provide a lot of valuable experience.