
01 May, 2013

Installing OSSEC agent

With the recent news about the latest Apache backdoor on systems using cPanel, I thought it would be pertinent to show the process of adding an OSSEC agent that connects to a Security Onion server. Why is this relevant? Because OSSEC and other file integrity checkers can detect changes to binaries like Apache's httpd.

"OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response."
Many systems include integrity checking programs in their default installs these days, for instance Red Hat with AIDE. AIDE is also available in repositories for a number of other Linux distributions plus FreeBSD.

This case in particular would require using something other than the default options for integrity checking because cPanel installs Apache httpd in /usr/local/apache/bin, a non-standard directory that may not be automatically included when computing file hashes and doing subsequent integrity checks.

The reason I'm demonstrating OSSEC here is that it easily integrates with the Sguil console, and in Security Onion the sensors and server already have OSSEC configured to send alerts to Sguild. OSSEC also has additional functionality compared to AIDE. In this case, I'm installing the agent on a Slackware server.
$ wget http://www.ossec.net/files/ossec-hids-2.7.tar.gz

---snipped---

 $ openssl sha1 ossec-hids-2.7.tar.gz
SHA1(ossec-hids-2.7.tar.gz)= 721aa7649d5c1e37007b95a89e685af41a39da43
 $ tar xvzf ossec-hids-2.7.tar.gz

---snipped---

 $ sudo ./install.sh

  OSSEC HIDS v2.7 Installation Script - http://www.ossec.net

 You are about to start the installation process of the OSSEC HIDS.
 You must have a C compiler pre-installed in your system.
 If you have any questions or comments, please send an e-mail
 to dcid@ossec.net (or daniel.cid@gmail.com).

  - System: Linux webserver 3.8.4
  - User: root
  - Host: webserver
 
   -- Press ENTER to continue or Ctrl-C to abort. --

 
 1- What kind of installation do you want (server, agent, local, hybrid or help)? agent
 
  - Agent(client) installation chosen.

 2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:
 
 3- Configuring the OSSEC HIDS.

   3.1- What's the IP Address or hostname of the OSSEC HIDS server?: 192.168.1.20
 
  - Adding Server IP 192.168.1.20

   3.2- Do you want to run the integrity check daemon? (y/n) [y]:

    - Running syscheck (integrity check daemon).

   3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
 
 - Running rootcheck (rootkit detection).

   3.4 - Do you want to enable active response? (y/n) [y]:

3.5- Setting the configuration to analyze the following logs:
    -- /var/log/messages
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/adm/syslog
    -- /var/adm/auth.log
    -- /var/adm/messages
    -- /var/log/xferlog
    -- /var/log/proftpd.log
    -- /var/log/apache/error_log (apache log)
    -- /var/log/apache/access_log (apache log)
    -- /var/log/httpd/error_log (apache log)
    -- /var/log/httpd/access_log (apache log)

  - If you want to monitor any other file, just change
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .

   --- Press ENTER to continue ---

---snip---

- Init script modified to start OSSEC HIDS during boot.
 - Configuration finished properly.
 - To start OSSEC HIDS:
                /var/ossec/bin/ossec-control start
 - To stop OSSEC HIDS:
                /var/ossec/bin/ossec-control stop
 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

    Thanks for using the OSSEC HIDS.
    If you have any question, suggestion or if you find any bug,
    contact us at contact@ossec.net or using our public maillist at

    ossec-list@ossec.net
    ( http://www.ossec.net/main/support/ ).

    More information can be found at http://www.ossec.net

    ---  Press ENTER to finish (maybe more information below). ---

 - You first need to add this agent to the server so they 
   can communicate with each other. When you have done so,
   you can run the 'manage_agents' tool to import the 
   authentication key from the server.

   /var/ossec/bin/manage_agents

   More information at: 
   http://www.ossec.net/en/manual.html#ma

Next, I add the agent to my Security Onion server.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: A

- Adding a new agent (use '\q' to return to the main menu).

  Please provide the following:
   * A name for the new agent: webserver
   * The IP Address of the new agent: 192.168.1.5
   * An ID for the new agent[001]: 

Agent information:

   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.

Choose your action: A,E,L,R or Q: e

Available agents: 

   ID: 001, Name: webserver, IP: 192.168.1.5

Provide the ID of the agent to extract the key (or '\q' to quit): 001

Agent key information for '001' is: 

---snip---

** Press ENTER to return to the main menu.

Now copy the key, go back to the web server, paste and import the key.
$ sudo /var/ossec/bin/manage_agents 

****************************************
* OSSEC HIDS v2.7 Agent manager.     *
* The following options are available: *
****************************************

   (I)mport key from the server (I).
   (Q)uit.

Choose your action: I or Q: i

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit): ---snip---

Agent information:
   ID:001
   Name:webserver
   IP Address:192.168.1.5

Confirm adding it?(y/n): y

If I were running a system with cPanel that was vulnerable to Cdorked.A, I would want to make sure OSSEC monitors the directories containing the Apache httpd files. The default configuration file from my recent install is /var/ossec/etc/ossec.conf, and the relevant lines are below:
<syscheck>
    <!-- Frequency that syscheck is executed - default to every 22 hours -->
    <frequency>79200</frequency>
    
    <!-- Directories to check  (perform all possible verifications) -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
    <directories check_all="yes">/bin,/sbin</directories>
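
To cover cPanel's Apache, syscheck needs the non-standard directory added. A minimal sketch of the extra entry follows; the path list is my assumption based on cPanel's default layout, not something taken from the stock configuration:

    <directories check_all="yes">/usr/local/apache/bin,/usr/local/apache/conf</directories>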

So by default OSSEC would not be checking the integrity of cPanel's Apache installation, and I would need to add /usr/local/apache to the directory checks, as sketched above. After making any changes for my particular system, I check the status of OSSEC and find it is not yet running.
$ sudo /etc/rc.d/rc.ossec status
ossec-logcollector not running...
ossec-syscheckd not running...
ossec-agentd not running...
ossec-execd not running...
$ sudo /etc/rc.d/rc.ossec start 
Starting OSSEC HIDS v2.7 (by Trend Micro Inc.)...
Started ossec-execd...
Started ossec-agentd...
Started ossec-logcollector...
Started ossec-syscheckd...
Completed.

Note that after installing the OSSEC agent on the remote system and adding it on the OSSEC server, you must restart ossec-hids-server so that the ossec-remoted process starts listening on 1514/udp for remote agents.
$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted not running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
$ sudo /etc/init.d/ossec-hids-server restart
Killing ossec-monitord .. 
Killing ossec-logcollector .. 
ossec-remoted not running ..
Killing ossec-syscheckd .. 
Killing ossec-analysisd .. 
ossec-maild not running ..
Killing ossec-execd .. 
OSSEC HIDS v2.6 Stopped
Starting OSSEC HIDS v2.6 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
2013/04/30 23:13:59 ossec-maild: INFO: E-Mail notification disabled. Clean Exit.
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-remoted...
Started ossec-syscheckd...
Started ossec-monitord...
Completed.

$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...

$ netstat -l | grep 1514
udp        0      0 *:1514                  *:*    

Note the "Testing rules failed" error above, which corresponds to the FAQ entry about getting an error when starting OSSEC. However, since I'm running OSSEC 2.7, that entry did not seem to apply. Poking around, I realized the ossec-logtest executable had not been copied to /var/ossec/bin when I ran the install script. After I manually copied it to that directory, restarting OSSEC no longer produced the "Testing rules failed" error.
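
The manual fix was a single copy. The source path below is my assumption about where the build leaves the binary, so adjust it to match your build tree:

 $ sudo cp ossec-hids-2.7/src/analysisd/ossec-logtest /var/ossec/bin/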

Once you have installed OSSEC on the system to be monitored, added the agent on the server, imported the key, restarted the server process, and started the client process, you will start getting alerts from the newly added system in Sguil. For example, the content of a Sguil alert will look like this after updating gcc:
Integrity checksum changed for: '/usr/bin/gcc'
Old md5sum was: '764a405824275d806ab5c441516b2d79'
New md5sum is : '6ab74628cd8a0cdf84bb3329333d936e'
Old sha1sum was: '230a4c09010f9527f2b3d6e25968d5c7c735eb4e'
New sha1sum is : 'b931ceb76570a9ac26f86c12168b109becee038b'

In the Sguil console, if I wanted to view all the recent OSSEC alerts I could perform a query as pictured below. Note you need to escape the brackets or remove them in favor of the MySQL wildcards '%%'.


Finally, to show an example of the various types of alerting that OSSEC can do in addition to checksum changes, here is a query and output directly from the MySQL console.
mysql> SELECT count(signature),signature FROM event WHERE signature LIKE '%%OSSEC%%' GROUP BY signature ORDER BY count(signature) DESC;
+------------------+---------------------------------------------------------------------------------------+
| count(signature) | signature                                                                             |
+------------------+---------------------------------------------------------------------------------------+
|              388 | [OSSEC] Integrity checksum changed.                                                   |
|              149 | [OSSEC] Host-based anomaly detection event (rootcheck).                               |
|               46 | [OSSEC] Integrity checksum changed again (2nd time).                                  |
|               39 | [OSSEC] IP Address black-listed by anti-spam (blocked).                               |
|               12 | [OSSEC] Integrity checksum changed again (3rd time).                                  |
|                4 | [OSSEC] Web server 400 error code.                                                    |
|                3 | [OSSEC] Receipent address must contain FQDN (504: Command parameter not implemented). |
+------------------+---------------------------------------------------------------------------------------+
7 rows in set (0.00 sec)
The highest-count alert, plus the alerts indicating "2nd time" and "3rd time", represent the basic functionality needed to detect changes to a file, my original use case. The "rootcheck" alerts flag files owned by root but writable by everyone. The remaining alerts come from reading the system logs and detecting the system rejecting emails (anti-spam, 504) or web server error codes.

Back to the original problem of Cdorked.A: the blog posts on the subject also indicate that NSM could detect unusually long HTTP sessions, and there are no doubt other network behaviors that could be used to create signatures or network analytics resulting in detection. File integrity checks are just one possible way to detect a compromised server. Remember, you need known good checksums for this to work! Ideally, you install something like OSSEC before the system goes live on the network, or at the least before it runs any listening services that could be compromised prior to the checksums being computed.

22 April, 2013

Home Lab Part 2: VMware ESXi, Security Onion, and More

As I stated in my previous post about a new home lab configuration, I decided to try VMware ESXi 5.1 on my new Shuttle SH67H. ESXi is free for uses like this, presumably because it clearly benefits VMware if professionals can use it in a lab setting and that encourages use of their paid products in production. I have seen some conflicting accounts, but it appears that the main limit on the free version of ESXi 5.1 is 32GB of RAM.

I won't go into too much detail about the installation since it is adequately covered by a couple of other posts I found prior to purchasing my system.

I will mainly cover details that stood out and things I discovered as someone new to ESXi.

I had already planned to get a Shuttle for the small form factor, low noise, and low power usage. Finding out that the SH67H could be used as a white box for ESXi made it easy to pick an initial project once I built the system. (Okay, we could quibble over whether a Shuttle counts as a white box). Additionally, since my previous home network sensor running Sguil had died, I figured that the first VM to build would be Security Onion but that I'd still be able to run multiple other VMs without impacting my home lab NSM.

Getting ESXi installed on the Shuttle was pretty simple. After booting to CD, I just followed the prompts and made sane choices. The one thing to note is that I installed ESXi to an external USB flash drive. Since the OS is so small, it gets loaded primarily to RAM at boot anyway. Using a flash drive has some advantages and some disadvantages, as shown in many discussions on the VMware and other discussion boards. For my home lab I decided to install to the flash drive, but chances are that it will actually make no difference to me. Some ESXi servers have no local storage, so I imagine it is particularly common for those systems to use a USB flash drive.

After using a directly connected keyboard and monitor, I moved the system into my home "server closet" and booted it headless. I installed the vSphere Client on my local Windows VM since I don't have a non-VM Windows installation. The vSphere Client was surprisingly easy to use; I might even go so far as to call it user-friendly. You can see in the screenshot below that it is relatively straightforward.
The error states "System logs on host vmshuttle are stored on non-persistent storage."
The first thing I noticed was that, because I installed ESXi to the flash drive, I got the error shown in the screenshot.

This error was only temporary. I am not sure exactly when it was resolved, most likely after a reboot or after I created the initial guest VM, but the system created a ".locker" directory in which I can clearly see all the logs. I assume they are persistent since vmstore0 is the internal 1TB hard drive, not the USB flash drive.

# cd /vmfs/volumes/vmstore0/
# ls -l .locker/
drwxr-xr-x    1 root     root           280 Apr  7 16:01 core
drwxr-xr-x    1 root     root           280 Apr  7 16:01 downloads
drwxr-xr-x    1 root     root          4340 Apr 16 02:15 log
drwxr-xr-x    1 root     root           420 Apr  7 16:01 var


I believe another option for fixing the error would be to manually set the scratch partition as detailed in VMware's Knowledge Base. Note that I haven't actually tried that to date.

Before being able to SSH into the ESXi host and look at the above directories and files, I had to enable SSH. The configuration for SSH and a number of other security-related services is available in vSphere by highlighting the host (since in the workplace you may use vSphere to manage multiple ESXi systems), then going to the Configuration tab, Security Profile, and finally SSH Properties. If you haven't noticed already, ESXi defaults to using root for everything. I haven't yet investigated the feasibility of locking down the ESXi host, but I think it's safe to say most people will rely on keeping the host as isolated as possible since the host OS is not particularly flexible or configurable outside the options VMware provides.

I decided the best way to use vSphere would be to copy my Windows 7 VM from my laptop to the ESXi host. Trying to scp the VM then adding it to the inventory never worked properly. I had similar problems when trying to scp a CentOS VM from my laptop. When I tried browsing the datastore in vSphere and adding a local machine to the remote inventory, it would get partway through and then fail with an I/O error. I believe this was all actually a case of a flaky wireless access point, but even in cases where I successfully copied the CentOS VM I got errors when trying to add it to the inventory.

I eventually got it to work by converting the VM locally using ovftool then deploying it to ESXi. OVF is the Open Virtualization Format, an open standard for packaging virtual machines. The syntax to convert an existing VM is simple. First, make sure the VM is powered down rather than just paused. On OSX running VMware Fusion, you can export a VM easily.

~ nr$ cd /Applications/VMware\ Fusion.app/Contents/Library/VMware\ OVF\ Tool/
~ nr$ ./ovftool -dm=thin ~/Documents/Virtual\ Machines.localized/Windows\ 7\ x64.vmwarevm/Windows\ 7\ x64.vmx ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf

After the conversion, the VM still needed to be exported to the ESXi host. I plugged my laptop into a wired connection to speed the process and eliminate any issues I was having over wireless, then sent the VM to ESXi. The options I used are to set the datastore, disk mode, and network.

~ nr$ ./ovftool -ds=vmstore0 -dm=thin --network="VM Network 2" ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf vi://192.168.1.10/

Once the VM is copied to the host, you will need to browse the datastore and add the VM to the ESXi inventory. Other ways to move a VM to ESXi are not endorsed by VMware. They officially recommend using OVF to import VMs.

All things considered, getting ESXi installed and configured was relatively easy. There are certainly drawbacks to using unsupported hardware. For example, vSphere does not display CPU temperature and other health or status information. I believe ESXi expects to use IPMI for hardware status like voltages, temperatures, and more. There are options to consider for anyone wanting a home lab using supported hardware. VMware maintains a lengthy HCL and I presume systems on their list support all the health status information in vSphere. I did find several possibilities to buy used servers like a Dell PowerEdge 2950 at reputable sites for about $650. Since I didn't want the noise, don't have a rack, and may not keep the system as a dedicated ESXi host, I did not go that route for a lab system.

Building a Security Onion VM


As stated, the first VM I built was Security Onion. I did this through the vSphere client and include some screenshots here. Most of this applies to building any VM using vSphere.

After choosing the option to create a new VM, I selected a custom configuration. I named the VM simply "Security Onion" and chose my only datastore, vmstore0, as the storage location. I am not concerned with backwards compatibility, so chose "Virtual Machine Version 8." I chose only one core and one socket for the CPU, but allocated 4GB of RAM since I knew the combination of Suricata, Bro, and other monitoring processes would eat a lot of RAM. I was installing 64-bit, so I chose 64-bit Ubuntu as the Linux version. 
Choosing the number of NICs, network, and
adapter to use when initially configuring the VM
I selected two NICs both using VMXNET 3, which was probably the first non-standard selection in my custom configuration. I wanted to make sure I had separate management and promiscuous mode NICs since this will be a sensor. The option for VMXNET 3 should not be available as a choice if you previously selected an OS that doesn't support it when you created the VM.

I next chose the LSI Logic SAS for the SCSI Controller. Although I think it won't matter for Ubuntu, note the following from VMware's local help files.
"The LSI Logic Parallel adapter and the LSI Logic SAS adapter offer equivalent performance. Some guest operating system vendors are phasing our support for parallel SCSI in favor of SAS, so if your virtual machine and guest operating system support SAS, choose LSI SAS to maintain future compatibility."
This is a good time to point out that hitting the "Help" button in vSphere will open the local help files in your browser, and they contain actual useful information about the differences in choices when configuring the VM. In the case of the help button during the process of creating a new VM, it will actually open the specific page that is contextually useful for the options on the current step of the process. In general, both their help files and the Knowledge Base seem quite useful.

Finally, I created the virtual disk. This includes deciding whether to thin provision, thick provision lazy zeroed, or thick provision eager zeroed, the last meaning the disk is fully allocated and zeroed ahead of time. The documentation states that eager zeroed supports clustering for fault tolerance. I chose thick provisioning for my Security Onion VM since I knew with certainty that the virtual disk would get filled with NSM data like packet captures and logs. There are a number of KB and blog posts on the VMware site that detail the advantages and disadvantages of the different provisioning methods.
The final settings for my Security Onion VM

Once the VM was configured on ESXi, I still needed to actually install Security Onion. You can do it the old-fashioned way by burning a disc and using the CD/DVD drive, but I mounted the ISO instead. To do this, you just start the VM, which doesn't yet have an OS, then open a console in vSphere and click the button to mount the ISO in the virtual CD drive so the VM boots to the disc image and starts the installation process. The vSphere console is a similar view and interface to Fusion or Workstation, and mounting the ISO works essentially the same way.

The time it took from hitting the button to create a VM to having a running Security Onion sensor was quite short. I had a couple of small problems after the initial installation. First, in ESXi you have to manually edit the virtual switch security settings and allow promiscuous mode so a VM can sniff all the traffic. My sniffing interface was initially not seeing the traffic when I checked with tcpdump, which made me realize it was probably not yet in promiscuous mode.
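
The same change can also be made from the ESXi shell. This is a sketch; it assumes the sniffing NIC hangs off a standard vSwitch named vSwitch1, so substitute your own vSwitch name:

~ # esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true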

Second, the 4GB RAM and one CPU I had initially allocated was insufficient. When the sensor was running and I tried to update Ubuntu, the system became very unresponsive. I eventually doubled the RAM to 8GB and the number of cores to two, which resolved the issue. I think at this point that I could probably actually drop back down to 4GB of RAM, but since the system has 32GB I don't need to worry about it yet.

Other ESXi Notes


Although ESXi is stripped pretty bare of common Linux utilities and commands, there is plenty you can do from a command line through SSH instead of using vSphere. For example, to list all VMs on the system, power on my Windows 7 VM, and find the IP address so I can connect through RDP:

# vim-cmd vmsvc/getallvms
Vmid        Name                            File                           Guest OS       Version   Annotation
1      Security Onion   [vmstore0] Security Onion/Security Onion.vmx   ubuntu64Guest      vmx-08              
13     Windows 7 x64    [vmstore0] Windows 7 x64/Windows 7 x64.vmx     windows7_64Guest   vmx-09              
6      CentOS 64-bit    [vmstore0] CentOS 64-bit/CentOS 64-bit.vmx     centos64Guest      vmx-08 
~ # vim-cmd vmsvc/power.on 13
~ # vim-cmd vmsvc/get.guest 13 | grep -m 1 ipAddress
   ipAddress = "192.168.1.160",

I can get SMART information from my hard drive if needed.

~ # esxcli storage core device list
t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Display Name: Local ATA Disk (t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF)
   Has Settable Display Name: true
   Size: 953869
   Device Type: Direct-Access 
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Vendor: ATA     
   Model: ST1000DM003-1CH1
   Revision: CC44
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters: 
   VAAI Status: unknown
   Other UIDs: vml.01000000002020202020202020202020205a31443347484b46535431303030
   Is Local SAS Device: false
   Is Boot USB Device: false

~ # esxcli storage core device smart get -d t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A  
Media Wearout Indicator       N/A    N/A        N/A  
Write Error Count             N/A    N/A        N/A  
Read Error Count              115    6          99   
Power-on Hours                100    0          100  
Power Cycle Count             100    20         100  
Reallocated Sector Count      100    10         100  
Raw Read Error Rate           115    6          99   
Drive Temperature             32     0          40   
Driver Rated Max Temperature  68     45         65   
Write Sectors TOT Count       200    0          200  
Read Sectors TOT Count        N/A    N/A        N/A  
Initial Bad Block Count       100    99         100  

There is a lot more you can do from the ESXi command line interface, but I should emphasize again that it is stripped fairly bare and does not have a lot of commands you expect if you come from a Linux or Unix background. Even some of the utilities that are available do not have some of the options or functionality you would expect. The CLI commands will generally list options or help when run without arguments. You can also get plenty of CLI documentation from VMware.


Next Steps


I now have a number of VMs installed, including a CentOS snapshot, FreeBSD, and my Windows 7 VM. My next steps will include setting up some VLANs to have some fun with a vulnerable network and an attacker network that will include Kali Linux. I am intimately familiar with Sguil and some of the other tools in Security Onion, but also hope to dig into Suricata and Bro more than I have in the past.

I also hope that my lab will provide some interesting material for future blog posts.

04 April, 2013

New Home Lab Configuration

I received all my new equipment for my home lab a couple of days ago. After setting up the hardware in less than a day, I'm quite happy with it so far.

I was lucky enough to have two 12-year-olds assist me when I assembled the computer. This was their first time assembling a computer from parts and they really enjoyed it.

The first component was the Shuttle SH67H3. My friend Richard recommended the DS61, but I had two main problems with that barebones system. First, it only has two RAM slots for a maximum of 16GB. That's not bad, but I decided I wanted to get a desktop that supported more RAM without going to server components while keeping the form factor small. I actually may have a second system on my purchase list for sometime this year, and in that case I would definitely consider the DS61.

Second, I had read that the SH67H3 worked as an ESXi whitebox. Overall, I am a fan of Shuttle barebones. The SH67H3 is essentially the same chassis my coworkers and I used on our lab network at a previous job, just with a new motherboard and other improvements. I used very similar or identical parts for my Shuttle as the ones listed in the ESXi whitebox link above.

When we popped the case open, it all looked pretty familiar and I explained the various pieces to the 12-year-olds. We removed the fan and heat sink array, which is a pretty nice low-noise setup. The case fan actually slides over the second heat sink so air blows over it on the way out the back of the chassis.
Don't forget to remove both the sticker from the heatsink and the plastic film that is on the CPU load plate before putting the CPU in the socket. After we inserted the Intel Core i7 2600, we applied thermal paste, reattached the passive cooling, and finally reattached the fan including plugging it back into the motherboard. We also inserted the four 8GB RAM sticks.
Shuttle SH67H3 motherboard with CPU and RAM installed
The fan slides over the heat sink at the rear of the chassis on the left
Next, we put the DVD/CD drive and hard drive into the tray, attached the tray to the chassis, and connected the SATA and power cables. I also added a dual Intel PRO/1000 PT NIC to give a total of three physical network interfaces. We finally tested and everything appeared to be working.


New Network Architecture


Since I was going to all this trouble for a computer far more powerful than my three old Pentium III servers, I decided to take the opportunity to make a couple of other network changes. I used to run my network sensor inline, but along with the new computer I purchased a Netgear GS108T-200 smart switch. This switch has an abundance of features, including VLANs and port mirroring. Along with the new switch, all I needed was an extra WAP to reconfigure my home network as shown below.
The router/firewall also works as a WAP, but to see most client traffic
I disabled it and connected an access point behind the mirroring switch
With this configuration, the switch will mirror traffic to a dedicated network interface on my network sensor. The only traffic the mirror port will miss is traffic that never reaches the switch. I can also configure VLANs on the switch if I want to segment the network based on functions like management interfaces and WiFi clients.

I plan to write more about my lab setup as I continue to redevelop it. The first thing I did after testing the new box was install ESXi and create a network sensor VM using Security Onion. I may have a post about it soon.

26 March, 2012

Updating to Snort 2.9.2 and Barnyard2

After fixing hardware problems that had my home network sensor out of commission for the better part of a year, I recently got the system inline again. Because the sensor had been down for so long, I was running a fairly old version of Snort, 2.9.0.3, along with barnyard 0.2.0. I decided the first thing I should do after updating the OS itself was update Snort and Barnyard.

I won't go through the process in detail since there are many resources online for installing and configuring Snort. The main thing I will point out is that you should always look in the docs/ directory for information on installing and upgrading. If you're updating from a previous version, pay particular attention to changes and new features. Another important thing to do is look closely at the snort.conf provided with a given version in etc/ since it will have a lot of information on defaults and configuration directives that may be required. These won't always be the same as previous versions. It's also important to update to the latest rule sets, check for new rules files, and do all the other normal tuning to make sure certain rules are turned off or on.

I had two main problems when I updated, one with Snort and one with Barnyard2. Since Snort is the main piece of the puzzle here, I updated it prior to Barnyard. After updating to Snort-2.9.2.1 and fixing the configuration, I was able to run Snort successfully using the options I had used previously. However, as soon as I put the sensor back inline and Snort started processing packets, Snort would exit with an error.

Can't acquire (-1) - ipq_daq_acquire: ipq_read=-1 error Failed to receive netlink message!

A quick search revealed that I had to remove the ip_queue module. JJ Cummings on the #snort channel pointed out to me that NFQ is the newer option, superseding IPQ. I am using Slackware-current, so even though it is a maintained distribution, it is not surprising that I was using an older option. Slackware also did not have a couple of the libraries required to compile DAQ with NFQ support, so I went to Slackbuilds.org to get the files allowing me to create Slackware packages for libnetfilter_queue and libnfnetlink.

Once I got the new packages installed, made sure the ip_queue module wasn't loaded, recompiled DAQ to support NFQ, and changed my Snort init to use --daq nfq, my inline Snort was working once again.
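
For anyone reproducing this, the moving parts look roughly like the following. The queue number is a placeholder of my choosing rather than the value from my actual init script, and the config path may differ on your system:

# iptables -I FORWARD -j NFQUEUE --queue-num 0
# snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort.conf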

Next, I updated from Barnyard-0.2.0.

$ barnyard2 -V

  ______   -*> Barnyard2 <*-
 / ,,_  \  Version 2.1.10-beta2 (Build 266) TCL
 |o"  )~|  By Ian Firns (SecurixLive): http://www.securixlive.com/
 + '''' +  (C) Copyright 2008-2011 Ian Firns


Barnyard2 is needed to process Snort's newer output mode, unified2. My snort.conf changed from:

output log_unified: filename unified.log, limit 128

to:

output unified2: filename unified.log, limit 128

When I got Barnyard2 up and running, it was obviously not successfully processing the unified2 files from Snort. Barnyard2 kept repeating the following error as it tried to process the files.

WARNING: No function defined to read header.

I found a thread on the snort-users list that indicated Barnyard2 was getting a file type it wasn't expecting, which made sense considering the warning message. This issue gave me more problems than it should have and I eventually realized it was because of an error in my barnyard.conf file. The input is supposed to read "input unified2" but I had somehow managed to include a colon after "input". Once I fixed that line, Barnyard2 started working, with alerts being properly processed and showing up in Sguil once again.
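
For reference, the Barnyard2 invocation itself is simple once the configuration is right. This is a sketch with assumed paths; the spool directory, base filename, and waldo bookmark file must match your snort.conf output settings and local layout:

$ sudo barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f unified.log -w /var/log/snort/barnyard2.waldo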

The next update will be to go from Sguil-0.7.0 to Sguil-0.8.0.

02 December, 2010

Using slackbuilds.org to create Slackware packages

Sorry for the long posting hiatus but don't expect it to end. I just don't have a lot of time or material to devote to the blog right now.

I recently wanted to upgrade Postfix on my Slackware mail server. I used to use packages from LinuxPackages.net for unofficial packages, but the site has gotten less active and always had a reputation for varying package quality. My preference now is using the SlackBuilds.org scripts to build my own packages. It's fairly simple to download their build script, edit as needed, then build a Slackware package from source.

Since Postfix is not available from the official Slackware repositories, I downloaded the SlackBuild files and then the Postfix source.

$ wget http://postfix.cs.utah.edu/source/official/postfix-2.6.8.tar.gz 
$ wget http://slackbuilds.org/slackbuilds/13.1/network/postfix.tar.gz
$ tar xvzf postfix.tar.gz
$ ls postfix/
README     postfix-2.6.8.tar.gz  postfix.info  slack-desc
doinst.sh  postfix.SlackBuild*   rc.postfix
I am using Cyrus-SASL, so it was important for me to note the following from the SlackBuild Postfix page.
This script builds postfix with support for Dovecot SASL but does not
include any support for Cyrus-SASL. If you need to enable support for
Cyrus see SASL_README in the source code.
I also noted the following from the postfix.SlackBuild file itself.
# Postfix unfortunately does not use a handy ./configure script so you
# must generate the makefiles using (what else?) "make makefiles". The
# following includes support for TLS and SASL. It should automatically
# find PCRE and DB3 support. The docs have information for adding
# additional support such as MySQL or LDAP.
I changed the "make makefiles" lines from:
make makefiles \
  CCARGS='-DUSE_SASL_AUTH -DDEF_SERVER_SASL_TYPE=\"dovecot\" -DUSE_TLS' \
  AUXLIBS="-lssl -lcrypto"
to:
make makefiles CCARGS="-DUSE_SASL_AUTH -DUSE_CYRUS_SASL -DHAS_PCRE \
                 -I/usr/local/include/sasl -I/usr/include" \
                 AUXLIBS="-L/usr/local/lib -lsasl2 -L/usr/lib -lpcre"
This added Cyrus-SASL support and also fixed a problem I was having with it finding PCRE. I also changed the VERSION variable to 2.6.8 since the postfix.SlackBuild file was for 2.6.1. After the changes, all I have to do is run the postfix.SlackBuild file and then use "upgradepkg" on the resulting postfix-2.6.8-i486-1_SBo.tgz package, as shown below. (Note that official packages use xz for compression now, not gzip, so they will have the extension .txz.)
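
A sketch of those two steps; the package filename is the one my run reported, so the architecture tag and build number may differ on your system:

$ sudo sh postfix.SlackBuild
$ sudo upgradepkg --install-new /tmp/postfix-2.6.8-i486-1_SBo.tgz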

The next package I will create using SlackBuilds is cyrus-imapd since it also is not included in Slackware. Cyrus-SASL actually has an official package, but I've been running Cyrus for so long that I have always installed it from source. I don't remember if that is because it wasn't available as a package back in the day or just because I was using some non-standard options.

14 March, 2010

March Slackware-current: libblkid.so.1

The Slackware-current updates from March 1, 2010, included updates to both the e2fsprogs package and the util-linux-ng package. An important thing to note is that libblkid was moved out of e2fsprogs and into util-linux-ng. If you search the web for libblkid.so.1, slackpkg, util-linux-ng, and e2fsprogs, you will see various forum posts about systems that can no longer boot. This is because libblkid.so.1 is required for mounting, and the updates included a new kernel, which meant a lot of people updated and then rebooted without having util-linux-ng installed. Booting without the library will get you error messages about libblkid.so.1 not being found when the system tries to mount the drives.

$ man libblkid
LIBBLKID(3)                                                        LIBBLKID(3)

NAME
       libblkid - block device identification library

SYNOPSIS
       #include <blkid.h>

       cc file.c -lblkid

DESCRIPTION
       The libblkid library is used to identify block devices (disks) as to
       their content (e.g. filesystem type) as well as extracting additional
       information such as filesystem labels/volume names, unique
       identifiers/serial numbers, etc.
If you don't already have util-linux-ng installed then make sure to install it before rebooting since the update to e2fsprogs will remove libblkid.
$ sudo slackpkg update
---snip---
$ sudo slackpkg install util-linux-ng
---snip---
$ sudo slackpkg install-new
---snip---
$ sudo slackpkg upgrade-all
If you got stuck because you ran upgrade-all without util-linux-ng installed and then rebooted for the kernel update, you can boot to the Slackware install CD or DVD and install the old version of e2fsprogs or the new util-linux-ng, as sketched below. This will allow you to boot normally and then fix whatever is needed, such as installing the new util-linux-ng and/or upgrading e2fsprogs.
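
From the install media's shell, the recovery amounts to mounting the root filesystem and installing the package into it. This is a sketch; it assumes the root partition is /dev/sda1 and that you have the package file available at the given path:

# mount /dev/sda1 /mnt
# installpkg --root /mnt /path/to/util-linux-ng-*.txz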

13 March, 2010

Customizing Slackware Tcl Package for Sguil

Most distributions these days are configuring their Tcl packages with --enable-threads as a default. Slackware-current switched some months back with the following in the ChangeLog.txt.

+--------------------------+
Mon Dec  7 02:13:13 UTC 2009
d/ruby-1.9.1_p243-i486-3.txz:  Rebuilt.
  Added an explicit --enable-pthread.  This is mostly to make sure that we get
  the expected option set from future releases of Ruby -- it appears that not
  only is --enable-pthread the default in ruby-1.9.1, but trying to use
  --disable-pthread doesn't work.  Furthermore, Ruby and Tcl/Tk no longer work
  together unless both Ruby and Tcl/Tk are compiled with thread support.
  Compiling Tcl/Tk with thread support has caused some problems in the past.
  If a threaded Tcl app tries to fork(), it will hang, but by now most affected
  Tcl apps (such as eggdrop) should have patches available.
  Anyway, this should fix the issues with Ruby and Tk.  Please test it, and
  report any other problems that arise.
tcl/tcl-8.5.8-i486-1.txz:  Upgraded.
  Compiled using --enable-threads, since Ruby requires it to work with Tk.
tcl/tclx-8.4-i486-3.txz:  Rebuilt.
  Recompiled using --enable-threads.
tcl/tix-8.4.3-i486-2.txz:  Rebuilt.
  Recompiled using --enable-threads.
tcl/tk-8.5.8-i486-1.txz:  Upgraded.
  Compiled using --enable-threads, since Ruby requires it to work with Tk.
+--------------------------+ 
The Sguil daemon will not work with threaded Tcl, so to fix this you need to build a package for the distribution of your choice with the --disable-threads configure option. In Slackware and most other distributions, it is fairly simple to customize a package.

Download Tcl from the source directory on the Slackware mirror of your choice. It should include a slack-desc file, a tcl.SlackBuild file, and the Tcl source. Modify the tcl.SlackBuild file to replace --enable-threads with --disable-threads.
./configure \
  --prefix=/usr \
  --libdir=/usr/lib${LIBDIRSUFFIX} \
  --enable-shared \
  --disable-threads \
  --enable-man-symlinks \
  --enable-man-compression=gzip \
  ${CONFARGS} \
  --build=$ARCH-slackware-linux
You may also want to modify the slack-desc to note that this is a non-threaded version. Then build the new package.
$ sh tcl.SlackBuild
---snip--- 
Slackware package /tmp/tcl-8.5.8-i486-1.txz created.
As you see, the package will get written to /tmp by default. Now replace the threaded version with the new non-threaded version.
$ sudo upgradepkg --reinstall /tmp/tcl-8.5.8-i486-1.txz
+==============================================================================
| Upgrading tcl-8.5.8-i486-1 package using /tmp/tcl-8.5.8-i486-1.txz
+==============================================================================

Pre-installing package tcl-8.5.8-i486-1...

Removing package /var/log/packages/tcl-8.5.8-i486-1-upgraded-2010-03-13,20:03:22...

Verifying package tcl-8.5.8-i486-1.txz.
Installing package tcl-8.5.8-i486-1.txz:
PACKAGE DESCRIPTION:
# tcl (Tool Command Language)
#
# Tcl, developed by Dr. John Ousterhout, is a simple to use text-based
# script language with many built-in features which make it especially
# nice for writing interactive scripts.
#
# This is a version customized by nr that uses --disable-threads.
#
Executing install script for tcl-8.5.8-i486-1.txz.
Package tcl-8.5.8-i486-1.txz installed.

Package tcl-8.5.8-i486-1 upgraded with new package /tmp/tcl-8.5.8-i486-1.txz.

15 September, 2009

MySQL replication on RHEL

I recently configured MySQL for replication after first enabling SSL connections between the two systems that would be involved with replication. I have to say that MySQL documentation is excellent and all these notes are simply based on what is available on the MySQL site. I have included links to as many of the relevant sections of the documentation as possible.

For reference, here is the MySQL manual on enabling SSL: 5.5.7.2. Using SSL Connections

Before beginning, it is a good idea to create a directory for the SSL output files and make sure all the files end up there.
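
For example, the directory name below simply matches the one that appears in the listing later in this post:

$ mkdir ~/mysqlcerts && cd ~/mysqlcerts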

MySQL’s RHEL5 packages from mysql.com support SSL by default, but to check you can run:


$ mysqld --ssl --help
mysqld  Ver 5.0.67-community-log for redhat-linux-gnu on i686 (MySQL Community Edition (GPL))
Copyright (C) 2000 MySQL AB, by Monty and others
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license

Starts the MySQL database server

Usage: mysqld [OPTIONS]

For more help options (several pages), use mysqld --verbose --help


The command will print an error if there is no SSL support.

Next, check that the MySQL server has SSL enabled. The below output means that the server supports SSL but it is not enabled. Enabling it can be done at the command line or in the configuration file, which will be detailed later.


$ mysql -u username -p -e"show variables like 'have_ssl'"
Enter password:
+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| have_ssl      | DISABLED |
+---------------+----------+


Documentation on setting up certificates:
5.5.7.4. Setting Up SSL Certificates for MySQL

First, generate the CA key and CA certificate:


$ openssl genrsa 2048 > mysql-ca-key.pem
Generating RSA private key, 2048 bit long modulus
............................................+++
............+++

$ openssl req -new -x509 -nodes -days 356 -key mysql-ca-key.pem > mysql-ca-cert.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:California
Locality Name (eg, city) [Newbury]:Burbank
Organization Name (eg, company) [My Company Ltd]:Acme Road Runner Traps
Organizational Unit Name (eg, section) []:Acme IRT
Common Name (eg, your name or your server's hostname) []:mysql.acme.com
Email Address []:acme-irt@acme.com


Create the server certificate:


$ openssl req -newkey rsa:2048 -days 365 -nodes -keyout mysql-server-key.pem > mysql-server-req.pem

Generating a 2048 bit RSA private key
.............................+++
.............................................................+++
writing new private key to 'mysql-server-key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:California
Locality Name (eg, city) [Newbury]:Burbank
Organization Name (eg, company) [My Company Ltd]:Acme Road Runner Traps
Organizational Unit Name (eg, section) []:Acme IRT
Common Name (eg, your name or your server's hostname) []:mysql.acme.com
Email Address []:acme-irt@acme.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:

$ openssl x509 -req -in mysql-server-req.pem -days 356 -CA mysql-ca-cert.pem -CAkey mysql-ca-key.pem -set_serial 01 > mysql-server-cert.pem
Signature ok
subject=/C=US/ST=California/L=Burbank/O=Acme Road Runner Traps/OU=Acme IRT/CN=
mysql.acme.com/emailAddress=acme-irt@acme.com
Getting CA Private Key


Finally, create the client certificate:


$ openssl req -newkey rsa:2048 -days 356 -nodes -keyout mysql-client-key.pem > mysql-client-req.pem
Generating a 2048 bit RSA private key
................+++
.................+++
writing new private key to 'mysql-client-key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:California
Locality Name (eg, city) [Newbury]:Burbank
Organization Name (eg, company) [My Company Ltd]:Acme Road Runner Traps
Organizational Unit Name (eg, section) []:Acme IRT
Common Name (eg, your name or your server's hostname) []:mysql.acme.com
Email Address []:acme-irt@acme.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:

$ openssl x509 -req -in mysql-client-req.pem -days 356 -CA mysql-ca-cert.pem -CAkey mysql-ca-key.pem -set_serial 01 > mysql-client-cert.pem
Signature ok
subject=/C=US/ST=California/L=Burbank/O=Acme Road Runner Traps/OU=Acme
IRT/CN=mysql.acme.com/emailAddress=acme-irt@acme.com
Getting CA Private Key

[nr@mysqld mysqlcerts]$ ls
mysql-ca-cert.pem      mysql-client-key.pem   mysql-server-key.pem
mysql-ca-key.pem       mysql-client-req.pem   mysql-server-req.pem
mysql-client-cert.pem  mysql-server-cert.pem


To enable SSL when starting mysqld, the following should be in /etc/my.cnf under the [mysqld] section. For this example, I put the files in /etc/mysql/openssl:


ssl-ca="/etc/mysql/openssl/mysql-ca-cert.pem"
ssl-cert="/etc/mysql/openssl/mysql-server-cert.pem"
ssl-key="/etc/mysql/openssl/mysql-server-key.pem"


To use any client, for instance mysql from the command line or the GUI MySQL Administrator, copy the client certificate and key to a dedicated folder on the local box along with the CA certificate. You will then have to configure the client to use the client certificate, client key, and CA certificate.

To connect with the mysql client using SSL, copy the client certificates to a folder, for instance /etc/mysql, then under the [client] section in /etc/my.cnf:


ssl-ca="/etc/mysql/openssl/mysql-ca-cert.pem"
ssl-cert="/etc/mysql/openssl/mysql-client-cert.pem"
ssl-key="/etc/mysql/openssl/mysql-client-key.pem"


In MySQL Administrator, the following is an example you would put into the Advanced Parameters section if you want to connect using SSL.


SSL_CA U:/keys/mysql-ca-cert.pem
SSL_CERT U:/keys/mysql-client-cert.pem
SSL_KEY U:/keys/mysql-client-key.pem
USE_SSL Yes


Replication

Before configuring replication, I made sure to review the MySQL replication documentation.

16.1.1.1. Creating a User for Replication
Because MySQL stores the replication user’s name and password using plain text in the master.info file, it’s recommended to create a dedicated user that only has the REPLICATION SLAVE privilege. The replication user needs to be created on the master so the slaves can connect with that user.


GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.50' IDENTIFIED BY 'password';


16.1.1.2. Setting the Replication Master Configuration
Edit my.cnf to uncomment the “log-bin” line. Also uncomment “server-id = 1”. The server-id can be anything between 1 and 2^32 but must be unique.

Also add “expire_logs_days” to my.cnf. If you don’t, the binary logs could fill up the disk partition because they are not deleted by default!


expire_logs_days = 4


16.1.1.3. Setting the Replication Slave Configuration
Set server-id to something different from the master in my.cnf. Although not required, enabling binary logging on the slave is also recommended for backups, crash recovery, and in case the slave will also be a master to other systems.
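
A minimal sketch of the slave's my.cnf additions, assuming the master kept server-id = 1:

[mysqld]
server-id = 2
log-bin   = mysql-bin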

16.1.1.4. Obtaining the Master Replication Information
I flush the tables to disk and lock them to temporarily prevent changes.


# mysql -u root -p -A dbname
mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.00 sec)


If the slave already has data from the master, you may want to copy the data over manually to simplify things, as described in Section 16.1.1.6, "Creating a Data Snapshot Using Raw Data Files". However, you can also use mysqldump, as shown in Section 16.1.1.5, "Creating a Data Snapshot Using mysqldump".
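
A sketch of the mysqldump route; the --master-data option records the master's binary log coordinates in the dump itself:

$ mysqldump -u root -p --all-databases --master-data > dbdump.sql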

Once the data is copied over to the slave, I get the current log position.


mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 16524487 |              |                  |
+------------------+----------+--------------+------------------+ 
1 row in set (0.00 sec)

mysql> UNLOCK TABLES;


16.1.1.10. Setting the Master Configuration on the Slave

Finally, configure the slave. The log file and log position tell the slave where to begin replication. All changes after that log position will be replicated to the slave.


mysql> CHANGE MASTER TO
->     MASTER_HOST='192.168.1.50',
->     MASTER_USER='repl',
->     MASTER_PASSWORD='replication_password',
->     MASTER_LOG_FILE='mysql-bin.000002',
->     MASTER_LOG_POS=16524487,
->     MASTER_SSL=1,
->     MASTER_SSL_CA = '/etc/mysql/openssl/mysql-ca-cert.pem',
->     MASTER_SSL_CAPATH='/etc/mysql/openssl/',
->     MASTER_SSL_CERT = '/etc/mysql/openssl/mysql-server-cert.pem',
->     MASTER_SSL_KEY = '/etc/mysql/openssl/mysql-server-key.pem';
Query OK, 0 rows affected (0.74 sec)  

mysql> START SLAVE;


Replication can start!
The slave status can be checked via the following command:


mysql> show slave status;

26 March, 2009

Slackware-current updates and Nvidia driver

I recently updated a desktop running Slackware-current to the latest packages released up to 24-Mar-2009. There were a few minor issues. The first was that, when using slackpkg to upgrade, I had to upgrade the findutils package before everything else or slackpkg would no longer work properly.

# slackpkg update
# slackpkg upgrade findutils
# slackpkg upgrade-all
The second was that I couldn't install the Nvidia driver for the new 2.6.28.8 kernel package because the installer failed with the following error.
ERROR: The 'cc' sanity check failed:

The C compiler 'cc' does not appear able to
create executables. Please make sure you have
your Linux distribution's gcc and libc development
packages installed.
It turns out that gcc-4.3.3 in slackware-current depends on mpfr.
# slackpkg info mpfr

PACKAGE NAME: mpfr-2.3.1-i486-1.tgz
PACKAGE LOCATION: ./slackware/l
PACKAGE SIZE (compressed): 348 K
PACKAGE SIZE (uncompressed): 930 K
PACKAGE DESCRIPTION:
mpfr: mpfr (Multiple-Precision Floating-Point Reliable Library)
mpfr:
mpfr: The MPFR library is a C library for multiple-precision floating-point
mpfr: computations with exact rounding (also called correct rounding).
mpfr: It is based on the GMP multiple-precision library.
mpfr: The main goal of MPFR is to provide a library for multiple-precision
mpfr: floating-point computation which is both efficient and has
mpfr: well-defined semantics. It copies the good ideas from the
mpfr: ANSI/IEEE-754 standard for double-precision floating-point arithmetic
mpfr: (53-bit mantissa).
mpfr:

# slackpkg install mpfr
After installing mpfr, I was able to compile the Nvidia module for the running kernel.

Finally, kdeinit4 was failing with an error about loading the libstreamanalyzer.so.0 and libqimageblitz.so shared libraries. I fixed this by installing strigi and qimageblitz.
# slackpkg info strigi

PACKAGE NAME: strigi-0.6.3-i486-1.tgz
PACKAGE LOCATION: ./slackware/l
PACKAGE SIZE (compressed): 904 K
PACKAGE SIZE (uncompressed): 2570 K
PACKAGE DESCRIPTION:
strigi: strigi (fast and light desktop search engine)
strigi:
strigi: Strigi is a fast and light desktop search engine. It can handle a
strigi: large range of file formats such as emails, office documents, media
strigi: files, and file archives. It can index files that are embedded in
strigi: other files. This means email attachments and files in zip files
strigi: are searchable as if they were normal files on your harddisk.
strigi:
strigi: Homepage: http://strigi.sourceforge.net/
strigi:

# slackpkg info qimageblitz

PACKAGE NAME: qimageblitz-r900905-i486-1.tgz
PACKAGE LOCATION: ./slackware/l
PACKAGE SIZE (compressed): 82 K
PACKAGE SIZE (uncompressed): 240 K
PACKAGE DESCRIPTION:
qimageblitz: QImageBlitz (Graphical effect and filter library for KDE4)
qimageblitz:
qimageblitz: Blitz is a graphical effect and filter library for KDE4.0 that
qimageblitz: contains many improvements over KDE 3.x's kdefx library
qimageblitz: including bugfixes, memory and speed improvements, and MMX/SSE
qimageblitz: support.
qimageblitz:

# slackpkg install strigi qimageblitz
After those minor issues, I had KDE4 up and running with the latest Nvidia driver.

22 December, 2008

Answers to NIDS management

C.S. Lee had a post called NIDS: Administration, Management & Provisioning that asked some good questions about managing large numbers of NSM sensors. I have managed large numbers of sensors in the past, so I thought I would take a shot at describing some of the ways I eased management, as well as other methods I still look forward to trying. Since my post is long, I thought it better to write it here than stuff it all into a comment on geek00l's blog.

A couple of things to remember: first, there are almost always ways to improve complex systems management. Second, "perfect" is the enemy of "good enough". At some point you hit diminishing returns, and the cost of further improving the management or administration of the systems may not be worth the reward.

1. What tools do you use to manage all the NIDS, and why you choose them over others?
- For example ssh, however I would like to know more about tools you use to manage massive NIDS instead of one, and the reason you choose it.
SSH is obviously going to be one way to log in to systems and do certain things. If it is something that you must do consistently, then scripts or other system management methods that I will discuss later are likely more appropriate. When using SSH for a large number of systems, don't forget that SSH keys and ssh-agent are your friends. With ssh-agent, you can log in to all your systems with your SSH key after entering your passphrase only once. This simplifies running scripts that require logging into or copying files to each system.
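As a quick illustration of that workflow, with hypothetical host names:

$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa
$ for host in sensor1 sensor2 sensor3; do ssh $host uptime; done

After the single passphrase prompt from ssh-add, the loop logs into each sensor without asking for credentials again.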

When I talk about using SSH along with scripts, I also mean programs that support SSH as the transport protocol, such as rsync and rdist. Expect scripts are another common way to roll your own centralized management of systems, but for C.S. Lee's 50+ system question, a dedicated application seems to be a better answer than only using scripts and logging in manually.
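For example, a push of a rules directory to a sensor over SSH might look like the following one-liner, with placeholder paths:

$ rsync -avz -e ssh /etc/nsm/rules/ sensor1:/etc/nsm/rules/

The trailing slash on the source tells rsync to copy the directory's contents rather than the directory itself.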
2. How do you perform efficient administration securely? For examples,
- System changes/updates
- NIDS tools' changes/updates
- NIDS rules' changes/updates
- NIDS Configuration files' changes/updates
- NIDS Policies' changes/updates
I think these types of changes and updates will require a combination of tools, and the tools could depend in part on the operating system(s). Having multiple operating systems makes management more complex, so ideally you want to standardize on one operating system and keep the release versions identical whenever possible.

One thing I've mentioned in the past for system management is puppet.
Puppet lets you centrally manage every important aspect of your system using a cross-platform specification language that manages all the separate elements normally aggregated in different files, like users, cron jobs, and hosts, along with obviously discrete elements like packages, services, and files.
Although I haven't yet had the chance to use puppet, it seems to have a good reputation. Another option is cfengine, though most people I have talked to that have experience with both seem to prefer puppet. Change management of configuration files, cron scripts, and other files like NIDS rules can definitely be handled by one of these central management tools.
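To sketch what that looks like in practice, a Puppet file resource for keeping a rules file consistent across sensors might be something like this, where the module name, paths, and service are all hypothetical:

# assumes a service { 'snort': } resource is declared elsewhere
file { '/etc/nsm/rules/local.rules':
  owner  => 'root',
  group  => 'root',
  mode   => '644',
  source => 'puppet:///modules/nids/local.rules',
  notify => Service['snort'],
}

Every node that applies the resource gets the same file, and the sensor process is restarted whenever the file changes.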

Another thing to consider is whether your operating system or its vendor includes anything for these tasks. For instance, Red Hat Network Satellite can handle a lot of centralized management, including package management. NIDS/NSM sensors often need configuration changes from the standard distribution package for certain software, so being able to roll your own packages and push them to sensors automatically can drastically reduce system management overhead.

Although puppet seems to handle users, I've also written three posts about OpenLDAP for centralized management of users and groups [1, 2, 3]. With most current Linux or BSD, once LDAP is configured it is pretty easy to manage users, groups, and even sudo. Since I've worked in environments with not just large numbers of Linux systems, but also large numbers of users, LDAP was definitely useful. With a small number of users on large number of systems, I'm not sure that it would be needed.

For the security requirement, any good centralized management system had better have some sort of authentication and encryption. Puppet supports a CA and SSL, cfengine supports RSA and Blowfish along with public-private keys, and Red Hat Satellite supports SSL and GPG. Other basics, such as host-based firewalls like iptables, can also be useful for limiting exposure and access from the network.

Truthfully, I have mostly relied on home-grown scripts combined with SSH, rsync and/or rdist to push files or commands to Linux systems. However, with the number of systems I have managed, the up-front cost of implementing something like puppet, cfengine, or Satellite would be worth the long-term benefits.
3. Which method you like to use in order to manage them, and why? For example,
- Server pushes rules update to all the sensors(Push)
- Sensors pull the rules update from server(Pull)
I think this question is largely moot because it will usually be determined by the management tools you are using. For instance, Red Hat runs a daemon on the individual systems that will check in either with Red Hat Network or with your local Satellite Network.

When using scripts, I will usually use a push, simply because I like to log in to one system and then run a script that connects to all the other systems to copy files or run commands.
4. NIDS health monitoring and self-healing
- I'm talking about something like this: if the system is in an inconsistent state, operators will be notified. If a certain process dies, it should recover by itself.
The obvious answer to monitoring processes is something like Nagios, an open source solution. Nagios can also handle restarting services or processes through event handlers. Realistically, any software that monitors services should have the ability to restart those services if needed. Another example of process monitoring and restarting is daemontools, but it does not really meet monitoring needs for an enterprise and is fairly limited. There are additional choices of monitoring software, as well.
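At its simplest, self-healing can be a cron-driven watchdog; the process name and init script below are placeholders:

#!/bin/sh
# restart the sensor process if it is not running
pgrep -x snort > /dev/null || /etc/init.d/snort start

Nagios event handlers accomplish the same thing while adding central alerting, escalation, and a record of how often a service is flapping.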

13 November, 2008

OpenLDAP Security

Since I have been doing a lot of system administration blogging lately and not much on security, I decided I should post something related to security even if it is still in reference to system configuration and administration. Despite being years old, many of the pages I found about LDAP and security were still pertinent, for example Security with LDAP. The OpenLDAP documentation has a whole section titled Security Considerations in addition to other sections throughout that address security.

The TLDR version of this post is that some of the defaults for OpenLDAP may not be secure, it is easy to make other configuration mistakes, and you should make sure to examine configurations, permissions, ACLs, and schemas with security in mind. Different distributions can have different defaults. If you are using LDAP for account information in particular, you need to be careful.

I will go over some specifics that I noticed, but I certainly won't cover everything. OpenLDAP can be configured to provide a level of protection for account information similar to the standard Unix/Linux shadow files, and it actually makes some security-related tasks easier for an administrator, such as disabling accounts or enforcing password policies.

Encryption of Data and Authentication

The first and most obvious problem is that the default OpenLDAP configuration does not encrypt network activity. Whether you're using LDAP for account information or not, it is very likely that most people will not want their LDAP traffic going over the network unencrypted. OpenLDAP has support for TLS that makes it relatively easy to implement. Also important to note is that, though network activity is not protected by default, the minimum recommended authentication mechanism is SASL DIGEST-MD5.

The DIGEST-MD5 mechanism is the mandatory-to-implement authentication mechanism for LDAPv3. Though DIGEST-MD5 is not a strong authentication mechanism in comparison with trusted third party authentication systems (such as Kerberos or public key systems), it does offer significant protections against a number of attacks.
Another option, Kerberos, is also "highly recommended" for strong authentication services.
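Enabling TLS in slapd.conf comes down to a few directives; the certificate paths here are assumptions for illustration:

TLSCACertificateFile /etc/openldap/cacerts/ca.pem
TLSCertificateFile /etc/openldap/certs/slapd-cert.pem
TLSCertificateKeyFile /etc/openldap/certs/slapd-key.pem

Clients can then either connect with ldaps:// or issue StartTLS on the standard LDAP port.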

Passwords

When using OpenLDAP with nss_ldap and centralized accounts, if you're storing passwords in LDAP they should be stored as hashes, not plain text. This seems obvious, but it's important to understand how to generate the hashes with the 'slappasswd' command and then use 'ldapadd', 'ldapmodify' or a GUI LDAP management tool to put the hashes into LDAP. This is done when creating or altering accounts. The 'passwd' command will hash passwords automatically when users change their own passwords.
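For example, generating a hash and loading it into a hypothetical entry might look like this:

$ slappasswd
New password:
Re-enter new password:
{SSHA}<hash output>

$ ldapmodify -x -D "cn=Manager,dc=example,dc=com" -W
dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: {SSHA}<hash output>

Paste the {SSHA} string from slappasswd into the userPassword value; never put the clear text password in the LDIF.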

Different distributions have different default ACLs; RHEL, for example, allows anonymous reads of LDAP by default, and another sample ACL included in the openldap-servers package allows authenticated users read access to everything. If you're going to store account information and passwords in LDAP, the access controls need to be changed to prevent both anonymous and authenticated users from viewing password hashes. As we all should know, all it takes to crack a password hash is an appropriate tool and processing time. Depending on the attacker's computing power, the password hashing algorithm, and the actual password, cracking a password can be extremely fast or very slow.

OpenLDAP supports a number of hashing algorithms and the default is to use {SSHA}, which is SHA-1 with a seed.
-h scheme
If -h is specified, one of the following RFC 2307 schemes may be specified: {CRYPT}, {MD5}, {SMD5}, {SSHA}, and {SHA}. The default is {SSHA}.

Note that scheme names may need to be protected, due to { and }, from expansion by the user's command interpreter.

{SHA} and {SSHA} use the SHA-1 algorithm (FIPS 160-1), the latter with a seed.

{MD5} and {SMD5} use the MD5 algorithm (RFC 1321), the latter with a seed.

{CRYPT} uses the crypt(3).

{CLEARTEXT} indicates that the new password should be added to userPassword as clear text.

This is fine when setting initial passwords, but note that the 'passwd' command on Linux systems will generally use MD5 or the 'crypt' function instead of SHA-1, depending on system configuration.

ACL Problems

There can also be problems related to Access Control Lists for slapd. Red Hat's default configuration allows anonymous reads, while Ubuntu's slapd.conf seems to have a much more secure default ACL. Below is RHEL5's default, which allows both anonymous and authenticated reads but only lets the rootdn write.
access to * by * read
The following is a sample configuration that is also included in the default slapd.conf on RHEL5, though it is commented out in favor of the above ACL. The danger with the following is that users still can read everything as well as write their own entries.
access to *
by self write
by users read
by anonymous auth
Allowing users 'self write' to change their own entries is obviously a big problem if you're using LDAP for account information. Any user can change his own uidNumber or gidNumber to become uid 0, gid 0, gid 10 (wheel), etc. Not good!

To authenticate with nss_ldap, OpenLDAP must allow some sort of read access. Without anonymous reads, users can't authenticate unless there is a proxy user with read access. The proxy user's binddn and password must be stored in plain text in /etc/ldap.conf and /etc/openldap/ldap.conf, and those files are world readable. This is somewhat mitigated because the ldap.conf files can only be accessed by users already logged into the system, so if an attacker has gained access to the system, the proxy user password is a fairly trivial concern in the big scheme of things.
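The relevant nss_ldap directives are just these two, with placeholder values:

# /etc/ldap.conf
binddn cn=proxyuser,dc=example,dc=com
bindpw notasecret

Because the file is world readable, the proxy user should have no rights beyond the reads needed for lookups and authentication.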

Another file with a plain text password is /etc/ldap.secret. This file must contain the rootdn password in plain text, but is again somewhat mitigated with file permissions. The permissions for the file must be set to 600 so only root can read the file, so the obvious way an attacker will get the rootdn password from the file is if he already has root privileges on that particular system. However, with the rootdn password the attacker could wreak havoc on all the LDAP entries, including all the account information stored in LDAP.

To prevent users from viewing the password hashes of others, two things are required. First, change the ACL in slapd.conf. Something like the following would allow users to change their own passwords, but not change any other attributes and not view other users' hashes. You can hide additional attributes from authenticated users if needed.
access to attrs=userpassword
by anonymous auth
by self write
by * none

access to *
by self read
by users read
by anonymous auth
Second, put users in the objectclass 'shadowAccount', which is in the NIS schema along with the objectclass 'posixAccount' that stores most account information. This will prevent password hashes from displaying when using 'getent passwd'. This is similar to shadow passwords on the local system, which move the password hashes from the world-readable /etc/passwd to /etc/shadow, which is only readable by root. The 'getent' commands will fetch both local and LDAP information.
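A minimal user entry combining the two objectclasses might look like the following, with every name and number made up for illustration:

dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John Doe
uid: jdoe
uidNumber: 1001
gidNumber: 100
homeDirectory: /home/jdoe
loginShell: /bin/bash
userPassword: {SSHA}<hash>

With the ACL above in place, 'getent passwd' run by a normal user shows a placeholder in the password field instead of the hash.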

Password Policy Overlay

The password policy overlay for OpenLDAP allows password policies to be enforced on OpenLDAP accounts. Quoting from the documentation:
The key abilities of the password policy overlay are as follows:
  • Enforce a minimum length for new passwords
  • Make sure passwords are not changed too frequently
  • Cause passwords to expire, provide warnings before they need to be changed, and allow a fixed number of 'grace' logins to allow them to be changed after they have expired
  • Maintain a history of passwords to prevent password re-use
  • Prevent password guessing by locking a password for a specified period of time after repeated authentication failures
  • Force a password to be changed at the next authentication
  • Set an administrative lock on an account
  • Support multiple password policies on a default or a per-object basis.
  • Perform arbitrary quality checks using an external loadable module. This is a non-standard extension of the draft RFC.
Particularly for people who have specific company requirements for password policies, this overlay will do just about everything except complexity checking. For complexity checking, it's fairly easy to enable and configure pam_cracklib on each client, as shown below. As far as I know, since only the hash crosses the wire when authenticating or changing passwords, it is not possible to centrally enforce complexity requirements.
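For reference, the client-side complexity check is a single line in the PAM password stack; the specific requirements here are arbitrary examples:

password requisite pam_cracklib.so retry=3 minlen=10 dcredit=-1 ucredit=-1 lcredit=-1

That requires at least ten characters including a digit, an uppercase letter, and a lowercase letter before the new password ever reaches pam_ldap.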

Personally, for password expiration I prefer not to allow any grace logins, thereby enforcing a lockout when the password expires. As long as the policy provides ample warning, this shouldn't cause problems. Consider what happens if you allow some number of 'grace' logins and a user does not log in for an extended period after the password expires: the account could conceivably remain active for as long as it takes to brute force the password, rather than being disabled once the password expired.

Another password policy overlay feature is temporary lockouts after failed authentication. For instance, you could set a lockout after x failed login attempts within y seconds, with the lockout lasting z seconds. I don't know the maximum number of seconds the overlay or OpenLDAP will accept, but for fields like pwdMaxAge the value can definitely range from zero up to months' worth of seconds if needed.
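A sample policy entry tying these attributes together might look like the following; the DN, container, and values are all assumptions:

dn: cn=default,ou=policies,dc=example,dc=com
objectClass: person
objectClass: pwdPolicy
cn: default
sn: default
pwdAttribute: userPassword
pwdMaxAge: 7776000
pwdInHistory: 5
pwdMaxFailure: 5
pwdFailureCountInterval: 60
pwdLockout: TRUE
pwdLockoutDuration: 900

Read as: passwords expire after 90 days, the last five hashes are remembered, and five failures within a minute lock the account for 15 minutes.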

When enabling 'pwdReset' to require an immediate password change, I eventually found that the following line must be uncommented in /etc/ldap.conf (the pam_ldap configuration, not slapd.conf).
pam_lookup_policy yes
After doing this, you can set 'pwdReset: TRUE' when generating temporary passwords, and the user will then be required to change the password immediately upon logging in.

From my testing, the password policy overlay is definitely superior to the shadow password options in the nis.schema that comes with OpenLDAP. The biggest problem with the overlay is that some distributions may not include it in their OpenLDAP server package, which means compiling OpenLDAP with support for the overlay instead of using a standard package from your distribution of choice.

Post Script

I have two previous posts about OpenLDAP. I would love to get any comments on what could be added or any mistakes that could be corrected.

Setting up OpenLDAP for centralized accounts
OpenLDAP continued