22 April, 2013

Home Lab Part 2: VMware ESXi, Security Onion, and More

As I stated in my previous post about a new home lab configuration, I decided to try VMware ESXi 5.1 on my new Shuttle SH67H. ESXi is free for uses like this, presumably because it clearly benefits VMware if professionals can use it in a lab setting and that encourages use of their paid products in production. I have seen some conflicting accounts, but it appears that the main limit on the free version of ESXi 5.1 is 32GB of RAM.

I won't go into too much detail about the installation since it is adequately covered by a couple of other posts I found prior to purchasing my system.

I will mainly cover details that stood out and things I discovered as someone new to ESXi.

I had already planned to get a Shuttle for the small form factor, low noise, and low power usage. Finding out that the SH67H could be used as a white box for ESXi made it easy to pick an initial project once I built the system. (Okay, we could quibble over whether a Shuttle counts as a white box). Additionally, since my previous home network sensor running Sguil had died, I figured that the first VM to build would be Security Onion but that I'd still be able to run multiple other VMs without impacting my home lab NSM.

Getting ESXi installed on the Shuttle was pretty simple. After booting to CD, I just followed the prompts and made sane choices. The one thing to note is that I installed ESXi to an external USB flash drive. Since the OS is so small, it gets loaded primarily to RAM at boot anyway. Using a flash drive has some advantages and some disadvantages, as covered in many threads on the VMware and other discussion boards. For my home lab I decided to install to the flash drive, but chances are it will actually make no difference to me. Some ESXi servers have no local storage, so I imagine using a USB flash drive is particularly common on those systems.

After using a directly connected keyboard and monitor for the installation, I moved the system into my home "server closet" and booted it headless. I installed the vSphere Client on my local Windows VM since I don't have a non-VM Windows installation. The vSphere Client was surprisingly easy to use, and I might even go as far as calling it user-friendly. You can see in the screenshot below that it is relatively straightforward.
The first thing I noticed was that, because I installed ESXi to the flash drive, I got the error shown in the screenshot: "System logs on host vmshuttle are stored on non-persistent storage."

This error was only temporary. I am not sure exactly when it was resolved, most likely after a reboot or after I created the initial guest VM, but the system created a ".locker" directory in which I can clearly see all the logs. I am assuming they are persistent since vmstore0 is the internal 1TB hard drive, not the USB flash drive.

# cd /vmfs/volumes/vmstore0/
# ls -l .locker/
drwxr-xr-x    1 root     root           280 Apr  7 16:01 core
drwxr-xr-x    1 root     root           280 Apr  7 16:01 downloads
drwxr-xr-x    1 root     root          4340 Apr 16 02:15 log
drwxr-xr-x    1 root     root           420 Apr  7 16:01 var


I believe another option for fixing the error would be to manually set the scratch partition as detailed in VMware's Knowledge Base. Note that I haven't actually tried that to date.
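If I did want to set it manually, I believe the scratch location can also be pointed at persistent storage from the ESXi shell using vim-cmd, something like the example below. The path is just an example based on my datastore, I have not tested this on my host, and a reboot is required before the change takes effect.

~ # mkdir -p /vmfs/volumes/vmstore0/.locker
~ # vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/vmstore0/.locker   # untested; example path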

Before being able to SSH into the ESXi host and look at the above directories and files, I had to enable SSH. The configuration for SSH and a number of other security-related services is available in vSphere by highlighting the host (since in the workplace you may use vSphere to manage multiple ESXi systems), then going to the Configuration tab, Security Profile, and finally SSH Properties. If you haven't noticed already, ESXi defaults to using root for everything. I haven't yet investigated the feasibility of locking down the ESXi host, but I think it's safe to say most people will rely on keeping the host as isolated as possible since the host OS is not particularly flexible or configurable outside the options VMware provides.
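As an aside, if you already have access to the local ESXi Shell (for example, through the DCUI on a directly connected console), I believe the SSH service can also be enabled and started with vim-cmd instead of going through vSphere:

~ # vim-cmd hostsvc/enable_ssh    # turn the SSH service on
~ # vim-cmd hostsvc/start_ssh     # start it for the current boot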

I decided the best way to use vSphere would be to copy my Windows 7 VM from my laptop to the ESXi host. Trying to scp the VM and then add it to the inventory never worked properly. I had similar problems when trying to scp a CentOS VM from my laptop. When I tried browsing the datastore in vSphere and adding a local machine to the remote inventory, it would get partway through and then fail with an I/O error. I believe this was all actually a case of a flaky wireless access point, but even in cases where I successfully copied the CentOS VM, I got errors when trying to add it to the inventory.

I eventually got it to work by converting the VM locally using ovftool and then deploying it to ESXi. OVF is the Open Virtualization Format, an open standard for packaging virtual machines. The syntax to convert an existing VM is simple. First, make sure the VM is powered down rather than just paused. On OS X with VMware Fusion, exporting a VM is easy.

~ nr$ cd /Applications/VMware\ Fusion.app/Contents/Library/VMware\ OVF\ Tool/
~ nr$ ./ovftool -dm=thin ~/Documents/Virtual\ Machines.localized/Windows\ 7\ x64.vmwarevm/Windows\ 7\ x64.vmx ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf

After the conversion, the VM still needed to be exported to the ESXi host. I plugged my laptop into a wired connection to speed the process and eliminate any issues I was having over wireless, then sent the VM to ESXi. The options I used are to set the datastore, disk mode, and network.

~ nr$ ./ovftool -ds=vmstore0 -dm=thin --network="VM Network 2" ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf vi://192.168.1.10/

Once the VM is copied to the host, you will need to browse the datastore and add the VM to the ESXi inventory (or register it from the command line as shown below). VMware does not endorse other ways of moving a VM to ESXi; using OVF is the officially recommended method for importing VMs.
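For the command-line route, I believe the .vmx file on the datastore can be registered over SSH with vim-cmd, which returns the new Vmid on success. The path below is just an example based on my VM; adjust it for yours.

~ # vim-cmd solo/registervm /vmfs/volumes/vmstore0/Windows\ 7\ x64/Windows\ 7\ x64.vmx   # example path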

All things considered, getting ESXi installed and configured was relatively easy. There are certainly drawbacks to using unsupported hardware. For example, vSphere does not display CPU temperature and other health or status information. I believe ESXi expects to use IPMI for hardware status like voltages, temperatures, and more. There are options to consider for anyone wanting a home lab on supported hardware. VMware maintains a lengthy HCL, and I presume systems on their list support all the health status information in vSphere. I did find several options for buying used servers, like a Dell PowerEdge 2950, at reputable sites for about $650. Since I didn't want the noise, don't have a rack, and may not keep the system as a dedicated ESXi host, I did not go that route for a lab system.

Building a Security Onion VM


As stated, the first VM I built was Security Onion. I did this through the vSphere client and include some screenshots here. Most of this applies to building any VM using vSphere.

After choosing the option to create a new VM, I selected a custom configuration. I named the VM simply "Security Onion" and chose my only datastore, vmstore0, as the storage location. I am not concerned with backwards compatibility, so I chose "Virtual Machine Version 8." I chose only one core and one socket for the CPU, but allocated 4GB of RAM since I knew the combination of Suricata, Bro, and other monitoring processes would eat a lot of RAM. I was installing 64-bit Security Onion, so I chose 64-bit Ubuntu as the Linux version.
Choosing the number of NICs, network, and adapter to use when initially configuring the VM
I selected two NICs, both using VMXNET 3, which was probably the first non-standard selection in my custom configuration. I wanted to make sure I had separate management and promiscuous mode NICs since this will be a sensor. Note that VMXNET 3 should not even appear as a choice if the guest OS you selected earlier doesn't support it.

I next chose LSI Logic SAS for the SCSI controller. Although I think it won't matter for Ubuntu, note the following from VMware's local help files.
"The LSI Logic Parallel adapter and the LSI Logic SAS adapter offer equivalent performance. Some guest operating system vendors are phasing out support for parallel SCSI in favor of SAS, so if your virtual machine and guest operating system support SAS, choose LSI SAS to maintain future compatibility."
This is a good time to point out that hitting the "Help" button in vSphere will open the local help files in your browser, and they contain genuinely useful information about the differences between choices when configuring a VM. During the new-VM wizard, the Help button even opens the specific page relevant to the options on the current step. In general, both the help files and the Knowledge Base seem quite useful.

Finally, I created the virtual disk. This includes deciding whether to thin provision, thick provision lazy zeroed, or thick provision eager zeroed; thick provisioning allocates all the disk space ahead of time, and eager zeroing also zeroes it out at creation. The documentation states that eager zeroed disks support clustering features such as fault tolerance. I chose thick provisioning for my Security Onion VM since I knew with certainty that the virtual disk would get filled with NSM data like packet captures and logs. There are a number of KB and blog posts on the VMware site that detail advantages and disadvantages of the different provisioning methods.
The final settings for my Security Onion VM
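Incidentally, the provisioning choice does not appear to be permanent. If I read the vmkfstools documentation correctly, an existing virtual disk can be cloned into a different format from the ESXi shell with the VM powered off, roughly like the example below (file names are just placeholders), after which you would point the VM at the new disk.

~ # cd /vmfs/volumes/vmstore0/Security\ Onion
~ # vmkfstools -i "Security Onion.vmdk" -d eagerzeroedthick "Security Onion-ezt.vmdk"   # clone to an eager zeroed thick copy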

Once the VM was configured on ESXi, I still needed to actually install Security Onion. You can do it the old-fashioned way by burning a disc and using the CD/DVD drive, but I instead mounted the ISO. To do this, you just start the VM, which doesn't yet have an OS, open a console in vSphere, and click the button to mount the ISO in the virtual CD drive so the VM boots to the disc image and starts the installation process. The vSphere console looks and works much like the consoles in Fusion or Workstation, and mounting the ISO works essentially the same way.

The time it took from hitting the button to create a VM to the time I had a running Security Onion sensor was quite short. I had a couple of small problems after the initial installation. First, in ESXi you have to manually allow promiscuous mode in the security settings of the virtual switch or port group used by the sniffing NIC before the guest can see all the traffic. My sniffing interface was initially not seeing the traffic when I checked with tcpdump, which made me realize it was probably not yet in promiscuous mode.
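Once that was changed, a quick check from inside the Security Onion VM confirmed traffic was visible on the monitoring interface. Here eth1 is just an assumption; substitute whichever interface you dedicated to sniffing.

$ sudo tcpdump -nn -i eth1 -c 10   # eth1 assumed to be the sniffing interface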

Second, the 4GB of RAM and one CPU I had initially allocated were insufficient. When the sensor was running and I tried to update Ubuntu, the system became very unresponsive. I eventually doubled the RAM to 8GB and the number of cores to two, which resolved the issue. At this point I could probably drop back down to 4GB of RAM, but since the system has 32GB I don't need to worry about it yet.

Other ESXi Notes


Although ESXi is stripped pretty bare of common Linux utilities and commands, there is plenty you can do from a command line through SSH instead of using vSphere. For example, to list all VMs on the system, power on my Windows 7 VM, and find the IP address so I can connect through RDP:

# vim-cmd vmsvc/getallvms
Vmid        Name                            File                           Guest OS       Version   Annotation
1      Security Onion   [vmstore0] Security Onion/Security Onion.vmx   ubuntu64Guest      vmx-08              
13     Windows 7 x64    [vmstore0] Windows 7 x64/Windows 7 x64.vmx     windows7_64Guest   vmx-09              
6      CentOS 64-bit    [vmstore0] CentOS 64-bit/CentOS 64-bit.vmx     centos64Guest      vmx-08 
~ # vim-cmd vmsvc/power.on 13
~ # vim-cmd vmsvc/get.guest 13 | grep -m 1 ipAddress
   ipAddress = "192.168.1.160",

I can also get SMART information from the hard drive if needed, first by listing the storage devices to find the device name and then querying it.

~ # esxcli storage core device list
t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Display Name: Local ATA Disk (t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF)
   Has Settable Display Name: true
   Size: 953869
   Device Type: Direct-Access 
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Vendor: ATA     
   Model: ST1000DM003-1CH1
   Revision: CC44
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters: 
   VAAI Status: unknown
   Other UIDs: vml.01000000002020202020202020202020205a31443347484b46535431303030
   Is Local SAS Device: false
   Is Boot USB Device: false

~ # esxcli storage core device smart get -d t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A  
Media Wearout Indicator       N/A    N/A        N/A  
Write Error Count             N/A    N/A        N/A  
Read Error Count              115    6          99   
Power-on Hours                100    0          100  
Power Cycle Count             100    20         100  
Reallocated Sector Count      100    10         100  
Raw Read Error Rate           115    6          99   
Drive Temperature             32     0          40   
Driver Rated Max Temperature  68     45         65   
Write Sectors TOT Count       200    0          200  
Read Sectors TOT Count        N/A    N/A        N/A  
Initial Bad Block Count       100    99         100  

There is a lot more you can do from the ESXi command line interface, but I should emphasize again that it is stripped fairly bare and lacks many of the commands you would expect coming from a Linux or Unix background. Even some of the utilities that are available are missing options or functionality you might take for granted. The CLI commands will generally list options or help when run without arguments, as shown below, and you can also get plenty of CLI documentation from VMware.
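For example, running a command or namespace with no arguments prints the available sub-commands:

~ # vim-cmd               # lists the vim-cmd command namespaces
~ # esxcli network        # lists the sub-namespaces and commands under "network"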


Next Steps


I now have a number of VMs installed, including a CentOS snapshot, FreeBSD, and my Windows 7 VM. My next steps will include setting up some VLANs to have some fun with a vulnerable network and an attacker network that will include Kali Linux. I am intimately familiar with Sguil and some of the other tools in Security Onion, but I also hope to dig into Suricata and Bro more than I have in the past.

I also hope that my lab will provide some interesting material for future blog posts.

04 April, 2013

New Home Lab Configuration

I received all my new equipment for my home lab a couple of days ago. After setting up the hardware in less than a day, I'm quite happy with it so far.

I was lucky enough to have two 12-year-olds assist me when I assembled the computer. This was their first time assembling a computer from parts and they really enjoyed it.

The first component was the Shuttle SH67H3. My friend Richard recommended the DS61, but I had two main problems with that barebones system. First, it only has two RAM slots for a maximum of 16GB. That's not bad, but I decided I wanted to get a desktop that supported more RAM without going to server components while keeping the form factor small. I actually may have a second system on my purchase list for sometime this year, and in that case I would definitely consider the DS61.

Second, I had read that the SH67H3 worked as an ESXi whitebox. Overall, I am a fan of Shuttle barebones. The SH67H3 is essentially the same chassis my coworkers and I used on our lab network at a previous job, just with a new motherboard and other improvements. The parts I used for my Shuttle are very similar or identical to the ones listed in the ESXi whitebox link above.

When we popped the case open, it all looked pretty familiar and I explained the various pieces to the 12-year-olds. We removed the fan and heat sink array, which is a pretty nice low-noise setup. The case fan actually slides over the second heat sink so air blows over it on the way out the back of the chassis.
Don't forget to remove both the sticker from the heat sink and the plastic film on the CPU load plate before putting the CPU in the socket. After we inserted the Intel Core i7 2600, we applied thermal paste, reattached the passive cooling, and finally reattached the fan, including plugging it back into the motherboard. We also inserted the four 8GB RAM sticks.
Shuttle SH67H3 motherboard with CPU and RAM installed
The fan slides over the heat sink at the rear of the chassis on the left
Next, we put the DVD/CD drive and hard drive into the tray, attached the tray to the chassis, and connected the SATA and power cables. I also added a dual-port Intel PRO/1000 PT NIC to give a total of three physical network interfaces. Finally, we tested the system and everything appeared to be working.


New Network Architecture


Since I was going to all this trouble for a computer that is relatively powerful compared to my three old Pentium III servers, I decided to take the opportunity to make a couple of other network changes. I used to run my network sensor inline, but along with the new computer I purchased a Netgear GS108T-200 smart switch. This switch has an abundance of features, including VLANs and port mirroring. Along with the new switch, all I needed was an extra WAP to reconfigure my home network as shown below.
The router/firewall also works as a WAP, but to see most client traffic I disabled it and connected an access point behind the mirroring switch
With this configuration, the switch mirrors traffic to a dedicated network interface on my network sensor. The only traffic the mirror port won't see is traffic that never reaches the switch. I can also configure VLANs on the switch if I want to segment the network based on functions like management interfaces and WiFi clients.

I plan to write more about my lab setup as I continue to redevelop it. The first thing I did after testing the new box was install ESXi and create a network sensor VM using Security Onion. I may have a post about it soon.