26 December, 2007

QuickTime exploit, infection, and NSM

I've recently seen malware that grabs data from HTTP POST requests and sends it to an external web server. Reliable signatures for this malware have been part of the Bleeding rules for many months.

I would hope that up-to-date anti-virus would catch this malware; there is really no excuse for missing it. But the data theft doesn't have to be the only malicious result of a successful exploit. It's always a good idea to remember that whatever method was used to compromise a system could have been used to do more than what is easily observable.

Using network security monitoring (NSM), I worked backwards from the first check-in and found that the infection vector was the latest QuickTime exploits. Specifically, I saw malicious QTL files in web traffic prior to the infection symptoms. I'm not positive that the most recent vulnerabilities are being exploited rather than one of the numerous older QuickTime vulnerabilities, but the timing of the activity suggests the more recent ones.

Even with a reliable IDS signature, you still have to ask whether detection alone is worth the risk of losing the data in those POST requests. I don't think it is, particularly when there is a common external web server that can be blocked to prevent the data extrusion.

There are a few ways to block the data in a case like this. An inline IDS is one, though it may not catch all the outbound data. In this case the signatures carry an extremely low risk of blocking non-malicious traffic, but that doesn't mean they'll catch all the malicious traffic either. They will still accurately point out infected systems even when configured to drop the traffic.
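
As a rough illustration, an inline rule for the check-in might look something like the sketch below. This is not one of the actual Bleeding rules; the URI, message, and SID are made up for the example.

    # Hypothetical inline rule, loosely in the style of the Bleeding
    # rules; the check-in URI, message, and SID are invented.
    drop tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS \
        (msg:"LOCAL Trojan check-in (hypothetical)"; \
        flow:to_server,established; content:"GET "; depth:4; \
        uricontent:"/checkin.php"; nocase; \
        classtype:trojan-activity; sid:1000001; rev:1;)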

Blocking the IP address at a firewall would work well to prevent data loss, but it has two main drawbacks. First, you won't detect infected systems with your IDS, at least not with the current signatures: the TCP handshake will never complete, let alone the GET request that triggers the alert. Second, and much less problematic, you could be blocking many websites hosted on that particular IP address. I say less problematic because one compromised virtual host may mean you should treat the whole server as compromised, in which case it is better to block the IP than just a FQDN.

Another way to block the server receiving the stolen POST data is with an appliance or software that filters web traffic. The advantage here is that connection attempts may still generate IDS alerts even though the connection actually stops at the web filter rather than the external server. Depending on your web filter, you can potentially block all the traffic while still generating the IDS alerts that reveal infected systems.
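
For example, if the web filter were something like Squid, a couple of lines would deny the traffic at the proxy while a sensor watching the client side still sees the request. The host name here is made up; a real rule would use the actual check-in server.

    # Hypothetical Squid ACL; the domain is made up for the example.
    # Place the deny above any http_access allow lines.
    acl checkin_server dstdomain .bad-example.net
    http_access deny checkin_server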

If you block connections to a web server that is a known check-in location, NSM is very useful for finding the infected hosts. Put together a script or database query that looks at your session data for repeated connection attempts to the blocked site, and you'll find any systems that may be infected and trying to check in. Session data will show each connection attempt even though there will be no responses.
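
As a sketch, assuming a Sguil-style SANCP session table where addresses are stored as unsigned integers (table and column names may differ in other deployments, and the address is a placeholder):

    -- Hypothetical query: count attempts per internal host to the
    -- blocked check-in server. 192.0.2.80 is a placeholder address;
    -- adjust table and column names to match your schema.
    SELECT INET_NTOA(src_ip) AS source,
           COUNT(*) AS attempts,
           MIN(start_time) AS first_seen,
           MAX(start_time) AS last_seen
      FROM sancp
     WHERE dst_ip = INET_ATON('192.0.2.80')
     GROUP BY src_ip
     ORDER BY attempts DESC;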

02 December, 2007

What I read

One of the things I've noticed about a lot of successful information security professionals is how much they read. The pace and scope of the field make reading very important.

One site I find to read often leads to one, two, or many other interesting sites. Reading them all has taught me a huge amount on a wide variety of topics. Even when I'm just reading opinions rather than technical content, it is often food for thought.

The sites listed on the right represent just a portion of my regular reading. Some of them also mention or link to other good reading. I particularly like technical content: explanations or HOWTOs that can be used in the real world.

While it's probably impossible to read too much in this field, it's definitely possible to read too little.

Managing multiple systems

This post is really just my thoughts and notes on managing multiple Sguil sensors. Much of it could apply to managing multiple UNIX-like systems in general, whether or not they are NSM systems. I would love any feedback on what other people recommend for managing Unix deployments.

In the past, most Linux or FreeBSD systems I've managed have been standalone systems, not part of a larger group. I'm now well past the point where I should have come up with a more elegant, automated way to manage multiple systems with similar configurations.

In my current environment, I have multiple sensors with almost identical configurations. The configuration differences are minor and mainly because of different sensor locations.

Most of my management amounts to patching operating systems, patching software, and updating Snort signatures. To update signatures, I generally run Oinkmaster on a designated system, review the rule changes, disable or enable new rules as needed, and then run a script that pushes the updated rules to each sensor and restarts Snort.
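
The push script itself doesn't need to be fancy. A minimal sketch, assuming SSH keys are set up from the Oinkmaster box to the sensors; the host names, paths, and restart command are placeholders for whatever an environment actually uses:

    #!/usr/bin/env python
    # Sketch of a rule push: copy the reviewed rules to each sensor
    # and restart Snort. Hosts, paths, and the restart command are
    # placeholders, and passwordless SSH keys are assumed.
    import subprocess
    import sys

    SENSORS = ["sensor1.example.com", "sensor2.example.com"]
    RULES_DIR = "/etc/snort/rules"
    RESTART = "/etc/init.d/snortd restart"

    def run(cmd):
        # Stop immediately if any step fails on any sensor.
        if subprocess.call(cmd) != 0:
            sys.exit("failed: %s" % " ".join(cmd))

    for host in SENSORS:
        run(["scp", "-r", RULES_DIR, "root@%s:/etc/snort/" % host])
        run(["ssh", "root@%s" % host, RESTART])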

When I build new sensors, I manually install the operating system. With RHEL systems, I should really be using Kickstart to help automate the installation and configuration of new systems.
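
A sensor ks.cfg doesn't have to be elaborate to be useful. A minimal sketch, with placeholder partitioning, password, and package list, might start out something like this:

    # Hypothetical kickstart fragment for a sensor build; the password
    # hash, partitioning, and package list are placeholders.
    install
    lang en_US.UTF-8
    keyboard us
    timezone --utc America/New_York
    rootpw --iscrypted $1$placeholder$xxxxxxxxxxxxxxxxxxxxxx
    clearpart --all --initlabel
    autopart
    reboot

    %packages
    @base
    tcpdump
    mysql-server

    %post
    # Fetch and run the site configuration scripts here.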

After installing the operating system, I have a set of scripts that automate software installation and configuration. These scripts are similar to David Bianco's InstantNSM, except that mine are tailored specifically to my environment and also configure the operating system rather than just installing NSM software.

User account management is another facet of system management, but since access to NSM sensors is limited to an extremely small number of people, it is not a huge issue and I won't get into it here.

One site I've seen mentioned as a reference for managing numerous systems is Infrastructures.Org. The first thing I noticed there was the recommendation to use version control to manage configuration files. This is probably obvious to a lot of people, but I had always thought of version control in terms of software projects, not system management. Subversion, CVS, or your version control system of choice can be just as useful for managing systems as for managing source code. Another option is something like rdist, a program that maintains identical copies of files across multiple hosts.
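
For the rdist route, the synchronization is described in a Distfile. A minimal sketch with made-up host names, pushing the master rules directory and restarting Snort when anything changes:

    # Hypothetical Distfile: keep each sensor's rules identical to the
    # master copy; host names and paths are made up for the example.
    HOSTS = ( root@sensor1 root@sensor2 )
    FILES = ( /etc/snort/rules )

    ${FILES} -> ${HOSTS}
            install ;
            special "/etc/init.d/snortd restart" ;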

Some people also maintain local software repositories. On RHEL systems, this might mean a local RPM repository for software that has been customized or is not available in the official Red Hat repositories. It could also mean considering something like Red Hat Satellite Server, depending on exactly what you want out of it. In my case, I think Satellite Server would be overkill, but it certainly has some interesting things to offer.
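
On the client side, a local repository amounts to little more than a directory of RPMs run through createrepo and a .repo file pointing at it. A sketch, with a made-up URL:

    # /etc/yum.repos.d/local-nsm.repo (hypothetical); the baseurl is
    # made up, and the directory behind it was prepared with createrepo.
    [local-nsm]
    name=Local NSM packages
    baseurl=http://repo.example.com/local-nsm
    enabled=1
    gpgcheck=0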

There are several things I see myself needing to do to improve the management of my systems. These are just starting points, and hopefully once I explore the options I will post my experiences.

  1. Use Kickstart for OS installation, which would probably also replace some of the scripts I use to configure new systems.
  2. Modify my NSM setup scripts to work with Kickstart if necessary.
  3. Put configuration files under version control and automate synchronizing them across systems.
  4. Build local software repositories for customized software, including making RPMs of the modified software instead of compiling it and then pushing it out.