19 November, 2009
SNAFU: Peer-to-peer and Sensitive Information
A lot of people noticed the recent Congressional ethics probe that was disclosed because a junior staff member put a sensitive document on her home computer. Not surprisingly, the computer also had file-sharing software installed, and she was inadvertently sharing the document on a peer-to-peer network. Some are calling for a review of congressional cybersecurity policies after the breach. One thing to remember is that this sort of thing is not unique, new or surprising.
David Bianco wrote about a similar topic in 2006 and covers the important points, though I would add that the problem also extends to personal systems, not just mobile devices. Whether the vulnerability is a mobile device that is easily lost or stolen (laptop, smart-phone, music player, etc.) or a personal system running software that would never be allowed in a work environment, don't put sensitive information on systems that are difficult to control.
Posted by Nathaniel Richmond at 06:18 0 comments
Labels: risk, vulnerabilities
17 November, 2009
SANS WhatWorks in Incident Detection Summit 2009
I am scheduled to be a part of several discussion panels at the SANS WhatWorks in Incident Detection Summit 2009 on 9-10 December. There are a lot of good speakers participating and the agenda will cover many topics related to incident detection. I believe there is still space available for anyone that is interested in attending.
From SANS:
Following the success of the 2008 and 2009 editions of the SANS WhatWorks in Forensics and Incident Response Summits, SANS is teaming with Richard Bejtlich to create a practitioner-focused event dedicated to incident detection operations. The SANS Incident Detection Summit will share tools, tactics, and techniques practiced by more than 40 of the world's greatest incident detectors in two full days of content consisting of keynotes, expert briefings, and dynamic panels.
http://www.sans.org/incident-detection-summit-2009/
Posted by Nathaniel Richmond at 08:12 0 comments
19 October, 2009
Hackintosh Dell Mini 9
If you want a small laptop running Mac OSX, this is pretty cool. The Dell outlet sometimes still has these laptops, but remember to factor in that they are old and need a larger SSD. Also take into account that installing even a retail copy of OSX on non-Apple hardware may violate Apple's EULA.
Posted by Nathaniel Richmond at 22:23 0 comments
Labels: osx
12 October, 2009
Adding GeoIP to the Sguil Client
This is a post I meant to publish months ago, but for some reason slipped off my radar. I was reading the Sguil wishlist on NSMWiki and saw something that looked simple to implement. Here are a couple diffs I created after adding a menu item for GeoIP in sguil.tk and a proc for it in lib/extdata.tcl. All I did was copy the existing DShield proc and menu items, then edit as needed to change the URL and menu listings.
I think it should work and I downloaded a pristine copy of the files before running diff since I've hacked Sguil files previously, but no warranty is assumed or implied, et cetera.
Ideally, I would love to help out and tackle some of the other items on the wishlist. My time constraints make it hard, but at least I now have a Tcl book.
sguil.tk
2865a2866
> .ipQueryMenu add cascade -label "GeoIP Lookup" -menu $ipQueryMenu.geoIPMenu
2873a2875,2876
> menu $ipQueryMenu.geoIPMenu -tearoff 0 -background $SELECTBACKGROUND -foreground $SELECTFOREGROUND \
>   -activeforeground $SELECTBACKGROUND -activebackground $SELECTFOREGROUND
2917a2921,2922
> $ipQueryMenu.geoIPMenu add command -label "SrcIP" -command "GetGeoIP srcip"
> $ipQueryMenu.geoIPMenu add command -label "DstIP" -command "GetGeoIP dstip"

lib/extdata.tcl

211a212,243
> proc GetGeoIP { arg } {
>
>     global DEBUG BROWSER_PATH CUR_SEL_PANE ACTIVE_EVENT MULTI_SELECT
>
>     if { $ACTIVE_EVENT && !$MULTI_SELECT} {
>
>         set selectedIndex [$CUR_SEL_PANE(name) curselection]
>
>         if { $arg == "srcip" } {
>             set ipAddr [$CUR_SEL_PANE(name) getcells $selectedIndex,srcip]
>         } else {
>             set ipAddr [$CUR_SEL_PANE(name) getcells $selectedIndex,dstip]
>         }
>
>         if {[file exists $BROWSER_PATH] && [file executable $BROWSER_PATH]} {
>
>             # Launch browser
>             exec $BROWSER_PATH http://www.geoiptool.com/?IP=$ipAddr &
>
>         } else {
>
>             tk_messageBox -type ok -icon warning -message\
>               "$BROWSER_PATH does not exist or is not executable. Please update the BROWSER_PATH variable\
>               to point your favorite browser."
>             puts "Error: $BROWSER_PATH does not exist or is not executable."
>
>         }
>
>     }
>
> }
>
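If you want to try the changes, the diffs should apply with the standard patch utility from the top of the Sguil client directory. The diff file names below are just whatever you saved them as, so adjust accordingly:

$ cd sguil/client
$ patch sguil.tk sguil.tk-geoip.diff
$ patch lib/extdata.tcl extdata.tcl-geoip.diff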
Posted by Nathaniel Richmond at 02:55 0 comments
15 September, 2009
MySQL replication on RHEL
I recently configured MySQL for replication after first enabling SSL connections between the two systems that would be involved with replication. I have to say that MySQL documentation is excellent and all these notes are simply based on what is available on the MySQL site. I have included links to as many of the relevant sections of the documentation as possible.
For reference, here is the MySQL manual on enabling SSL: 5.5.7.2. Using SSL Connections
Before beginning, it is a good idea to create a directory for the SSL output files and make sure all the files end up there.
MySQL’s RHEL5 packages from mysql.com support SSL by default, but to check you can run:
$ mysqld --ssl --help
mysqld  Ver 5.0.67-community-log for redhat-linux-gnu on i686 (MySQL Community Edition (GPL))
Copyright (C) 2000 MySQL AB, by Monty and others
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license

Starts the MySQL database server

Usage: mysqld [OPTIONS]

For more help options (several pages), use mysqld --verbose --help
The command will return an error if there is no SSL support.
Next, check that the MySQL server has SSL enabled. The below output means that the server supports SSL but it is not enabled. Enabling it can be done at the command line or in the configuration file, which will be detailed later.
$ mysql -u username -p -e "show variables like 'have_ssl'"
Enter password:
+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| have_ssl      | DISABLED |
+---------------+----------+
Documentation on setting up certificates:
5.5.7.4. Setting Up SSL Certificates for MySQL
First, generate the CA key and CA certificate:
$ openssl genrsa 2048 > mysql-ca-key.pem
Generating RSA private key, 2048 bit long modulus
............................................+++
............+++

$ openssl req -new -x509 -nodes -days 356 -key mysql-ca-key.pem > mysql-ca-cert.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:California
Locality Name (eg, city) [Newbury]:Burbank
Organization Name (eg, company) [My Company Ltd]:Acme Road Runner Traps
Organizational Unit Name (eg, section) []:Acme IRT
Common Name (eg, your name or your server's hostname) []:mysql.acme.com
Email Address []:acme-irt@acme.com
Create the server certificate:
$ openssl req -newkey rsa:2048 -days 365 -nodes -keyout mysql-server-key.pem > mysql-server-req.pem
Generating a 2048 bit RSA private key
.............................+++
.............................................................+++
writing new private key to 'mysql-server-key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:California
Locality Name (eg, city) [Newbury]:Burbank
Organization Name (eg, company) [My Company Ltd]:Acme Road Runner Traps
Organizational Unit Name (eg, section) []:Acme IRT
Common Name (eg, your name or your server's hostname) []:mysql.acme.com
Email Address []:acme-irt@acme.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:

$ openssl x509 -req -in mysql-server-req.pem -days 356 -CA mysql-ca-cert.pem -CAkey mysql-ca-key.pem -set_serial 01 > mysql-server-cert.pem
Signature ok
subject=/C=US/ST=California/L=Burbank/O=Acme Road Runner Traps/OU=Acme IRT/CN=mysql.acme.com/emailAddress=acme-irt@acme.com
Getting CA Private Key
Finally, create the client certificate:
$ openssl req -newkey rsa:2048 -days 356 -nodes -keyout mysql-client-key.pem > mysql-client-req.pem
Generating a 2048 bit RSA private key
................+++
.................+++
writing new private key to 'mysql-client-key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:California
Locality Name (eg, city) [Newbury]:Burbank
Organization Name (eg, company) [My Company Ltd]:Acme Road Runner Traps
Organizational Unit Name (eg, section) []:Acme IRT
Common Name (eg, your name or your server's hostname) []:mysql.acme.com
Email Address []:acme-irt@acme.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:

$ openssl x509 -req -in mysql-client-req.pem -days 356 -CA mysql-ca-cert.pem -CAkey mysql-ca-key.pem -set_serial 01 > mysql-client-cert.pem
Signature ok
subject=/C=US/ST=California/L=Burbank/O=Acme Road Runner Traps/OU=Acme IRT/CN=mysql.acme.com/emailAddress=acme-irt@acme.com
Getting CA Private Key

[nr@mysqld mysqlcerts]$ ls
mysql-ca-cert.pem      mysql-client-key.pem  mysql-server-key.pem
mysql-ca-key.pem       mysql-client-req.pem  mysql-server-req.pem
mysql-client-cert.pem  mysql-server-cert.pem
To enable SSL when starting mysqld, the following should be in /etc/my.cnf under the [mysqld] section. For this example, I put the files in /etc/mysql/openssl:
ssl-ca="/etc/mysql/openssl/mysql-ca-cert.pem"
ssl-cert="/etc/mysql/openssl/mysql-server-cert.pem"
ssl-key="/etc/mysql/openssl/mysql-server-key.pem"
To use any client, for instance mysql from the command line or the GUI MySQL Administrator, copy the client certificate and key to a dedicated folder on the local box along with the CA certificate. You will then have to configure the client to use the client certificate, client key, and CA certificate.
To connect with the mysql client using SSL, copy the client certificates to a folder, for instance /etc/mysql, then under the [client] section in /etc/my.cnf:
ssl-ca="/etc/mysql/openssl/mysql-ca-cert.pem"
ssl-cert="/etc/mysql/openssl/mysql-client-cert.pem"
ssl-key="/etc/mysql/openssl/mysql-client-key.pem"
In MySQL Administrator, the following is an example of what you would put into the Advanced Parameters section if you want to connect using SSL.
SSL_CA   U:/keys/mysql-ca-cert.pem
SSL_CERT U:/keys/mysql-client-cert.pem
SSL_KEY  U:/keys/mysql-client-key.pem
USE_SSL  Yes
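Whichever client you use, a quick way to confirm that the session is actually encrypted is to check the Ssl_cipher status variable; a non-empty value means SSL is in use, and the exact cipher will vary. The mysql client's \s command also reports the cipher in use.

mysql> SHOW STATUS LIKE 'Ssl_cipher';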
Replication
Before configuring replication, I made sure to review the MySQL replication documentation.
16.1.1.1. Creating a User for Replication
Because MySQL stores the replication user’s name and password using plain text in the master.info file, it’s recommended to create a dedicated user that only has the REPLICATION SLAVE privilege. The replication user needs to be created on the master so the slaves can connect with that user.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.50' IDENTIFIED BY 'password';
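Since this setup already uses SSL, it may also be worth forcing the replication account to connect over SSL. This is just a variation on the statement above using the standard REQUIRE SSL clause, not something taken from the documentation excerpt:

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.50' IDENTIFIED BY 'password' REQUIRE SSL;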
16.1.1.2. Setting the Replication Master Configuration
Edit my.cnf to uncomment the “log-bin” line. Also uncomment “server-id = 1”. The server-id can be anything between 1 and 2^32 but must be unique.
Also add “expire_logs_days” to my.cnf. If you don’t, the binary logs could fill up the disk partition because they are not deleted by default!
expire_logs_days = 4
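Putting those pieces together, the [mysqld] section on the master might look something like this minimal sketch (the binary log basename and server-id are just examples):

[mysqld]
log-bin          = mysql-bin
server-id        = 1
expire_logs_days = 4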
16.1.1.3. Setting the Replication Slave Configuration
Set server-id to something different from the master in my.cnf. Although not required, enabling binary logging on the slave is also recommended for backups, crash recovery, and in case the slave will also be a master to other systems.
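A corresponding minimal sketch for the slave's my.cnf, assuming the master used server-id 1:

[mysqld]
server-id = 2
log-bin   = mysql-bin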
16.1.1.4. Obtaining the Master Replication Information
I flush the tables to disk and lock them to temporarily prevent changes.
# mysql -u root -p -A dbname
mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.00 sec)
If the master has existing data that the slave needs, you may want to copy the data over manually to simplify things; see 16.1.1.6, "Creating a Data Snapshot Using Raw Data Files". However, you can also use mysqldump, as shown in Section 16.1.1.5, "Creating a Data Snapshot Using mysqldump".
Once the data is copied over to the slave, I get the current log position.
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 16524487 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql> UNLOCK TABLES;
16.1.1.10. Setting the Master Configuration on the Slave
Finally, configure the slave. The log file and log position tell the slave where to begin replication. All changes after that log position will be replicated to the slave.
mysql> CHANGE MASTER TO
    -> MASTER_HOST='192.168.1.50',
    -> MASTER_USER='repl',
    -> MASTER_PASSWORD='replication_password',
    -> MASTER_LOG_FILE='mysql-bin.000002',
    -> MASTER_LOG_POS=16524487,
    -> MASTER_SSL=1,
    -> MASTER_SSL_CA='/etc/mysql/openssl/mysql-ca-cert.pem',
    -> MASTER_SSL_CAPATH='/etc/mysql/openssl/',
    -> MASTER_SSL_CERT='/etc/mysql/openssl/mysql-server-cert.pem',
    -> MASTER_SSL_KEY='/etc/mysql/openssl/mysql-server-key.pem';
Query OK, 0 rows affected (0.74 sec)

mysql> START SLAVE;
Replication can start!
The slave status can be checked via the following command:
mysql> show slave status;
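The output is much easier to read with the \G terminator. The fields I check first are Slave_IO_Running and Slave_SQL_Running, which should both be Yes, and Seconds_Behind_Master.

mysql> SHOW SLAVE STATUS\G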
Posted by Nathaniel Richmond at 05:28 0 comments
Labels: linux, mysql, rhel, system administration
06 September, 2009
Two years
It has been two years since I started this blog. Here is a quick recap of notable posts that consistently get a substantial number of page views.
IR/NSM:
- Building an IR Team: People
- Building an IR Team: Organization
- Transparent Bridging, MMAP pcap, and Snort Inline
- Snort Performance and Memory Map Pcap on RHEL
- Upgrading to Snort 2.8.0
- Snort 2.8.1 changes and upgrading
- Snort shared object rules with Sguil
- JavaScript decoding and more
- Querying Session Data Based on Snort Rule IPs
- Setting up OpenLDAP for centralized accounts
- OpenLDAP continued
- OpenLDAP Security
- Using parted and LVM2 for large partitions
Posted by Nathaniel Richmond at 01:00 0 comments
15 July, 2009
Building an IR Team: Documentation
My third post on building an Incident Response (IR) team covers documentation. The first post was Building an IR Team: People, followed by Building an IR Team: Organization.
Good documentation promotes good communication and effective analysts. Documentation is not sexy, and can even be downright annoying to create and maintain, but it is absolutely crucial. Making it as painless and useful as possible will be a huge benefit to the IR team.
Since documentation and communication are so intertwined, I had planned on making one post to cover both topics. However, the amount of material I have for documentation made me decide to do a future post, Building an IR Team: Communication, and concentrate on keeping this post to a more digestible size.
There are quite a few different areas where a Computer Incident Response Team (CIRT) will need good documentation.
Incident Tracking
Since I am writing about computer IR teams, it is obvious that the teams will be dealing with digital security incidents. For an enterprise, you will almost certainly need a database back-end for your incidents. Even smaller environments may find it best to use a database to track incidents. You will need some sort of incident tracking system for many reasons, including but not necessarily limited to the following.
- Tracking of incident status and primary responder(s)
- Incident details
- Response details and summary
- Trending, statistics and other analysis
However, off-the-shelf software may not have great support for the incident details. A great example is IP addresses and ports. Logging IP addresses, names of systems, ports if applicable, and what type of vulnerability was exploited can be extremely useful for trending, statistics, and historical analysis. A field for IP addresses can probably be queried more easily than a full-text field that happens to contain IP addresses. If I see that a particular IP address successfully attacked two systems in the previous shift, or a particular type of exploit was used successfully on two systems, I want to be able to quickly check and see how many times it happened in the past week. I also want to be able to pull that data out and use it to query my NSM data to see if there was similar activity that garnered no response from analysts.
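As a rough sketch of what I mean, with completely hypothetical table and column names, a dedicated and indexed address column turns the "how often did we see this IP in the past week" question into a trivial query:

-- hypothetical incident-tracking table with dedicated address fields
CREATE TABLE incident_hosts (
    incident_id INT UNSIGNED NOT NULL,
    src_ip      INT UNSIGNED NOT NULL,   -- stored with INET_ATON()
    dst_ip      INT UNSIGNED NOT NULL,
    dst_port    SMALLINT UNSIGNED,
    created_at  DATETIME NOT NULL,
    INDEX (src_ip),
    INDEX (created_at)
);

-- incidents involving a given source address in the past week
SELECT COUNT(*) FROM incident_hosts
WHERE src_ip = INET_ATON('192.0.2.10')
  AND created_at > NOW() - INTERVAL 7 DAY;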
Response details can be thought of as a log that is updated throughout the incident, from discovery to resolution. Having the details to look back on is extremely useful. You can use the details for a technical write-up, an executive summary, recreating incidents in a lab environment, training, lessons learned, and more. My general thought process is that the longer it takes to document an incident, the more likely the documentation is to be useful.
Trending and statistical analysis can be used to help guide future response and look back at previous activity for anything that was missed, as I already mentioned. It is also extremely useful for reports to management that can also be used to help gain political capital within the organization. What do I mean by political capital?
Say you have noticed anecdotally that you are getting owned through web browsing over HTTP, and that the sites involved are usually already known to be malicious, for instance flagged when searching Google or by an anti-malware toolbar. Your company has no web proxy and you recommend one with the understanding that most of the malicious sites would be blocked by the web proxy. The problem is that the networking group does not want to re-engineer or reconfigure, and upper management does not think it is worth the money. With a thorough report and analysis using the information from incident tracking, and by using that data to show the advantages of the proxy solution, you could provide the CIRT or SOC management the political capital they need to get things moving when faced with other parts of the company that resist.
Standard Operating Procedures (SOP)
Although analysts performing IR need to be able to adapt and the tasks can be fluid, a SOP is still important for a CIRT. A SOP can cover a lot of material, including IR procedures, notification and contact information, escalation procedures, job functions, hours of operation, and more. A good SOP might even include the CIRT mission statement and other background to help everyone understand the underlying purpose and mission of the group.
The main goal of a SOP should be to document and detail all the standard or repetitive procedures, and it can even provide guidance on what to do if presented with a situation that is not covered in the SOP. As an example, a few bullet points of sections that might be needed in a SOP are:
- Managing routine malware incidents
- Analyzing short term trends
- Researching new exploits and malicious activity
- Overview of security functions and tools, e.g. NSM
- More detailed explanation and basic usage information for important tools, e.g. how to connect to Sguil and who the administrators of the system are
I also like analysts to think about the most efficient way to analyze an incident. Some may gather information and investigate using slightly different methodology, but each analyst should understand that something simple should be checked before something that takes a lot of time, particularly when the value of the information returned will be roughly equal. The analysis should use what my boss likes to call the "Does it make sense?" test. Gathering some of the simplest and most straightforward information first will usually point you in the right direction, and a SOP can help show how to do this.
Knowledge Base
A knowledge base can take many different forms and contains different types of information than SOP, though there also may be overlap. There are specific knowledge base applications, wikis, simple log applications, and even ticketing or tasking systems that provide some functionality for an integrated knowledge base. A knowledge base will often contain technical information, technical references, HOWTOs, white papers, troubleshooting tips, and various other types of notes and information.
One of my favorite options for a knowledge base is a wiki. You can see various open knowledge bases that are using wikis, for instance NSMWiki and Emerging Threats Documentation Wiki, but if you want organization- and job-specific knowledge bases then you will also need something to hold the information for your CIRT.
The reason I pick those two wikis as examples is because they contain some of the exact type of information that is useful in a knowledge base for your CIRT. The main difference is that your knowledge will be specific to your organization. One good example are wiki entries for specific IDS rules as they pertain to your network, in other words an internal version of the Emerging Threats rule wiki. There may be shortcuts to take with regard to investigating specific rules or other network activity to quickly determine the nature of the traffic, and a wiki is a good place to keep that information.
Similarly, documentation on setting up a NSM device, tuning, or maintenance can be very effectively stored and edited on a wiki. The ease of collaboration with a wiki helps keep the documentation useful and up to date. If properly organized, someone could easily find information needed to keep the team running smoothly. Some examples of documentation I have found useful to put in a wiki:
- How to troubleshoot common problems on a NSM sensor
- How to build and configure a NSM sensor
- How to update and tune IDS rules
- List and overview of scripts available to assist incident response
- Overviews of each available IR tool
- More detailed descriptions and usage examples of IR tools
- Example IR walk-throughs using previously resolved incidents
- Links to external resources, e.g. blogs, wikis, manuals, and vendor sites
Shift Logs
In an environment with multiple shifts, it is important to keep shift logs of notable activity, incidents, and any other information that needs to be passed to other shifts. Although I will also discuss this in Building an IR Team: Communication, the usefulness of connecting the shifts with a dedicated log is apparent. Given the amount of email and incident tickets generated in an environment that requires 24x7 monitoring, having a shift log to quickly summarize important and ongoing events helps separate the wheat from the chaff.
Since my feeling is that shift logs should be terse and quick to parse, what to use for logging may not be crucial. The first examples that come to my mind are software designed for shift logs, forum software, or blogging software. The main features needed are individual accounts to show who is posting, timestamps, and an effective search feature. Anything else is a bonus, though it may depend on exactly what you want analysts logging and what is being used to handle incident tracking.
One thing that is quite useful with the shift log is a summary post at the end of each shift, and then the analysts should verbally go over the summary at the shift change. This can help make sure the most significant entries are not missed and it gives the chance for the oncoming shift to ask questions before the outgoing shift leaves for the day.
As usual, I can't cover everything on the topic, but my goal is to provide a reference and get the gears turning. The need for good documentation is real, and it is important to use documentation to the IR team's advantage.
Posted by Nathaniel Richmond at 05:19 0 comments
Labels: documentation, incident response
07 July, 2009
The "Does it make sense?" test
I was composing the next installment of my series on building an incident response team and started to include this, but then decided it deserves a separate entry.
Some time ago, my boss came up with what he calls the "Does it make sense?" test as a cheat-sheet to help train new analysts and to use as a quick reference. When we refer to traffic making sense, we are asking whether the traffic is normal for the network.
This is very simple and covers some of the quickest ways an analyst can investigate a possible incident. Consider it a way to triage possible NSM activity or incidents. Using something like this can easily eliminate a lot of unnecessary and time-consuming analysis, or point out when the extra analysis is needed.
The "does it make sense" test:
- Determine the direction of the network traffic.
- Determine the IP addresses involved.
- Determine the locations of the systems (e.g. internal, external, VPN, whois, GeoIP).
- Determine the functions of the systems involved (e.g. web server, mail server, workstation).
- Determine protocols involved and whether they are "normal" protocols and ports that should be seen between the systems.
- When applicable, look at the packet capture and compare it to the signature/rule.
- Use historical queries on NSM systems and searches of documentation to determine past events that may be related to the current one.
Some examples:
- A file server sending huge amounts of SMTP traffic over port 25 probably does not make sense, whether because of malicious activity or a misconfiguration.
- Someone connecting to a workstation on port 21 with FTP probably does not make sense.
- A DNS server sending and receiving traffic to another DNS server over port 53 does make sense. However, an analysis of the alert and the DNS traffic may still be needed to verify whether the traffic is malicious or not.
Also remember, traffic that makes sense is not always friendly. A good attacker will make his network traffic look like it fits in with the baseline traffic, making it less likely to stick out.
Posted by Nathaniel Richmond at 21:01 0 comments
Labels: incident response, nsm, training
30 June, 2009
Exploiting the brain
In some interesting science news, talking into a person's right ear is apparently a good idea if you want the person to be receptive to what you're saying.
If you want to get someone to do something, ask them in their right ear, say scientists. Italian researchers found people were better at processing information when requests were made on that side in three separate tests.
They believe this is because the left side of the brain, which is known to be better at processing requests, deals with information from the right ear.
The article also states that what is heard through the right ear gets sent to "a slightly more amenable part of the brain." Even when you know about something like this, it is probably difficult or even impossible to consciously override the differences between hearing something in the right ear versus the left ear.
Looking at this through a security mindset, a threat could be someone that knows how to exploit this behavior, and to reduce the vulnerability could require actively training yourself to overcome standard neuroscience. This type of knowledge can even be applied directly to something like a penetration test that includes social engineering in the rules of engagement.
Next time you ask for a raise, make sure you're to the right of your boss.
Posted by Nathaniel Richmond at 16:44 1 comments
25 June, 2009
Building an IR Team: Organization
This is my second post in a planned series. The first is called Building an IR Team: People.
How to organize a Computer Incident Response Team (CIRT) is a difficult and complex topic. Although there may be best practices or sensible guidelines, a lot will be dictated by the size of your team, the type and size of the network environment, management, company policies and the abilities of analysts. I also believe that network security monitoring (NSM) and incident response (IR) are so intertwined that you really should talk about them and organize them together.
A few questions that come to mind when thinking of organization and hierarchy of the team:
- Will you only be doing IR, or will you be responsible for additional security operations and security engineering?
- What is the minimal amount of staffing you need to cover your hours of operation? What other coverage requirements do you have dictated by management, policies, or plain common sense?
- How will the size of your team affect your hierarchy and organization?
- Since being understaffed is the norm, how can you organize to improve efficiency without hurting the quality of work?
- Can you train individuals or groups so you have redundancy in key job functions?
- Referencing both physical and logical organization of the team, will they be centralized or distributed?
- What is your budget? (Richard Bejtlich has had a number of posts about how much to spend on digital security, including one recently).
The first question really needs to be answered before you start answering all the rest. There are two basic models I have seen when organizing a response team. The simpler model is to have a response team that only performs incident response, often along with NSM or working directly with the NSM team. Even if the response team does not do the actual first tier NSM, the NSM team usually will function as a lower tier that escalates possible incidents to the IR team.
The more complex, but possibly more common, model is to have incident responders and NSM teams that also perform a number of other duties. I mentioned both security operations and security engineering in the bullet point. Examples of security operations and engineering could be penetration testing, vulnerability assessment, malware analysis, NSM sensor deployment, NSM sensor tuning, firewall change reviews or management, and more. The reason I say this model may be more common is the bottom line, money. It is also difficult to discretely define all these job duties without any overlap.
There are advantages and disadvantages to each model. For dedicated incident responders, advantages compared to the alternative include:
- Specialization can promote higher levels of expertise.
- Duties, obligations, procedures and priorities are clearer.
- Documentation can probably be simplified.
- IR may be more effective overall.
Disadvantages include:
- Money. If incident responders perform a narrow set of duties, you will probably need more total personnel to complete the same tasks.
- Less flexibility with personnel.
- Limiting duties exclusively to incident response may result in more burn-out. Although not a given, many people like the variety that comes with a wider range of duties.
For incident responders who also handle other security operations and engineering duties, advantages include:
- Money.
- A better understanding of incident response can produce better engineering. A great example is tuning NSM sensors, where an engineer that does the tuning has a much better understanding of feedback and even sees the good and bad firsthand if the same person is also doing NSM or IR.
- Similarly, other projects can promote a better understanding of the network, systems and security operations that may promote more efficient and accurate IR.
Disadvantages include:
- Conflicting priorities between IR and other projects.
- More complex operating procedures.
- Burn-out due to workload. (Yes, I listed burn-out as a disadvantage of both models).
- Less specialization in IR will probably reduce effectiveness.
Before deciding on the number of analysts you need for NSM and IR, you have to come to a decision on what hours you will maintain. This question is probably easier for smaller operations that don't have as much flexibility. If there is no budget for anything other than normal business hours, it is definitely easier to staff IR and security operations in general. Once you get to an enterprise or other organization that maintains some 24x7 presence, it starts getting stickier.
If you will have more than one shift, you will obviously have to decide the hours for each shift. It is important to build a slight overlap into the shifts so information can be passed from the shift that is ending to the shift that is starting. Both verbal communication and written communication, namely some kind of shift log, are important so any ongoing incidents, trends or other significant activity are not dropped. I will go into more detail when I write a future post, tentatively titled Building an IR Team: Communication and Documentation.
Organizing so each shift has the right people is significant. Obviously, the third shift will generally be seen as less desirable. Usually someone who is willing to work the third shift is trying to get into the digital security field, already has a day job, or is going to school. There is a fine line between finding someone who will do a good job on the third shift and finding someone who will not immediately start looking for another job with better hours, so you have to get a clear understanding of why people want to work the third shift and how long you expect them to stay on it. It can help to leave opportunities for third-shift analysts to move to another shift, since that flexibility lets you keep the stand-outs rather than losing them to another job with more desirable hours.
I am not a big fan of rotating shifts. Though a lot of places seem to implement shifts by having everyone eventually rotate through each shift, I think it does not promote stability or employee satisfaction as much as each person having a dedicated shift.
Staffing can also be influenced by policy or outside factors. Businesses, government and military all will have certain information security requirements that must be met, and some of those requirements may influence your staffing levels or hours of operation.
Hierarchy
If you only have one or two analysts, you probably won't need to put much thought into your hierarchy. If you have a 24x7 operation with a number of analysts, you definitely need some sort of defined hierarchy and escalation procedures to define NSM and IR duties. Going back to the section on other security operations, you may also need to define how other duties fit into the hierarchy, procedures and priorities for analysts that handle NSM, IR, and/or additional duties.
At left is an example of an organizational chart for when the IR team also has other duties and operates in a 24x7 environment. In addition to rotating through NSM and IR duties, each analyst is a member of a team. This is just an example to show the thought process on hierarchy. There are certainly other operational security needs I mentioned that may merit a dedicated team but are not included in my example, for instance forensics or vulnerability assessment.
Each team has a senior analyst as the lead, and the senior analysts can also double as IR leads. It is crucial that every shift have a lead to define a hierarchy and prevent any misunderstandings about the chain of command and responsibilities.
For this example, let us say that your organizational requirements state two junior analysts per shift doing NSM and IR. You could create a schedule to rotate each junior analyst through the NSM/IR schedule, which means monitoring the security systems, answering the phone, responding to emails, investigating activity, and coordinating IR for the more basic incidents. You would also probably want one senior analyst designated as the lead for the day. The senior analyst can provide quality assurance, handle anything that needs to be escalated, do more in-depth IR, and task and coordinate the junior analysts. The senior analyst can also decide that the NSM and IR workloads require temporarily pulling people off their project or team tasks to bolster NSM or IR. Finally, it may be a good idea to have the senior analyst designated as the one coordinating and communicating with management.
While the senior analysts need to excel at both the technical duties and management, the shift leads need to facilitate communication between everyone on that particular shift, management, and other shifts. Though it is helpful if the shift lead is strong in a technical sense, I do not think the shift lead necessarily has to be the strongest technical person on the shift. He or she needs to be able to handle communication, escalation, delegation, and prioritization to keep both the shift members and management happy with each other. The shift lead is basically responsible for making sure the shift is happy and making sure the CIRT is getting what it needs from the shift.
The next diagram shows a group that is dedicated only to NSM and IR. Obviously, this model is much easier to organize and manage since the tasks are much narrower. Note that, even with this model where everyone is dedicated to NSM and IR without additional duties, proper NSM and IR may call for things like malware analysis, certainly forensics for IR, or giving feedback about the security systems' effectiveness to dedicated engineers.
As one last aside regarding the different models, I have to stress that vulnerability assessment and reporting is one of the biggest time sinks I have ever seen in a security operation. If you can only separate one task away from your NSM and IR team to another team, I strongly suggest it be vulnerability assessment. There are certainly a lot of arguments about how much or how little vulnerability assessment you should be doing in any organization, but most organizations do have requirements for it. As such, it is a good idea to have a separate vulnerability assessment team whenever possible because of the number of work-hours the process requires. Note that penetration testing is clearly distinct from vulnerability assessment, and requires a whole different type of person with a different set of skills.
Redundancy
Ideally, you want to minimize what some call "knowledge hoarding" on your team. If someone is excellent at a job, you need that person to share knowledge, not squirrel it away. Some think knowledge hoarding provides job security, but a good manager will recognize that an analyst that shares knowledge is much better than one that does not. From personal experience, I can also say that mentoring, training and sharing knowledge is a great way to reduce the number of calls you get during non-working hours. If I do not want to be bothered at home, I do my best to document and share everything I know so the knowledge is easily accessible even when I am not there.
Sharing knowledge provides redundancy and flexibility. That flexibility can also spread the workload more evenly when you have some people swamped with work and others underutilized. If someone is sick or too busy for a particular task, you do not want to be stuck with no redundancy. I suppose this is true of most jobs, but it can be a huge problem in IR. As an example, if a particular person is experienced at malware analysis and has automated the process without sharing the knowledge, someone else called on to do the work in a pinch will be much less efficient and may even try to manually perform tasks that have already been automated.
Certainly most groups of incident responders will have standouts that simply can't be replaced easily, but you should do your best to make sure every job function has redundancy and that every senior analyst has what you could call at least one understudy.
Distribution of Resources
If you are in a business that has multiple locations or it is a true enterprise, one thing to consider is the physical and logical distribution of your incident response team. Being physically located in one place can be helpful to communication and working relationships. Being geographically distributed can be more conducive to work schedules if the business spans many timezones. One thing that can greatly increase morale is providing as many tools as possible to do remote IR. Sending a team to the field for IR may be needed sometimes, but reducing the burden or even allowing work from home is a sure way to make your team happier.
Regardless, an IR team needs people in the field that can assist them when needed. Depending on the technical level of those field representatives, the duties may be as simple as unplugging a network cable or as advanced as starting initial data collection with a memory and disk capture. Most IR teams will need to have a good working relationship with support and networking personnel to help facilitate the proper response procedures.
I only touched on some of the possibilities for organizing both NSM and IR teams. As with anything, thought and planning will help make the organization more successful and efficient. The key is to reach a practical equilibrium given the resources you have to work with.
Posted by Nathaniel Richmond at 05:02 3 comments
Labels: incident response, nsm
09 May, 2009
Extracting emails from archived Sguil transcripts
Here is a Perl script I wrote to extract emails and attachments from archived Sguil transcripts. It's useful for grabbing suspicious attachments for analysis.
In Sguil, whenever you view a transcript it will archive the packet capture on the Sguil server. You can then easily use that packet capture to pull out data with tools like tcpxtract or tcpflow along with Perl's MIME::Parser in this case. The MIME::Parser code is modified from David Bianco's blog.
As always with Perl or other scripts, I welcome constructive feedback. The first regular expression is fairly long and may scroll off the page, so make sure you get it all if you copy it.
#!/usr/bin/perl
# by nr
# 2009-05-04
# A perl script to read tcpflow output files of SMTP traffic.
# Written to run against a pcap archived by Sguil after viewing the transcript.
# 2009-05-07
# Updated to use David Bianco's code with MIME::Parser.
# http://blog.vorant.com/2006/06/extracting-email-attachements-from.html
use strict;
use MIME::Parser;
my $fileName; # var for tcpflow output file that we need to read
my $outputDir = "/var/tmp"; # directory for email+attachments output
if (@ARGV != 1) {
print "\nOnly one argument allowed. Usage:\n";
die "./emailDecode.pl /path/archive/192.168.1.13\:62313_192.168.1.8\:25-6.raw\n\n";
}
$ARGV[0] =~ m
/.+\/(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(\d{1,5})_(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(25)-\d{1,3}\.raw/
or die "\nIncorrect file name format or dst port is not equal to 25. Try again.\n\n";
system("tcpflow -r $ARGV[0]"); # run tcpflow w/argument for path to sguil pcap
my $srcPort = sprintf("%05d", $2); # pad srcPort with zeros
my $dstPort = sprintf("%05d", $4); # pad dstPort with zeros
# Put the octest and ports into array to manipulate into tcpflow fileName
my @octet = split(/\./, "$1\." . "$srcPort\." . "$3\." . "$dstPort");
foreach my $octet(@octet) {
my $octetLength = length($octet); # get string length
if ($octetLength < 5) { # if not a port number
$octet = sprintf("%03d", $octet); # pad with zeros
}
$fileName = $fileName . "$octet\."; # concatenate into fileName
}
$fileName =~ s/(.+\d{5})\.(.+\d{5})\./$1-$2/; # replace middle dot with hyphen
my $unusedFile = "$2-$1"; # this is the other tcpflow output file
# open the file and put it in array
open INFILE, "<$fileName" or die "Unable to open $fileName $!\n";
my @email = <INFILE>;
close INFILE;
my $count = 0;
# skip extra data at beginning
foreach my $email(@email) {
if ($email =~ m/^Received:/i) {
last;
}
else {
delete @email[$count];
$count ++;
}
}
my $parser = new MIME::Parser;
$parser->output_under("$outputDir");
my $entity = $parser->parse_data(\@email); # parse the tcpflow data
$entity->dump_skeleton; # be verbose when dumping
unlink($fileName, $unusedFile); # delete tcpflow output files
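For reference, running the script looks something like this; the path is just an example of where a Sguil transcript pcap might be archived. The parsed message and any attachments are written under the $outputDir directory, /var/tmp by default:

$ ./emailDecode.pl /nsm/archive/2009-05-04/192.168.1.13:62313_192.168.1.8:25-6.raw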
07 May, 2009
Do we need anti-virus software?
My friend Richard has a good post about Verizon's 2009 Data Breach Report. One of his last comments really struck me since it is something I have seen firsthand again and again.
Most companies are probably relying on their anti-virus software to save them. This is too bad, because the explosion in customized malware means it probably won't.
Anti-virus software just does not work against most recent malware. The table from the Verizon report shows a drastic upswing in customized malware, and my experience tells me that doesn't tell half the story. Even small changes will often evade anti-virus software.
I'm not saying anything new here. Anyone that does penetration tests, reverse engineers malware, writes exploits, or is involved with information security in a number of ways already knows that anti-virus software is terrible at detecting new malware. I have even written about it before and pointed out that more subtle methods of exploitation aren't always necessary because of the effectiveness of commodity malware.
My question is, do we really need anti-virus software?
When you take into account the amount of resources spent running anti-virus in the enterprise, is it a good investment in risk reduction? We pay for hours worked to set up the anti-virus infrastructure, update it, and troubleshoot it. If you are in an enterprise, you're paying for the software, not using a free alternative. You're probably paying for support and also paying for hardware.
What does it get you? I find malware on a weekly basis, sometimes daily, that is not detected by the major vendors. I submit the malware to some of these vendors and places like VirusTotal, but the responses from anti-virus vendors are inconsistent at best. Even after definitions are updated, I'll then run across malware that is obviously just an altered version of the previous but is once again not detected.
I don't pretend to have the answers, but I do wonder if all the resources spent on anti-virus by a business, particularly large or enterprise businesses, might be better spent somewhere else. Is it really worth tens or hundreds of thousands of dollars in software, hours, and hardware to make sure old malware is detected? If not, how much is it worth? Does the occasional quick response to emerging malware make it more worthwhile? If you have enough influence on the vendor, does being able to contact them directly to help protect against a specific attack make it more valuable?
Anti-virus software is too ingrained in corporate culture to think it is realistic that companies will stop using it altogether, but we need to keep asking these types of questions.
Posted by Nathaniel Richmond at 21:44 1 comments
28 April, 2009
Building an IR Team: People
For some time I have been thinking about a series of posts about building an incident response team. I started in security as part of a very small Computer Incident Response Team (CIRT) that handled network security monitoring (NSM) and the ensuing security incidents. Although we were small, we had a very good core of people that helped us succeed and grow, probably well beyond anything we had imagined. We grew from a handful of people to four or five times our original size. While there were undoubtedly setbacks, we constantly got better and more efficient as the team grew.
As the first in this series, I definitely want to concentrate on people. I don't care what fancy tools, enormous budget, buy-in from management, or whatever else you have. If you don't have the right people, you'll have serious problems succeeding. Most of this is probably not unique to a response team, information security, or information technology.
Hiring
Of course, hiring is where it all starts. What do you look for in a candidate for an incident response team? Here are some of the things I look for.
- Initiative: The last thing I want is someone that constantly needs hand-holding. Certainly people need help sometimes, and sharing knowledge and mentoring are huge, but you have to be able to work through the bumps and find solutions. A NSM operation or CIRT is not a help desk. Although you can have standard procedures, you have to be flexible, adapt, do a lot of research, and teach yourself whenever possible.
- Drive: Most people who are successful in security seem to think of it as more than a job. They spend free time hacking away at tools, breaking things, fixing things, researching, reading, and more. I don't believe this kind of drive has to be all-consuming, because I certainly have plenty of outside interests. However, generally speaking there is plenty of time to be interested in information security outside of work while still having a life. I, and undoubtedly many successful security professionals, enjoy spending time reading, playing with new tools, and more. Finding this type of person is not actually difficult, but it can take some patience. Local security groups or mailing lists are examples of places to look for analysts to add to a team. Even if they have little work experience, by going to a group meeting or subscribing to mailing lists, they are already demonstrating some drive and initiative.
- Communication skills: Although this may be more important for a senior analyst, being able to write and speak well is crucial. Knowing your audience is one of the most important skills. For instance, if you are writing a review of a recent incident that includes lessons learned, the end product will be different depending whether the review is for management or the incident responders on the team. Documentation, training, and reporting are other examples where good writing and speaking skills are important. I think good communication skills are underrated by many people in the field and IT in general, but the higher you look the better the chance you will find someone that realizes the importance of effective communication.
- Background: Most of the successful NSM analysts and incident responders I know have a background in one or more of three core areas: networking, programming, or system administration. A person from each background will often have different strengths, so understanding the likely strengths of each background can go a long way toward filling a missing need on the team. You do not have to come from one of these backgrounds; it is just relatively common for the good analysts I know to have backgrounds in these areas.
- The wrong candidate in the wrong position: Do not be scared to turn down people that are wrong for the job. That seems obvious, but it is worth emphasizing. Along the same lines, if someone is not working out, take steps to correct the problems if possible, but do not be afraid to get rid of a person that is not right for the job. Try to understand exactly what you are looking for and where in the organization the person is most likely to excel.
When filling a senior position, experience is definitely important. However, when filling a junior position I think automatically giving a lot of weight to information security experience can be a mistake. The last thing I want to do is hire someone who has experience but is overly reliant on technology rather than critical thinking skills. I don't mean to denigrate or automatically discount junior analysts that have experience, I just mean that I'd rather have someone with a lot of potential that needs a little more training in the beginning than what some would call a "scope dope", someone whose experience is looking at IDS alerts and taking them at face value with little correlation or investigation. If you have both experience and potential, great!
Training
Information security covers a huge range of topics, requires a wide range of skills, and changes quickly. Good analysts will want training, and if you don't give it to them you will wind up with a bunch of people that don't care about increasing their knowledge and skills as the ones that do want to learn look for greener pastures.
There are many different types of training in addition to what most people think of first, which is usually formal classes. Senior analysts mentoring junior analysts can be one of the most useful types of training because it is very adaptable and can be very environment-specific. "Brown-bag" sessions where people get together over lunch and analyze a recent incident or learn how to more efficiently investigate incidents can also work well. Finally, when someone researches and learns new things on one's own or with coworkers as mentioned previously, that is also an excellent form of training. Load up a lab, attack it, and look at the traffic and resulting state of the systems.
Finally, do not forget about both undergraduate and graduate degrees. Though you may not consider them training, most people want to have the option open to either finish a degree or get an advanced degree in their off hours. There are a huge number of ways to provide training.
People versus Technology
Analysts are not the only ones that can overly rely on technology. Management will often take the stance that paying a bunch of money for tools and subscriptions means two things. One, that the systems must be what they need and will do all the work for them. Two, that the money means that the selling company has the systems optimally designed and configured for your environment. Just because you pay five or six digits for an IPS, IDS, anomaly detection, or forensics tools does not mean that you can presume a corresponding decrease in the amount you need to spend on people. Any tool is worthless without the right people using it.
Turnover, Retention, Mobility, and Having Fun
A big part of creating and sustaining a successful response team is making sure the people you want to keep remain happy. There are a lot of things you need to retain the right people, including competitive pay, decent benefits, a chance for promotion, and a good work environment. Honestly, I think work environment is probably the most important factor. I know many analysts I have worked with have received offers of more money, but a good work environment has usually kept them from leaving. My boss has always said that the right environment is worth $X dollars to him, and I feel the same way. Effective and enjoyable coworkers, management that listens, and all the little things are not worth giving up without substantial reasons. Some opportunities are impossible to pass up, but having an enjoyable work environment and management that "gets it" goes a long way towards reducing turnover.
Bottom Line
I believe getting a good group assembled is the most important thing to have an effective response team. Obviously, I kept the focus of this post relatively broad. I would love to see comments with additional input. I hope to post additional material about building a response team in the near future, possibly covering organizing the team, dealing with growth, and a few other topics.
Posted by Nathaniel Richmond at 04:30 0 comments
Labels: incident response, nsm, training
10 April, 2009
Upgrading to Snort 2.8.4
Hopefully, everyone running Snort has been paying attention and noticed that Snort 2.8.4 was released on 07 April and the corresponding new rule sets were released on 08 April. If you pay for Snort rules, the new netbios.rules will not work with Snort versions prior to 2.8.4, so you need to upgrade.
Upgrading was mostly painless just to get Snort running, though the dcerpc2 preprocessor settings may well require tweaking beyond the defaults. Generally, when I'm upgrading and there is something like a new preprocessor, I will start by getting Snort upgraded and successfully running with the defaults for the preprocessor, then tune the settings as needed after I'm sure the new version is working.
The first thing to do after downloading and extracting is read the RELEASENOTES and also any applicable READMEs. Since the dcerpc2 preprocessor is new, the README.dcerpc2 may be of particular interest. A lot of people don't realize how much documentation is included.
$ wget http://www.snort.org/dl/snort-2.8.4.tar.gz
$ tar xzf snort-2.8.4.tar.gz
$ cd snort-2.8.4/doc
$ ls
AUTHORS README.alert_order README.ppm
BUGS README.asn1 README.sfportscan
CREDITS README.csv README.ssh
INSTALL README.database README.ssl
Makefile.am README.dcerpc README.stream5
Makefile.in README.dcerpc2 README.tag
NEWS README.decode* README.thresholding
PROBLEMS README.decoder_preproc_rules README.variables
README README.dns README.wireless
README.ARUBA README.event_queue TODO
README.FLEXRESP README.flowbits USAGE
README.FLEXRESP2 README.frag3 WISHLIST
README.INLINE README.ftptelnet faq.pdf
README.PLUGINS README.gre faq.tex
README.PerfProfiling README.http_inspect generators
README.SMTP README.ipip snort_manual.pdf
README.UNSOCK README.ipv6 snort_manual.tex
README.WIN32 README.pcap_readmode snort_schema_v106.pdf
The following is the default configuration as listed in the README.dcerpc2.
preprocessor dcerpc2_server: default, policy WinXP, \
detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
smb_max_chain 3
For some environments, it may be useful to turn off the preprocessor alerts or turn them off for specific systems.
preprocessor dcerpc2: memcap 102400, events none
Or:
preprocessor dcerpc2_server: default, policy WinXP, \
detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
smb_max_chain 3
preprocessor dcerpc2_server: net $SCANNERS, detect none
No matter what, the dcerpc configuration should be removed from the snort.conf and replaced with a dcerpc2 configuration for Snort 2.8.4.
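As a rough illustration only, not taken from any particular snort.conf (your existing dcerpc line will probably differ), the change looks something like this:
# old DCE/RPC preprocessor, remove or comment out before running 2.8.4
# preprocessor dcerpc: autodetect, max_frag_size 3000, memcap 100000
# minimal DCE/RPC 2 replacement; tune memcap and server options for your environment
preprocessor dcerpc2: memcap 102400
preprocessor dcerpc2_server: default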
After that, I'll configure and install Snort.
$ ./configure --enable-dynamicplugin --enable-inline --enable-perfprofiling
$ make
$ sudo /etc/rc.d/rc.snort-inline stop
$ sudo make install
$ which snort
/usr/local/bin/snort
$ snort -V
,,_ -*> Snort! <*-
o" )~ Version 2.8.4 (Build 26) inline
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
Copyright (C) 1998-2009 Sourcefire, Inc., et al.
Using PCRE version: 7.7 2008-05-07
$ sudo /etc/rc.d/rc.snort-inline start
Starting snort-inline
Initializing Inline mode
building cached socket reset packets
Although the old netbios.rules will still work with the new version, it's also time to update to the latest rules to take advantage of the smaller number of netbios.rules. Note that this only applies for subscription rules since the free ones won't reflect the changes for 30 days.
Some of the rules in the new netbios.rules file have SIDs that were unchanged, so I decided to re-enable any of those that I had previously disabled in my oinkmaster.conf. I can always disable them again, but with the new preprocessor and possible changes to the remaining SIDs, I decided it was best to reevaluate them.
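Re-enabling them just meant removing or commenting out the corresponding entries in my oinkmaster.conf. A hypothetical excerpt with made-up SIDs:
# previously disabled netbios SIDs, now commented out so oinkmaster
# re-enables the rules on the next update (SIDs are hypothetical)
#disablesid 2401
#disablesid 2402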
Since I'm using precompiled shared object rules, I also update from the ones compiled for 2.8.3.2 to the ones compiled for 2.8.4. I keep the rule stubs in a directory along with symlinks to the corresponding SO files, so I simply remove the symlinks and recreate them pointing at the new SO rules.
$ rm /etc/snort/so_rules/*so
$ ln -s /etc/snort/so_rules/precompiled/CentOS-5.0/i386/2.8.4/*so /etc/snort/so_rules/
Posted by Nathaniel Richmond at 05:18 0 comments
09 April, 2009
Watching the watchers
The Washington Post had an article on the Federal Page about falsified security clearance checks.
Half a dozen investigators conducting security-clearance checks for the federal government have been accused of lying in the reports they submitted to the Office of Personnel Management, which handles about 90 percent of the background inquiries for more than 100 agencies.
This is not particularly surprising, especially when you consider that the article reports a 22 percent increase in security checks since 2006. Presumably, the personnel and budget allocations did not get a corresponding 22 percent increase. Even before 2006, there were plenty of reports about the backlog for security investigations. Factor in the increase in investigations and you have a situation ripe for abuse through shortcuts. From the article:
Federal authorities said they do not think that anyone who did not deserve a job or security clearance received one or that investigators intentionally helped people slip through the screening. Instead, law enforcement officials said, the investigators lied about interviews they never conducted because they were overworked, cutting corners, trying to impress their bosses or, in the case of one contractor, seeking to earn more money by racing through the checks.
The problem shows the importance of management and quality assurance. Through the fairly simple process of sending follow-up questionnaires to 20 percent of those that were interviewed, OPM was able to identify falsified interviews and investigations. GAO also has determined that 90 percent of a sampling of reports were missing at least one required document. These seem like fairly easy and accurate ways to determine whether the investigations were being completed properly.
Fixing the problem is another matter, but you can't fix a problem until you've identified it. Prosecuting the responsible investigators, reviewing the allocation of funds and personnel, and changing the compensation structure are just a few things that may help reduce the amount of fraud.
The personnel problem immediately jumps out when you see the numbers in the article, which lists 1,380 staff investigators and 5,400 contractors doing 2 million investigations last year. You have to assume that even the most basic clearance requires at minimum a credit check, a criminal background check, an employment history check, and some type of interview. That works out to roughly 295 investigations per investigator per year (2,000,000 spread across about 6,780 investigators), and I don't see how that caseload could possibly be exhaustive enough to provide the needed information.
It also seems to demonstrate that the number of people requiring clearances is excessive, which is really a whole different discussion.
Posted by Nathaniel Richmond at 04:30 0 comments
26 March, 2009
Slackware-current updates and Nvidia driver
I recently updated a desktop running Slackware-current to the latest packages released up to 24-Mar-2009. There were a few minor issues. The first was that, when using slackpkg to upgrade, I had to upgrade the findutils package before everything else or slackpkg would no longer work properly.
# slackpkg update
# slackpkg upgrade findutils
# slackpkg upgrade-all
The second was that I couldn't install the Nvidia driver for the new 2.6.28.8 kernel package because it was failing.
ERROR: The 'cc' sanity check failed:
The C compiler 'cc' does not appear able to
create executables. Please make sure you have
your Linux distribution's gcc and libc development
packages installed.
It turns out that gcc-4.3.3 in slackware-current depends on mpfr.
# slackpkg info mpfr
PACKAGE NAME: mpfr-2.3.1-i486-1.tgz
PACKAGE LOCATION: ./slackware/l
PACKAGE SIZE (compressed): 348 K
PACKAGE SIZE (uncompressed): 930 K
PACKAGE DESCRIPTION:
mpfr: mpfr (Multiple-Precision Floating-Point Reliable Library)
mpfr:
mpfr: The MPFR library is a C library for multiple-precision floating-point
mpfr: computations with exact rounding (also called correct rounding).
mpfr: It is based on the GMP multiple-precision library.
mpfr: The main goal of MPFR is to provide a library for multiple-precision
mpfr: floating-point computation which is both efficient and has
mpfr: well-defined semantics. It copies the good ideas from the
mpfr: ANSI/IEEE-754 standard for double-precision floating-point arithmetic
mpfr: (53-bit mantissa).
mpfr:
# slackpkg install mpfr
After installing mpfr, I was able to compile the Nvidia module for the running kernel.
Finally, kdeinit4 was failing with an error about loading the libstreamanalyzer.so.0 and libqimageblitz.so shared libraries. I fixed this by installing strigi and qimageblitz.
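If it isn't obvious which packages provide missing libraries, checking the binary's shared library dependencies can point you in the right direction; a quick sketch, assuming kdeinit4 is in the path:
$ ldd $(which kdeinit4) | grep "not found"
Anything listed as "not found" points to a library that still needs to be installed.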
# slackpkg info strigi
PACKAGE NAME: strigi-0.6.3-i486-1.tgz
PACKAGE LOCATION: ./slackware/l
PACKAGE SIZE (compressed): 904 K
PACKAGE SIZE (uncompressed): 2570 K
PACKAGE DESCRIPTION:
strigi: strigi (fast and light desktop search engine)
strigi:
strigi: Strigi is a fast and light desktop search engine. It can handle a
strigi: large range of file formats such as emails, office documents, media
strigi: files, and file archives. It can index files that are embedded in
strigi: other files. This means email attachments and files in zip files
strigi: are searchable as if they were normal files on your harddisk.
strigi:
strigi: Homepage: http://strigi.sourceforge.net/
strigi:
# slackpkg info qimageblitz
PACKAGE NAME: qimageblitz-r900905-i486-1.tgz
PACKAGE LOCATION: ./slackware/l
PACKAGE SIZE (compressed): 82 K
PACKAGE SIZE (uncompressed): 240 K
PACKAGE DESCRIPTION:
qimageblitz: QImageBlitz (Graphical effect and filter library for KDE4)
qimageblitz:
qimageblitz: Blitz is a graphical effect and filter library for KDE4.0 that
qimageblitz: contains many improvements over KDE 3.x's kdefx library
qimageblitz: including bugfixes, memory and speed improvements, and MMX/SSE
qimageblitz: support.
qimageblitz:
# slackpkg install strigi qimageblitz
After those minor issues, I had KDE4 up and running with the latest Nvidia driver.
Posted by Nathaniel Richmond at 05:23 0 comments
Labels: linux, slackware, system administration
12 February, 2009
Snort 2.8.4 has a new DCE/RPC preprocessor
According to the VRT blog, Snort 2.8.4 is going to have a new DCE/RPC 2 preprocessor. The README.dcerpc2 was not included in the first release candidate for 2.8.4, but a follow-up on the snort-users list included it. The README covers a lot of information, and is definitely useful to understand the changes and configuration options.
These changes look to be an improvement, but they also have the potential to cause a lot of work and heartache in the short term for Emerging Threats and Snort users. From the VRT blog:
This preprocessor handles all the decoding functions that were previously taken care of using rules and flowbits in a lot of those rules. The upshot is that the number of netbios rules released for any vulnerability that can be exploited over dcerpc is going to be reduced greatly. The number of netbios rules previously released is also going to be reduced in a similar manner.
Snort users are going to have to make sure to properly configure the new version. I know from the snort-users mailing list that far too many people use old versions of Snort, so this change will cause them problems. Realistically, there is usually no reason to use older versions rather than the latest stable release.
The downside is that this functionality is only available in Snort 2.8.4 with the dcerpc2 preprocessor. There is no backwards compatibility. Also, a number of netbios rules will be deleted and replaced.
Emerging Threats distributes quite a few NetBIOS rules, so I'm sure the new preprocessor will also have an effect on Emerging Threats rules. I seriously doubt that either VRT or Emerging Threats wants to maintain a set of rules for 2.8.4 and above, plus another set for older versions. If I'm interpreting Nigel's blog post correctly, it seems that VRT is going to force the issue by only issuing new and updated NetBIOS rules for 2.8.4 and above. Assuming the improvements in the preprocessor are as stated, I think that is the right choice, but lots of users are going to complain.
Posted by Nathaniel Richmond at 04:01 0 comments
Labels: snort
08 February, 2009
Shmoocon 2009 Notes
I attended Shmoocon V over the weekend and had a good time as usual. There are always interesting people, the usual suspects, and some good talks. I think Shmoocon is still a great conference for the money. The bags given to attendees this year were by far the best of the four years I've been. There were some very good entries to the Barcode Shmarcode contest and I also saw some entertaining runs through the lockpick contest.
These notes do not include every time slot.
Friday
1500: Bruce Potter started the con with his opening remarks. He said that Shmoocon added around 500 tickets this year, bringing the total number of attendees above 1600. To have enough space, they had to add another room down the hall from the main area. The satellite room was out of sight of the main area, but not too difficult to find. Potter said that moving to the next larger space available instead of adding the one room would have been overkill and cost too much for the number of attendees.
One of the things I really like about Shmoocon is their involvement in charities. As usual, t-shirt proceeds went to charity, in this case the buyer's choice between the EFF and Johnny Long's Hackers for Charity. Shmoocon also had a raffle with proceeds going to Covenant House, as did the proceeds from the Hacker Arcade. It is nice to see Bruce and the other Shmoocon organizers promoting charity among their peers.
Potter often makes a small comparison between Shmoocon and other conferences, and this year he mentioned other conferences charging large amounts for training. Conversely, at Shmoocon, if you want to learn something in a non-classroom environment, you can try to participate in Shmoocon Labs, which builds a functional enterprise-like environment rather than just slapping together a simple wireless network. As an example, this year they had an open wireless network, a WPA-enabled wireless network, and a third using RADIUS. All attendees were welcome to walk through the room serving as the NOC and ask questions.
I really like The Shmoo Group's philosophy when it comes to running a conference. They try to be very transparent and take feedback, don't overcharge, and just generally want everyone to have a great time while still providing good technical content. It's a really attendee-friendly conference, right down to the 0wn the Con talk near the end.
Finally, Potter went on a rant about how security isn't working. Nothing to see here, move along. ;)
1600: The first technical talk I heard was Matt Davis and Ethan O'Toole presenting Open Vulture - Scavenging the Friendly Skies Open Source UAV Platform. Open Vulture is a software application and library designed to control unmanned vehicles. It was a neat talk though not a topic I know much about. Some of the possible uses for this would be controlling an unmanned vehicle to sniff wireless networks or take photos. They even have a GPS navigation module.
Saturday
Saturday is always the meat of the conference since it is the only full day and most people haven't been there long enough for the late nights to catch up to them.
1000: I enjoyed Matt Neely's presentation, Radio Reconnaissance in Penetration Testing. Matt had a lot of practical advice for radio reconnaissance, including recommending some relatively inexpensive hand-held scanners: the AOR 8200, the Uniden Bearcat BCD396T, and the Uniden Bearcat SC230, which also happens to be a good choice for NASCAR. He pointed out which features to look for in scanners, for example channel memory.
His anecdotes from penetration tests included sniffing wireless headsets from blocks away, even when the phone is hung up. Apparently, many wireless headsets transmit constantly even if on the cradle, effectively functioning like a bug for eavesdropping. He has also used video converters when sniffing video.
When testing a client's casino, he visually scouted the location to help identify their hand-held radios, and then was able to get information from casino security through their radio communications, including their radio link to the police. He got a ton of information useful for social engineering and more, like guard names, the dispatcher's name, times of shift changes, and the lingo used by the guards.
At another client site, he noticed people using wireless headsets and got those added to the rules of engagement. Once they were added, he was able to eavesdrop on calls to the help desk for password resets, people calling their voicemail, and found that the headsets would keep transmitting even when the phone was hung up. Matt was able to get passwords, voicemail passwords, and assorted Personally Identifiable Information (PII) that was sensitive or could be used for social engineering. Rules of engagement and adhering to applicable laws are very important if you don't want to end up in jail after eavesdropping on voice communications.
I also talked to Neely the next day regarding learning about RF for a personal project I am interested in. He was very helpful and nice, just like most Shmoocon presenters I've ever spoken with. Hopefully, I will have time to start learning more about RF and playing around with it for a "fun" project.
1100: Next, I attended Fail 2.0: Further Musings on Attacking Social Networks presented by Nathan Hamiel and Shawn Moyer. Their talk was fun and definitely relevant. Their main focus was that "social engineering + vulnerabilities in social networks = ROI". They pointed out a number of ways to manipulate various social networking sites, including malicious code like IMG to CSRF, CSS javascript hijacking, and request forgeries (POST to GET).
One good anecdote was getting permission from Marcus Ranum to make a phony profile in his name and then using it to socially engineer others, particularly security professionals, on a social networking site. They actually got Ranum's sister to attempt to contact them through the phony profile.
Hamiel and Moyer demonstrated technical tricks to force someone to "friend" you and also posting a comment with code that will force the user to log out, effectively denial-of-servicing the person off his or her own profile. They also told anecdotes about posing as a recruiter, joining groups on LinkedIn so they could more easily build up a lot of connections, then looking for candidates with government security clearances and getting many responses to their inquiries.
1400: I skipped the 1200 talks to have a long lunch with some friends, then attended Jay Beale's Man in the Middling Everything with the Middler. The talk had a very slow start because of audience interaction, particularly involving Shmooballs and launchers.
Jay Beale's Middler is a tool to help leverage man-in-the-middle attacks, including injecting javascript, temporary or permanent redirects, session hijacking, and more. It seems like a neat tool and was released to the public at his talk. Jay pointed out some dangers of mixed HTTP and HTTPS sites and their vulnerabilities to things like injected javascript, stored session keys, intercepted logout requests, and replacing HTTPS links in proxied pages with HTTP links. Although the Middler has some specific support right now for attacking social networking sites and wide area sites like Google/GMail and live.com, it uses a plugin architecture so we should expect to see more plugins targeting specific sites.
1600: I had heard a good talk last year by Enno Rey and Daniel Mende, and combined with my focus on network security monitoring I definitely was interested in their presentation this year, Attacking backbone technologies. Their main focuses this year were BGP, MPLS, and Carrier Internet, one example of the latter being Carrier Ethernet. They were careful to point out that you really have to be part of the "old boys club" of trusted backbone providers to successfully use most of their attacks and that not just anyone would have enough access to core backbones to download their tools and use them for successful penetration testing or attacks.
For BGP, they mentioned that it is mostly manually configured, making it susceptible to simple mistakes like the famed AS7007 incident or the YouTube/Pakistan blocking incident. Rey and Mende also did a live demonstration using their "bgp_cli" tool to inject routes, and showed how a captured BGP packet signed with MD5 can be used to crack the MD5 password offline rather than brute forcing directly against a router that limits the number of attempts per second.
Multiprotocol Label Switching (MPLS) is deployed on carrier backbones and relies on a trusted-core assumption, the idea being that attacks from outside the core are not possible. Rey and Mende demonstrated their "mpls_redirect" tool to modify MPLS Layer 3 VPN labels and redirect traffic. This is possible in part because carrier insiders are trusted, and it can be used to send traffic to different customer networks. Rey had a great line where he called it "branching" the traffic because, due to his thick German accent, he was told he should not use the word "forked" (or "fokt," as it sounded when he said it).
These two definitely are in a position to test the security of major providers from an insider perspective, which is not the norm, and they do a good job explaining some of the issues they find.
1700: David Kennedy's Fast-Track Suite: Advanced Penetration Techniques Made Easy was probably the most crowded presentation I attended. One suggestion I have for getting a good seat at Shmoocon is to plan your schedule ahead and note the presentations that are likely to be crowded. If you are not changing rooms, do not get up and lose your seat, because rooms definitely end up standing room only sometimes.
Kennedy was a good presenter. When you have fun on the podium, it definitely shows and keeps everyone attentive. He had a lot of audience participation as he showed a slide and said "Let's Pop a Box" each time before he used Fast-Track to own a system. When he started to forget "Let's Pop a Box," someone from the audience would invariably ask him if he forgot something or shout "Pop a Box!" as Kennedy did a face-palm.
Fast-Track itself is obviously pretty neat. He showed a variety of automated attacks against different targets that most often ended with a reverse shell back or reverse VNC back to his attacking system. He also talked about his evasion technique using Windows debug to download his stager, which is actually just a version of Windows debug without the 64k size limit.
Fast-Track 4.0 includes some new features like logging and payload conversion so you can load your own payloads to deliver. Although Fast-Track has a smaller list of exploits than Metasploit, Kennedy said that he strives to make them available across as many OS versions as possible. Version 4.0 also includes a mass client attack using ARP poisoning combined with emailed links to targets. The malicious page displays a generic "loading please wait..." message as it launches a multitude of attacks, but Kennedy said that 4.1 will also include browser profiling for more targeted exploits. One really nice feature is the auto-update, which updates the multitude of tools included in Fast-Track. Although I haven't looked into it yet, I did wonder whether it has any SNMP attacks; an SNMP auto-own attack would be a neat and not too complicated addition if it's not there already.
Sunday
1000: I really feel for anyone in this time slot. After a weekend of hacking and partying, the number of people in any room is much smaller than the number of people at 1000 on Saturday. The numbers increase as people drag themselves into the talks through the hour.
I attended Re-Playing with (Blind) SQL Injection by Chema Alonso and Palako during this hour. I was starting to think I had made the wrong choice because they started off slow and quite dry, but maybe they were among those recovering from the previous night's festivities. By the second half, they started to have a little fun and had some funny moments, including a slide with, "Yes, we can!" Another funny moment was when they found a database username length of two and referred to it as "the most famous Microsoft SQL user..."
Although we've all probably seen or read about blind SQL injection before, they did have some interesting techniques and used their Marathon Tool to ease the tedious nature of blind SQL injection. One thing I liked was their method of using timing to separate a True answer page from a False answer page when there is no visible or code difference. Most SQL engines support slightly different ways of introducing delays for time-based blind SQL injection, so a response timed above a certain threshold can be considered true. Even engines that don't include time-delay functions can be leveraged by running a "heavy" query only if a "light" query first returns true. An example of a heavy query that would slow the response after a successful light query is one with multiple cross-table joins.
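As a rough, hypothetical illustration of the timing idea (made-up URL, MySQL-style syntax; other database engines need different delay or heavy-query constructs), the true condition should come back noticeably slower than the false one:
$ time curl -s -o /dev/null "http://target.example/item.php?id=1'+AND+IF(1=1,SLEEP(5),0)--+-"
$ time curl -s -o /dev/null "http://target.example/item.php?id=1'+AND+IF(1=2,SLEEP(5),0)--+-"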
Alonso and Palako also did a good job answering questions. They definitely seemed more comfortable by the end of their presentation.
1100: Chris Paget is a very entertaining presenter and clearly had fun showing off his RFID reader during EDL Cloning for Under $250. He demonstrated how easy it is to read, clone, and write RFID cards created as part of the Western Hemisphere Travel Initiative. By design, these cards are supposed to be readable from 30 feet, but it is trivial to read them at more than 200 feet, and much longer distances, possibly around half a mile, should be possible. The cards also have no encryption or authentication.
Paget was able to buy an enterprise-level card reader by Motorola on eBay. Although he needed to perform some repairs on the RFID reader, the whole sniffing setup was only around $250. The card reader has no real security mechanisms for logon and listens on port 3000.
There is no federal anti-skimming law to prevent RFID skimming or sniffing, though California and Washington do have state laws. Paget was able to grab a lot of information through war driving with his setup and pointed out that correlation means the cards can provide more than just an anonymous number. For instance, if you detect the same card tag twice, you could compare it to photos to see whose face you saw twice. You could also correlate against other data, like credit cards containing RFID, to figure out which data belonged to which person.
Eventually, as RFID cards become more common, this could present more serious issues. Someone could collect large amounts of RFID card data until finding a cardholder who looks similar enough to use his identity, or terrorists could use the cards to identify targets in a crowd.
Paget stated that the supposed purpose of the cards was to enhance security, which clearly is a failure, and to speed border crossings, which has also been a failure since users still have to present their cards directly. Paget believes that WHTI is broken but that RealID could be an alternative if it were revamped to fix all of its serious problems. Ideally, among other recommendations, he advocates a contact smartcard rather than one that can be read remotely.
Another Shmoocon in the books. Thanks to the Shmoo Group, speakers and attendees for a good time.
Posted by Nathaniel Richmond at 16:09 0 comments
Labels: exploits, shmoocon, technology, wireless