29 December, 2008
IE exploits on the move
It looks like the previously mentioned exploits for the latest IE vulnerability, and more, have moved to an additional domain. Everyone is probably seeing SQL injection attempts with obfuscated code similar to before, except now the referenced domain is mcuve.cn. As far as I can see, the site is hosting the same code that was hosted on the 17gamo site. A quick Google search shows that a few sites have already been hit (and at least one other person has already blogged about it).
Posted by Nathaniel Richmond at 15:33 0 comments
Labels: javascript, malware
22 December, 2008
Answers to NIDS management
C.S. Lee had a post called NIDS: Administration, Management & Provisioning that asked some good questions about managing large numbers of NSM sensors. I have managed large numbers of sensors in the past, so I thought I would take a shot at describing some of the ways I eased management, as well as other methods I still look forward to trying. Since my post is long, I thought it better to write it here than stuff it all into a comment on geek00l's blog.
A couple of things to remember: first, there are almost always ways to improve complex systems management. Second, "perfect" is the enemy of "good enough". At some point you hit diminishing returns, and the cost of further improving the management or administration of the systems may not be worth the reward.
1. What tools do you use to manage all the NIDS, and why you choose them over others?
- For example ssh, however I would like to know more about tools you use to manage massive NIDS instead of one, and the reason you choose it.
SSH is obviously going to be one way to login to systems and do certain things. If it is something that you must do consistently, then scripts or other system management methods that I will discuss later are likely more appropriate. When using SSH for a large number of systems, don't forget that SSH keys and ssh-agent are your friends. With ssh-agent, you can login to all your systems with your SSH key after entering your passphrase only once. This simplifies running scripts that require logging into or copying files to each system.
Also, when I talk about using SSH along with scripts, I am including programs that support SSH as the transport protocol, for example rsync and rdist (see the sketch below). Expect scripts are also a common way to roll your own centralized management of systems, but for C.S. Lee's 50+ system question, a dedicated application seems to be a better answer than only using scripts and logging in manually.
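As a rough sketch of the scripted approach (every path, the user name, the sensors.txt file, and the restart command are placeholders that will vary by site and platform), pushing a rules directory to a list of sensors can be as simple as:
#!/bin/sh
# Hypothetical push script: load an SSH key once, then copy rules to
# each sensor in a list and restart that sensor's Snort process.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa        # passphrase entered only once
for sensor in $(cat sensors.txt); do
    rsync -az -e ssh /nsm/rules/ "sensoruser@${sensor}:/nsm/rules/"
    ssh "sensoruser@${sensor}" 'sudo /etc/init.d/snortd restart'
done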
2. How do you perform efficient administration securely? For examples,
- System changes/updates
- NIDS tools' changes/updates
- NIDS rules' changes/updates
- NIDS Configuration files' changes/updates
- NIDS Policies' changes/updates
I think these types of changes and updates will require a combination of tools, and the tools could depend in part on the operating system(s). Supporting multiple operating systems also makes management more complex, so ideally you want to standardize on an operating system and keep the release versions identical whenever possible.
One thing I've mentioned in the past for system management is puppet.
Puppet lets you centrally manage every important aspect of your system using a cross-platform specification language that manages all the separate elements normally aggregated in different files, like users, cron jobs, and hosts, along with obviously discrete elements like packages, services, and files.
Although I haven't yet had the chance to use puppet, it seems to have a good reputation. Another option is cfengine, though most people I have talked to that have experience with both seem to prefer puppet. Change management of configuration files, cron scripts, and other files like NIDS rules can definitely be handled by one of these central management tools.
Another thing to consider is whether your operating system or its vendor includes anything for these tasks. For instance, Red Hat Network Satellite can handle a lot of centralized management, including package management. NIDS/NSM sensors often need configuration changes from the standard distribution package for certain software, so being able to roll your own packages and push them to sensors automatically can drastically reduce system management overhead.
Although puppet seems to handle users, I've also written three posts about OpenLDAP for centralized management of users and groups [1, 2, 3]. With most current Linux or BSD systems, once LDAP is configured it is pretty easy to manage users, groups, and even sudo. Since I've worked in environments with not just large numbers of Linux systems but also large numbers of users, LDAP was definitely useful. With a small number of users on a large number of systems, I'm not sure it would be needed.
For the security requirement, any good centralized management system had better have some sort of authentication and encryption. Puppet supports a CA and SSL, cfengine supports RSA and Blowfish along with public-private keys, and Red Hat Satellite supports SSL and GPG. Other basics, including host-based firewalls like iptables, can also be useful for limiting exposure and access from the network.
Truthfully, I have mostly relied on home-grown scripts combined with SSH, rsync and/or rdist to push files or commands to Linux systems. However, with the number of systems I have managed, the up-front cost of implementing something like puppet, cfengine, or Satellite would be worth the long-term benefits.
3. Which method you like to use in order to manage them, and why? For example,
- Server pushes rules update to all the sensors (Push)
- Sensors pull the rules update from server (Pull)
I think this question is largely moot because the answer will usually be determined by the management tools you are using. For instance, Red Hat runs a daemon on the individual systems that will check in either with Red Hat Network or with your local Satellite server.
When using scripts, I will usually use a push, simply because I like to login to one system and then run the script that connects to all the other systems to copy files or run a command.
3. NIDS health monitoring and self-healing
- I'm talking about something like this: if the system is in an inconsistent state, operators will be notified. If a certain process dies, it should recover by itself.
The obvious answer to monitoring processes is something like Nagios, an open source solution. Nagios can also handle restarting services or processes through event handlers, as in the sketch below. Realistically, any software that monitors services should have the ability to restart those services if needed. Another example of process monitoring and restarting is daemontools, but it is fairly limited and does not really meet enterprise monitoring needs. There are additional choices of monitoring software as well.
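To give a rough idea of what a Nagios event handler can look like (the service being restarted and the assumption that the command definition passes $SERVICESTATE$ and $SERVICESTATETYPE$ as the two arguments are mine, not from any particular deployment):
#!/bin/sh
# Hypothetical event handler: restart the sensor process when a service
# check reaches a HARD CRITICAL state.
STATE="$1"
STATETYPE="$2"
if [ "$STATE" = "CRITICAL" ] && [ "$STATETYPE" = "HARD" ]; then
    /etc/init.d/snortd restart
fi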
Posted by Nathaniel Richmond at 06:21 0 comments
Labels: linux, scripts, system administration
13 December, 2008
IE vulnerability just one of many
The latest IE "0day" is making big news. The bulletin now includes IE6, IE7, and IE8 beta. Looking at CVE-2008-4844 will give a decent round-up of related links. Shadowserver has a list of domains known to be using exploits that attack this vulnerability. Microsoft has some workarounds to help mitigate the vulnerability.
One thing to remember is that many malicious sites do not rely on one vulnerability. Don't let one high-profile vulnerability and news of exploits in the wild make you forget about the big picture. If a site is hosting exploits against this IE vulnerability, it is very likely the site will be hosting additional exploits.
One example is one of the highest-profile domains hosting exploits, 17gamo[dot]com. The SQL injection attacks referenced on SANS are injecting a URI containing this malicious domain. As mentioned in the SANS diary, the javascript in the injected URI leads to additional files on the malicious site. Although the SANS diary specifically mentions the IE exploit, it doesn't mention the other exploits.
Please remember the following site is malicious!
$ wget -r http://www.17gamo.com/co
After downloading the content, I change to the correct directory and see what is there:
$ ls co/
14.htm flash.htm ihhh.html nct.htm real.htm swfobject.js
fhhh.html ie7.htm index.html office.htm real.html
The index file tries to open iframes containing 14.htm, flash.htm, ie7.htm, nct.htm, office.htm, real.htm and real.html. The flash.htm file then references ihhh.html and fhhh.html. We already know from the SANS diary what ie7.htm does.
It was nice of the file authors to use relevant names for some of the files. The flash.htm code references both ihhh.html and fhhh.html. Both files look like they will serve up a Flash exploit of varying names depending on what version of the Flash Player is detected. Downloading a couple of the SWF files shows they are the same size, but diff shows they are not identical. They all seem to produce similar results on VirusTotal.
The office.htm file appears to be an exploit targeting CVE-2008-2463, a MS Office Snapshot Viewer ActiveX vulnerability. If vulnerable, this will lead to the download of the same win.exe mentioned in the SANS diary and it looks like it will attempt to write the executable to the 'Startup' folder for All Users.
I haven't looked at real.htm, real.html, nct.htm or 14.htm yet.
This is all just to point out that most malicious sites these days will run a number of attacks against web clients, so just because one attack failed doesn't mean the others did too. I saw a system get hit by the ie7.htm exploit without immediately downloading the win.exe from steoo[dot]com, yet it did run one of the malicious SWF files.
Posted by Nathaniel Richmond at 13:35 0 comments
Labels: malware
04 December, 2008
ArsTechnica Ovatio Awards posted
ArsTechnica has some year-end awards, the Ovatio Awards, online. With any type of awards like this, there will be plenty of arguments about what is missed and what should not be on the list.
I find it interesting that nothing in the Year's Biggest Stories section is directly related to security. If I were making a list (maybe I will?), I can think of a number of important stories related to security that had or will continue to have a huge impact. They did choose "cloud computing" as Buzzword of the Year, which I think is a good choice and is definitely a big topic on security blogs recently.
I can't say I understand the choice of PlayStation 3 as Product of the Year, but I haven't been a serious gamer for years, which may explain why I own a Wii. In addition to being launched more than two years ago, the PS3 doesn't seem particularly innovative and the price makes it a tough buy for a lot of people. Still, ArsTechnica is fairly focused on gaming so it may make some amount of sense. It will definitely get a lot of page views and some comments in their forum!
The Hardware Trend of the Year, netbooks, is definitely a good pick. Really, it seems like this is something customers have wanted for years and companies just didn't realize there would be a big market even where traditional laptop sales were strong. I know that my laptop philosophy puts light weight higher on the priority list than performance. I don't mind a laptop as a desktop replacement around the house, but I sure don't want to travel with it if I have a lighter option.
The pick of OpenSUSE as a Linux Distro of the Year surprises me, but that may just show that I haven't used it in years. It may not have been part of their criteria, but if I had to pick a distribution with the most community and end-user impact, I would definitely have to say Ubuntu. There is no other distribution besides possibly Red Hat that has had as much name recognition among my non-technical friends and acquaintances. My 91-year-old neighbor even tried running Ubuntu on a live CD for a while because he was thinking of ditching Windows.
I enjoy reading year-end articles like the one posted at ArsTechnica because it gets me thinking about the topics I thought were important or significant over the past year.
Posted by Nathaniel Richmond at 05:40 0 comments
Labels: cloud, linux, technology
20 November, 2008
Commodity malware versus custom exploits
In my post about e-card Trojans, I mentioned that I hoped to flesh out my thoughts on malware as compared to more customized exploits. As we all should know from numerous stories, commodity malware is big business. Malware is increasingly used to steal information to turn a profit, and is likely being used to target information that is valuable in other ways. So my question is, in a world where the U.S. military has to ban USB drives to combat malware, how much trouble are customized private exploits actually worth?
There are certainly advantages to customized private exploits, but when a spammer only needs one response for every 12.5 million emails sent to be profitable, it seems that the economics of the situation may favor the lower cost of slightly modifying malware to bypass anti-virus software and then blasting away with malicious emails, advertisements, and other links.
A customized exploit that is only being used by a small number of people should obviously be more difficult to detect. However, when anti-virus and traditional IDS rely so thoroughly on signatures of known activity, the question is really about how difficult the attacker needs detection to be. In many cases, it may not be worth using a skilled attacker to craft a specific exploit when said attacker could be increasing the efficiency of more voluminous attacks.
Of course, this is not really an either-or situation. Both types of attacks can be used effectively, and when combined they are probably both more effective. Sow mass confusion and panic with widespread malware attacks while performing more targeted attacks for particularly desirable information. Those playing defense will likely be busy scurrying after the malware while the targeted attacks fly in under the radar, especially in these economic times when security operations may be suffering from budget cuts.
I also don't mean to downplay the skill it takes to enumerate network services and write a custom exploit for one or more of those services on the spot. Relatively few people can do that (I am certainly not one of them), and in many cases it is virtually undetectable. At the same time, I often feel that those talented exploit writers and penetration testers give too little credit to the effectiveness of common malware. It seems to me that commodity malware has become quite effective at generating revenue and stealing information.
Posted by Nathaniel Richmond at 19:21 0 comments
Labels: malware
17 November, 2008
'Tis the season for E-card Trojans
IP addresses and hostnames have been changed. Anyway, this is from a few days ago and it looks like the malware is no longer on the server. A user received an email with a link to a supposed holiday card...
Src IP: 10.1.1.18 (Unknown)
Dst IP: 192.168.31.250 (Unknown)
Src Port: 1461
Dst Port: 80
OS Fingerprint: 10.1.1.18:1461 - Windows XP SP1+, 2000 SP3 (2)
OS Fingerprint: -> 192.168.31.250:80 (distance 7, link: ethernet/modem)
SRC: GET /ecard.exe HTTP/1.0
SRC: Accept: */*
SRC: Accept-Language: en-us
SRC: User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; InfoPath.1; .NET CLR 1.
1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)
SRC: Host: fakeurl.info
SRC: Connection: Keep-Alive
SRC:
SRC:
DST: HTTP/1.1 200 OK
DST: Date: Wed, 12 Nov 2008 11:41:08 GMT
DST: Server: Apache/1.3.36 (Unix) mod_jk/1.2.14 mod_auth_passthrough/1.8 mod_log_bytes/1.2 mod_b
wlimited/1.4 PHP/4.3.9 FrontPage/5.0.2.2635.SR1.2 mod_ssl/2.8.27 OpenSSL/0.9.7a
DST: Last-Modified: Wed, 12 Nov 2008 10:14:10 GMT
DST: ETag: "944cd-8688-491aac72"
DST: Accept-Ranges: bytes
DST: Content-Length: 34440
DST: Content-Type: application/octet-stream
DST: Connection: Keep-Alive
DST:
DST:
DST: MZ..............@.......@........L.!........................@...PE..L...UB.I...............
..........p................@.......................... .........................................
................................................................................................
.............................UPX0.....p..............................UPX1.......................
.........@...UPX2................................@..............................................
3.03.UPX!
The above is part of a Sguil transcript. I downloaded the file, took a brief look, then submitted it to VirusTotal.
$ cd malware
$ wget http://fakeurl.info/ecard.exe
---snip---
$ strings -n 3 -a ecard.exe | less
UPX0
UPX1
UPX2
3.03
UPX!
---snip---
XPTPSW
KERNEL32.DLL
LoadLibraryA
GetProcAddress
VirtualProtect
VirtualAlloc
VirtualFree
ExitProcess
This is pretty run-of-the-mill malware that will get detected by Emerging Threats SID 2006434 when the executable is downloaded, but I guess people still fall for it. As Shirkdog so eloquently stated:
Do not click on unsolicited URLs, including those received in email, instant messages, web forums, or internet relay chat (IRC) channels.
People will never get it through their skulls that they SHOULD NOT click links. It is the reason we are all employed, because the user will always be there.
It's always amazing how few anti-virus engines will catch known malware. A system compromised this way also brings to my mind a comparison between common malware and novel or custom exploits that are not widely available. I plan to flesh out thoughts comparing the two at a later date.
Posted by Nathaniel Richmond at 09:07 0 comments
13 November, 2008
OpenLDAP Security
Since I have been doing a lot of system administration blogging lately and not much on security, I decided I should post something related to security even if it is still in reference to system configuration and administration. Despite being years old, many of the pages I found about LDAP and security were still pertinent, for example Security with LDAP. The OpenLDAP documentation has a whole section titled Security Considerations in addition to other sections throughout that address security.
The TLDR version of this post is that some of the defaults for OpenLDAP may not be secure, it is easy to make other configuration mistakes, and you should make sure to examine configurations, permissions, ACLs, and schemas with security in mind. Different distributions can have different defaults. If you are using LDAP for account information in particular, you need to be careful.
I will go over some specifics that I noticed, but I certainly won't cover everything. OpenLDAP can be configured to get a similar level of protection for account information compared to the standard Unix/Linux shadow files and actually makes some security-related tasks easier for an administrator, such as disabling accounts or enforcing password policies.
Encryption of Data and Authentication
The first and most obvious problem is that the default OpenLDAP configuration does not encrypt network activity. Whether you're using LDAP for account information or not, it is very likely that most people will not want their LDAP traffic going over the network unencrypted. OpenLDAP has support for TLS that makes it relatively easy to implement. Also important to note is that, though network activity is not protected by default, the minimum recommended authentication mechanism is SASL DIGEST-MD5.
The DIGEST-MD5 mechanism is the mandatory-to-implement authentication mechanism for LDAPv3. Though DIGEST-MD5 is not a strong authentication mechanism in comparison with trusted third party authentication systems (such as Kerberos or public key systems), it does offer significant protections against a number of attacks.
Another option, Kerberos, is also "highly recommended" for strong authentication services.
Passwords
When using OpenLDAP with nss_ldap and centralized accounts, if you're storing passwords in LDAP they should be stored as hashes, not plain text. This seems obvious, but it's important to understand how to generate the hashes with the 'slappasswd' command and then use 'ldapadd', 'ldapmodify' or a GUI LDAP management tool to put the hashes into LDAP. This is done when creating or altering accounts. The 'passwd' command will hash passwords automatically when users change their own passwords.
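As a quick illustration of that workflow (the DN matches my test entries from the earlier posts, and the hash shown is just a placeholder), generating a hash and loading it into an existing entry looks something like this:
$ slappasswd -h '{SSHA}' -s 'temporary_password'
{SSHA}hash_output_goes_here
$ cat pwchange.ldif
dn: cn=N R,dc=security,dc=test,dc=com
changetype: modify
replace: userPassword
userPassword: {SSHA}hash_output_goes_here
$ ldapmodify -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -f pwchange.ldif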
Different distributions have different default ACLs, but RHEL, for example, allows anonymous reads of LDAP by default, and another sample ACL included in the openldap-servers package allows authenticated users read access to everything. If you're going to store account information and passwords with LDAP, the access controls need to be changed to prevent both anonymous and authenticated users from viewing password hashes. As we all should know, all it takes to crack a password hash is an appropriate tool and processing time. Depending on the attacker's computing power, the password hashing algorithm and the actual password, cracking passwords can be extremely fast or very slow.
OpenLDAP supports a number of hashing algorithms and the default is to use {SSHA}, which is SHA-1 with a seed.
- -h scheme
- If -h is specified, one of the following RFC 2307 schemes may be specified: {CRYPT}, {MD5}, {SMD5}, {SSHA}, and {SHA}. The default is {SSHA}.
Note that scheme names may need to be protected, due to { and }, from expansion by the user's command interpreter.
{SHA} and {SSHA} use the SHA-1 algorithm (FIPS 160-1), the latter with a seed.
{MD5} and {SMD5} use the MD5 algorithm (RFC 1321), the latter with a seed.
{CRYPT} uses the crypt(3).
{CLEARTEXT} indicates that the new password should be added to userPassword as clear text.
ACL Problems
There can also be problems related to Access Control Lists for slapd. Red Hat's default configuration allows anonymous reads, while Ubuntu's slapd.conf seems to have a much more secure default ACL. Below is RHEL5's default, which allows anonymous and user reads but only lets the rootdn write.
access to * by * read
The following is a sample configuration that is also included in the default slapd.conf on RHEL5, though it is commented out in favor of the above ACL. The danger with the following is that users can still read everything as well as write their own entries.
access to *
by self write
by users read
by anonymous auth
Allowing users 'self write' to change their own entries is obviously a big problem if you're using LDAP for account information. Any user can change his own uidNumber or gidNumber to become uid 0, gid 0, gid 10 (wheel), etc. Not good!
To authenticate with nss_ldap, OpenLDAP must allow some sort of read access. Without anonymous reads, users can't authenticate unless there is a proxy user with read access. The proxy user's binddn and password must be in /etc/ldap.conf and /etc/openldap/ldap.conf in plain text and the files are world readable. This is somewhat mitigated because the ldap.conf files can only be accessed by authenticated users logged into the system, so if an attacker already gained access to the system the proxyuser password is a fairly trivial concern in the big scheme of things.
Another file with a plain text password is /etc/ldap.secret. This file must contain the rootdn password in plain text, but is again somewhat mitigated with file permissions. The permissions for the file must be set to 600 so only root can read the file, so the obvious way an attacker will get the rootdn password from the file is if he already has root privileges on that particular system. However, with the rootdn password the attacker could wreak havoc on all the LDAP entries, including all the account information stored in LDAP.
To prevent users from viewing the password hashes of others, two things are required. First, change the ACL in slapd.conf. Something like this would allow users to change their own passwords, but not any other attributes and not view other users' hashes. You can hide additional attributes from authenticated users if needed.
access to attrs=userpassword
by anonymous auth
by self write
by * none
access to *
by self read
by users read
by anonymous auth
Another important thing to do is put users in the objectclass 'shadowAccount', which is in the NIS schema along with the objectclass 'posixAccount' that stores most account information. This will prevent password hashes from displaying when using 'getent passwd'. This is similar to shadow passwords on the local system, which move the password hashes from the world-readable /etc/passwd to /etc/shadow, which is only readable by root. The 'getent' commands will fetch both local and LDAP information.
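A quick way to check both changes from a client, using my test account from the earlier posts as the example uid, is to confirm that neither getent nor an anonymous search returns a hash:
# The second field of the getent output should be 'x', not a hash.
$ getent passwd nr
# An anonymous search should return the entry without a userPassword attribute.
$ ldapsearch -x -LLL -b "dc=security,dc=test,dc=com" "(uid=nr)" userPassword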
Password Policy Overlay
The password policy overlay for OpenLDAP allows password policies to be enforced on OpenLDAP accounts. Quoting from the documentation:
The key abilities of the password policy overlay are as follows:
- Enforce a minimum length for new passwords
- Make sure passwords are not changed too frequently
- Cause passwords to expire, provide warnings before they need to be changed, and allow a fixed number of 'grace' logins to allow them to be changed after they have expired
- Maintain a history of passwords to prevent password re-use
- Prevent password guessing by locking a password for a specified period of time after repeated authentication failures
- Force a password to be changed at the next authentication
- Set an administrative lock on an account
- Support multiple password policies on a default or a per-object basis.
- Perform arbitrary quality checks using an external loadable module. This is a non-standard extension of the draft RFC.
Particularly for people that have specific company requirements for password policies, this overlay will do just about everything except complexity checking. For complexity checking, it's fairly easy to enable and configure pam_cracklib on each client. As far as I know, since only the hash crosses the wire when authenticating or changing passwords, it is not possible to centrally enforce complexity requirements.
Personally, for password expiration I prefer not to allow any grace logins, thereby enforcing a lockout if the password expires. As long as the policy is set to provide ample warning, this shouldn't cause problems. Consider if you allow some number of 'grace' logins after the password expires and for some reason a user does not login for an extended period of time. The account could conceivably remain active for as long as it takes to brute force the password rather than being disabled once the password expires.
Another password policy overlay feature is temporary lockouts after failed authentication. For instance, you could set a lockout after x failed login attempts within y seconds, with the lockout lasting z seconds. I don't know the maximum number of seconds the overlay or OpenLDAP will accept, but fields like pwdMaxAge can definitely range from zero up to months' worth of seconds if needed.
When enabling 'pwdReset' to require an immediate password change, I eventually found that the following line must be uncommented in /etc/ldap.conf on the clients.
pam_lookup_policy yes
After doing this, you can set 'pwdReset: TRUE' when generating temporary passwords, then the user will be required to change passwords immediately when logging in.
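To give an idea of what a policy looks like, here is a sketch of a default policy entry in LDIF; the DN, the values, and the assumption that ppolicy.schema and the 'overlay ppolicy' directive are already configured in slapd.conf are illustrative rather than taken from my configuration:
# ppolicy.ldif -- hypothetical default password policy
dn: cn=default,ou=policies,dc=security,dc=test,dc=com
objectClass: top
objectClass: person
objectClass: pwdPolicy
cn: default
sn: default
pwdAttribute: userPassword
pwdMaxAge: 15552000
pwdExpireWarning: 1209600
pwdGraceAuthNLimit: 0
pwdInHistory: 6
pwdLockout: TRUE
pwdMaxFailure: 5
pwdFailureCountInterval: 300
pwdLockoutDuration: 900
pwdMustChange: TRUE
Loading it is the same sort of ldapadd invocation used elsewhere in these posts. A pwdMaxAge of 15552000 seconds is 180 days and a pwdExpireWarning of 1209600 seconds is 14 days, roughly matching the shadowMax and shadowWarning values I used previously, while pwdGraceAuthNLimit of 0 disallows grace logins as discussed above.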
From my testing, the password policy overlay is definitely superior to the shadow password options within the nis.schema that comes with OpenLDAP. The biggest problem with the password policy overlay is that some distributions may not include it in their OpenLDAP server package, which means compiling with support for the overlay instead of using a standard OpenLDAP package from your distribution of choice.
Post Script
I have two previous posts about OpenLDAP. I would love to get any comments on what could be added or any mistakes that could be corrected.
Setting up OpenLDAP for centralized accounts
OpenLDAP continued
Posted by Nathaniel Richmond at 05:01 6 comments
Labels: encryption, ldap, linux, system administration
01 November, 2008
Shmoocon 2009 tickets
Tickets for Shmoocon 2009 went on sale at noon EDT today. All the "Early Bird" tickets for $100 went quickly but there are still some "Open Registration" for $175 and "I Love Shmoocon" for $300.
I like the way Shmoocon sells their tickets using three different rounds with three different price points in each round. Here is their chart with dates of sales. Noon is always the start time. Shmoocon itself is February 6 - 8.
Date Tickets to be Sold | Early Bird | Open Registration | I love ShmooCon
---|---|---|---
November 1, 2008 | 200 | 300 | 10
December 1, 2008 | 200 | 300 | 20
January 1, 2009 | 100 | 100 | 20
One really cool contest I noticed this year is Barcode Shmarcode. The Shmoocon ticket has always been simply a barcode they email to you after you purchase. This year, they want people to modify their barcodes to be unique and awesome while still scanning properly. They'll grade on originality, best use of theme, best use of materials, and most error free scan. I look forward to seeing the results.
Posted by Nathaniel Richmond at 13:45 0 comments
29 October, 2008
Snort shared object rules with Sguil
More and more Sourcefire VRT Snort rules are being released in the shared object (SO) rules format. As examples, Sourcefire released rules on 14 October that were designed to detect attacks targeting MS08-057, MS08-058, MS08-059, MS08-060, MS08-062, MS08-063 and MS08-065 vulnerabilities. On 23 October, Sourcefire released rules related to MS08-067, a vulnerability that has garnered a lot of attention.
It looks like all these recently released rules are SO rules. It is easy to tell a SO rule from a traditional Snort rule using the Generator ID, because SO rules use GID 3 while the old rule format uses GID 1.
Because Sourcefire is issuing all these rules in SO format, if you don't want to miss rules for recent vulnerabilities it is definitely important to test and implement the new format and keep the rules updated. When I implemented the rules in production, I used How to use shared object rules in Snort by Richard Bejtlich as a guide for adding shared object rules to Snort. It seems fairly complete and I was able to get the precompiled rules working on RHEL/CentOS.
Unfortunately, getting the rules working and updating them is not as simple and certainly not identical to updating rules that use the old format. Oinkmaster will edit the stub files to enable and disable the SO rules, but it requires running Oinkmaster a second time with a separate configuration file. From the Oinkmaster FAQ:
Q34: Can Oinkmaster update the shared object rules (so_rules)?
A34: Yes, but you have to run Oinkmaster separately with its own
configuration file. Copy your regular oinkmaster.conf file
to oinkmaster-so-rules.conf (or create a new one) and set
"rules_dir = so_rules". Then run Oinkmaster with
-C <path to oinkmaster-so-rules.conf> and use an output directory
(-o <dir>) different than your regular rules directory. This is
important as the "rules" and "so_rules" directories contains
files with identical filenames. See the Snort documentation on how
to use shared object rules. The shared object rules are currently
disabled by default so you have to use "enablesid" or "modifysid"
to activate the ones you want to use.
To update my rules, I first download the rules and run Oinkmaster to update my GID 1 rules. Then I extract the so_rules so I can run Snort with the '--dump-dynamic-rules' option. This will generate the required stub files, but they will not contain any changes you made to enable or disable specific rules. To change which are enabled or disabled, I run Oinkmaster again with the oinkmaster-so-rules.conf.
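Put together as a script, that sequence looks roughly like the following; all of the paths and configuration file names are placeholders, the so_rules package is assumed to already be unpacked for your platform, and you should check your Snort version's --help output for the exact form of the dump option:
#!/bin/sh
# Hypothetical SO rule update sequence for a sensor.
RULEDIR=/etc/snort/rules
SODIR=/etc/snort/so_rules
# Update the regular GID 1 rules.
oinkmaster.pl -C /etc/oinkmaster.conf -o "$RULEDIR"
# Generate stub files for the precompiled shared object rules.
snort -c /etc/snort/snort.conf --dump-dynamic-rules="$SODIR"
# Apply enablesid/disablesid changes to the SO rule stubs.
oinkmaster.pl -C /etc/oinkmaster-so-rules.conf -o "$SODIR"
# Restart Snort so the updated rules and stubs are loaded.
/etc/init.d/snortd restart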
For Sguil to properly show the rule msg in the 'Event Message' column on the client, you must generate a new gen-msg.map file. David Bianco gave me a couple shell commands, one of which uses 'create-sidmap.pl' from the Oinkmaster contrib directory, to update the gen-msg.map file.
create-sidmap.pl ruledir/so_rules | perl -ne 'm/(\d+) \|\| ([^\|]+) \|\|/g; print "3 || $1 || $2\n";' > ruledir/so_rules/gen-msg.map
cat ruledir/gen-msg.map ruledir/so_rules/gen-msg.map | sort -n | uniq > ruledir/so_rules/gen-msg-merged.map
After you check that the new file is correct, it can be moved to overwrite the old gen-msg.map. The alert name will be found based on the GID and SID, so without the update you would only see the numerical GID and SID when viewing alerts in Sguil instead of the actual text of the alert message.
One last thing to do is get the "Show Rule" checkbox in Sguil to show the rule stub when viewing SO rules. A quick temporary fix is fairly simple. Either rename and place all the rule stub files in the directory on your sguild server with all the other rules, or use a symbolic link. Either way, you have to use alternate names since the SO rules stub files use the same naming scheme as the standard rule files.
Once that is done, it just requires a simple change to 'extdata.tcl' in the 'lib' directory of the Sguil client. Change the following line:
if { $genID != "1" } {
to
if { $genID != "1" && $genID != "3" } {
David Bianco pointed out that there is nothing to prevent SO rules and standard rules from sharing a SID since they have a different GID, so the above solution could get the wrong rule message if there are identical SIDs. He said he will probably add code to look in the ruledir for GID 1, and look in ruledir/so_rules for GID 3. This is definitely the better way and not too complicated, so hopefully Sguil CVS will have the code added in the near future.
06 October, 2008
OpenLDAP continued
After initially configuring, setting up and testing LDAP, I still had a lot to resolve. Issues included password maintenance, 'sudo' only looking in the local sudoers files, adding a sudoers file to LDAP, setting up groups, replication of the LDAP database, and configuring fail-over.
Passwords
OpenLDAP has a default schema, nis.schema, that contains definitions for both Posix accounts and shadow accounts. By including the object class 'shadowAccount' when creating users, it allows defining some password requirements in LDAP.
shadowMin: 0
shadowMax: 180
shadowWarning: 14
shadowInactive: 30
shadowExpire: -1
They are mostly self-explanatory and correspond to local 'passwd' or 'chage' options. On my setup, I also was able to make users change passwords on first login. I tried both 'passwd' and 'chage' to do this, but neither recognized the LDAP accounts when used with options to expire passwords. I can only assume that defining the 'shadowAccount' variables caused the behavior requiring an initial password change, because new users were not forced to change passwords prior to defining the shadow variables. I will edit this with updates if I figure out exactly how to require a password change at initial logon when using LDAP accounts.
Changing passwords with OpenLDAP works the same as local accounts. Simply use 'passwd', enter current LDAP password, then the new password.
Although I have various password requirements set using cracklib in /etc/pam.d/system-auth on RHEL, the checks did not seem to be run when changing LDAP user passwords. After reading the manual for pam_cracklib, I saw that some of my settings were not correct. After correcting the settings, pam_cracklib started enforcing my password complexity requirements. Note that pam_cracklib must be configured on each local system even when the passwords are stored in LDAP. Below is an example configuration from the pam_cracklib manual.
password required pam_cracklib.so \
dcredit=-1 ucredit=-1 ocredit=-1 lcredit=0 minlen=8
OpenLDAP also has a password policy overlay, but it does not appear to include enforcement of password complexity. It will enforce password reuse policy, password expiration, and other similar policies. It is not available in OpenLDAP 2.2, but at least on RHEL5 with OpenLDAP 2.3 the password policy overlay seems to be included as a schema.
LDAP Groups
Just as LDAP allows you to centralize user accounts, you can also use it to centralize groups or add LDAP groups in addition to local groups. As with any bulk additions to LDAP, you can use a ldif file to add the groups. For example, the following could be used to add a group called "analysts" and a group called "sr_analysts". The "Group" OU must exist prior to adding these groups, or you can add the Group OU from the same file as long as it comes before the groups that will be within the OU.
dn: cn=sr_analysts,ou=Group,dc=security,dc=test,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 1010
cn: sr_analysts
memberUid: nexample
memberUid: nother
dn: cn=analysts,ou=Group,dc=security,dc=test,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 1011
cn: analysts
memberUid: luser
To add the groups and test:
$ ldapadd -x -D "cn=Manager,dc=security,dc=test,dc=com" -W -f groups.ldif
Enter LDAP Password:
adding new entry "cn=sr_analysts,ou=groups,dc=security,dc=test,dc=com"
adding new entry "cn=analysts,ou=groups,dc=security,dc=test,dc=com"
$ getent group
----- snip -----
sr_analysts:x:1010:nexample,nother
analysts:x:1011:luser
sudo
The version of 'sudo' provided in RHEL/CentOS v4 is not compiled for LDAP, while in RHEL/CentOS v5 the package was built using --with-ldap.
$ ldd $(type -p sudo) | grep ldap
libldap-2.3.so.0 => /usr/lib/libldap-2.3.so.0
Whether you need to compile or build a 'sudo' RPM with LDAP support on RHEL4 may depend on how complex your 'sudoers' needs to be. If you only need to add support for certain people to be in the wheel group then you can easily rely on the standard RHEL/CentOS v4 'sudo' by uncommenting wheel in the local 'sudoers' file and creating the needed LDAP user accounts with gidNumber: 10, which is the default for the wheel group on RHEL. As long as the local system has wheel with GID of 10 then the LDAP account will be seen as a local sudoer on the system. If 'sudo' needs are more complex, it may be worth creating a custom RPM for 'sudo' using --with-ldap. I have not tried this with 'sudo', but I have downloaded RHEL5 source RPMs for other software and successfully built a RPM for RHEL4.
If using LDAP on RHEL5, it definitely makes sense to move the 'sudoers' to LDAP. Since the purpose of groups is to ease system administration by grouping users logically, it makes sense to create a 'sudoers' LDAP container and then allow groups to perform 'sudo' commands rather than adding and removing individual accounts.
Prior to adding 'sudoers' and related information into LDAP, I had to add 'schema.OpenLDAP', which is included with the 'sudo' source, to my schema directory and include it in my 'slapd.conf'. I also renamed it to 'sudo.schema'.
After adding the schema, I used another file from the 'sudo' source, README.LDAP, as an example for creating the following ldif file. I tried using sudoOption: env_keep, but kept getting errors saying the option was not recognized despite the examples I've seen showing its use.
dn: ou=SUDOers,ou=role,dc=security,dc=test,dc=com
objectClass: top
objectClass: organizationalUnit
ou: SUDOers
dn: cn=defaults,ou=SUDOers,ou=role,dc=security,dc=test,dc=com
objectClass: top
objectClass: sudoRole
cn: defaults
description: Default sudo options go here
sudoOption: requiretty
sudoOption: env_reset
dn: cn=analyst_script,ou=SUDOers,ou=role,dc=security,dc=test,dc=com
objectClass: top
objectClass: sudoRole
cn: analyst_script
sudoUser: %analysts
sudoHost: ALL
sudoCommand: /path/to/script
sudoOption: !authenticate
dn: cn=sr_analysts_all,ou=SUDOers,ou=role,dc=security,dc=test,dc=com
objectClass: top
objectClass: sudoRole
cn: sr_analysts_all
sudoUser: %sr_analysts
sudoHost: ALL
sudoCommand: ALL
Those in the analysts LDAP group will be able to run /path/to/script with no password while those in the sr_analysts group can run all commands but a password is required. As another example, changing sudoUser to '%wheel' would allow all accounts in the 'wheel' group to execute sudoCommand. The 'cn' for the 'sudoRole' does not have to correspond to anything, but it makes sense to name it something that reflects what the role is for.
The percent sign for 'sudoUser' is only used in front of group names, not users, the same syntax as the local 'sudoers' file.
Finally, don't forget to modify /etc/ldap.conf to include sudoers_base and sudoers_debug. Debug can normally be set to '0' but setting it to '2' for troubleshooting is extremely useful. The following shows the output of sudo -l with sudoers_debug 2. It is the result of the wheel group being in both the local and LDAP sudoers.
[nr@ldap ~]$ sudo -l
LDAP Config Summary
===================
host 192.168.1.10
port 389
ldap_version 3
sudoers_base ou=SUDOers,ou=roles,dc=security,dc=test,dc=com
binddn cn=proxyuser,ou=roles,dc=security,dc=test,dc=com
bindpw PASSWORD
ssl start_tls
===================
ldap_set_option(LDAP_OPT_X_TLS_REQUIRE_CERT,0x00)
ldap_init(192.168.1.10,389)
ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,0x03)
ldap_start_tls_s() ok
ldap_bind() ok
found:cn=defaults,ou=SUDOers,ou=roles,dc=security,dc=test,dc=com
ldap sudoOption: 'requiretty'
ldap sudoOption: 'env_reset'
ldap search '(|(sudoUser=nr)(sudoUser=%wheel)(sudoUser=%wheel)(sudoUser=%sr_analysts)(sudoUser=ALL))'
found:cn=analyst_script,ou=SUDOers,ou=roles,dc=security,dc=test,dc=com
ldap sudoHost 'ALL' ... MATCH!
found:cn=sr_analysts_all,ou=SUDOers,ou=roles,dc=security,dc=test,dc=com
ldap sudoHost 'ALL' ... MATCH!
ldap search 'sudoUser=+*'
user_matches=-1
host_matches=-1
sudo_ldap_check(50)=0x02
Password:
User nr may run the following commands on this host:
(ALL) ALL
LDAP Role: sr_analysts_all
Commands:
ALL
Also note that the OpenLDAP Administrator's Guide has an overlay for dynamic lists, which includes the functions available in the deprecated dynamic groups overlay.
Replication
Replication and fail-over are both relatively simple to configure on the OpenLDAP server. Replication uses either 'slurpd' or 'Syncrepl', depending on the OpenLDAP version. For replication with 'slurpd', the replogfile, replica host or replica uri, binddn, bindmethod, and credentials variables need to be set in slapd.conf. Because a 'binddn' and password ('credentials') need to be used for replication, the binddn and password also have to be added to LDAP before replication will work.
Once the primary server is prepared, slapd needs to be stopped or set to read-only so the existing OpenLDAP database files can be manually copied over to the server receiving the replications. Configuration files on the receiving server also need to be edited, though I was able to just copy my 'slapd.conf' and schemas. The only changes needed to 'slapd.conf' were to remove all the replication directives and replace them with an 'updatedn' identical to the replication 'binddn' on the primary server. I also had to make sure that 'binddn' was allowed 'write' so it could add the replicated data. If not, you will get Error: insufficient access when trying to replicate.
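For reference, the relevant slapd.conf pieces look something like the following sketch; the hostnames, DNs, and password are made up:
# On the primary server:
replogfile /var/lib/ldap/replog
replica uri=ldap://ldap2.security.test.com:389
        binddn="cn=replicator,ou=role,dc=security,dc=test,dc=com"
        bindmethod=simple
        credentials=replicator_password
# On the server receiving the replicas; updateref sends any write
# attempts back to the primary:
updatedn "cn=replicator,ou=role,dc=security,dc=test,dc=com"
updateref ldap://ldap1.security.test.com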
Once the configuration files and database files are copied to the secondary server, 'slapd' needs to be (re)started on both systems. Then 'slurpd' needs to be started so it can periodically check the log file for changes that can be pushed. On RHEL, the 'ldap' init script handles stops, starts, and restarts for both 'slapd' and 'slurpd' at the same time rather than having separate scripts, so service ldap restart would restart both daemons.
For failover, the LDAP clients' host variable can have a list of hosts separated by spaces, and bind_timelimit is used to determine when the client fails over to the next server. If the first server is unreachable, the client will go on to the next one in the list of hosts.
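In /etc/ldap.conf terms, that is just something like the following, with the addresses and timeout being examples:
# Try the second server if the first does not answer within
# bind_timelimit seconds.
host 192.168.1.10 192.168.1.11
bind_timelimit 5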
LDAP Management
Although using LDIF files is fine at first, managing LDAP entries can become a chore from the command line. The 'sudo' README has a few recommendations that I quote below.
Doing a one-time bulk load of your ldap entries is fine. However what if you
need to make minor changes on a daily basis? It doesn't make sense to delete
and re-add objects. (You can, but this is tedious).
I recommend using any of the following LDAP browsers to administer your SUDOers.
* GQ - The gentleman's LDAP client - Open Source - I use this a lot on Linux
and since it is Schema aware, I don't need to create a sudoRole template.
http://biot.com/gq/
* LDAP Browser/Editor - by Jarek Gawor - I use this a lot on Windows
and Solaris. It runs anywhere in a Java Virtual Machine including
web pages. You have to make a template from an existing sudoRole entry.
http://www.iit.edu/~gawojar/ldap
http://www.mcs.anl.gov/~gawor/ldap
http://ldapmanager.com
* Apache Directory Studio - Open Source - an Eclipse-based LDAP
development platform. Includes an LDAP browser, and LDIF editor,
a schema editor and more.
http://directory.apache.org/studio
There are dozens of others, some Open Source, some free, some not.
Posted by Nathaniel Richmond at 05:30 1 comments
Labels: ldap, linux, rhel, system administration
19 September, 2008
Setting up OpenLDAP for centralized accounts
I recently decided to move to centralized account management on some RHEL systems. There are a number of ways to do this, including NIS, Kerberos, Active Directory, LDAP, or some combination thereof. Ease of configuration, administrative overhead, and security were the top priorities. I chose LDAP because it is relatively simple and has some support for authentication, encryption, and password hashing. There are definitely security issues, but I plan a separate post to specifically address LDAP security.
External resources:
- Red Hat LDAP deployment guide
- OpenLDAP quick start
- TLDP LDAP Howto
- TLDP LDAP Implementation Howto
- Security with LDAP
Server configuration
First, I install the needed software on the server. Use yum for RHEL/CentOS v5 and up2date for v4. Once the software was installed, I generated a hashed password for LDAP's root user. There are a number of hashing schemes to choose from in the 'slappasswd' manual.
# yum install openldap-servers openldap-clients openldap
# cd /etc/openldap/
# slappasswd
After typing in the password twice, you'll get a hash that should be pasted into /etc/openldap/slapd.conf, along with some other edits.
suffix "dc=security,dc=test,dc=com"
rootdn "cn=Manager,dc=security,dc=test,dc=com"
rootpw {SSHA}hashed_password
The default access control policy of allowing anonymous reads with Manager getting write access will need to be changed. Users need write access to their own passwords for 'passwd' to work, anonymous users should only be able to authenticate, and I will let authenticated users read everything but passwords. There are some problems with the default and example configurations in the slapd.conf comments. The following is what I used to enforce saner settings. The rootdn always has write access to everything. I will update if needed since I am still playing with OpenLDAP's access controls.
access to attrs=userpassword
by anonymous auth
by self write
by * none
access to *
by self read
by users read
by anonymous auth
Once the basic server configuration is done, I start the LDAP daemon.
service ldap start
Client configuration
yum install authconfig openldap-clients nss_ldap
Configuring the client is somewhat confusing because Red Hat and many other distributions have two ldap.conf files. The one needed by nss_ldap is in /etc and OpenLDAP's client configuration file is in /etc/openldap.
I edited /etc/ldap.conf and /etc/openldap/ldap.conf for host, base, and the binddn and bindpw of proxyuser. proxyuser will be used for read access by nss_ldap since anonymous reads are disallowed. I also added Manager as rootbinddn, which requires creating /etc/ldap.secret with the plain text password, owned by root, and chmod 600. Both ldap.conf files need to be chmod 644.
OpenLDAP's client configuration file is much smaller and only needs a few changes. Most of the settings are the same though I did notice the TLS directives are different.
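To make the difference between the two files concrete, the handful of lines involved look something like this sketch; the address, DNs, and password are placeholders, and the TLS-related directives come later in this post:
# /etc/ldap.conf (used by nss_ldap and pam_ldap)
host 192.168.1.10
base dc=security,dc=test,dc=com
binddn cn=proxyuser,ou=role,dc=security,dc=test,dc=com
bindpw proxyuser_password
rootbinddn cn=Manager,dc=security,dc=test,dc=com
# /etc/openldap/ldap.conf (used by the OpenLDAP client tools)
URI ldap://192.168.1.10
BASE dc=security,dc=test,dc=com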
Next, I ran 'authconfig' or 'authconfig-tui' to edit /etc/pam.d/system-auth. From the menu, I selected to use LDAP Authentication and use LDAP for user information. Enabling LDAP Authentication will make local accounts unusable when LDAP is down! The server and base can be set here or manually edited in a ldap.conf file. 'authconfig' will edit /etc/nsswitch.conf to add ldap.
passwd: files ldap
shadow: files ldap
group: files ldap
Testing, using and modifying LDAP
Note that it is easier to test first using the default slapd.conf access controls that allows anonymous users to read rather than the controls above that I am testing. The below search is performed without authenticating.
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope base
# filter: (objectclass=*)
# requesting: namingContexts
#
dn:
namingContexts: dc=security,dc=test,dc=com
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
Now that it's working, I need to create an ldif (LDAP Data Interchange Format) file that will hold information to be put into LDAP. It includes a proxy user account that will have read access to LDAP using a password. You can use slappasswd to generate hashes for the password fields in the ldif. Note that my test user in the below LDIF is in the wheel group.
dn: dc=security,dc=test,dc=com
objectclass: top
objectclass: organization
objectclass: dcObject
o: NR Test Group
dc: security
dn: ou=groups,dc=security,dc=test,dc=com
objectclass: organizationalUnit
ou: groups
dn: ou=people,dc=security,dc=test,dc=com
objectclass: organizationalUnit
ou: people
dn: ou=role,dc=security,dc=test,dc=com
objectclass: organizationalUnit
ou: role
dn: cn=proxyuser,ou=role,dc=security,dc=test,dc=com
cn: proxyuser
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: shadowAccount
uid: proxyuser
uidNumber: 1001
gidNumber: 100
homeDirectory: /home
loginShell: /sbin/nologin
userpassword: don't use plain text
sn: proxyuser
description: Account for read-only access
dn: cn=N R,dc=security,dc=test,dc=com
cn: N R
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: shadowAccount
uid: nr
uidNumber: 1002
gidNumber: 10
homeDirectory: /home/nr
loginShell: /bin/bash
userpassword: don't use plain text
sn: R
To load the information from the file into LDAP, use 'ldapadd'. Add '-ZZ' to issue StartTLS from any of the ldap* commands:
ldapadd -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -f ldapinfo.ldif
If there are errors, they should give a hint about the reason. Once the command succeeded, I did a search to display all the information.
ldapsearch -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -b "dc=security,dc=test,dc=com"
You can also specify search terms.
ldapsearch -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -b "dc=security,dc=test,dc=com" "cn=proxyus*" uidNumber
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <dc=security,dc=test,dc=com> with scope sub
# filter: cn=proxyu*
# requesting: uidNumber
# proxyuser, role, security.test.com
dn: cn=proxyuser,ou=role,dc=security,dc=test,dc=com
uidNumber: 1001
# search result
search: 3
result: 0 Success
# numResponses: 2
# numEntries: 1
Once the clients are configured with LDAP and working properly, creating an account in LDAP will allow that account to SSH to all systems functioning as LDAP clients. Before trying to authenticate to systems other than the LDAP server, I set up TLS.
TLS
Right now, without any encryption, password hashes and a ton of other information would be streaming through the network if I was not using a local client. To check, I ran 'tcpdump' on the loopback and searched LDAP again.
tcpdump -v -i lo -X -s 0
The results of the search could clearly be seen in the traffic, as shown in the following snippet (with the hash removed).
LoginShell1.. /bin/bash08..userPassword1(.&{SSHA}
The process for configuring TLS is addressed in Red Hat's FAQ. For /etc/openldap/slapd.conf I used the following:
TLSCertificateFile /etc/pki/tls/certs/slapd.pem
TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem
If not requiring certificate checks because of self-signing, /etc/openldap/ldap.conf will need TLS_REQCERT allow, and according to the comments /etc/ldap.conf defaults to tls_checkpeer no. I still needed to explicitly set tls_checkpeer no to fix a problem with sudo not finding the uid in the passwd file. Both client configuration files need ssl start_tls entries.
Administrivia
There are other things to consider when using LDAP accounts to login with SSH. For instance, I edited /etc/pam.d/sshd to have user home directories created when users log in the first time with SSH on a particular system:
#%PAM-1.0
auth required pam_stack.so service=system-auth
auth required pam_nologin.so
account required pam_stack.so service=system-auth
password required pam_stack.so service=system-auth
session required pam_stack.so service=system-auth
session required pam_mkhomedir.so skel=/etc/skel/ umask=0077
session required pam_loginuid.so
For this to work, you will also need to have UsePAM yes in your sshd_config. On RHEL, you can tell from /var/log/secure if you are getting errors related to PAM not being enabled in sshd_config:
# tail -n 1 /var/log/secure
Sep 18 17:11:58 slapd sshd[24274]: fatal: PAM: pam_open_session(): Module is unknown
At one point in the process, I was getting an error at login:
id: cannot find name for user ID 1002
[I have no name!@slapd ~]$
To fix this, I checked the permissions of /etc/ldap.conf and also made sure the proxyuser was working properly. Without LDAP read permissions, the user name can't be mapped to the user ID. Since I am denying anonymous reads, I have to make sure proxyuser is set up properly.
You may need to modify iptables rules to allow Linux systems to connect to the LDAP server on port 389. Using TLS will not change the port by default.
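Something as simple as the following rule on the LDAP server covers that; the client subnet is just an example:
# Allow LDAP clients to reach slapd on TCP 389; StartTLS stays on the
# same port, so no separate rule is needed for it.
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 389 -j ACCEPT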
There is a Red Hat FAQ about getting sudo to work with LDAP. It will not necessarily work out of the box with LDAP users and groups.
Posted by Nathaniel Richmond at 05:13 4 comments
Labels: ldap, linux, rhel, system administration
12 September, 2008
Grass is green, sky is blue, news at 11
Study: Hotel Networks Put Corporate Users at Risk. You think?
Is it any surprise that most hotels use unencrypted or weak encryption for wireless? Is it any surprise that a substantial number still use hubs instead of switched networks?
It would surprise me more if hotels consistently worried about security for their guests' networks. If only 21 percent of hotels had reports of "wrongdoing" on their guest networks, that means the percentage of guests that report attacks is actually much lower. There is little financial incentive for the hotels to upgrade hardware and configure networks to prevent malicious activity. Most road warriors are more worried about convenience than security.
I bet encryption on hotel wireless networks causes more complaints than unencrypted wireless.
Posted by Nathaniel Richmond at 20:22 0 comments
09 September, 2008
My take on cloud computing
I heard an interesting story about Google's Chrome browser on NPR's Morning Edition for 09 September. While it was billed as a review of Chrome, the first question asked by host Renee Montagne to commentator Mario Armstrong was how Google planned to make money with a browser. This led to some discussion of the browser becoming the computer, part of what is sometimes known as cloud computing.
The first way noted by Mr. Armstrong to make money on a free browser was of course through advertising. The holy grail of advertising these days seems to be targeted ads. For example, search terms, cookies, browser history and email contents can provide a lot of context to be used for targeting ads. This type of information can certainly be used for unintended purposes, not just targeting ads or selling anonymized user data.
The second answer on how to make money from a browser, more significant from a security perspective, was to build a customer base in anticipation of services moving to the cloud. Cloud computing and security has been discussed quite a bit by others [Examples: 1,2,3,4]. Mr. Armstrong went on to discuss three important facets of a browser, particularly in relation to cloud computing. To paraphrase from the NPR story:
- Speed - If you are going to move applications to the cloud, the browser better be fast enough to compare with a local application.
- Stability - No matter if an application is in the cloud or local, users and developers don't like their applications to crash. If the application runs in a browser, the browser needs to be stable.
- Security - Moving applications to the cloud obviously means that you're moving data through and to the cloud, dramatically changing a number of security implications.
Regardless of whether you buy into the hype, cloud security is an issue because users will make decisions on their own. Google's applications are an excellent example. Their search, calendar, documents, and more have potential to put sensitive company or personal information in the cloud. Google documents makes it so easy to share documents with my coworkers! Google Desktop makes it so easy to find and keep track of my files! What do you mean we already have a way to do that? What do you mean we aren't allowed to store those documents on the Internet?
Just to make sure I'm not singling out Google, another instance of cloud computing is Amazon's Elastic Compute Cloud (EC2), which basically allows you to build a virtual machine in Amazon's cloud and then pay for processor time. EC2 is a great example of cloud computing allowing flexibility and computing power without breaking the bank. You can run any number of "small" to "High-CPU very large" instances, scaling resources as needed and starting at only $0.10 per hour per instance!
This scenario was mentioned by someone else, but consider the security implications of putting information in Amazon's cloud, for instance using EC2 to crack a Windows SAM during a security assessment. Sure, you could easily come up with an accurate quote for the EC2 computing time to crack a SAM, but it would be important to disclose the use of EC2 to the customer. If you're using Amazon's processing capability, how are you going to make sure the data stays secure as it floats somewhere in their cloud? How many new avenues of attack have you introduced by using cloud computing? Is it even possible to quantify these risks with the amount of information you have about their cloud?
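The quoting arithmetic itself is trivial; here is a back-of-the-envelope sketch in Perl where the instance count and hours are invented for illustration, and only the $0.10 rate comes from the pricing above:
#!/usr/bin/perl
# Hypothetical EC2 cost estimate -- instance count and hours are assumptions
use strict;
my $instances = 20;   # small instances (assumption)
my $hours     = 48;   # estimated wall-clock hours for the job (assumption)
my $rate      = 0.10; # dollars per instance-hour, per the pricing mentioned above
printf "Estimated compute cost: \$%.2f\n", $instances * $hours * $rate; # prints $96.00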
There are security issues with cloud computing even if the data is not explicitly sensitive like a Windows SAM. Data to be crunched? Video to be rendered? Documents to be stored? Is your data worth money? Who will protect the data and how?
Cloud computing is here in varying degrees depending on your organization, but using it means giving up some amounts of control over your data. Whether the convenience of the cloud is worth it is definitely a risk versus reward issue, and a lot of the risk depends on the sensitivity of the data.
Edit: InfoWorld posted an article about government concerns with cloud computing, particularly who owns data that is stored in the cloud, whether law enforcement should have a lower threshold for gaining access to data in the cloud, and whether government should embrace cloud computing for its needs.
Posted by Nathaniel Richmond at 21:03 0 comments
05 September, 2008
Modified: Pulling IP addresses from Snort rules
This is a modification of another script I wrote to pull IP addresses from Snort rules. Since the last time I posted it, I have added support for multiple rules files, and the script now automatically inserts the IP addresses into the database. After that, I can run a query to compare the IP addresses with sancp data to see which systems connected or attempted connections to the suspicious hosts.
I think this version will only work with MySQL 5.x, not 4.x, because of the section that adds the IP addresses to the database. In the older versions, the IP addresses had to be added manually. Note that the table used for the $tablename variable is not part of the Sguil database schema and must be created before running the script. Make sure not to use a table that is part of the Sguil DB schema because this script deletes all data from the table!
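The post does not include the table definition, so here is a minimal sketch of what the script and the query below expect; the table and column names come from the script, while the column type is my assumption, chosen to match INET_ATON() and the sancp dst_ip column:
CREATE TABLE suspicious_hosts (
dst_ip INT UNSIGNED NOT NULL, -- stores INET_ATON() values for joining against sancp.dst_ip
KEY dst_ip (dst_ip)
);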
The script is fairly slow since it expands CIDR blocks to individual IP addresses and then puts them in the database one by one. If anyone has ideas for a more efficient method, I'd love to hear it, but since I'm using Sguil I need to be able to compare the suspicious IP addresses with individual IP addresses in the Sguil sancp table.
To search for connections or attempted connections to the suspicious IP addresses, you could use a query like the one shown after the script below. Obviously, the query should change based on criteria like the ports that interest you, time of activity, protocol, etc.
# Script to pull IP address from Snort rules file
# by nr
# 2007-08-30
# 2007-09-14 Added CIDR expansion
# 2007-12-19 Added support for multiple rule files
# 2008-05-22 Added MySQL support to insert and convert IPs
use strict;
use Mysql;
my $dir = "/nsm/rules"; # rules directory
my @rulefile = qw( emerging-compromised.rules emerging-rbn.rules );
my @rule; # unprocessed rules
foreach my $rulefile(@rulefile) {
# Open file to read
die "Can't open rulefile.\n" unless open RULEFILE, "<", "$dir/$rulefile";
chomp(@rule = (@rule,<RULEFILE>)); # append each rule from this file to the array
close RULEFILE; # close current rule file
}
# Open mysql connection
my $host = "localhost";
my $database = "sguildb";
my $tablename = "suspicious_hosts";
my $colname = "dst_ip";
my $user = "sguil";
my $pw = "PASSWORD";
# perl mysql connect()
my $sql_connect = Mysql->connect($host, $database, $user, $pw);
# clear out old IP addresses first
my $sql_delete = "DELETE FROM $tablename";
my $execute = $sql_connect->query($sql_delete) or die "$!";
# For each rule
foreach my $rule (@rule) {
# Match only rules with IP addresses so we don't get comments etc
# This string match does not check for validity of IP addresses
if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
$rule =~ s/.*\[//g; # Remove [ character and before
$rule =~ s/\].*//g; # Remove ] character and after
# Split the remaining data using the commas
# and put it into ip_address array
my @ip_address = split /\,/, $rule;
# For each IP address in array
foreach my $ip_address (@ip_address) {
# Match on slash
if ( $ip_address =~ /.*\/.*/ ) {
# Expand CIDR to all IP addresses in range modified from
# http://www.perlmonks.org/?displaytype=print;node_id=190497
use NetAddr::IP;
my $newip = new NetAddr::IP($ip_address);
# While less than broadcast address
while ( $newip < $newip->broadcast) {
# Strip trailing slash and netmask from IP
my $temp_ip = $newip;
$temp_ip =~ s/\/.*//g;
# sql statement to insert IP
my $sql_statement = "INSERT INTO $tablename SET $colname = INET_ATON('$temp_ip')";
# execute statement
my $execute = $sql_connect->query($sql_statement); # Run statement
$newip ++; # Increment to next IP
}
}
# For non-CIDR, simply print IP
else {
# sql statement to insert IP
my $sql_statement = "INSERT INTO $tablename SET $colname = INET_ATON('$ip_address')";
# execute statement. maybe make this a function or otherwise clean
# since it is repeated inside the if
my $execute = $sql_connect->query($sql_statement); # Run statement
}
}
}
}
SELECT sancp.sid,INET_NTOA(sancp.src_ip),sancp.src_port,INET_NTOA(sancp.dst_ip),sancp.dst_port, \
sancp.start_time FROM sancp INNER JOIN suspicious_hosts ON (sancp.dst_ip = suspicious_hosts.dst_ip) WHERE \
sancp.start_time >= DATE_SUB(UTC_DATE(),INTERVAL 24 HOUR) AND sancp.dst_port = '80';
Posted by Nathaniel Richmond at 22:03 0 comments
29 August, 2008
Snort DNS preprocessor
Scott Campbell of NERSC posted to the snort-devel mailing list today about his DNS preprocessor that is designed to detect DNS cache poisoning and DNS fast flux. His write-up on both features looks interesting and I hope to play with the preprocessor on my lab setup. Note that he recommends not running this in production because it is an early beta.
For full details check his write-up, but the following quotes explain that the preprocessor checks three basic conditions for DNS cache poisoning:
- Multiple responses to a query where the DNS server IP and query name match, but the transaction ID varies.
- Multiple responses to a query where the DNS server IP, query name and transaction ID match.
- Unexpected responses where there is no observed question.
The explanation of fast flux detection is a little more involved, and he also mentions that it will detect sites that are designed to behave in a similar way to fast flux, for example pool.ntp.org and chat.freenode.net.
If I get the chance to play with the preprocessor, I will definitely document my experience.
Posted by Nathaniel Richmond at 19:38 0 comments
02 August, 2008
July Dailydave
The Dailydave mailing list was full of interesting and fun posts during the month of July. The "Immunity Certified Network Offense Professional" thread and all the threads about Dan Kaminsky's DNS cache poisoning were interesting. That said, the cache poisoning has certainly not been under-analyzed and I'm happy to read about other topics at this point.
Posted by Nathaniel Richmond at 17:34 0 comments
16 June, 2008
Using afterglow to make pretty pictures
I recently needed to visualize some network connections and figured there were probably plenty of existing tools that could draw me a picture based on data in a CSV file, since Sguil can export query results to a CSV. MySQL can also output to a CSV file, so the following could be scripted even more easily on a sguild server than on a client. The client requires more manual steps, but I decided to try it that way first.
C.S. Lee recommended trying afterglow. Looking at his blog, I saw that he had a short write-up on afterglow. I followed similar steps to install everything that was needed.
$ sudo apt-get install libgraphviz-perl libtext-csv-perl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
graphviz libio-pty-perl libipc-run-perl
libparse-recdescent-perl
libversion-perl libxml-twig-perl
Suggested packages:
graphviz-doc msttcorefonts libunicode-map8-perl
libunicode-string-perl
xml-twig-tools
Recommended packages:
libtie-ixhash-perl libxml-xpath-perl
The following NEW packages will be installed:
graphviz libgraphviz-perl libio-pty-perl libipc-run-perl
libparse-recdescent-perl libversion-perl libxml-twig-perl
libtext-csv-perl
Next, I downloaded the afterglow source and extracted it.
$ tar xvzf afterglow-1.5.9.tar.gz
For these examples, I connected to the Sguil demo server and exported one sancp query result. Afterglow expects three columns, which include the source IP, destination IP, and destination port. This is where running queries on the sguild server could make more sense, since I could just select src_ip, dst_ip and dst_port and write the results to a CSV file. With results from the Sguil client, I have to take the CSV and remove everything but the desired columns.
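For the server-side approach, a sketch of what that query might look like against the sguild database; the output path and time window are arbitrary, and the column names are the ones mentioned above:
SELECT INET_NTOA(src_ip), INET_NTOA(dst_ip), dst_port
INTO OUTFILE '/tmp/sancp_3col.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
FROM sancp
WHERE start_time >= DATE_SUB(UTC_DATE(),INTERVAL 24 HOUR);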
To remove the other columns, I used sed. Perl or awk would work fine, too, and it is pretty easy to script. Here is an example I used without scripting. I am writing to a new file so I keep the original file intact. If you exported from Sguil and included the column names on the first line, the following command will delete the first and last lines to clean up the data before removing the extra columns.
$ sed '1d' sancpquery_1.csv | sed 's/^\([^,]*,[^,]*,[^,]*,[^,]*,\)\([^,]*,\)\([^,]*,\)\([^,]*,[^,]*\)\(,[^,]*,[^,]*,[^,]*,[^,]*\)/\2\4/' > sancpquery_1_3col.csv
With a CSV exported from the results of a sancp query, that will leave you with the required three columns. Next, from the directory where I extracted afterglow, I can feed it the CSV. The results are from using different values for "-p", zero being the default. Values of "-p 1" and "-p 3" made identical images.
$ cat /home/nr/sancpquery_1_3col.csv | src/perl/graph/afterglow.pl -v -p 0 -e 1.5 -c src/perl/parsers/color.properties | neato -Tgif -o ~/sancpquery_p0.gif
Afterglow is an interesting tool. I can definitely see how it could help when looking at data in a spreadsheet isn't enough to visualize where systems were connecting and on which ports. I can also think of some more features that might be useful, like showing timestamps or time ranges on connections, or even animating the images to show a sequence of events, but adding features could also make it unnecessarily complex.
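Since the steps are easy to chain together, here is a small wrapper sketch; the script name and the afterglow path are placeholders, and the sed expression is the same one used above:
#!/bin/sh
# Hypothetical wrapper: Sguil sancp CSV export in, afterglow GIF out
# Usage: ./sancp2gif.sh sancpquery_1.csv sancpquery.gif
IN="$1"; OUT="$2"
AG=/home/nr/afterglow-1.5.9   # directory where the afterglow source was extracted (assumption)
sed '1d' "$IN" | sed 's/^\([^,]*,[^,]*,[^,]*,[^,]*,\)\([^,]*,\)\([^,]*,\)\([^,]*,[^,]*\)\(,[^,]*,[^,]*,[^,]*,[^,]*\)/\2\4/' \
| "$AG"/src/perl/graph/afterglow.pl -v -p 0 -e 1.5 -c "$AG"/src/perl/parsers/color.properties \
| neato -Tgif -o "$OUT"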
Posted by Nathaniel Richmond at 19:28 3 comments
Labels: afterglow, sancp, session data, sguil, visualization
23 May, 2008
Snort 2.8.1 changes and upgrading
Snort 2.8.1 has been out since April, so this post is a little late. I wanted to upgrade some Red Hat/CentOS systems from Snort 2.8.0.2 to 2.8.1. When I write RHEL or Red Hat, that includes CentOS, since what applies to one should apply to the other.
I quickly found that the upgrade on RHEL4 was not exactly straightforward because the version of pcre used on RHEL4 is pcre 4.5, which is years old. Snort 2.8.1 requires at least pcre 6.0. For reference, RHEL5 is using pcre 6.6.
PCRE Changes
I had assumed an upgrade from 2.8.0.2 to 2.8.1 was minor, but the pcre change indicated there were definitely significant changes between versions. A few of the pcre changes made it into the Snort ChangeLog, and there is more in the Snort manual. I also had a brief discussion about the pcre changes with a few SourceFire developers that work on Snort and some other highly technical Snort users.
The most significant change seems to be the addition of limits on pcre matching to help prevent performance problems and denial of service through pcre overload. A maximum number of state changes can be passed in when the regular expression is compiled, and during evaluation libpcre will only change states that many times. There is a global config option that sets the maximum number of state changes, and the limit can also be disabled per regular expression if needed.
The related configuration options for snort.conf are pcre_match_limit:
Restricts the amount of backtracking a given PCRE option. For example, it will limit the number of nested repeats within a pattern. A value of -1 allows for unlimited PCRE, up to the PCRE library compiled limit (around 10 million). A value of 0 results in no PCRE evaluation. The snort default value is 1500.
and pcre_match_limit_recursion:
Restricts the amount of stack used by a given PCRE option. A value of -1 allows for unlimited PCRE, up to the PCRE library compiled limit (around 10 million). A value of 0 results in no PCRE evaluation. The snort default value is 1500. This option is only useful if the value is less than the pcre_match_limit.
The main discussion between the SourceFire folks and everyone else was whether it was wise to have the limits turned on by default and where the default of 1500 came from. I think leaving the pcre limits on by default makes sense because those that don't fiddle much with the Snort configuration probably need the protection of the limits more than someone that would notice when Snort performance was suffering.
The argument against having the limits on by default is that it could make certain rules ineffective. By cutting the pcre evaluation short, a rule may not trigger on traffic that should cause a Snort alert. I suspect that not many rules will hit the default pcre limit.
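For reference, a sketch of how these limits appear in snort.conf; the lines below just restate the defaults described above, so they only need to be added if you want different values:
config pcre_match_limit: 1500
config pcre_match_limit_recursion: 1500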
Other Changes Included in Snort 2.8.1
From the release notes:
[*] New Additions
* Target-Based support to allow rules to use an attribute table
describing services running on various hosts on the network.
Eliminates reliance on port-based rules.
* Support for GRE encapsulation for both IPv4 & IPv6.
* Support for IP over IP tunneling for both IPv4 & IPv6.
* SSL preprocessor to allow ability to not inspect encrypted traffic.
* Ability to read multiple PCAPs from the command line.
The SSL/TLS preprocessor helps performance by allowing Snort to ignore encrypted traffic rather than inspecting it. I haven't looked at the target-based support yet, but it definitely sounds interesting.
I also noticed something from 2007-12-10 in the ChangeLog:
* configure.in:
Add check for Phil Woods pcap so that pcap stats are computed
correctly. Thanks to John Hally for bringing this to our
attention.
That should be good for those running Phil Wood's pcap who may not have been seeing accurate statistics about packet loss.
A last note of interest about Snort 2.8.1 is that it fixes a vulnerability related to improper reassembly of fragmented IP packets.
Upgrading
Upgrading on RHEL5 was pretty simple, but upgrading on RHEL4 required downloading the source RPM for the pcre included with RHEL5 and building a RPM for RHEL4.
First, I installed compilers and the rpm-build package and the RPM source package for pcre-6.6. (Note that I'm using CentOS4 in the example, but the procedures for Red Hat should be nearly identical except for URLs). Next I built the pcre-6.6 RPMs and installed both pcre and pcre-devel.
# up2date -i cpp gcc gcc-c++ rpm-build
# rpm -Uvh http://isoredirect.centos.org/centos/5/updates/SRPMS/pcre-6.6-2.el5_1.7.src.rpm
$ rpmbuild -bb /usr/src/redhat/SPECS/pcre.spec
# rpm -U /usr/src/redhat/RPMS/i386/pcre-*rpm
# rpm -q pcre pcre-devel
pcre-6.6-2.7
pcre-devel-6.6-2.7
Now I can configure, make, and make install Snort as usual. The snort.conf file will also need to be updated to reflect new options like the SSL/TLS preprocessor and non-default pcre checks. Note that the README.ssl also recommends changes to the stream5 preprocessor based on the SSL configuration.
$ ./configure --enable-dynamicplugin --enable-stream4udp --enable-perfprofiling
$ make
# make install
If you are building on a production box instead of a development box, some would recommend removing the compilers afterwards.
Here is an example of an SSL/TLS and stream5 configuration using default ports. If you add the default SSL/TLS ports to the stream5_tcp settings, you also have to enumerate the default stream5_tcp ports or they will not be included. The default SSL/TLS ports are 443, 465, 563, 636, 989, 992, 993, 994, and 995; the rest are default stream5_tcp ports:
preprocessor stream5_global: track_tcp yes, track_udp yes, track_icmp yes
preprocessor stream5_tcp: policy first, ports both 21 23 25 42 53 80 \
110 111 135 136 137 139 143 443 445 465 513 563 636 \
989 992 993 994 995 1433 1521 3306
preprocessor stream5_udp
preprocessor stream5_icmp
preprocessor ssl: \
noinspect_encrypted
I have tested to make sure Snort runs with this configuration for these preprocessors, but don't just copy it without checking my work. It is simply an example after reading the related README documentation. Note that the Stream 5 TCP Policy Reassembly Ports output to your message log when starting Snort will be truncated after the first 20 ports, but all the ports listed will be included by Stream 5.
16 May, 2008
Query sguildb for alert name then do name lookup
I wrote a Perl script to query for a specific alert name and then get the NetBIOS or DNS name of the systems that triggered the alert. This script is useful to me mainly as an automated reporting tool. An example would be finding a list of systems associated with a specific spyware alert that is auto-categorized within Sguil. Because it is auto-categorized, the alert will never show up in the RealTime alerts, but I use the script to get a daily email listing the systems that triggered it.
Some alerts aren't important enough to require a real-time response but may need remediation or at least to be included in statistics. I don't know if this would be useful for others as it is written, but parts of it might be.
As always with Perl, I welcome suggestions for improvement since I'm by no means an expert. (By the way, does anyone else have problems with Blogger formatting when using <pre> tags?)
#!/usr/bin/perl
#
# by nr
# 2008-05-11
# Script to query db for a specific alert
# and do name lookup based on source IP address in results
use strict;
# Requires MySQL, netbios and DNS modules
use Mysql;
use Net::NBName;
use Net::DNS;
my $host = "localhost";
my $database = "sguildb";
my $tablename = "event";
my $user = "sguil";
my $pw = "PASSWORD";
my $alert = 'ALERT NAME Here';
# Set the query
my $sql_query = "SELECT INET_NTOA(src_ip) as src_ip,count(signature) as count FROM $tablename \
WHERE $tablename.timestamp > DATE_SUB(UTC_DATE(),INTERVAL 24 HOUR) AND $tablename.signature \
= '$alert' GROUP BY src_ip";
# perl mysql connect()
my $sql_connect = Mysql->connect($host, $database, $user, $pw);
print "\"$alert\" alerts in the past 24 hours: \n\n";
print "Count IP Address Hostname\n\n";
my $execute = $sql_connect->query($sql_query); # Run query
# Fetch query results and loop
while (my @result = $execute->fetchrow) {
my @hostname; # 1st element of this array is used later for name queries
my $ipresult = $result[0]; # Set IP using query result
my $count = $result[1]; # Set alert count using query result
my $nb_query = Net::NBName->new; # Setup new netbios query
my $nb = $nb_query->node_status($result[0]);
# If there is an answer to netbios query
if ($nb) {
my $nbresults = $nb->as_string; # Get query result
# Split at < will make $hostname[0] the netbios name
# Is there a better way to do this using a substitution?
@hostname = split /</, $nbresults;
} else {
# Do a reverse DNS lookup if no netbios response
# May want to add checks to make sure external IPs are ignored
my $res = Net::DNS::Resolver->new;
my $namequery = $res->query("$result[0]","PTR");
if ($namequery) {
my $dnsname = ($namequery->answer)[0];
$hostname[0] = $dnsname->rdatastr;
} else {
$hostname[0] = "UNKNOWN"; # If no reverse DNS result
}
}
format STDOUT =
@>>> @<<<<<<<<<<<<<< @*
$count, $ipresult, $hostname[0]
.
write;
}
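To get the daily email mentioned above, a crontab entry along these lines would work; the script path, schedule, and address are placeholders:
# Hypothetical crontab entry: run the report daily at 06:30 and mail the output
30 6 * * * /usr/local/bin/alert_name_report.pl 2>&1 | mail -s "Daily alert report" analyst@example.com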
Posted by Nathaniel Richmond at 21:23 0 comments
30 April, 2008
NoVASec: Memory Forensics
Richard Bejtlich arranged a NoVASec meeting on memory forensics for Thursday, April 24. Aaron Walters of Volatile Systems was the scheduled speaker. George Garner of GMG Systems, Inc., also showed up, so we were lucky enough to get two speakers for the price of one. (If you aren't aware, NoVASec is actually free). Aaron primarily talked about performing forensics and analysis on memory dumps, and afterwards Richard asked George to come up from the audience and talk about the challenges of actually acquiring the memory dumps.
Both Aaron and George were very knowledgeable and had a lot of interesting things to discuss. In fact, most of us didn't leave until after 22:00, so there was a good two and a half hours of technical discussion. It wouldn't do them justice for me to try to recap their talks, but I will mention a couple of brief thoughts I jotted down while listening. If I'm getting anything wrong here, someone please pipe up and let me know.
First is that I saw some parallels between points mentioned by Aaron and Network Security Monitoring. Aaron stated that a live response on a system requires some trust of the system's operating system, is obtrusive, and is unverifiable. Dumping the RAM and performing an analysis using a trusted system helps mitigate these problems though I don't think he meant it solves them completely. Similarly, in NSM we use information that is gathered by our most trustworthy systems, network sensors that allow limited access, rather than trusting what we find on the host. In forensics and NSM, steps are taken to increase the trustworthiness and verifiability of information that is gathered.
Second, Aaron and George both seemed to agree that acquiring memory contents is not easy. Not only can it be difficult, but even a successful acquisition has issues. George pointed out that if you don't isolate the system, an attacker could be altering the system or memory as you acquire it. He also pointed out that dumping the memory is actually sampling, not imaging, because the RAM contents are always changing even on a system that has been isolated from the network. One memory dump is just one sample of what resided in memory at a given time. More sampling increases the reliability of the evidence obtained. Also, gathering evidence from multiple sources, for instance hard drive forensics, memory forensics, and NSM, increases the probability that the evidence will be accurate and verifiable.
There was also some discussion of PCI and video devices as they relate to both exploiting systems and memory forensics. Acquiring memory can be an issue on systems using PAE since reading from the space used by PCI devices can crash the system. On the exploit side, the GPU and RAM on video cards can be used to help facilitate attacks, as can certain PCI devices. There is a lot of interesting work going on in this field, and George even mentioned that he has been working on tools for acquiring the contents of memory from video cards.
It was an excellent meeting.
Posted by Nathaniel Richmond at 19:07 0 comments