19 September, 2008

Setting up OpenLDAP for centralized accounts

I recently decided to move to centralized account management on some RHEL systems. There are a number of ways to do this, including NIS, Kerberos, Active Directory, LDAP, or some combination thereof. Ease of configuration, administrative overhead, and security were the top priorities. I chose LDAP because it is relatively simple and has some support for authentication, encryption, and password hashing. There are definitely security issues, but I plan a separate post specifically addressing LDAP security.

Server configuration

First, I installed the needed software on the server, using yum on RHEL/CentOS v5 (up2date on v4). Once the software was installed, I generated a hashed password for LDAP's root user. The 'slappasswd' manual lists the available hashing schemes.
# yum install openldap-servers openldap-clients openldap
# cd /etc/openldap/
# slappasswd
After typing in the password twice, you'll get a hash that should be pasted into /etc/openldap/slapd.conf, along with some other edits:
suffix "dc=security,dc=test,dc=com"
rootdn "cn=Manager,dc=security,dc=test,dc=com"
rootpw {SSHA}hashed_password
The default access control policy, which allows anonymous reads and gives Manager write access, needs to be changed. Users need write access to their own passwords for 'passwd' to work, anonymous users should only be able to authenticate, and I will let authenticated users read everything except passwords. There are some problems with the default and example configurations in the slapd.conf comments, so the following is what I used to enforce saner settings. The rootdn always has write access to everything. I will update this if needed, since I am still experimenting with OpenLDAP's access controls.
access to attrs=userPassword
by anonymous auth
by self write
by * none

access to *
by self read
by users read
by anonymous auth
Once the basic server configuration is done, I start the LDAP daemon.
service ldap start
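To have slapd start automatically at boot, the init script can be enabled as well (assuming RHEL's stock 'ldap' init script, the same one used by 'service' above):
chkconfig ldap on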

Client configuration
yum install authconfig openldap-clients nss_ldap
Configuring the client is somewhat confusing because Red Hat and many other distributions have two ldap.conf files. The one needed by nss_ldap is in /etc and OpenLDAP's client configuration file is in /etc/openldap.

I edited /etc/ldap.conf and /etc/openldap/ldap.conf for host, base, and the binddn and bindpw of proxyuser. proxyuser will be used for read access by nss_ldap since anonymous reads are disallowed. I also added Manager as rootbinddn, which requires creating /etc/ldap.secret with the plain text password, owned by root, and chmod 600. Both ldap.conf files need to be chmod 644.
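For reference, the relevant /etc/ldap.conf entries end up looking something like the following sketch. The hostname is an assumed example, and the proxyuser DN matches the LDIF created later in this post:
host ldap.security.test.com
base dc=security,dc=test,dc=com
binddn cn=proxyuser,ou=role,dc=security,dc=test,dc=com
bindpw proxyuser_password
rootbinddn cn=Manager,dc=security,dc=test,dc=com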

OpenLDAP's client configuration file is much smaller and only needs a few changes. Most of the settings are the same though I did notice the TLS directives are different.
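A minimal sketch of /etc/openldap/ldap.conf, again using the assumed example hostname:
HOST ldap.security.test.com
BASE dc=security,dc=test,dc=com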

Next, I ran 'authconfig' (or 'authconfig-tui') to edit /etc/pam.d/system-auth. From the menu, I selected to use LDAP for user information and to use LDAP Authentication. Enabling LDAP Authentication will make local accounts unusable when LDAP is down! The server and base can be set here or edited manually in an ldap.conf file. 'authconfig' will also edit /etc/nsswitch.conf to add ldap (a non-interactive equivalent is shown after the nsswitch lines below):
passwd:     files ldap
shadow:     files ldap
group:      files ldap
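If you prefer to skip the menus, authconfig can make the same changes non-interactively. This is a sketch using RHEL 5 authconfig flags and the assumed example hostname:
authconfig --enableldap --enableldapauth \
  --ldapserver=ldap.security.test.com \
  --ldapbasedn="dc=security,dc=test,dc=com" --update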

Testing, using and modifying LDAP


Note that it is easier to test first using the default slapd.conf access controls, which allow anonymous reads, rather than the stricter controls above that I am testing. The search below is performed without authenticating.
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope base
# filter: (objectclass=*)
# requesting: namingContexts

#
dn:
namingContexts: dc=security,dc=test,dc=com

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
Now that it's working, I need to create an LDIF (LDAP Data Interchange Format) file to hold the information that will be loaded into LDAP. It includes a proxyuser account that will have password-protected read access to LDAP. You can use slappasswd to generate hashes for the password fields in the LDIF. Note that my test user in the LDIF below is in the wheel group.
dn: dc=security,dc=test,dc=com
objectclass: top
objectclass: organization
objectclass: dcObject
o: NR Test Group
dc: security

dn: ou=groups,dc=security,dc=test,dc=com
objectclass: organizationalUnit
ou: groups

dn: ou=people,dc=security,dc=test,dc=com
objectclass: organizationalUnit
ou: people

dn: ou=role,dc=security,dc=test,dc=com
objectclass: organizationalUnit
ou: role

dn: cn=proxyuser,ou=role,dc=security,dc=test,dc=com
cn: proxyuser
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: shadowAccount
uid: proxyuser
uidNumber: 1001
gidNumber: 100
homeDirectory: /home
loginShell: /sbin/nologin
userPassword: don't use plain text
sn: proxyuser
description: Account for read-only access

dn: cn=N R,dc=security,dc=test,dc=com
cn: N R
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: shadowAccount
uid: nr
uidNumber: 1002
gidNumber: 10
homeDirectory: /home/nr
loginShell: /bin/bash
userPassword: don't use plain text
sn: R
To load the information from the file into LDAP, use ldapadd. Adding '-ZZ' to any of the ldap* commands issues StartTLS:
ldapadd -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -f ldapinfo.ldif
If there are errors, they should give a hint about the reason. Once the command succeeded, I did a search to display all the information.
ldapsearch -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -b "dc=security,dc=test,dc=com"
You can also specify search terms.
ldapsearch -x -ZZ -D "cn=Manager,dc=security,dc=test,dc=com" -W -b "dc=security,dc=test,dc=com" "cn=proxyus*" uidNumber
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <dc=security,dc=test,dc=com> with scope sub
# filter: cn=proxyus*
# requesting: uidNumber

# proxyuser, role, security.test.com
dn: cn=proxyuser,ou=role,dc=security,dc=test,dc=com
uidNumber: 1001

# search result
search: 3
result: 0 Success

# numResponses: 2
# numEntries: 1
Once the clients are configured with LDAP and working properly, creating an account in LDAP will allow that account to SSH to all systems functioning as LDAP clients. Before trying to authenticate to systems other than the LDAP server, I set up TLS.
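Before testing SSH logins on a client, 'getent' is a quick way to confirm that nss_ldap can resolve an LDAP user (nr is the test user from the LDIF above). If the lookup works, it returns the account as a passwd-format line:
getent passwd nr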

TLS

Right now, without any encryption, password hashes and plenty of other information would be streaming across the network if I were not using a local client. To check, I ran 'tcpdump' on the loopback and searched LDAP again.
tcpdump -v -i lo -X -s 0
The results of the search are clearly visible in the traffic, as shown in the following snippet (with the hash removed).
LoginShell1..    /bin/bash08..userPassword1(.&{SSHA}
The process for configuring TLS is addressed in Red Hat's FAQ. For /etc/openldap/slapd.conf I used the following:
TLSCertificateFile /etc/pki/tls/certs/slapd.pem
TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem
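For the certificate itself, RHEL ships a Makefile in /etc/pki/tls/certs that can generate a self-signed certificate and key in one file. The ownership and mode changes below are an assumption on my part, based on slapd running as the ldap user:
cd /etc/pki/tls/certs
make slapd.pem
chown root:ldap slapd.pem
chmod 640 slapd.pem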
Since the certificate is self-signed and I am not requiring certificate checks, /etc/openldap/ldap.conf needs 'TLS_REQCERT allow'. According to its comments, /etc/ldap.conf defaults to 'tls_checkpeer no', but I still needed to set it explicitly to fix a problem with sudo not finding the uid in the passwd file. /etc/ldap.conf also needs an 'ssl start_tls' entry so nss_ldap uses StartTLS.
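Putting those together, the TLS-related client settings look like this sketch:
# /etc/ldap.conf
ssl start_tls
tls_checkpeer no
# /etc/openldap/ldap.conf
TLS_REQCERT allow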

Administrivia

There are other things to consider when using LDAP accounts to log in with SSH. For instance, I edited /etc/pam.d/sshd so that user home directories are created the first time a user logs in to a particular system over SSH:
#%PAM-1.0
auth required pam_stack.so service=system-auth
auth required pam_nologin.so
account required pam_stack.so service=system-auth
password required pam_stack.so service=system-auth
session required pam_stack.so service=system-auth
session required pam_mkhomedir.so skel=/etc/skel/ umask=0077
session required pam_loginuid.so
For this to work, you will also need UsePAM yes in your sshd_config. On RHEL, checking /var/log/secure will tell you if you are getting PAM-related errors from sshd:
# tail -n 1 /var/log/secure
Sep 18 17:11:58 slapd sshd[24274]: fatal: PAM: pam_open_session(): Module is unknown
At one point in the process, I was getting an error at login:
id: cannot find name for user ID 1002
[I have no name!@slapd ~]$
To fix this, I checked the permissions on /etc/ldap.conf and also made sure proxyuser was working properly. Without LDAP read access, the user ID cannot be mapped to a user name, and since I am denying anonymous reads, that read access depends entirely on proxyuser being set up properly.
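To verify the proxyuser bind independently of nss_ldap, an authenticated search using its DN from the LDIF above should return the test user:
ldapsearch -x -ZZ -D "cn=proxyuser,ou=role,dc=security,dc=test,dc=com" -W -b "dc=security,dc=test,dc=com" "uid=nr"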

You may need to modify iptables rules to allow Linux systems to connect to the LDAP server on TCP port 389. Using TLS via StartTLS will not change the port by default.
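A sketch of an iptables rule for the server, assuming the clients sit on a hypothetical 192.168.1.0/24 network:
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 389 -j ACCEPT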

There is a Red Hat FAQ about getting sudo to work with LDAP. It will not necessarily work out of the box with LDAP users and groups.

12 September, 2008

Grass is green, sky is blue, news at 11

Study: Hotel Networks Put Corporate Users at Risk. You think?

Is it any surprise that most hotels use unencrypted or weak encryption for wireless? Is it any surprise that a substantial number still use hubs instead of switched networks?

It would surprise me more if hotels consistently worried about security for their guests' networks. If only 21 percent of hotels have had reports of "wrongdoing" on their guest networks, the percentage of guests actually reporting attacks must be much lower still. There is little financial incentive for the hotels to upgrade hardware and configure networks to prevent malicious activity. Most road warriors are more worried about convenience than security.

I bet encryption on hotel wireless networks causes more complaints than unencrypted wireless.

09 September, 2008

My take on cloud computing

I heard an interesting story about Google's Chrome browser on NPR's Morning Edition for 09 September. While it was billed as a review of Chrome, the first question host Renee Montagne asked commentator Mario Armstrong was how Google planned to make money from a browser. This led to some discussion of the browser becoming the computer, part of what is sometimes known as cloud computing.

The first way Mr. Armstrong noted to make money on a free browser was, of course, through advertising. The holy grail of advertising these days seems to be targeted ads. For example, search terms, cookies, browser history, and email contents can provide a lot of context for targeting ads. This type of information can certainly be used for unintended purposes, not just targeting ads or selling anonymized user data.

The second answer on how to make money from a browser, more significant from a security perspective, was to build a customer base in anticipation of services moving to the cloud. Cloud computing and security have been discussed quite a bit by others [Examples: 1,2,3,4]. Mr. Armstrong went on to discuss three important facets of a browser, particularly in relation to cloud computing. To paraphrase from the NPR story:

  1. Speed - If you are going to move applications to the cloud, the browser better be fast enough to compare with a local application.
  2. Stability - No matter if an application is in the cloud or local, users and developers don't like their applications to crash. If the application runs in a browser, the browser needs to be stable.
  3. Security - Moving applications to the cloud obviously means moving data to and through the cloud, which dramatically changes the security implications.
My take on moving applications to the cloud is that it is useful but over-hyped, just like lots of "next generation" technology. There are just too many drawbacks for most of us to move all our applications to the cloud. I don't think it's likely that the browser or the cloud will ever become the computer, just like the computer will never be disassociated from the cloud.

Regardless of whether you buy into the hype, cloud security is an issue because users will make decisions on their own. Google's applications are an excellent example. Their search, calendar, documents, and more have potential to put sensitive company or personal information in the cloud. Google documents makes it so easy to share documents with my coworkers! Google Desktop makes it so easy to find and keep track of my files! What do you mean we already have a way to do that? What do you mean we aren't allowed to store those documents on the Internet?

Just to make sure I'm not singling out Google, another instance of cloud computing is Amazon's Elastic Compute Cloud (EC2), which basically allows you to build a virtual machine in Amazon's cloud and then pay for processor time. EC2 is a great example of cloud computing allowing flexibility and computing power without breaking the bank. You can run any number of "small" to "High-CPU very large" instances, scaling resources as needed and starting at only $0.10 per hour per instance!

This scenario was mentioned by someone else, but consider the security implications of putting information in Amazon's cloud, for instance using EC2 to crack a Windows SAM during a security assessment. Sure, you could easily come up with an accurate quote for the EC2 computing time to crack a SAM, but it would be important to disclose the use of EC2 to the customer. If you're using Amazon's processing capability, how are you going to make sure the data stays secure as it floats somewhere in their cloud? How many new avenues of attack have you introduced by using cloud computing? Is it even possible to quantify these risks with the amount of information you have about their cloud?

There are security issues with cloud computing even if the data is not explicitly sensitive like a Windows SAM. Data to be crunched? Video to be rendered? Documents to be stored? Is your data worth money? Who will protect the data and how?

Cloud computing is here in varying degrees depending on your organization, but using it means giving up some amounts of control over your data. Whether the convenience of the cloud is worth it is definitely a risk versus reward issue, and a lot of the risk depends on the sensitivity of the data.

Edit: InfoWorld posted an article about government concerns with cloud computing, particularly who owns data that is stored in the cloud, whether law enforcement should have a lower threshold for gaining access to data in the cloud, and whether government should embrace cloud computing for its needs.

05 September, 2008

Modified: Pulling IP addresses from Snort rules

This is a modification of another script I wrote to pull IP addresses from Snort rules. Since I last posted it, I have added support for multiple rules files, and the script now inserts the IP addresses into the database automatically. After that, I can run a query to compare the IP addresses with sancp data to see which systems connected or attempted connections to the suspicious hosts.

I think this version will only work with MySQL 5.x, not 4.x, because of the section that adds the IP addresses to the database. In older versions of the script, the IP addresses had to be added manually. Note that the table used for the $tablename variable is not part of the Sguil database schema and must be created before running the script; an example follows. Make sure not to use a table that is part of the Sguil DB schema, because this script deletes all data from the table!
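Something like the following should work for creating the table. This is a sketch: the column type matches what INET_ATON() returns, and the index is an optional addition of mine to help the join against sancp:
CREATE TABLE suspicious_hosts (
  dst_ip INT UNSIGNED NOT NULL,
  KEY dst_ip (dst_ip)
);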

The script is fairly slow since it expands CIDR blocks to individual IP addresses and then inserts them into the database one by one. If anyone has ideas for a more efficient method, I'd love to hear them, but since I'm using Sguil I need to be able to compare the suspicious IP addresses with individual IP addresses in the Sguil sancp table.

#!/usr/bin/perl
#
# Script to pull IP addresses from Snort rules files
# by nr
# 2007-08-30
# 2007-09-14 Added CIDR expansion
# 2007-12-19 Added support for multiple rule files
# 2008-05-22 Added MySQL support to insert and convert IPs

use strict;
use Mysql;
use NetAddr::IP;

my $dir = "/nsm/rules"; # rules directory
my @rulefile = qw( emerging-compromised.rules emerging-rbn.rules );
my @rule;               # unprocessed rules

foreach my $rulefile (@rulefile) {
    # Open file to read
    die "Can't open $dir/$rulefile: $!\n" unless open RULEFILE, "<", "$dir/$rulefile";
    chomp(@rule = (@rule, <RULEFILE>)); # append each rule to the array
    close RULEFILE; # close current rule file
}

# MySQL connection settings
my $host = "localhost";
my $database = "sguildb";
my $tablename = "suspicious_hosts";
my $colname = "dst_ip";
my $user = "sguil";
my $pw = "PASSWORD";

# Open MySQL connection
my $sql_connect = Mysql->connect($host, $database, $user, $pw);

# Clear out old IP addresses first
my $sql_delete = "DELETE FROM $tablename";
my $execute = $sql_connect->query($sql_delete) or die "$!";

# For each rule
foreach my $rule (@rule) {
    # Match only rules with IP addresses so we don't get comments etc.
    # This string match does not check for validity of IP addresses
    if ( $rule =~ /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/ ) {
        $rule =~ s/.*\[//g; # Remove the [ character and everything before it
        $rule =~ s/\].*//g; # Remove the ] character and everything after it
        # Split the remaining data on commas
        # and put it into the ip_address array
        my @ip_address = split /,/, $rule;

        # For each IP address in the array
        foreach my $ip_address (@ip_address) {

            # A slash means the entry is a CIDR block
            if ( $ip_address =~ /\// ) {

                # Expand CIDR to all IP addresses in the range, modified from
                # http://www.perlmonks.org/?displaytype=print;node_id=190497
                my $newip = NetAddr::IP->new($ip_address);

                # While less than the broadcast address
                while ( $newip < $newip->broadcast ) {
                    # Strip trailing slash and netmask from IP
                    my $temp_ip = $newip;
                    $temp_ip =~ s/\/.*//g;
                    # SQL statement to insert IP
                    my $sql_statement = "INSERT INTO $tablename SET $colname = INET_ATON('$temp_ip')";
                    # Execute statement
                    my $execute = $sql_connect->query($sql_statement);
                    $newip++; # Increment to next IP
                }
            }
            # For non-CIDR entries, simply insert the IP
            else {
                # SQL statement to insert IP
                my $sql_statement = "INSERT INTO $tablename SET $colname = INET_ATON('$ip_address')";
                # Execute statement; this could be made a function
                # since it is repeated inside the if
                my $execute = $sql_connect->query($sql_statement);
            }
        }
    }
}
To search for connections or attempted connections to the suspicious IP addresses, you could use a query like the following. Obviously, the query should change based on criteria like the ports that interest you, time of activity, protocol, etc.
SELECT sancp.sid, INET_NTOA(sancp.src_ip), sancp.src_port, INET_NTOA(sancp.dst_ip), sancp.dst_port,
sancp.start_time FROM sancp INNER JOIN suspicious_hosts ON (sancp.dst_ip = suspicious_hosts.dst_ip)
WHERE sancp.start_time >= DATE_SUB(UTC_DATE(), INTERVAL 24 HOUR) AND sancp.dst_port = '80';