
FiddlerCore and Brotli compression

Recently, a developer asked me how to enable Brotli content-compression support in FiddlerCore applications, so that APIs like oSession.GetResponseBodyAsString() work properly when the entity body has been compressed using brotli.
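For context, here’s roughly what the call site looks like in a FiddlerCore app; this is a minimal sketch (the handler body is illustrative, not the developer’s actual code):

  FiddlerApplication.BeforeResponse += oSession =>
  {
      // With Brotli support configured (see the two steps below), a body sent with
      // "Content-Encoding: br" decodes just like gzip or deflate.
      oSession.utilDecodeResponse();
      string body = oSession.GetResponseBodyAsString();
  };
  FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.Default);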

Right now, support requires two steps:

  1. Put brotli.exe (installed by Fiddler, or downloaded from GitHub) into a Tools subfolder of the folder containing your application’s executable.
  2. Ensure that the Environment.SpecialFolder.MyDocuments folder exists and contains a FiddlerCore subfolder (e.g. C:\users\username\documents\FiddlerCore).

Step #1 allows FiddlerCore to find brotli.exe. Alternatively, you can set the fiddler.config.path.Tools preference to override the folder.

Step #2 allows FiddlerCore to create necessary temporary files. Sadly, this folder cannot presently be overridden [Bug].
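If you’d rather handle both steps from code at application startup, a minimal sketch (the preference name is the one above; the rest is illustrative) looks like this:

  // Step #1: tell FiddlerCore which folder contains brotli.exe.
  FiddlerApplication.Prefs.SetStringPref("fiddler.config.path.Tools",
      Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Tools"));

  // Step #2: ensure the Documents\FiddlerCore folder exists so temporary files can be created.
  string docs = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
  Directory.CreateDirectory(Path.Combine(docs, "FiddlerCore"));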

One day, Fiddler might not need brotli.exe any longer, as Brotli compression is making its way into the framework.
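For reference, that support ships as System.IO.Compression.BrotliStream in .NET Core 2.1 and later; once it’s available, decoding a Brotli-compressed body takes just a few lines, as in this sketch:

  using System.IO;
  using System.IO.Compression;

  static byte[] DecompressBrotli(byte[] compressed)
  {
      using (var input = new MemoryStream(compressed))
      using (var brotli = new BrotliStream(input, CompressionMode.Decompress))
      using (var output = new MemoryStream())
      {
          brotli.CopyTo(output);   // inflate the Brotli stream into the output buffer
          return output.ToArray();
      }
  }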

 

 


Content-Types Matter More Than You Think

Every non-empty response from a web server should contain a Content-Type response header that declares the type of content contained in the response. This declaration helps the browser understand how to process the response and can help prevent a number of serious security vulnerabilities.

Setting this header properly is more important than ever.

The Old Days

Many years ago, an easy way to exploit a stored-XSS vulnerability on a web server that accepted file uploads was to simply upload a file containing a short HTML document with embedded JavaScript. You could then send potential victims a link to http://vulnerable.example.com/uploads/123/NotReallyA.jpeg, and when the victim’s browser rendered the document, it would find the JavaScript and run it in the security context of vulnerable.example.com, allowing the script to steal the contents of cookies and storage, reconfigure the victim’s account, rewrite pages, etc.

Sites caught on and started rejecting uploads that lacked the “magic bytes” indicating a JPEG/GIF/PNG at the start of the file. Unfortunately, browsers were so eager to render HTML that they would “sniff” the bytes of the file to see if they could find some HTML to render. Bad guys realized they could shove HTML+Script into metadata fields of the image binary, and the attack would still work. Ugh.

In later years, browsers got smarter and stopped sniffing HTML from files served with an image/ MIME type, and introduced a new response header:

  X-Content-Type-Options: nosniff

…that declared that a browser should not attempt to sniff HTML from a document at all.

Use of the nosniff directive was soon expanded to help prevent responses from being interpreted as CSS or JavaScript, because clever attackers figured out that the complicated nature of the Same Origin Policy meant that an attacking page could execute a cross-origin response and use side-effects (e.g. the exception thrown when trying to parse an HTML document as JavaScript) to read secrets out of that cross-origin response.

Browser makers have long dreamed of demanding that a response declare Content-Type: application/javascript in order for the response to be treated as JavaScript, but unfortunately telemetry tells us that this would break a non-trivial number of pages. So for now, it’s important to continue sending X-Content-Type-Options: nosniff on responses to mitigate this threat.

The Modern World

Chrome’s security sandbox helps ensure that a compromised (e.g. due to a bug in V8 or Blink) Renderer Process cannot steal or overwrite data on your device. However, until recently, a renderer compromise was inherently a UXSS vector, allowing data theft from every website your browser can reach.

Nearly a decade ago, Microsoft Research proposed a browser with stronger isolation between web origins, but as the Security lead for Internet Explorer, I thought it hopelessly impractical given the nature of the web. Fast forward to 2017, and Chrome’s Site Isolation project has shipped after a large number of engineer-years of effort.

Site Isolation allows the browser to isolate sites from one another in different processes, allowing the higher-privilege Browser Process to deny resources and permissions to low-privilege Renderers that should not have access. Sites that have been isolated are less vulnerable to renderer compromises, because the compromised renderer cannot load protected resources into its own process.

Isolation remains tricky because of the complex nature of the Same Origin Policy, which allows a cross-origin response to Execute without being directly Read. To execute a response (e.g. render an image, run a script, load a frame), the renderer process must itself be able to read that response, but it’s forced to rely upon its own code to prevent JavaScript from reading the bytes of that response. To address this, Chrome’s Site Isolation project hosts cross-origin frames inside different processes, and (crucially) rejects the loading of cross-origin documents into inappropriate contexts. For instance, the Browser process should not allow a JSON file (lacking CORS headers) to be loaded by an IMG tag in a cross-origin frame, because this scenario isn’t one that a legitimate site could ever use. By keeping cross-site data out of the (potentially compromised) renderer process, the impact of an arbitrary-memory-read vulnerability is blunted.

Of course, for this to work, sites must mark their resources with the correct Content-Type response header and an X-Content-Type-Options: nosniff directive. (See the latest guidance on Chromium.org)

When Site Isolation blocks a response, a notice is shown in the Developer Tools console:


Console Message: Blocked current origin from receiving cross-site document

The Very Modern World

You may have heard about the recent “speculative execution” attacks against modern processors, in which clever attackers are able to read memory to which they shouldn’t normally have access. A sufficiently clever attacker might be able to execute such an attack from JavaScript in the renderer and steal the memory from that process. Such an attack on the CPU’s behavior results in the same security impact as a renderer compromise, without the necessity of finding a bug in the Chrome code.

In a world where a malicious JavaScript can read any byte in the process memory, the renderer alone has no hope of enforcing “No Read” semantics. So we must rely upon the browser process to enforce isolation, and for that, browsers need the help of web developers.

You can read more about Chrome’s efforts to combat speculative execution attacks here.

Guidance: Serve Content Securely

If your site serves JSON or similar content that contains non-public data, it is absolutely crucial that you set a proper MIME type and declare that the content should not be sniffed. For example:

 Content-Type: application/json; charset=utf-8
 X-Content-Type-Options: nosniff
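As an illustration (this isn’t code from any particular site; the endpoint and payload are made up), an ASP.NET Core app could emit both headers like so:

  var builder = WebApplication.CreateBuilder(args);
  var app = builder.Build();

  // Add nosniff to every response.
  app.Use(async (context, next) =>
  {
      context.Response.Headers["X-Content-Type-Options"] = "nosniff";
      await next();
  });

  // Results.Json sets an explicit application/json Content-Type.
  app.MapGet("/api/account", () => Results.Json(new { balance = 123.45 }));

  app.Run();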

Of course, you’ll also want to ensure that any Access-Control-Allow-Origin response headers are set appropriately (lest an attacker just steal your document through the front door!).

 

Thanks for your help in securing the web!

-Eric


Taking Off Your NameTag

Recently, there’s been some excitement over the discovery that some sites are (ab)using browser password managers to identify users even when they’re not logged in.

This technique (I call it the “NameTag vulnerability”) isn’t new or novel, but the research showing that it’s broadly being used “in the wild” is certainly interesting[1], and may motivate changes in password managers to combat the abuse.

Most browser password managers already protect against the NameTag vulnerability when you surf in the browsers’ Incognito or InPrivate modes. When IE11 shipped and accidentally removed the mitigation, I complained and it was silently patched. Similarly, we patched a version of this issue in Chrome 54.

Because users often wish to use the password manager even while Incognito, the feature isn’t disabled; instead, it enters a mode called “Fill on account select” (FOAS), whereby the user must manually select an account from a dropdown in order to fill the username and password. This helps prevent a site from silently identifying a user.

If you’d prefer to use the FOAS mode even when you’re not browsing in Incognito, you can enable this via a flag: navigate to chrome://flags/#fill-on-account-select, change the setting to Enabled, and restart.

To make a similar change in Firefox, navigate to about:config and change the signon.autofillForms setting to false.

Beyond the NameTag use-case, enabling FOAS can serve as a defense-in-depth against XSS attacks and similar vulnerabilities.

The Chrome team is discussing other potential mitigations in https://crbug.com/798492; feel free to “star” the issue to follow along.

-Eric
[1] Similarly, a recent study found that many sites also have third-party scripts that spy on users’ interactions with pages, something every developer knows is possible, but most humans never think about.


Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: NET::ERR_CERT_AUTHORITY_INVALID error page]

Now, there are a variety of different problems that can cause errors like this one– in most cases, the problem is that the user has some software (security software or malware) installed locally that is generating fake certificates that are deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, and we have multiple trust chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 chain and some that go through the newer Google Internet Authority G3 chain– this isn’t something the client controls.


You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate like so).

Google’s new certificate chain is certainly supposed to be trusted automatically– if your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working– for instance, some network administrators have disabled root certificate updates for their enterprise’s PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: Certificate Import Wizard – Install, Local Machine, Trusted Root Certification Authorities, Finish]

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric


Stealing your own password is not a vulnerability

By far, the most common “vulnerability” reported to the Chrome Vulnerability Rewards program boils down to “I can steal my own password.” Despite having its very own FAQ entry, this gets reported to the VRP at varying levels of breathlessness, sometimes multiple times per day.

You can see this “attack” in action:

[Screenshot: unmasking a saved password]

Yes, it’s true, you can use Chrome to steal your own password.

You can also grab a knife and stab yourself in the leg. I wonder how often knife-makers get letters to that effect?

-Eric


Working with “Big Data” in .NET

For simplicity (and because I didn’t know any better at the time), Fiddler uses plain public byte[] array fields to represent the request and response bodies. This makes working with the body data trivial for authors of extensions and FiddlerScript, but it also creates significant shortcomings. Using fields rather than properties improves performance in some scenarios, but it muddles the contract about the mutability of the data, and it means that developers can all too easily create inconsistent state (e.g. by decompressing the body but forgetting to update the Content-Length, Content-Encoding, and Transfer-Encoding headers).

The more serious problem with the use of byte arrays is that they require contiguous memory allocations. In a 64-bit process, this isn’t a major problem, but in a 32-bit process, address space fragmentation means that finding an open address range larger than a few hundred megabytes is often impossible:

Address space fragmentation means there's no place for the data

If Fiddler cannot allocate contiguous memory of the required size, the resulting .NET System.OutOfMemoryException kills the Web Session. While 64-bit processes rarely suffer from address space fragmentation, the use of byte arrays still leads to a problem with large downloads—the .NET Framework imposes a cap of 0x7FFFFFC7 elements in an array, meaning that even 64-bit Fiddler is limited to storing request and response bodies that are just under two gigabytes. In practice, this is rarely a huge problem, but it’s occasionally annoying.

From an API point-of-view, I should have exposed message bodies as a Stream, so the backing data structure could be selected (and changed) as needed for performance and reliability reasons.
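In sketch form, that shape might look something like the following (illustrative only, not FiddlerCore’s actual API):

  // Expose the body through a Stream so the backing store (byte[], temp file,
  // chunked buffers, etc.) can be swapped out without breaking callers.
  public interface IMessageBody
  {
      long Length { get; }
      Stream OpenRead();   // returns a fresh, read-only view of the body bytes
  }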

Now, as it happens, Fiddler internally uses a Stream buffer when it’s reading the body from the socket—it uses a MemoryStream for this purpose. Unfortunately, the MemoryStream built into the .NET Framework uses a plain byte[] to store the data, which means it suffers from the same problems described above, plus some additional ones. Its biggest problem is the growth algorithm for the byte array backing the MemoryStream; I’ve written at length about that issue and how I worked around it in Fiddler by creating a PipeReadBuffer object with smarter growth rules.

I thought things were as good as they could be without swapping in a different object underlying the PipeReadBuffer, but last night Rafael Rivera pointed out a scenario that’s really broken in Fiddler today. His client was trying to download a 13.6gb ZIP file through Fiddler, and at just below 2gb the download slowed to a crawl. Looking at Fiddler.exe in Process Monitor showed nearly 50% of its time being spent in garbage collection.

What’s going on?

The problem is that in 64-bit Fiddler, the Stream and Forget threshold defaults to 0x7FFFFFC7 bytes. For various reasons (which are likely not very compelling), Fiddler doesn’t trust the Content-Length response header and instead keeps buffering the response body until the StreamAndForget threshold is reached, at which point the buffered bytes are streamed to the client and dropped, and subsequent bytes are blindly streamed to the client without being recorded to a buffer. Despite that wasted buffering of the first two gigabytes, however, everything ought to work reasonably quickly.

Except.

When I coded the PipeReadBuffer, I made it grow by 64mb at a time until it got within 64mb of the .NET maximum array length of 0x7FFFFFC7. At that point, instead of correctly growing to 0x7FFFFFC7 bytes, it grows to exactly the length needed, with no slack bytes. This means that when the next network read comes along a millisecond later, the MemoryStream’s byte array has no free space and must be reallocated. And it gets reallocated to exactly the needed size, again leaving no slack. This process repeats, with each network read meaning that .NET must:

  • Allocate an array of just under 2gb
  • Copy the 2 billion bytes from the old array to the new array
  • Copy in the ~16kb from the network read to the end of the new array
  • Free the old array

This is not a fast pattern, but things get even worse. Ordinarily, those last 64mb below the threshold will download reasonably quickly, the StreamAndForget threshold will get hit, and then all of the memory is freed and the download will proceed without buffering.

But.

TCP/IP includes a behavior called flow control, which means that the server tries to send data only as fast as the client is able to read it. When Fiddler hits the bad reallocation behavior described, it dramatically slows down how quickly it reads from the network. This, in turn, causes the server to send smaller and smaller packets. Which means Fiddler performs more and more network reads of smaller and smaller data, slowing the download of the 64mb to a virtual crawl.

Before Telerik ships a fix for this bug, anyone hitting this can avoid it with a trivial workaround—just set the Stream and Forget threshold inside Tools > Fiddler Options > Performance to something smaller than 2gb (for most users, 100mb would actually work great).
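The fix itself is conceptually tiny: when nearing the cap, grow to the cap rather than to the exact size needed. In sketch form (illustrative, not Fiddler’s actual PipeReadBuffer code):

  const int MaxArrayLength = 0x7FFFFFC7;       // the CLR’s cap on byte[] length
  const int GrowthChunk    = 64 * 1024 * 1024; // 64mb

  static int ComputeNewCapacity(int currentCapacity, int requiredCapacity)
  {
      // Keep growing in 64mb steps; when a full step would exceed the cap, clamp
      // to the cap itself so that subsequent network reads still find slack space.
      long proposed = Math.Max((long)currentCapacity + GrowthChunk, requiredCapacity);
      return (int)Math.Min(proposed, MaxArrayLength);
  }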

-Eric
