
Be skeptical of client-reported MIME Content-Types

Over the 14 years that I’ve been working on browsers and the web platform, I’ve seen a lot of bugs where the client’s configuration causes a problem with a website.

By default, Windows maintains File Extension to Content Type and Content Type to File Extension mappings in the registry. You can find the former in subkeys named for each file extension, e.g. HKEY_CLASSES_ROOT\.ext, and the latter as subkeys under the HKEY_CLASSES_ROOT\MIME\Database\Content Type key:

[Screenshot: the registry mapping between the .pdf extension and the application/pdf Content-Type]

These mappings are how Internet Explorer, Edge, and other browsers know that a file delivered as Content-Type: application/pdf should be saved with a .pdf extension, and that a local file named example.html ought to be treated as Content-Type: text/html.

Unfortunately, these mappings are subject to manipulation by locally-installed software, which means you might find that installing Microsoft Excel causes your .CSV file upload to have a Content-Type of application/vnd.ms-excel instead of the text/csv your website was expecting.

Similarly, you might be surprised to discover that some popular file extensions do not have a MIME type registered by default on Windows. Perhaps the most popular is the JavaScript Object Notation format: such files generally should have the file extension .json and a MIME type of application/json, but Windows treats them as an unknown type by default.

Today, I looked at a site which allows the user to upload a JSON file containing data exported from some other service. The upload process fails in Edge with an error saying that the file must be JSON. The site’s script contains the following:

validateFile = function(file) {
  if (file.type !== "application/json") // BUG BUG BUG
    { alert('That is not a valid json file.'); return; }
  // ...proceed with the upload...
};

This function fails in Edge: the file.type attribute is the empty string because Windows has no mapping between .json and application/json.

This site usually works in Chrome because Chrome determines MIME types in three steps: it first checks a fixed list of mappings; if no fixed mapping is found, it consults the system registry; and if the registry does not specify a MIME type for the extension, it falls back to a final list of mappings (kSecondaryMappings), which includes .JSON. However, even Chrome users would be broken if the file had the wrong extension (e.g. data.jso) or if the user’s registry contained a different mapping (e.g. .json => “text/json”).

As a consequence, client JavaScript and server-side upload processing logic should be very skeptical of the MIME type contained in the file.type attribute or Content-Type header, as the MIME value reported could easily be incorrect by accident (or malice!).
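A more robust pattern is to treat file.type as, at best, a hint, and to validate the upload by examining its contents. Here’s a minimal client-side sketch (the helper name and callback are illustrative, not taken from the site in question) that reads the file and attempts to parse it as JSON:

  // Sketch: validate by content rather than by the client-reported MIME type.
  // `file` is a File object from an <input type=file> element.
  function validateJsonFile(file, onResult) {
    var reader = new FileReader();
    reader.onload = function () {
      try {
        JSON.parse(reader.result); // throws if the contents aren't valid JSON
        onResult(true);
      } catch (e) {
        alert('That is not a valid json file.');
        onResult(false);
      }
    };
    reader.readAsText(file);
  }

Naturally, the server must perform the same validation on the uploaded bytes; any client-side check can be bypassed.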

-Eric Lawrence
PS: End users can work around the problem with sites that expect particular MIME types for JSON by importing the following Registry Script (save the text as FixJSON.reg and double-click the file):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.json]
"Content Type"="application/json"

[HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json]
"Extension"=".json"

 


Fight Phish with Facebook (and Certificate Transparency)

As of April 30th, Chrome now requires that all certificates issued by a public certificate authority be logged in multiple public Certificate Transparency (CT) logs, ensuring that anyone can audit all certificates that have been issued. CT logs allow site owners and security researchers to much more easily detect if a sloppy or compromised Certificate Authority has issued a certificate in error.

For instance, I own bayden.com, a site where I distribute freeware applications. I definitely want to hear about it if any CA issues a certificate for my site, because that’s a strong indication that my site’s visitors may be under attack. What’s cool is that CT also allows me to detect if someone got a certificate for a domain name that was suspiciously similar to my domain, for instance bȧyden.com.

Now, for the whole thing to work, I have to actually pay attention to the CT logs, and who’s got time for that? Someone else’s computer, that’s who.

The folks over at Facebook Security have built an easy-to-use interface that allows you to subscribe to notifications any time a domain you care about has a new certificate issued. Just enter a hostname and decide what sorts of alerts you’d like:

[Screenshot: Facebook’s Certificate Transparency Monitoring subscription page]

You can even connect their system into webhooks if you’re looking for something more elaborate than email, although mail works just fine for me:

[Screenshot: an example certificate-issuance notification email]
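If you do go the webhook route, the receiving end can be as simple as an HTTP endpoint that accepts the POSTed notifications. Here’s a minimal Node.js/Express sketch (the route, port, and payload handling are assumptions on my part; consult Facebook’s documentation for the actual notification format and any verification requirements):

  // Sketch: a bare-bones receiver for certificate-issuance notifications.
  const express = require('express');
  const app = express();
  app.use(express.json()); // parse JSON request bodies

  app.post('/ct-alerts', function (req, res) {
    // Log the notification; a real handler would verify the sender and
    // page someone if the certificate wasn't expected.
    console.log('New certificate observed:', JSON.stringify(req.body));
    res.sendStatus(200);
  });

  app.listen(8443);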

Beyond Facebook, there will likely be many other CT Monitoring services coming online over the next few years. For instance, the good folks at Hardenize have already integrated one into their broader security monitoring platform.

The future is awesome.

-Eric


FiddlerCore and Brotli compression

Recently, a developer asked me how to enable Brotli content-compression support in FiddlerCore applications, so that APIs like oSession.GetResponseBodyAsString() work properly when the entity body has been compressed using brotli.

Right now, support requires two steps:

  1. Put brotli.exe (installed by Fiddler, or available from GitHub) into a Tools subfolder of the folder containing your application’s executable.
  2. Ensure that the Environment.SpecialFolder.MyDocuments folder exists and contains a FiddlerCore subfolder (e.g. C:\users\username\documents\FiddlerCore).

Step #1 allows FiddlerCore to find brotli.exe. Alternatively, you can set the fiddler.config.path.Tools preference to override the folder.

Step #2 allows FiddlerCore to create necessary temporary files. Sadly, this folder cannot presently be overridden [Bug].

One day, Fiddler might not need brotli.exe any longer, as Brotli compression is making its way into the framework.

 

 


Content-Types Matter More Than You Think

Every non-empty response from a web server should contain a Content-Type response header that declares the type of content contained in the response. This declaration helps the browser understand how to process the response and can help prevent a number of serious security vulnerabilities.

Setting this header properly is more important than ever.

The Old Days

Many years ago, an easy way to exploit a stored-XSS vulnerability on a web server that accepted file uploads was to simply upload a file containing a short HTML document with embedded JavaScript. You could then send potential victims a link to http://vulnerable.example.com/uploads/123/NotReallyA.jpeg and when the victim’s browser rendered the document, it would find the JavaScript and run it in the security context of vulnerable.example.com, allowing it to steal the contents of cookies and storage, reconfigure your account, rewrite pages, etc.

Sites caught on and started rejecting uploads that lacked the “magic bytes” indicating a JPEG/GIF/PNG at the start of the file. Unfortunately, browsers were so eager to render HTML that they would “sniff” the bytes of the file to see if they could find some HTML to render. Bad guys realized they could shove HTML+Script into metadata fields of the image binary, and the attack would still work. Ugh.

In later years, browsers got smarter and stopped sniffing HTML from files served with an image/ MIME type, and introduced a new response header:

  X-Content-Type-Options: nosniff

…that declared that a browser should not attempt to sniff HTML from a document at all.

Use of the nosniff directive was soon expanded to help prevent responses from being interpreted as CSS or JavaScript, because clever attackers figured out that the complicated nature of Same Origin Policy meant that an attacking page could execute a cross-origin response and use side-effects (e.g. the exception thrown when trying to parse an HTML document as JavaScript) to read secrets out of that cross-origin response.
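To make that side-channel concrete, here’s a minimal sketch of the pattern (the victim URL is hypothetical): the attacking page loads a cross-origin resource as a script, and whether (and how) it fails to parse depends on the secret response body, leaking information across origins. Older browsers leaked even more detail in the error message itself.

  <!-- On the attacker's page -->
  <script>
    // The parse error (or its absence) depends on the secret response body.
    window.onerror = function (message) {
      console.log('Inferred from the cross-origin response:', message);
    };
  </script>
  <script src="https://victim.example/api/secret-data"></script>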

Browser makers have long dreamed of demanding that a response declare Content-Type: application/javascript in order for the response to be treated as JavaScript, but unfortunately telemetry tells us that this would break a non-trivial number of pages. So for now, it’s important to continue sending X-Content-Type-Options: nosniff on responses to mitigate this threat.

The Modern World

Chrome’s security sandbox helps ensure that a compromised (e.g. due to a bug in V8 or Blink) Renderer Process cannot steal or overwrite data on your device. However, until recently, a renderer compromise was inherently a UXSS vector, allowing data theft from every website your browser can reach.

Nearly a decade ago, Microsoft Research proposed a browser with stronger isolation between web origins, but as the Security lead for Internet Explorer, I thought it hopelessly impractical given the nature of the web. Fast forward to 2017, and Chrome’s Site Isolation project has shipped after a large number of engineer-years of effort.

Site Isolation allows the browser to isolate sites from one another in different processes, allowing the higher-privilege Browser Process to deny resources and permissions to low-privilege Renderers that should not have access. Sites that have been isolated are less vulnerable to renderer compromises, because the compromised renderer cannot load protected resources into its own process.

Isolation remains tricky because of the complex nature of Same Origin Policy, which allows a cross-origin response to Execute without being directly Read. To execute a response (e.g. render an image, run a script, load a frame), the renderer process must itself be able to read that response, but it’s forced to rely upon its own code to prevent JavaScript from reading the bytes of that response. To address this, Chrome’s Site Isolation project hosts cross-origin frames inside different processes, and (crucially) rejects the loading of cross-origin documents into inappropriate contexts. For instance, the Browser process should not allow a JSON file (lacking CORS headers) to be loaded by an IMG tag in a cross-origin frame, because this scenario isn’t one that a legitimate site could ever use. By keeping cross-site data out of the (potentially compromised) renderer process, the impact of an arbitrary-memory-read vulnerability is blunted.

Of course, for this to work, sites must mark their resources with the correct Content-Type response header and an X-Content-Type-Options: nosniff directive. (See the latest guidance on Chromium.org)

When Site Isolation blocks a response, a notice is shown in the Developer Tools console:

[Screenshot of the Developer Tools console message: “Blocked current origin from receiving cross-site document”]

The Very Modern World

You may have heard about the recent “speculative execution” attacks against modern processors, in which clever attackers are able to read memory to which they shouldn’t normally have access. A sufficiently clever attacker might be able to execute such an attack from JavaScript in the renderer and steal the memory from that process. Such an attack on the CPU’s behavior results in the same security impact as a renderer compromise, without the necessity of finding a bug in the Chrome code.

In a world where a malicious JavaScript can read any byte in the process memory, the renderer alone has no hope of enforcing “No Read” semantics. So we must rely upon the browser process to enforce isolation, and for that, browsers need the help of web developers.

You can read more about Chrome’s efforts to combat speculative execution attacks here.

Guidance: Serve Content Securely

If your site serves JSON or similar content that contains non-public data, it is absolutely crucial that you set a proper MIME type and declare that the content should not be sniffed. For example:

 Content-Type: application/json; charset=utf-8
 X-Content-Type-Options: nosniff

Of course, you’ll also want to ensure that any Access-Control-Allow-Origin response headers are set appropriately (lest an attacker just steal your document through the front door!).
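As a concrete illustration, here’s how a handler might serve non-public JSON with those headers. This is a Node.js/Express sketch (Express and the /api/profile route are my own illustrative choices; any server stack can send the same headers):

  // Sketch: serve private JSON with an explicit type and nosniff.
  const express = require('express');
  const app = express();

  app.get('/api/profile', function (req, res) {
    res.set('Content-Type', 'application/json; charset=utf-8');
    res.set('X-Content-Type-Options', 'nosniff');
    // Deliberately no Access-Control-Allow-Origin: * on private data.
    res.send(JSON.stringify({ user: 'example', accountBalance: 42 }));
  });

  app.listen(3000);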

 

Thanks for your help in securing the web!

-Eric


Taking Off Your NameTag

Recently, there’s been some excitement over the discovery that some sites are (ab)using browser password managers to identify users even when they’re not logged in.

This technique (I call it the “NameTag vulnerability”) isn’t new or novel, but the research showing that it’s broadly being used “in the wild” is certainly interesting[1], and may motivate changes in password managers to combat the abuse.
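For the curious, here’s a minimal sketch of how the technique works (the markup, field names, and timing are hypothetical, and whether a given password manager will fill an invisible form varies by browser and version): a tracking script injects a login form the user never sees, waits for the password manager to autofill it, then reads back the username as an identifier.

  <!-- Injected by a tracking script; the visitor never sees this form. -->
  <form id="nametag" style="position:absolute; left:-9999px">
    <input type="text" name="username" autocomplete="username">
    <input type="password" name="password" autocomplete="current-password">
  </form>
  <script>
    // After the password manager autofills, read the username back and
    // use (a hash of) it as a persistent identifier for this visitor.
    setTimeout(function () {
      var user = document.querySelector('#nametag input[name=username]').value;
      if (user) { console.log('Identified visitor as:', user); }
    }, 2000);
  </script>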

Most browser password managers already protect against the NameTag vulnerability when you surf in the browsers’ Incognito or InPrivate modes. When IE11 shipped and accidentally removed the mitigation, I complained and it was silently patched. Similarly, we patched a version of this issue in Chrome 54.

Because users often wish to use the password manager even while browsing Incognito, the feature isn’t disabled, but instead enters a mode called “Fill on account select” (FOAS) whereby the user must manually select an account from a dropdown in order to fill the username and password. This helps prevent a site from silently identifying a user.

If you’d prefer to use the FOAS mode even when you’re not browsing in Incognito, you can enable it via a flag: navigate to chrome://flags/#fill-on-account-select, change the setting to Enabled, and restart the browser.

[Screenshot: the fill-on-account-select flag set to Enabled]

To make a similar change in Firefox, navigate to about:config and change the signon.autofillForms setting to false.

Beyond the NameTag use-case, enabling FOAS can serve as a defense-in-depth against XSS attacks and similar vulnerabilities.

The Chrome team is discussing other potential mitigations in https://crbug.com/798492; feel free to “star” the issue to follow along.

Update: Chrome 65.0.3316 introduces a partial mitigation for this issue. Chrome has long had a feature called the PasswordValueGatekeeper that prevents JavaScript on a page from reading the .value property of an autofilled Password field until the user has interacted with the page in some way (a keystroke or mouse click). The Gatekeeper is designed to provide a (weak) mitigation against automated password harvesting attacks (e.g. in the event of a malicious router or UXSS vulnerability). In Chrome 65, the protection of the PasswordValueGatekeeper has been extended to also cover auto-filled Username fields, providing some mitigation against the threat described in this post. The FOAS option provides stronger protections but remains off-by-default.

-Eric

[1] Similarly, a recent study found that many sites also have third-party scripts that spy on users’ interactions with pages, something every developer knows is possible, but most humans never think about.


Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: Chrome’s certificate error page]

Now, there are a variety of different problems that can cause errors like this one; in most cases, the problem is that the user has some software (security software or malware) installed locally that is generating fake certificates that are deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, and we have multiple trust chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 chain and some that go through the newer Google Internet Authority G3 chain; this isn’t something the client controls.

[Screenshot: certificate chains through Google Internet Authority G2 and G3]

You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate like so).

Google’s new certificate chain is certainly supposed to be trusted automatically; if your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working; for instance, some network administrators have disabled root certificate updates for their enterprise’s PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: the Certificate Import Wizard steps (Install Certificate, Local Machine, Trusted Root Certification Authorities, Finish)]
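Alternatively, from an elevated command prompt, certutil can add the downloaded certificate to the machine’s Trusted Root store (the filename here is simply whatever you saved the certificate as):

  certutil -addstore Root GlobalSignR2.cer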

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric
