
I Still ❤ The Web

I’ve been working on web security for a long time at this point, and spending most of my time looking at all of the bad stuff happening on the web can get pretty demoralizing. Fortunately, there’s also a lot of amazing stuff on the web that periodically reminds me what a wonderful tool it can be.

For instance, this afternoon a friend posted the following picture:

[Image: a photo of the mystery text]

Now, I’ve loved mysteries for a long time, and this one seemed like it ought to be an easy one with all of the magical tech we have at our disposal these days.

I first gave it a shot with the Google Translate app on my phone, which frequently surprises me with the things it can do. However, while it can translate from a photo, that feature requires that you first specify the source language, and I didn’t know it yet.

I gave this nice article on recognizing character sets a skim, but no perfect match leapt out at me, although it did eliminate some of my guesses. The answer’s actually in there; I’d have found it had I done more than skim.

Fortunately, I remembered an amazing website that lets you draw a shape and it’ll tell you what Unicode characters you might be writing. I chose the most distinctive character I could and scrawled it in the box, and the first half of the puzzle fell into place:

[Image: the drawing box identifying the scrawled character]

After that, it was a simple matter of clicking through to the Georgian character set block and verifying that all of the characters were present. I then copied the text over to Google Translate, which reports that ელდარ translates to Eldar.

There’s a ton of amazing stuff out there on the web. Don’t let all the bad stuff get you down.

-Eric


SSLVersionMin Policy returns to Chrome 66

Chrome 66, releasing to stable this week, again supports the SSLVersionMin policy that enables administrators to control the minimum version of TLS that Chrome is willing to negotiate with a server.

If this policy is in effect and configured to permit, say, only TLS/1.2+ connections, attempting to connect to a site that only supports TLS/1.0 will result in an error page with the error code ERR_SSL_VERSION_OR_CIPHER_MISMATCH.

This policy existed until Chrome 52 and was brought back for Chrome 66 and later. It is therefore possible that your administrators configured it long ago, forgot about the setting, and will be impacted again with Chrome 66’s rollout.
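
As a sketch, on Linux an administrator might drop a managed-policy JSON file into /etc/opt/chrome/policies/managed/ (the filename is arbitrary) to require TLS/1.2 or later; “tls1”, “tls1.1”, and “tls1.2” are the accepted values:

  { "SSLVersionMin": "tls1.2" }

On Windows, the same policy is typically deployed via Group Policy or under the Software\Policies\Google\Chrome registry key.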

-Eric


HSTS Preload and Subdomains

In order to be eligible for the HSTS Preload list, your site must usually serve a Strict-Transport-Security header with an includeSubdomains directive.

Unfortunately, some sites do not follow the recommended best practices; they instead just set a one-year preload header with includeSubdomains and then immediately request addition to the HSTS Preload list. The result is that any problems will likely be discovered too late to be rapidly fixed; removals from the preload list may take months.

The Mistakes

In running the HSTS preload list, we’ve seen two common mistakes for sites using includeSubdomains:

Mistake: Forgetting Intranet Hosts

Some sites are set up with a public site (example.com) and an internal site only accessible inside the firewall (app.corp.example.com). When includeSubdomains is set, all sites underneath the specified domain must be accessible over HTTPS, including in this case app.corp.example.com. Some corporations have different teams building internal and external applications, and must take care that the security directives applied to the registrable domain are compatible with all of the internal sites running beneath it in the DNS hierarchy. Following the best practices of staged rollout (with gradually escalating max-age directives) will help uncover problems before you brick your internal sites and applications.
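
As a sketch, a staged rollout might escalate the header over several weeks, confirming at each step that nothing inside or outside the firewall has broken before proceeding (durations illustrative):

  Strict-Transport-Security: max-age=300
  Strict-Transport-Security: max-age=86400; includeSubdomains
  Strict-Transport-Security: max-age=2592000; includeSubdomains
  Strict-Transport-Security: max-age=31536000; includeSubdomains; preload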

Of course, you absolutely should be using HTTPS on all of your internal sites as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Mistake: Forgetting Delegated Hosts

Some popular sites and services use third-party companies for advertising, analytics, and marketing purposes. To simplify deployments, they’ll delegate handling of a subdomain under their main domain to the vendor providing the service. For instance, http://www.example.com will point to the company’s own servers, while the hostname mail-tracking.example.com will point to Experian or Marketo servers. An HSTS rule with includeSubdomains applied to example.com will also apply to those delegated domains. If your service provider has not enabled HTTPS support on their servers, all requests to those domains will fail when upgraded to HTTPS. You may need to change service providers entirely in order to unbrick your marketing emails!

Of course, you absolutely should be using HTTPS on all of your third-party apps as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Recovery

If you do find yourself in the unfortunate situation of having preloaded a registrable domain whose subdomains were not quite ready, you can apply for removal from the preload list, but, as noted previously, the removal can be expected to take a very long time to propagate. For cases where you only have a few domains out of compliance, you should be able to move them to HTTPS quickly. You might also consider putting an HTTPS front end (e.g. Cloudflare’s free Flexible SSL option) in front of your server to allow it to be accessed over HTTPS before the backend server is secured.

Deploy safely out there!

-Eric


NET::ERR_CERT_INVALID error

Some users report that after updating their Operating System or Chrome browser to a more recent version, they have problems accessing some sites (often internal sites with self-signed certificates) and the browser shows an error of NET::ERR_CERT_INVALID.

NET::ERR_CERT_INVALID means that the certificate itself is so malformed that it’s not accepted at all; sometimes it’s rejected by certificate logic in the underlying operating system, and sometimes by additional validity checks in Chrome. Common causes include:

  1. Malformed serial numbers (they must be no longer than 20 octets)
  2. Certificate version problems (v1 certificates must not have extensions)
  3. Policy constraint violations
  4. SHA-1 signatures (on OS X 10.13.3+)
  5. Validity date formatting problems (e.g. missing the seconds field in the ASN.1, or encoding using the wrong ASN.1 types)
  6. Disk corruption

Click the “NET::ERR_CERT_INVALID” text so that the certificate’s base64 PEM data appears. Copy/paste that text (up to and including the first -----END CERTIFICATE----- line) into the box at https://crt.sh/lintcert and the tool will generate a list of problems that can trigger this error in Chrome.

[Image: CertLint results from crt.sh]
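
If you’d rather inspect the certificate locally, and assuming the OpenSSL command-line tool is installed, you can dump the same PEM data and eyeball the fields listed above yourself:

  openssl x509 -in cert.pem -text -noout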

In most cases, the site will need to generate and install a properly-formatted certificate in order to resolve the error.
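
For instance, here’s a sketch of generating a well-formed self-signed certificate for an internal host (the hostname is illustrative; the -addext flag requires OpenSSL 1.1.1 or later):

  openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout key.pem -out cert.pem -subj "/CN=intranet.example.com" \
    -addext "subjectAltName=DNS:intranet.example.com"

The result is a v3 certificate with a properly-sized random serial number and a SubjectAltName entry, sidestepping several of the pitfalls listed above.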

If the certificate was generated incorrectly by a locally-running proxy (e.g. antivirus) or content-filtering device, the interceptor will need to be fixed.

Finally, Windows does not have a robust self-healing feature for its local Trusted Certificates store, meaning that if an on-disk certificate gets even a single bit flipped, every certificate chain that depends on that certificate will begin to fail. The only way to fix this problem is to use CertMgr.msc to delete the corrupted root or intermediate certificate. In a default configuration, Windows will subsequently automatically reinstall the correct certificate from WindowsUpdate.

-Eric


Understanding the Limitations of HTTPS

A colleague recently forwarded me an article about the hazards of browsing on public WiFi with the question: “Doesn’t HTTPS fix this?” And the answer is, “Yes, generally.” As with most interesting questions, however, the complete answer is a bit more complicated.

HTTPS is a powerful technology for helping secure the web; all websites should be using it for all traffic.

If you’re not comfortable with nitty-gritty detail, stop reading here. If your takeaway upon reading the rest of this post is “HTTPS doesn’t solve anything, so don’t bother using it!” you are mistaken, and you should read the post again until you understand why.

HTTPS is a necessary condition for secure browsing, but it is not a sufficient condition.

There are limits to the benefits HTTPS provides, even when deployed properly. This post explores those limits.

Deployment Limitations

HTTPS only works if you use it.

In practice, the most common “exploit against HTTPS” is failing to use it everywhere.

Specify HTTPS:// on every URL, including URLs in documentation, email, advertisements, and everything else. Use Strict-Transport-Security (preload!) and Content-Security-Policy’s Upgrade-Insecure-Requests directive (and optionally Block-All-Mixed-Content) to help mitigate failures to properly set URLs to HTTPS.
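
Concretely, a site that has completed its rollout might send headers like these (a sketch; ramp max-age up gradually when first deploying):

  Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
  Content-Security-Policy: upgrade-insecure-requests; block-all-mixed-content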

Mixed Content – By default, browsers will block non-secure scripts and CSS (called “Active Mixed Content”) from secure pages. However, images and other “Passive Mixed Content” are requested and displayed; the page’s lock icon is silently hidden.

Non-secure Links – While browsers have special code to deal with Active and Passive mixed content, most browsers do nothing at all for Latent Mixed Content, where a secure page contains a link to a non-secure resource. Email trackers are the worst.

Privacy Limitations

SNI / IP-Address – When you connect to a server over HTTPS, the URL you’re requesting is encrypted and invisible to network observers. However, observers can see both the IP address you’re connecting to, and the hostname you’re requesting on that server (via the Server Name Indication ClientHello extension).

TLS 1.3 proposes a means of SNI-encryption, but (unless you’re using something like Tor) an observer is likely to be able to guess which server you’re visiting using only the target IP address. In most cases, a network observer will also see the plaintext of the hostname when your client looks up its IP address via the DNS protocol (maybe fixed someday).


Data Length – When you connect to a server over HTTPS, the data you send and receive is encrypted. However, in the majority of cases, no attempt is made to mask the length of data sent or received, meaning that an attacker with knowledge of the site may be able to determine what content you’re browsing on that site. Protocols like HTTP/2 offer built-in options to generate padding frames to mask payload length, and sites can undertake efforts (Twitter manually pads avatar graphics to fixed byte lengths) to help protect privacy. More generally, traffic analysis attacks make use of numerous characteristics of your traffic to attempt to determine what you’re up to; these are used by real-world attackers like the Great Firewall of China. Attacks like BREACH make use of the fact that when compression is in use, leaking just the size of data can also reveal the content of the data; mitigations are non-trivial.

Ticket Linking – TLS session tickets can be used to identify the client. (Addressed in TLS 1.3.)

Referer Header – By default, browsers send a page’s URL via the Referer header (also exposed as the document.referrer DOM property) when navigating or making resource requests from one HTTPS site to another. HTTPS sites wishing to control leakage of their URLs should use Referrer Policy.
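
For example, a site that wants cross-origin requests to reveal only its origin (rather than full URLs) might send:

  Referrer-Policy: strict-origin-when-cross-origin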

Server Identity Limitations

Certificate Verification – During the HTTPS handshake, the server proves its identity by presenting a certificate. Most certificates these days are issued after what’s called “Domain Validation”, a process by which the requestor proves that they are in control of the domain name listed in the certificate.

This means, however, that a bad guy can usually easily get a certificate for a domain name that “looks like” a legitimate site. While an attacker shouldn’t be able to get a certificate for https://paypal.com, there’s little to stop them from getting a certificate for https://paypal.co.com. Bad guys abuse this.

Some sites try to help users notice illegitimate sites by deploying Extended Validation (EV) certificates and relying upon users to notice if the site they’re visiting has not undergone that higher level of vetting. Sadly, a number of product decisions and abysmal real-world deployment choices mean that EV certificates are of questionable value in the real world.

Even more often, attackers rely on the fact that users don’t understand URLs at all, and are willing to enter their data into any page containing the expected logos:

[Image: a phishing page bearing familiar logos]

One Hop – TLS often protects traffic for only one “hop.” For instance, when you connect to my site, https://fiddlerbook.com, you’ll see that it’s using HTTPS. Hooray!

What you didn’t know is that this domain is fronted by Cloudflare CDN’s free tier. While your communication with the Content Delivery Network is secure, the request from the CDN to my server (http://fiddlerbook.com) is over plain HTTP because my server doesn’t have a valid certificate[1]. A well-positioned attacker could interfere with your connection to the backend site by abusing that non-secure hop. Overall, using Cloudflare for HTTPS fronting improves security in my site’s scenario (protecting against some attackers), but browser UI limits mean that the protection probably isn’t as good as you expected. Here’s a nice video on this.

Multi-hop scenarios exist beyond CDNs; for instance, an HTTPS server might pull in an HTTP web service or use a non-secure connection to a remote database on the backend.

DOM Mixing – When you establish a connection to https://example.com, you can have a level of confidence that the top-level page was delivered unmolested from the example.com server. However, returned HTML pages often pull in third-party resources from other servers, whose certificates are typically not user-visible. This is especially interesting in cases where the top-level page has an EV certificate (“lighting up the green bar”), but scripts or other resources are pulled from a third-party with a domain-validated certificate.

Sadly, in many cases, third parties are not worthy of the high level of trust they are granted by inclusion in a first-party page.

Server Compromise – HTTPS only aims to protect the bytes in transit. If a server has been compromised due to a bug or a configuration error, HTTPS does not help (and might even hinder detection of the compromised content, in environments where HTTP traffic is scanned for malware by gateway appliances, for instance). HTTPS does not stop malware.

Server Bugs – Even when not compromised, HTTPS doesn’t make server code magically secure. In visual form:
[Image: NoSilverBullets]

Client Identity Limitations

Client Authentication – HTTPS supports a mode whereby the client proves their identity to the server by presenting a certificate during the HTTPS handshake; this is called “Client Authentication.” In practice, this feature is little used.

Client Tampering – Some developers assume that using HTTPS means that the bytes sent by the client have not been manipulated in any way. In practice, it’s trivial for a user to manipulate the outbound traffic from a browser or application, despite the use of HTTPS.

Features like Certificate Pinning could have made it slightly harder for a user to execute a man-in-the-middle attack against their own traffic, but browser clients like Firefox and Chrome automatically disable Certificate Pinning checks when the received certificate chains to a user-installed root certificate. This is not a bug.

In some cases, the human user is not a party to the attack. HTTPS aims to protect bytes in transit, but does not protect those bytes after they’re loaded in the client application. A man-in-the-browser attack occurs when the client application has been compromised by malware, such that tampering or data leaks are performed before encryption or after decryption. The spyware could take the form of malware in the OS, a malicious or buggy browser extension, etc.

Real-world Implementation Limitations

Early Termination Detection – The TLS specification offers a means for detecting when a data stream was terminated early to prevent truncation attacks. In practice, clients do not typically implement this feature and will often accept truncated content silently, without any notice to the user.

Validation Error Overrides – HTTPS deployment errors are so common that most user-agents allow the user to override errors reported during the certificate validation process (expired certificates, name mismatches, even untrusted CAs, etc.). Clients range in quality as to how well they present the details of the error and how effectively they dissuade users from making mistakes.


-Eric

[1] A few days after posting, someone pointed out that I can configure Cloudflare to use its (oddly named) “Full” HTTPS mode, which allows it to connect to my server over HTTPS using the (invalid) certificate installed on my server. I’ve now done so, providing protection from passive eavesdroppers. But you, as an end-user, cannot tell the difference, which is the point of this post.


FiddlerCore and Brotli compression

Recently, a developer asked me how to enable Brotli content-compression support in FiddlerCore applications, so that APIs like oSession.GetResponseBodyAsString() work properly when the entity body has been compressed using brotli.

Right now, support requires two steps:

  1. Put brotli.exe (installed by Fiddler, or available from GitHub) into a Tools subfolder of the folder containing your application’s executable.
  2. Ensure that the Environment.SpecialFolder.MyDocuments folder exists and contains a FiddlerCore subfolder (e.g. C:\users\username\documents\FiddlerCore).

Step #1 allows FiddlerCore to find brotli.exe. Alternatively, you can set the fiddler.config.path.Tools preference to override the folder.
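
For example, a minimal sketch in C#, assuming your application has initialized FiddlerCore normally (the path is illustrative):

  // Point FiddlerCore at the folder containing brotli.exe
  FiddlerApplication.Prefs.SetStringPref("fiddler.config.path.Tools", @"C:\MyApp\Tools");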

Step #2 allows FiddlerCore to create necessary temporary files. Sadly, this folder cannot presently be overridden [Bug].

One day, Fiddler might not need brotli.exe any longer, as Brotli compression is making its way into the framework.



Content-Types Matter More Than You Think

Every non-empty response from a web server should contain a Content-Type response header that declares the type of content contained in the response. This declaration helps the browser understand how to process the response and can help prevent a number of serious security vulnerabilities.

Setting this header properly is more important than ever.

The Old Days

Many years ago, an easy way to exploit a stored-XSS vulnerability on a web server that accepted file uploads was to simply upload a file containing a short HTML document with embedded JavaScript. You could then send potential victims a link to http://vulnerable.example.com/uploads/123/NotReallyA.jpeg and when the victim’s browser rendered the document, it would find the JavaScript and run it in the security context of vulnerable.example.com, allowing it to steal the contents of cookies and storage, reconfigure your account, rewrite pages, etc.

Sites caught on and started rejecting uploads that lacked the “magic bytes” indicating a JPEG/GIF/PNG at the start of the file. Unfortunately, browsers were so eager to render HTML that they would “sniff” the bytes of the file to see if they could find some HTML to render. Bad guys realized they could shove HTML+Script into metadata fields of the image binary, and the attack would still work. Ugh.

In later years, browsers got smarter and stopped sniffing HTML from files served with an image/ MIME type, and introduced a new response header:

  X-Content-Type-Options: nosniff

…that declared that a browser should not attempt to sniff HTML from a document at all.

Use of the nosniff directive was soon expanded to help prevent responses from being interpreted as CSS or JavaScript, because clever attackers figured out that the complicated nature of Same Origin Policy meant that an attacking page could execute a cross-origin response and use side-effects (e.g. the exception thrown when trying to parse an HTML document as JavaScript) to read secrets out of that cross-origin response.

Browser makers have long dreamed of demanding that a response declare Content-Type: application/javascript in order for the response to be treated as JavaScript, but unfortunately telemetry tells us that this would break a non-trivial number of pages. So for now, it’s important to continue sending X-Content-Type-Options: nosniff on responses to mitigate this threat.

The Modern World

Chrome’s security sandbox helps ensure that a compromised (e.g. due to a bug in V8 or Blink) Renderer Process cannot steal or overwrite data on your device. However, until recently, a renderer compromise was inherently a UXSS vector, allowing data theft from every website your browser can reach.

Nearly a decade ago, Microsoft Research proposed a browser with stronger isolation between web origins, but as the Security lead for Internet Explorer, I thought it hopelessly impractical given the nature of the web. Fast forward to 2017, and Chrome’s Site Isolation project has shipped after a large number of engineer-years of effort.

Site Isolation allows the browser to isolate sites from one another in different processes, allowing the higher-privilege Browser Process to deny resources and permissions to low-privilege Renderers that should not have access. Sites that have been isolated are less vulnerable to renderer compromises, because the compromised renderer cannot load protected resources into its own process.

Isolation remains tricky because of the complex nature of Same Origin Policy, which allows a cross-origin response to Execute without being directly Read. To execute a response (e.g. render an image, run a script, load a frame), the renderer process must itself be able to read that response, but it’s forced to rely upon its own code to prevent JavaScript from reading the bytes of that response. To address this, Chrome’s Site Isolation project hosts cross-origin frames inside different processes, and (crucially) rejects the loading of cross-origin documents into inappropriate contexts. For instance, the Browser process should not allow a JSON file (lacking CORS headers) to be loaded by an IMG tag in a cross-origin frame, because this scenario isn’t one that a legitimate site could ever use. By keeping cross-site data out of the (potentially compromised) renderer process, the impact of an arbitrary-memory-read vulnerability is blunted.

Of course, for this to work, sites must correctly mark their resources with the correct Content-Type response header and a X-Content-Type-Options: nosniff directive. (See the latest guidance on Chromium.org)

When Site Isolation blocks a response, a notice is shown in the Developer Tools console:


Console Message: Blocked current origin from receiving cross-site document

The Very Modern World

You may have heard about the recent “speculative execution” attacks against modern processors, in which clever attackers are able to read memory to which they shouldn’t normally have access. A sufficiently clever attacker might be able to execute such an attack from JavaScript in the renderer and steal the memory from that process. Such an attack on the CPU’s behavior results in the same security impact as a renderer compromise, without the necessity of finding a bug in the Chrome code.

In a world where a malicious JavaScript can read any byte in the process memory, the renderer alone has no hope of enforcing “No Read” semantics. So we must rely upon the browser process to enforce isolation, and for that, browsers need the help of web developers.

You can read more about Chrome’s efforts to combat speculative execution attacks here.

Guidance: Serve Content Securely

If your site serves JSON or similar content that contains non-public data, it is absolutely crucial that you set a proper MIME type and declare that the content should not be sniffed. For example:

 Content-Type: application/json; charset=utf-8
 X-Content-Type-Options: nosniff

Of course, you’ll also want to ensure that any Access-Control-Allow-Origin response headers are set appropriately (lest an attacker just steal your document through the front door!).
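
For instance, if a single partner origin legitimately needs to read the resource, prefer an explicit origin over a wildcard (hostname illustrative):

  Access-Control-Allow-Origin: https://app.example.com
  Vary: Origin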


Thanks for your help in securing the web!

-Eric
