
Understanding the Limitations of HTTPS

A colleague recently forwarded me an article about the hazards of browsing on public WiFi with the question: “Doesn’t HTTPS fix this?” And the answer is, “Yes, generally.” As with most interesting questions, however, the complete answer is a bit more complicated.

HTTPS is a powerful technology for helping secure the web; all websites should be using it for all traffic.

If you’re not comfortable with nitty-gritty detail, stop reading here. If your takeaway upon reading the rest of this post is “HTTPS doesn’t solve anything, so don’t bother using it!” you are mistaken, and you should read the post again until you understand why.

HTTPS is a necessary condition for secure browsing, but it is not a sufficient condition.

There are limits to the benefits HTTPS provides, even when deployed properly. This post explores those limits.

Deployment Limitations

HTTPS only works if you use it.

In practice, the most common “exploit against HTTPS” is failing to use it everywhere.

Specify https:// on every URL, including URLs in documentation, email, advertisements, and everything else. Use Strict-Transport-Security (preload!) and Content-Security-Policy’s upgrade-insecure-requests directive (and optionally block-all-mixed-content) to help mitigate failures to properly set URLs to HTTPS.
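
For example, a site might send response headers along these lines (a minimal sketch; pick a max-age you’ve tested before adding preload):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Content-Security-Policy: upgrade-insecure-requests; block-all-mixed-content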

Mixed Content – By default, browsers will block non-secure scripts and CSS (called “Active Mixed Content”) from secure pages. However, images and other “Passive Mixed Content” are requested and displayed; the page’s lock icon is silently hidden.

Non-secure Links – While browsers have special code to deal with Active and Passive mixed content, most browsers do nothing at all for Latent Mixed Content, where a secure page contains a link to a non-secure resource. Email trackers are the worst.

Privacy Limitations

SNI / IP-Address – When you connect to a server over HTTPS, the URL you’re requesting is encrypted and invisible to network observers. However, observers can see both the IP address you’re connecting to, and the hostname you’re requesting on that server (via the Server Name Indication ClientHello extension).

TLS 1.3 proposes a means of encrypting the SNI, but (unless you’re using something like Tor) an observer is likely to be able to guess which server you’re visiting using only the target IP address. In most cases, a network observer will also see the plaintext of the hostname when your client looks up its IP address via the DNS protocol (encrypted DNS transports may fix this someday).
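
To see this for yourself (assuming a machine with tcpdump installed), an on-path observer needs nothing fancier than:

tcpdump -i any -A 'udp port 53'

The queried hostnames appear in the captured packets, modulo DNS’s label encoding.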


Data Length – When you connect to a server over HTTPS, the data you send and receive is encrypted. However, in the majority of cases, no attempt is made to mask the length of data sent or received, meaning that an attacker with knowledge of the site may be able to determine what content you’re browsing on that site. Protocols like HTTP/2 offer built-in options to generate padding frames to mask payload length, and sites can undertake efforts (Twitter manually pads avatar graphics to fixed byte lengths) to help protect privacy. More generally, traffic analysis attacks make use of numerous characteristics of your traffic to attempt to determine what you’re up to; these are used by real-world attackers like the Great Firewall of China. Attacks like BREACH make use of the fact that when compression is in use, leaking just the size of data can also reveal the content of the data.
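
To illustrate the padding idea, here’s a minimal Python sketch (hypothetical bucket size, not any particular site’s scheme) that rounds every payload up to a fixed bucket before encryption, so that observed lengths reveal only the bucket, not the exact size:

def pad_to_bucket(payload: bytes, bucket: int = 4096) -> bytes:
    """Pad payload with NUL bytes up to the next multiple of `bucket`.

    An observer of the encrypted stream then learns only which bucket
    the payload fell into, not its exact length. A real scheme must
    also let the receiver strip the padding (e.g. via a length prefix).
    """
    shortfall = -len(payload) % bucket
    return payload + b"\x00" * shortfall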

Ticket Linking – TLS session tickets can be used to identify the client across connections. (Addressed in TLS 1.3.)

Referer Header – By default, browsers send a page’s URL via the Referer header (also exposed as the document.referrer DOM property) when navigating or making resource requests from one HTTPS site to another. HTTPS sites wishing to control leakage of their URLs should use Referrer Policy.
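
For example, a response header (or the equivalent meta tag) does the job; strict-origin-when-cross-origin is shown here as one reasonable choice:

Referrer-Policy: strict-origin-when-cross-origin

…or, equivalently, in the page markup:

<meta name="referrer" content="strict-origin-when-cross-origin">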

Server Identity Limitations

Certificate Verification – During the HTTPS handshake, the server proves its identity by presenting a certificate. Most certificates these days are issued after what’s called “Domain Validation”, a process by which the requestor proves that they are in control of the domain name listed in the certificate.

This means, however, that a bad guy can usually obtain, with little difficulty, a certificate for a domain name that “looks like” a legitimate site. While an attacker shouldn’t be able to get a certificate for https://paypal.com, there’s little to stop them from getting a certificate for https://paypal.co.com. Bad guys abuse this.

Some sites try to help users notice illegitimate sites by deploying Extended Validation (EV) certificates and relying upon users to notice if the site they’re visiting has not undergone that higher level of vetting. Sadly, a number of product decisions and abysmal real-world deployment choices mean that EV certificates are of questionable value in the real world.

Even more often, attackers rely on the fact that users don’t understand URLs at all, and are willing to enter their data into any page containing the expected logos:

[Image: a phishing page displaying the expected logos]

One Hop – TLS often protects traffic for only one “hop.” For instance, when you connect to my site, https://fiddlerbook.com, you’ll see that it’s using HTTPS. Hooray!

What you may not realize is that this domain is fronted by Cloudflare CDN’s free tier. While your communication with the Content Delivery Network is secure, the request from the CDN to my server (http://fiddlerbook.com) travels over plain HTTP. A well-positioned attacker could interfere with your connection to the backend site by abusing that non-secure hop. Overall, using Cloudflare for HTTPS fronting improves security in my site’s scenario (protecting against some attackers), but browser UI limits mean that the protection probably isn’t as good as you expected. Here’s a nice video on this.

Multi-hop scenarios exist beyond CDNs; for instance, an HTTPS server might pull in an HTTP web service or use a non-secure connection to a remote database on the backend.

DOM Mixing – When you establish a connection to https://example.com, you can have a level of confidence that the top-level page was delivered unmolested from the example.com server. However, returned HTML pages often pull in third-party resources from other servers, whose certificates are typically not user-visible. This is especially interesting in cases where the top-level page has an EV certificate (“lighting up the green bar”), but scripts or other resources are pulled from a third-party with a domain-validated certificate.

Sadly, in many cases, third parties are not worthy of the high level of trust they are granted by inclusion in a first-party page.

Server Compromise – HTTPS only aims to protect the bytes in transit. If a server has been compromised due to a bug or a configuration error, HTTPS does not help (and might even hinder detection of the compromised content, in environments where HTTP traffic is scanned for malware by gateway appliances, for instance). HTTPS does not stop malware.

Server Bugs – Even when not compromised, HTTPS doesn’t make server code magically secure.

Client Identity Limitations

Client Authentication – HTTPS supports a mode whereby the client proves its identity to the server by presenting a certificate during the HTTPS handshake; this is called “Client Authentication.” In practice, this feature is little used.

Client Tampering – Some developers assume that using HTTPS means that the bytes sent by the client have not been manipulated in any way. In practice, it’s trivial for a user to manipulate the outbound traffic from a browser or application, despite the use of HTTPS.

Features like Certificate Pinning could have made it slightly harder for a user to execute a man-in-the-middle attack against their own traffic, but browser clients like Firefox and Chrome automatically disable Certificate Pinning checks when the received certificate chains to a user-installed root certificate. This is not a bug.

In some cases, the human user is not a party to the attack. HTTPS aims to protect bytes in transit, but does not protect those bytes after they’re loaded in the client application. A man-in-the-browser attack occurs when the client application has been compromised by malware, such that tampering or data leaks are performed before encryption or after decryption.

Real-world Implementation Limitations

Early Termination Detection – The TLS specification offers a means (the close_notify alert) for detecting when a data stream was terminated early, to prevent truncation attacks. In practice, clients do not typically implement this check and will often accept truncated content silently, without any notice to the user.
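
As an illustration, Python’s ssl module can be told to treat a stream that ends without TLS’s close_notify alert as an error rather than a normal end-of-file (a minimal sketch against a placeholder example.com):

import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    # suppress_ragged_eofs=False makes a missing close_notify raise
    # ssl.SSLEOFError instead of silently looking like end-of-stream.
    with ctx.wrap_socket(raw, server_hostname="example.com",
                         suppress_ragged_eofs=False) as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                    b"Connection: close\r\n\r\n")
        while True:
            chunk = tls.recv(4096)
            if not chunk:
                break  # clean shutdown: close_notify was received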

Validation Error Overrides – HTTPS deployment errors are so common that most user-agents allow the user to override errors reported during the certificate validation process (expired certificates, name mismatches, even untrusted CAs, etc.). Clients vary in how well they present the details of the error and how effectively they dissuade users from making mistakes.



Strict-Transport-Security for *.dev, *.app and more

Some web developers host their pre-production development sites by configuring their DNS such that hostnames ending in .dev point to local servers. Such configurations were not meaningfully impacted when .dev became an official generic top-level domain (gTLD) a few years back, because even as smart people warned that developers should stop squatting on it, Google (the owner of the .dev TLD) was hosting few (if any) sites on the gTLD.

With Chrome 63, shipping to the stable channel in the coming days, things have changed. Chrome has added .dev to the HSTS Preload list (along with the .foo, .page, .app, and .chrome TLDs). This means that any attempt to visit http://anything.dev will automatically be converted to https://anything.dev.

[Screenshot: hstspreload.org]

Other major browsers use the same underlying HSTS Preload list, and we expect that they will also pick up the .dev TLD entry in the coming weeks and months.

Of course, if you were already using HTTPS with valid and trusted certificates on your pre-production sites (good for you!), the Chrome 63 change may not impact you very much right away. But you’ll probably want to move your pre-production sites off of .dev and instead use, e.g., .test, a TLD reserved for this purpose.
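
For simple single-machine setups, a hosts-file entry suffices (the myproject name is just an example; on Windows, the file lives at C:\Windows\System32\drivers\etc\hosts):

127.0.0.1    myproject.test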

Secure all the things!

-Eric

PS: Perhaps surprisingly, the dotless (“plain”) hostnames http://dev, http://page, http://app, http://chrome, http://foo are all impacted by new HSTS rules as well.


Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: Chrome’s certificate error page]

Now, a variety of different problems can cause errors like this one; in most cases, the problem is that the user has some software installed locally (security software or malware) that is generating fake certificates that are deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, with multiple chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 and some that go through the newer Google Internet Authority G3; this isn’t something the client controls.

[Image: certificate chains through Google Internet Authority G2 vs. G3]

You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate like so).
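
From a command line, OpenSSL can show which chain your connection actually received (output will vary by network and by Google’s server configuration):

openssl s_client -connect mail.google.com:443 -servername mail.google.com

Look for the issuer names (e.g. Google Internet Authority G2 or G3) in the printed certificate chain.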

Google’s new certificate chain is certainly supposed to be trusted automatically. If your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft’s WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working; for instance, some network administrators have disabled root certificate updates for their enterprises’ PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: importing the certificate into the Local Machine’s Trusted Root Certification Authorities store via the Certificate Import Wizard]

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric


Chrome Field Trials

Back in April, we announced:

Beginning in October 2017, Chrome will show the “Not secure” warning in two additional situations: when users enter data on an HTTP page, and on all HTTP pages visited in Incognito mode.

This is true, but it’s perhaps a little misleading, based on some of the tweets we’ve seen:

[Screenshot: tweets reacting to the change]

What isn’t mentioned in the blog post is exactly how this feature will roll out; many readers naturally assume that it’s as simple as “If you have Chrome 62, then this feature is present.” After all, that’s how software usually works.

In Chrome, things are more interesting. Where possible, Chrome rolls out new features dynamically using the Field Trials platform. You can think of Field Trials as a set of server-controlled flags that allow Google to change Chrome’s behavior dynamically, at runtime, without shipping a new version.

We use Field Trials for two major purposes: experimentation and feature rollouts.

For experimentation, we create one or more experimental groups and then compare telemetry from those clients against a control group. If a feature isn’t performing as expected (e.g. its usage declines versus the feature it replaces, browser crashes increase, memory usage grows, or page loads slow), the feature is tuned or removed before it officially “ships.” Experiments are often conducted on the pre-release channels (Canary, Dev, and Beta) before deciding whether or not a feature should be rolled out to the Stable channel.

After a feature has proven itself via experiments, it’s ready to roll out to users in the Stable channel. Unfortunately, pre-release channels don’t get coverage nearly as broad as we’d like, so we have to take care when a feature is first rolled out to Stable. By using a field trial, we can enable the new feature for a huge number of Stable users while still keeping it at a low percentage of the Stable user base (e.g. 1% of a billion installs == 10 million clients). We keep an eagle eye on telemetry and user-feedback reports to spot any unexpected problems, and assuming we don’t find any, we can quickly ramp up the new feature rollout to 100% of users. If we discover the feature wasn’t fit to ship for whatever reason (e.g. introduces some serious bug), we can dial it back to 0% until a fix can be created.

Unfortunately, field trials are one of the very few inscrutable parts of the otherwise very open Chrome: Google does not publish information about the current percentage of users in a trial, and while chrome://version/ shows which trials are currently enabled for a given client, there’s no public mapping of the Variations tokens to the actual trials they control.

Rest assured that I’m eager to push the new Not Secure warnings to 100% and I expect to get to do so very soon. If you just can’t wait, you can override the field trial and turn it on yourself by changing chrome://flags/#mark-non-secure-as and restarting Chrome.

[Screenshot: the #mark-non-secure-as entry in chrome://flags]

Note that there’s not a 1:1 correspondence between flags and field trials: while many trials can be overridden by flags, not all experiments have user-toggleable flags. And when a feature is controlled by both a flag and a field trial, if the flag hasn’t been manually configured, the field trial’s setting (if any) is not reflected in chrome://flags; the flag entry will just show Default.

Protecting your web traffic as fast as we can,

-Eric


Stealing your own password is not a vulnerability

By far, the most common “vulnerability” reported to the Chrome Vulnerability Rewards program boils down to “I can steal my own password.” Despite having its very own FAQ entry, this gets reported to the VRP at varying levels of breathlessness, sometimes multiple times per day.

You can see this “attack” in action:

[Animation: unmasking a saved password]

Yes, it’s true, you can use Chrome to steal your own password.

You can also grab a knife and stab yourself in the leg. I wonder how often knife-makers get letters to that effect?

-Eric

PS: “But… but… what if I don’t want to use this easy trick and instead steal my own passwords from myself by installing some software on my own PC? Isn’t that a security bug?”

No, although “Software can read passwords if I give it full permission to run on my computer” is probably the third or fourth most common non-bug reported to Chromium’s Security@ alias. https://crbug.com/678171 is one of many such filings.

Chrome does use encryption when storing passwords in most cases. However, Chrome has the decryption key (necessarily). You can read more about why an attacker who has physical access or has compromised your machine is outside of the browser’s threat model.

Chrome 59 on Mac and TeletexString Fields

Update: This change ended up getting backed out, after it was discovered that it impacted smartcard authentication. Thanks for self-hosting Chrome Dev builds, IT teams!

A change quietly went into Chrome 59 that may impact your certificates if they contain non-ASCII characters in a TeletexString field. Specifically, these certificates will fail to validate on Mac, resulting in either an ERR_SSL_SERVER_CERT_BAD_FORMAT error for server certificates or an ERR_BAD_SSL_CLIENT_AUTH_CERT error for client certificates. The change that rejects such certificates is presently only in the Mac version of Chrome, but it will eventually make its way to other platforms.

You can check whether your certificates use teletexStrings with an ASN.1 decoder program, like this one. Simply upload the .CER file and look for the TeletexString type in the output. If you find any such fields containing non-ASCII characters, the certificate is impacted:

[Image: a non-ASCII character in a TeletexString field]
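
Alternatively, if you have OpenSSL installed, its ASN.1 dumper works too; note that OpenSSL prints teletexString fields as T61STRING. (Omit -inform DER for PEM-encoded files.)

openssl asn1parse -inform DER -in certificate.cer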

Background: Certificates are encoded using a general-purpose data encoding scheme called ASN.1. ASN.1 specifies encoding rules, and strings may be encoded using any of a number of different data types (teletexString, printableString, universalString, utf8String, bmpString). Due to the complexity and underspecified nature of TeletexString, as well as the old practice of shoving Latin1 strings into fields marked as TeletexString, the Chrome change takes a conservative approach, allowing only the ASCII subset. utf8String is a well-specified and well-supported standard and should be used in place of the obsolete teletexString type.

To correct the problem with the certificate, regenerate it using UTF8String fields to store non-ASCII data.
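
If you generate certificates with OpenSSL, one way to do that is the string_mask setting in the [req] section of openssl.cnf, which forces UTF8String encoding in newly-generated requests:

string_mask = utf8only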

-Eric Lawrence
