browsers, security

SSLVersionMin Policy returns to Chrome 66

Chrome 66, releasing to stable this week, again supports the SSLVersionMin policy that enables administrators to control the minimum version of TLS that Chrome is willing to negotiate with a server.

If this policy is in effect and configured to permit, say, only TLS/1.2+ connections, attempting to connect to a site that supports only TLS/1.0 will result in an error page with the error code ERR_SSL_VERSION_OR_CIPHER_MISMATCH.
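
You can check a server's compatibility ahead of such a rollout from outside the browser. Here is a minimal Python (3.7+) sketch, with example.com standing in for the server you care about, that refuses to negotiate anything below TLS 1.2, much as Chrome does under this policy:

    import socket
    import ssl

    # Enforce a TLS 1.2 floor, mirroring an SSLVersionMin policy of "tls1.2".
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    try:
        with socket.create_connection(("example.com", 443)) as sock:
            with context.wrap_socket(sock, server_hostname="example.com") as tls:
                print("Negotiated", tls.version())  # e.g. "TLSv1.2" or "TLSv1.3"
    except ssl.SSLError as err:
        # A server limited to TLS 1.0/1.1 fails the handshake here, much as
        # Chrome fails with ERR_SSL_VERSION_OR_CIPHER_MISMATCH.
        print("Handshake failed:", err)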

This policy existed until Chrome 52 and has been brought back for Chrome 66 and later. It is therefore possible that your administrators configured it long ago, forgot about the setting, and will be surprised when it takes effect again with Chrome 66’s rollout.

-Eric

browsers, security

HSTS Preload and Subdomains

In order to be eligible for the HSTS Preload list, your site must usually serve a Strict-Transport-Security header with an includeSubDomains directive.

Unfortunately, some sites do not follow the recommended best practices; instead, they immediately set a one-year header with includeSubDomains and preload, then request addition to the HSTS Preload list. The result is that any problems are likely to be discovered too late to be fixed rapidly, because removals from the preload list can take months to reach users.

The Mistakes

In running the HSTS preload list, we’ve seen two common mistakes for sites using includeSubDomains:

Mistake: Forgetting Intranet Hosts

Some sites are set up with a public site (example.com) and an internal site only accessible inside the firewall (app.corp.example.com). When includeSubDomains is set, all hosts beneath the specified domain must be accessible over HTTPS, including, in this case, app.corp.example.com. Some corporations have different teams building internal and external applications, and must take care that the security directives applied to the registrable domain are compatible with all of the internal sites running beneath it in the DNS hierarchy. Following the best practice of a staged rollout, with gradually escalating max-age directives (sketched below), will help uncover problems before you brick your internal sites and applications.
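
A staged rollout might look something like the following; the stage durations here are illustrative, not prescriptive:

    Stage 1 (a few days):    Strict-Transport-Security: max-age=300
    Stage 2 (a few weeks):   Strict-Transport-Security: max-age=86400; includeSubDomains
    Stage 3 (a few months):  Strict-Transport-Security: max-age=2592000; includeSubDomains
    Ready for preloading:    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Escalate to the next stage only after the current one has baked without reports of broken hosts, and add the preload token last, once a long-lived policy has proven safe.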

Of course, you absolutely should be using HTTPS on all of your internal sites as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Mistake: Forgetting Delegated Hosts

Some popular sites and services use third-party companies for advertising, analytics, and marketing purposes. To simplify deployments, they’ll delegate handling of a subdomain under their main domain to the vendor providing the service. For instance, http://www.example.com will point to the company’s own servers, while the hostname mail-tracking.example.com will point to Experian or Marketo servers. An HSTS rule with includeSubDomains applied to example.com will also apply to those delegated domains. If your service provider has not enabled HTTPS support on their servers, all requests to those domains will fail when upgraded to HTTPS. You may need to change service providers entirely in order to unbrick your marketing emails!

Of course, you absolutely should be using HTTPS on all of your third-party apps as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Recovery

If you do find yourself in the unfortunate situation of having preloaded a domain whose subdomains were not quite ready, you can apply for removal from the preload list, but, as noted previously, the removal can be expected to take a very long time to propagate. For cases where you have only a few domains out of compliance, you should be able to move them to HTTPS quickly. You might also consider putting an HTTPS front-end in front of your server (e.g. Cloudflare’s free Flexible SSL option) to allow it to be accessed over HTTPS before the backend server is secured.

Deploy safely out there!

-Eric

security

NET::ERR_CERT_INVALID error

Some users report that after updating their Operating System or Chrome browser to a more recent version, they have problems accessing some sites (often internal sites with self-signed certificates) and the browser shows an error of NET::ERR_CERT_INVALID.

NET::ERR_CERT_INVALID means that the certificate itself is so malformed that it’s not accepted at all; sometimes it’s rejected by certificate logic in the underlying operating system, and sometimes by additional validity checks in Chrome. Common causes include the following (a rough programmatic check for a few of them appears after the list):

  1. malformed serial numbers (they must be positive and no more than 20 bytes long)
  2. certificate versions (v1 certificates must not have extensions)
  3. policy constraints
  4. SHA-1 signatures (rejected on macOS 10.13.3+)
  5. validity date formatting (e.g. missing the seconds field in the ASN.1, or encoding using the wrong ASN.1 types)
  6. disk corruption
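
For illustration, here is a rough Python sketch, using the third-party cryptography package (the filename site.pem is hypothetical), that checks a certificate for a few of the causes above; the crt.sh lint described below is far more thorough:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    with open("site.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # RFC 5280: serial numbers must be positive and at most 20 bytes long.
    serial_bytes = (cert.serial_number.bit_length() + 7) // 8
    if cert.serial_number <= 0 or serial_bytes > 20:
        print("malformed serial number")

    # v1 certificates must not carry extensions.
    if cert.version is x509.Version.v1 and len(cert.extensions) > 0:
        print("v1 certificate with extensions")

    # SHA-1 signatures are rejected on some platforms.
    if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
        print("SHA-1 signature")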

Click the “NET::ERR_CERT_INVALID” text on the error page so that the certificate’s base64 PEM data appears. Copy/paste that text (up to the first -----END CERTIFICATE-----) into the box at https://crt.sh/lintcert and the tool will list the problems that can cause this error in Chrome.

[Screenshot: CertLint results from crt.sh/lintcert]

In most cases, the site will need to generate and install a properly-formatted certificate in order to resolve the error.

If the certificate was generated incorrectly by a locally-running proxy (e.g. antivirus) or content-filtering device, the interceptor will need to be fixed.

Finally, Windows does not have a robust self-healing feature for its local trusted certificate stores, meaning that if an on-disk certificate gets even a single bit flipped, every certificate chain that depends on that certificate will begin to fail. The only way to fix this problem is to use CertMgr.msc to delete the corrupted root or intermediate certificate. In a default configuration, Windows will then automatically reinstall the correct certificate from WindowsUpdate.
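
If you prefer a command prompt to CertMgr.msc, certutil can likely delete the damaged entry as well; a sketch, run from an elevated prompt, where the serial number is a hypothetical placeholder for the corrupted certificate’s:

    certutil -delstore Root "0123456789abcdef"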

-Eric

privacy, security

Understanding the Limitations of HTTPS

A colleague recently forwarded me an article about the hazards of browsing on public WiFi with the question: “Doesn’t HTTPS fix this?” And the answer is, “Yes, generally.” As with most interesting questions, however, the complete answer is a bit more complicated.

HTTPS is a powerful technology for helping secure the web; all websites should be using it for all traffic.

If you’re not comfortable with nitty-gritty detail, stop reading here. If your takeaway upon reading the rest of this post is “HTTPS doesn’t solve anything, so don’t bother using it!” you are mistaken, and you should read the post again until you understand why.

HTTPS is a necessary condition for secure browsing, but it is not a sufficient condition.

There are limits to the benefits HTTPS provides, even when deployed properly. This post explores those limits.

Deployment Limitations

HTTPS only works if you use it.

In practice, the most common “exploit against HTTPS” is failing to use it everywhere.

Specify HTTPS:// on every URL, including URLs in documentation, email, advertisements, and everything else. Use Strict-Transport-Security (preload!) and Content-Security-Policy’s Upgrade-Insecure-Requests directive (and optionally Block-All-Mixed-Content) to help mitigate failures to properly set URLs to HTTPS.
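
Concretely, a site that has fully committed to HTTPS might send response headers along these lines:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
    Content-Security-Policy: upgrade-insecure-requests; block-all-mixed-content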

Mixed Content – By default, browsers will block non-secure scripts and CSS (called “Active Mixed Content”) from secure pages. However, images and other “Passive Mixed Content” are requested and displayed; the page’s lock icon is silently hidden.

Non-secure Links – While browsers have special code to deal with Active and Passive mixed content, most browsers do nothing at all for Latent Mixed Content, where a secure page contains a link to a non-secure resource. Email trackers are the worst.

Privacy Limitations

SNI / IP-Address – When you connect to a server over HTTPS, the URL you’re requesting is encrypted and invisible to network observers. However, observers can see both the IP address you’re connecting to, and the hostname you’re requesting on that server (via the Server Name Indication ClientHello extension).

A proposed extension to TLS 1.3 offers a means of encrypting the SNI, but (unless you’re using something like Tor) an observer is likely to be able to guess which server you’re visiting using only the target IP address. In most cases, a network observer will also see the plaintext of the hostname when your client looks up its IP address via the DNS protocol (maybe fixed someday).


Data Length – When you connect to a server over HTTPS, the data you send and receive is encrypted. However, in the majority of cases, no attempt is made to mask the length of data sent or received, meaning that an attacker with knowledge of the site may be able to determine what content you’re browsing on that site. Protocols like HTTP/2 offer built-in options to generate padding frames to mask payload length, and sites can undertake efforts (Twitter manually pads avatar graphics to fixed byte lengths) to help protect privacy. More generally, traffic analysis attacks make use of numerous characteristics of your traffic to attempt to determine what you’re up to; these are used by real-world attackers like the Great Firewall of China. Attacks like BREACH make use of the fact that when compression is in use, leaking just the size of data can also reveal the content of the data; mitigations are non-trivial.

Ticket Linking – TLS session tickets can be used to identify the client across connections. (Addressed in TLS 1.3)

Referer Header – By default, browsers send a page’s URL via the Referer header (also exposed as the document.referrer DOM property) when navigating or making resource requests from one HTTPS site to another. HTTPS sites wishing to control leakage of their URLs should use Referrer Policy.
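
For example, the following policy sends the full URL on same-origin requests, only the origin on cross-origin requests, and nothing at all when moving from HTTPS to HTTP:

    Referrer-Policy: strict-origin-when-cross-origin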

Server Identity Limitations

Certificate Verification – During the HTTPS handshake, the server proves its identity by presenting a certificate. Most certificates these days are issued after what’s called “Domain Validation”, a process by which the requestor proves that they are in control of the domain name listed in the certificate.

This means, however, that a bad guy can usually easily get a certificate for a domain name that “looks like” a legitimate site. While an attacker shouldn’t be able to get a certificate for https://paypal.com, there’s little to stop them from getting a certificate for https://paypal.co.com. Bad guys abuse this.

Some sites try to help users notice illegitimate sites by deploying Extended Validation (EV) certificates and relying upon users to notice if the site they’re visiting has not undergone that higher level of vetting. Sadly, a number of product decisions and abysmal real-world deployment choices mean that EV certificates are of questionable value in the real world.

Even more often, attackers rely on the fact that users don’t understand URLs at all, and are willing to enter their data into any page containing the expected logos:


One Hop – TLS often protects traffic for only one “hop.” For instance, when you connect to my https://fiddlerbook.com, you’ll see that it’s using HTTPS. Hooray!

What you didn’t know is that this domain is fronted by Cloudflare CDN’s free tier. While your communication with the Content Delivery Network is secure, the request from the CDN to my server (http://fiddlerbook.com) is over plain HTTP because my server doesn’t have a valid certificate[1]. A well-positioned attacker could interfere with your connection to the backend site by abusing that non-secure hop. Overall, using Cloudflare for HTTPS fronting improves security in my site’s scenario (protecting against some attackers), but browser UI limits mean that the protection probably isn’t as good as you expected. Here’s a nice video on this.

Multi-hop scenarios exist beyond CDNs; for instance, an HTTPS server might pull in an HTTP web service or use a non-secure connection to a remote database on the backend.

DOM Mixing – When you establish a connection to https://example.com, you can have a level of confidence that the top-level page was delivered unmolested from the example.com server. However, returned HTML pages often pull in third-party resources from other servers, whose certificates are typically not user-visible. This is especially interesting in cases where the top-level page has an EV certificate (“lighting up the green bar”), but scripts or other resources are pulled from a third party with a domain-validated certificate.

Sadly, in many cases, third parties are not worthy of the high level of trust they are granted by inclusion in a first-party page.

Server Compromise – HTTPS only aims to protect the bytes in transit. If a server has been compromised due to a bug or a configuration error, HTTPS does not help (and might even hinder detection of the compromised content, in environments where HTTP traffic is scanned for malware by gateway appliances, for instance). HTTPS does not stop malware.

Server Bugs – Even when not compromised, HTTPS doesn’t make server code magically secure. In visual form:
[Image: “No Silver Bullets”]

Client Identity Limitations

Client Authentication – HTTPS supports a mode whereby the client proves their identity to the server by presenting a certificate during the HTTPS handshake; this is called “Client Authentication.” In practice, this feature is little used.
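
For the curious, here is a minimal Python sketch of a client offering a certificate during the handshake; the filenames and hostname are hypothetical, and the server must be configured to request client certificates:

    import socket
    import ssl

    context = ssl.create_default_context()
    # Present this certificate if the server requests client authentication.
    context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print("Handshake complete:", tls.version())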

Client Tampering – Some developers assume that using HTTPS means that the bytes sent by the client have not been manipulated in any way. In practice, it’s trivial for a user to manipulate the outbound traffic from a browser or application, despite the use of HTTPS.

Features like Certificate Pinning could have made it slightly harder for a user to execute a man-in-the-middle attack against their own traffic, but browser clients like Firefox and Chrome automatically disable Certificate Pinning checks when the received certificate chains to a user-installed root certificate. This is not a bug.

In some cases, the human user is not a party to the attack. HTTPS aims to protect bytes in transit, but does not protect those bytes after they’re loaded in the client application. A man-in-the-browser attack occurs when the client application has been compromised by malware, such that tampering or data leaks are performed before encryption or after decryption. The spyware could take the form of malware in the OS, a malicious or buggy browser extension, etc.

Real-world Implementation Limitations

Early Termination Detection – The TLS specification offers a means (the close_notify alert) of detecting when a data stream was terminated early, to prevent truncation attacks. In practice, clients do not typically enforce this and will often accept truncated content silently, without any notice to the user.

Validation Error Overrides – HTTPS deployment errors are so common that most user-agents allow the user to override errors reported during the certificate validation process (expired certificates, name mismatches, even untrusted CAs, etc.). Clients vary in how well they present the details of the error and how effectively they dissuade users from making mistakes.

-Eric

[1] A few days after posting, someone pointed out that I can configure Cloudflare to use its (oddly named) “Full” HTTPS mode, which allows it to connect to my server over HTTPS using the (invalid) certificate installed on my server. I’ve now done so, providing protection from passive eavesdroppers. But you, as an end-user, cannot tell the difference, which is the point of this post.

dev, security

Strict-Transport-Security for *.dev, *.app and more

Some web developers host their pre-production development sites by configuring their DNS such that hostnames ending in .dev point to local servers. Such configurations were not meaningfully impacted when .dev became an official generic top-level domain (gTLD) a few years back because, even as smart people warned that developers should stop squatting on it, Google (the owner of the .dev TLD) was hosting few (if any) sites on the gTLD.

With Chrome 63, shipping to the stable channel in the coming days, things have changed. Chrome has added .dev to the HSTS Preload list (along with the .foo, .page, .app, and .chrome TLDs). This means that any attempt to visit http://anything.dev will automatically be converted to https://anything.dev.

[Screenshot: HSTS preload list entry for .dev]

Other major browsers use the same underlying HSTS Preload list, and we expect that they will also pick up the .dev TLD entry in the coming weeks and months.

Of course, if you were using HTTPS with valid and trusted certificates on your pre-production sites already (good for you!), the Chrome 63 change may not impact you very much right away. But you’ll probably want to move your pre-production sites off of .dev and instead use e.g. .test, a TLD reserved for this purpose.
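
For purely local development, pointing a .test name at your own machine can be as simple as a hosts-file entry (myapp is, of course, a placeholder):

    # /etc/hosts (on Windows: %SystemRoot%\System32\drivers\etc\hosts)
    127.0.0.1    myapp.test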

Secure all the things!

-Eric

PS: Perhaps surprisingly, the dotless (“plain”) hostnames http://dev, http://page, http://app, http://chrome, http://foo are all impacted by new HSTS rules as well.

browsers, security, Uncategorized

Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: Chrome certificate error page]

Now, there are a variety of problems that can cause errors like this one; in most cases, the problem is that the user has some software (security software or malware) installed locally that is generating fake certificates that are deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, and we have multiple trust chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 chain and some that go through the newer Google Internet Authority G3 chain– this isn’t something the client controls.

[Image: the G2 and G3 certificate chains compared]

You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.
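
If you’re curious which chain you’re receiving for a given host right now, here is a quick Python sketch (mail.google.com is just an example):

    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("mail.google.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="mail.google.com") as tls:
            # getpeercert() returns the issuer as nested name/value pairs.
            issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
            print(issuer.get("commonName"))  # e.g. "Google Internet Authority G3"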

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate from the Security panel of Chrome’s Developer Tools.)

Google’s new certificate chain is certainly supposed to be trusted automatically; if your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working; for instance, some network administrators have disabled root certificate updates for their enterprise’s PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: installing the root certificate via the Certificate Import Wizard]

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric

browsers, security

Chrome Field Trials

Back in April, we announced:

Beginning in October 2017, Chrome will show the “Not secure” warning in two additional situations: when users enter data on an HTTP page, and on all HTTP pages visited in Incognito mode.

This is true, but it’s perhaps a little misleading, based on some of the tweets we’ve seen:

[Screenshot: tweet]

What isn’t mentioned in the blog post is exactly how this feature will roll out; many readers naturally assume that it’s as simple as: “If you have Chrome 62, then this feature is present.” After all, that’s how software usually works.

In Chrome, things are more interesting. Where possible, Chrome rolls out new features dynamically using the Field Trials platform. You can think of Field Trials as a set of server-controlled flags that allow Google to change Chrome’s behavior dynamically, at runtime, without shipping a new version.

We use Field Trials for two major purposes: experimentation and feature rollouts.

For experimentation, we create one or more experimental groups and compare telemetry from those clients against a control group. If a feature isn’t performing as expected (e.g. its usage declines versus the feature it replaces, browser crashes increase, memory usage increases, or page loads slow), the feature is tuned or removed before it officially “ships.” Experiments are often conducted on the pre-release channels (e.g. Canary, Dev, and Beta) before deciding whether or not a feature should be rolled out to the Stable channel.

After a feature has proven itself via experiments, it’s ready to roll out to users in the Stable channel. Unfortunately, pre-release channels don’t get coverage nearly as broad as we’d like, so we have to take care when a feature is first rolled out to Stable. By using a field trial, we can enable the new feature for a huge number of Stable users while still keeping it at a low percentage of the Stable user base (e.g. 1% of a billion installs == 10 million clients). We keep an eagle eye on telemetry and user-feedback reports to spot any unexpected problems, and assuming we don’t find any, we can quickly ramp up the new feature rollout to 100% of users. If we discover the feature wasn’t fit to ship for whatever reason (e.g. introduces some serious bug), we can dial it back to 0% until a fix can be created.
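
This is not how Chrome’s Variations service is actually implemented (and the trial name below is invented), but the core idea behind a percentage-based rollout fits in a few lines of Python: deterministically hash a stable client identifier into a bucket, then compare that bucket to a server-controlled percentage:

    import hashlib

    def in_trial(client_id: str, trial: str, rollout_percent: float) -> bool:
        # Deterministically map this client into a bucket in [0, 100).
        digest = hashlib.sha256(f"{trial}:{client_id}".encode()).digest()
        bucket = int.from_bytes(digest[:8], "big") % 10000 / 100.0
        # Ramping from 1% to 100% (or back to 0%) is then just a
        # server-side configuration change.
        return bucket < rollout_percent

    print(in_trial("client-123", "mark-http-as-not-secure", 1.0))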

Unfortunately, field trials are one of the very few inscrutable parts of the otherwise very open Chrome: Google does not publish information about the current percentage of users in a trial, and while chrome://version/ shows which trials are currently enabled for a given client, there’s no public mapping of the Variations tokens to the actual trials they control.

Rest assured that I’m eager to push the new Not Secure warnings to 100% and I expect to get to do so very soon. If you just can’t wait, you can override the field trial and turn it on yourself by changing chrome://flags/#mark-non-secure-as and restarting Chrome.

[Screenshot: the chrome://flags/#mark-non-secure-as setting]

Note that there’s not a 1:1 correspondence between flags and Field Trials: while many trials can be overridden by flags, not all experiments have user-toggleable flags. And if a feature is controlled by both a flag and a field trial, the field trial’s setting (if any) is not reflected in chrome://flags when the flag hasn’t been manually configured; the flag entry will just show Default.

Protecting your web traffic as fast as we can,

-Eric
