browsers, security, web

CORS and Vary

Yesterday, I started looking at a site compatibility bug where a page’s layout is intermittently busted. Popping open the F12 Tools on the failing page, we see that a stylesheet is getting blocked because it lacks a CORS Access-Control-Allow-Origin response header:

[Image: NoStylesheetCORS]

We see that the client demands the header because the LINK element that references the stylesheet includes a crossorigin="anonymous" attribute:

<link rel="stylesheet" crossorigin="anonymous" href="//s.axs.com/axs/css/90a6f65.css?4.0.1194" type="text/css" />

Aside: It’s not clear why the site is using this attribute. CORS is required to use Subresource Integrity, but this resource does not include an integrity attribute. Perhaps the goal was to save bandwidth by not sending cookies to the “s” (static content) domain?

In any case, the result is that the stylesheet sometimes fails to load as you navigate back and forward.

Looking at the network traffic, we find that the static content domain is trying to follow the best practice of including Vary: Origin when using CORS for access control.

Unfortunately, it’s doing so in a subtly incorrect way, which you can see when diffing two request/response pairs for the stylesheet:

[Image: VaryDiff]

As you can see in the diff, the Origin token is added to the response’s Vary header only when the request specifies an Origin header. If the request doesn’t specify an Origin, the server returns a response that lacks the Access-Control-* headers and also omits the Vary: Origin header.

That’s a problem. If the browser has the variant without the Access-Control directives in its cache, it will reuse that variant in response to a subsequent request… regardless of whether or not the subsequent request has an Origin header.

The rule here is simple: if your server makes a decision about what to return based on what’s in an HTTP request header, you need to include that header’s name in your response’s Vary header, even if the request didn’t include that header.
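To make the fix concrete, here’s a minimal sketch of a handler that gets this right, written as a Flask-style Python server (the route, filename, and allowed-origin list are illustrative, not the site’s actual configuration). The key point is that Vary: Origin goes out on every response, not just the ones that carry the Access-Control-* headers:

from flask import Flask, request, send_file

app = Flask(__name__)

# Hypothetical consumer of the static-content domain
ALLOWED_ORIGINS = {"https://www.example.com"}

@app.route("/axs/css/site.css")
def stylesheet():
    resp = send_file("site.css", mimetype="text/css")
    origin = request.headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        # Only CORS requests get the Access-Control-* headers...
        resp.headers["Access-Control-Allow-Origin"] = origin
    # ...but EVERY response declares that it varies by Origin, so a cached
    # non-CORS response can never be reused to satisfy a later CORS request.
    resp.headers["Vary"] = "Origin"
    return resp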

-Eric

PS: This seems to be a pretty common misconfiguration, which is mentioned in the fetch spec:


CORS protocol and HTTP caches

If CORS protocol requirements are more complicated than setting `Access-Control-Allow-Origin` to * or a static origin, `Vary` is to be used.

Vary: Origin

In particular, consider what happens if `Vary` is not used and a server is configured to send `Access-Control-Allow-Origin` for a certain resource only in response to a CORS request. When a user agent receives a response to a non-CORS request for that resource (for example, as the result of a navigation request), the response will lack `Access-Control-Allow-Origin` and the user agent will cache that response. Then, if the user agent subsequently encounters a CORS request for the resource, it will use that cached response from the previous non-CORS request, without `Access-Control-Allow-Origin`.

But if `Vary: Origin` is used in the same scenario described above, it will cause the user agent to fetch a response that includes `Access-Control-Allow-Origin`, rather than using the cached response from the previous non-CORS request that lacks `Access-Control-Allow-Origin`.

However, if `Access-Control-Allow-Origin` is set to * or a static origin for a particular resource, then configure the server to always send `Access-Control-Allow-Origin` in responses for the resource — for non-CORS requests as well as CORS requests — and do not use `Vary`.
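For a public static-content domain like the one above, the spec’s simpler configuration is often the better choice. Here’s a minimal sketch (again Flask-style and purely illustrative): send a wildcard Access-Control-Allow-Origin on every response, and no Vary: Origin is needed at all:

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/axs/css/site.css")
def stylesheet():
    # Public resource: always send the wildcard header, even for non-CORS
    # requests, so every cached variant is usable by every requester.
    resp = send_file("site.css", mimetype="text/css")
    resp.headers["Access-Control-Allow-Origin"] = "*"
    return resp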


 

browsers, security

Edge EV UI Requires SmartScreen

A user recently noticed that when loading Paypal.com in Microsoft Edge, the UI shown was the default HTTPS UI (a gray lock):

[Image: Non-EV-UI-For-Paypal]

Instead of the fancier “green” UI shown for servers that present Extended Validation (EV) certificates:

[Image: EV-for-Paypal]

The user observed this on some Windows 10 machines but not others.

The variable that differed between those machines was the state of the Menu > Settings > Advanced > Windows Defender SmartScreen setting.

Edge only shows the green EV user interface when SmartScreen is enabled.

IE 11

Internet Explorer 11 on Windows 10 behaves the same way as prior versions of IE going back to IE7: the green EV UI requires either that SmartScreen be enabled or that the option Tools > Internet Options > Advanced > Security > Check for Server Certificate Revocation be enabled.

Chrome

The Chrome team recently introduced a new setting, exposed via the chrome://flags/#simplify-https-indicator page, that controls how EV certificates are displayed in its Security Chip. A user (or a field trial) can configure sites with EV certificates to display using the default HTTPS UI.

[Image: ChromeEV]

 

-Eric

browsers, security

Stop Spilling the Beans

I’ve written about Same Origin Policy a bunch over the years, with a blog series mapping it to the Read/Write/Execute mental model.

More recently, I wrote about why Content-Type headers matter for same-origin-policy enforcement.

I’ve just read a great paper on cross-origin infoleaks and current/future mitigations. If you’re interested in browser security, it’s definitely worth a read.

browsers, dev, security

Building your .APP website with NameCheap and GitHub Pages–A Visual Guide

I recently bought a few new domain names under the brand new .app top-level domain (TLD). The .app TLD is awesome because it’s on the HSTS Preload list, meaning that browsers will automatically use only HTTPS for every request on every domain under .app, keeping connections secure and improving performance.

I’m not doing anything terribly exciting with these domains for now, but I’d like to at least put up a simple welcome page on each one. Now, in the old days of HTTP, this was trivial, but because .app requires HTTPS, I must get a certificate for each of my sites for them to load at all.

Fortunately, GitHub recently started supporting HTTPS on GitHub Pages with custom domains, meaning that I can easily get an HTTPS site up and running in just a few minutes.

1. Log into GitHub, go to your Repositories page and click New:

2. Name your new repository something reasonable:

3. Click to create a simple README file:

4. Edit the file

5. Click Commit new file

6. Click Settings on the repository

7. Scroll to the GitHub Pages section and choose master branch and click Save:

8. Enter your domain name in the Custom domain box and click Save

9. Log into NameCheap (or whatever DNS registrar you used) and click Manage for the target domain name:

10. Click the Advanced DNS tab:

11. Click Add New Record:

 

12. Enter four new A records with host @, using the IP addresses that GitHub Pages uses:

13. Click Save All Changes.

14. Click Add New Record and add a new CNAME Record. Enter the host www and a target value of username.github.io. Click Save All Changes: 

15. Click the trash can icons to delete the two default DNS entries that NameCheap had for your domain previously:

16. Try loading your new site.

  • If you get a connection error, wait a few minutes for DNS to propagate and re-verify the DNS records you just added (a quick verification sketch follows this list).
  • If you get a certificate error, look at the certificate. It’s probably the default GitHub certificate. If so, check the GitHub Pages settings page; you may see a note that your certificate is awaiting issuance by LetsEncrypt.org, in which case just wait a little while.

  • After the certificate is issued, your site loads without errors.
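As promised above, here’s a quick Python check you can run from any machine to confirm the DNS records and the certificate. The hostname is a placeholder for your own domain, and the IP list reflects the A records GitHub documented for Pages at the time of writing; double-check the current values in GitHub’s docs before relying on them:

import socket, ssl

DOMAIN = "example.app"  # placeholder: substitute your apex domain

# The A-record addresses GitHub documented for Pages at the time of writing.
GITHUB_PAGES_IPS = {"185.199.108.153", "185.199.109.153",
                    "185.199.110.153", "185.199.111.153"}

# Step 12 check: does the apex domain resolve to GitHub Pages?
resolved = {info[4][0] for info in socket.getaddrinfo(DOMAIN, 443, socket.AF_INET)}
print("DNS OK" if resolved <= GITHUB_PAGES_IPS else "Unexpected addresses: %s" % resolved)

# Step 16 check: does the certificate validate for the hostname?
ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((DOMAIN, 443)),
                     server_hostname=DOMAIN) as conn:
    issuer = dict(pair[0] for pair in conn.getpeercert()["issuer"])
    print("Certificate issued by:", issuer.get("organizationName"))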

 

Go forth and build great (secure) things!

-Eric Lawrence

security, Uncategorized

Fight Phish with Facebook (and Certificate Transparency)

As of April 30th, Chrome now requires that all certificates issued by a public certificate authority be logged in multiple public Certificate Transparency (CT) logs, ensuring that anyone can audit all certificates that have been issued. CT logs allow site owners and security researchers to much more easily detect if a sloppy or compromised Certificate Authority has issued a certificate in error.

For instance, I own bayden.com, a site where I distribute freeware applications. I definitely want to hear about it if any CA issues a certificate for my site, because that’s a strong indication that my site’s visitors may be under attack. What’s cool is that CT also allows me to detect if someone got a certificate for a domain name that was suspiciously similar to my domain, for instance bȧyden.com.
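As a toy illustration of why such lookalikes are dangerous (real monitoring services use far more sophisticated confusable detection than this), a couple of lines of Python show how a homoglyph domain collapses onto mine once you strip the decoration:

import unicodedata

def skeleton(hostname):
    # Decompose accented characters (e.g. U+0227, 'a with dot above'),
    # then drop the combining marks, leaving plain ASCII behind.
    decomposed = unicodedata.normalize("NFKD", hostname)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(skeleton("bȧyden.com"))  # prints "bayden.com" -- visually similar, different domain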

Now, for the whole thing to work, I have to actually pay attention to the CT logs, and who’s got time for that? Someone else’s computer, that’s who.

The folks over at Facebook Security have built an easy-to-use interface that allows you to subscribe to notifications any time a domain you care about has a new certificate issued. Just enter a hostname and decide what sorts of alerts you’d like:

[Image: CTMonitor]

You can even connect their system to webhooks if you’re looking for something more elaborate than email, although mail works just fine for me:

[Image: Notification]
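If you do go the webhook route, the receiving end can be tiny. Here’s a minimal sketch of a generic receiver (Flask-style; the endpoint path is arbitrary and the payload is treated as opaque JSON, since I’m not documenting the service’s exact schema here):

from flask import Flask, request

app = Flask(__name__)

@app.route("/ct-alert", methods=["POST"])
def ct_alert():
    # Treat the payload as opaque JSON; a real deployment would verify the
    # sender and route the alert somewhere durable (ticketing, chat, pager).
    alert = request.get_json(force=True, silent=True) or {}
    print("New certificate alert:", alert)
    return "", 204

if __name__ == "__main__":
    app.run(port=8443)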

Beyond Facebook, there will likely be many other CT Monitoring services coming online over the next few years. For instance, the good folks at Hardenize have already integrated one into their broader security monitoring platform.

The future is awesome.

-Eric

browsers, security

SSLVersionMin Policy returns to Chrome 66

Chrome 66, releasing to stable this week, again supports the SSLVersionMin policy that enables administrators to control the minimum version of TLS that Chrome is willing to negotiate with a server.

If this policy is in effect and configured to permit, say, only TLS/1.2+ connections, attempting to connect to a site that only supports TLS/1.0 will result in an error page showing the error code ERR_SSL_VERSION_OR_CIPHER_MISMATCH.
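For admins who manage Chrome on Linux, deploying the policy can be as simple as dropping a JSON file into the managed-policies directory. Here’s a minimal sketch (it assumes the standard /etc/opt/chrome location and needs root; Windows admins would use Group Policy or the registry instead):

import json
from pathlib import Path

# Assumes the standard managed-policy directory for Google Chrome on Linux.
policy_dir = Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)

# Require TLS 1.2 or later for all connections.
(policy_dir / "ssl_version_min.json").write_text(
    json.dumps({"SSLVersionMin": "tls1.2"}, indent=2)
)
print("Policy written; restart Chrome and check chrome://policy to confirm.")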

This policy existed until Chrome 52 and was brought back for Chrome 66 and later. It is therefore possible that your administrators configured it long ago, forgot about the setting, and will see its effects return with Chrome 66’s rollout.

-Eric

browsers, security

HSTS Preload and Subdomains

In order to be eligible for the HSTS Preload list, your site must usually serve a Strict-Transport-Security header with an includeSubdomains directive.

Unfortunately, some sites do not follow the recommended best practices and instead just set a one-year preload header with includeSubdomains and then immediately request addition to the HSTS Preload list. The result is that any problems will likely be discovered too late to be rapidly fixed; removals from the preload list may take months.

The Mistakes

In running the HSTS preload list, we’ve seen two common mistakes for sites using includeSubdomains:

Mistake: Forgetting Intranet Hosts

Some sites are set up with a public site (example.com) and an internal site only accessible inside the firewall (app.corp.example.com). When includeSubdomains is set, all sites underneath the specified domain must be accessible over HTTPS, including in this case app.corp.example.com. Some corporations have different teams building internal and external applications, and must take care that the security directives applied to the registrable domain are compatible with all of the internal sites running beneath it in the DNS hierarchy. Following the best practices of staged rollout (with gradually escalating max-age directives) will help uncover problems before you brick your internal sites and applications.

Of course, you absolutely should be using HTTPS on all of your internal sites as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.
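To make the staged rollout concrete, here’s a minimal sketch (Flask-style; the ramp values are illustrative) of serving Strict-Transport-Security with an escalating max-age, adding includeSubdomains and preload only after every host under the domain has proven happy on HTTPS:

from flask import Flask

app = Flask(__name__)

# Illustrative ramp: five minutes, one week, one year, then preload.
# Advance ROLLOUT_STAGE only after the previous stage has baked without
# breaking any host under the domain.
HSTS_STAGES = [
    "max-age=300",
    "max-age=604800; includeSubDomains",
    "max-age=31536000; includeSubDomains",
    "max-age=31536000; includeSubDomains; preload",
]
ROLLOUT_STAGE = 0

@app.after_request
def add_hsts(response):
    response.headers["Strict-Transport-Security"] = HSTS_STAGES[ROLLOUT_STAGE]
    return response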

Mistake: Forgetting Delegated Hosts

Some popular sites and services use third-party companies for advertising, analytics, and marketing purposes. To simplify deployments, they’ll delegate handling of a subdomain under their main domain to the vendor providing the service. For instance, http://www.example.com will point to the company’s own servers, while the hostname mail-tracking.example.com will point to Experian or Marketo servers. An HSTS rule with includeSubdomains applied to example.com will also apply to those delegated domains. If your service provider has not enabled HTTPS support on their servers, all requests to those domains will fail when upgraded to HTTPS. You may need to change service providers entirely in order to unbrick your marketing emails!

Of course, you absolutely should be using HTTPS on all of your third-party apps as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Recovery

If you do find yourself in the unfortunate situation of having preloaded a domain whose subdomains were not quite ready, you can apply for removal from the preload list, but, as noted previously, the removal can be expected to take a very long time to propagate. For cases where you only have a few domains out of compliance, you should be able to quickly move them to HTTPS. You might also consider putting an HTTPS front-end in front of your server (e.g. Cloudflare’s free Flexible SSL option) to allow it to be accessed over HTTPS before the backend server is secured.

Deploy safely out there!

-Eric
