browsers, security

Stop Spilling the Beans

I’ve written about Same Origin Policy a bunch over the years, with a blog series mapping it to the Read/Write/Execute mental model.

More recently, I wrote about why Content-Type headers matter for same-origin-policy enforcement.

I’ve just read a great paper on cross-origin infoleaks and current/future mitigations. If you’re interested in browser security, it’s definitely worth a read.

browsers, security

HSTS Preload and Subdomains

In order to be eligible for the HSTS Preload list, your site must usually serve a Strict-Transport-Security header with an includeSubdomains directive.

Unfortunately, some sites do not follow the recommended best practices and instead just set a one-year preload header with includeSubdomains and then immediately request addition to the HSTS Preload list. The result is that any problems will likely be discovered too late to be rapidly fixed; removals from the preload list may take months.

The Mistakes

In running the HSTS preload list, we’ve seen two common mistakes for sites using includeSubdomains:

Mistake: Forgetting Intranet Hosts

Some sites are set up with a public site (example.com) and an internal site only accessible inside the firewall (app.corp.example.com). When includeSubdomains is set, all sites underneath the specified domain must be accessible over HTTPS, including in this case app.corp.example.com. Some corporations have different teams building internal and external applications, and must take care that the security directives applied to the registrable domain are compatible with all of the internal sites running beneath it in the DNS hierarchy. Following the best practices of staged rollout (with gradually escalating max-age directives) will help uncover problems before you brick your internal sites and applications.
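For example, a staged rollout might escalate through increasingly long-lived policies, moving to the next stage only after the current one has soaked without breakage reports: five minutes, one day, thirty days, then one year plus preload. The durations here are illustrative, not prescriptive:

    Strict-Transport-Security: max-age=300; includeSubDomains
    Strict-Transport-Security: max-age=86400; includeSubDomains
    Strict-Transport-Security: max-age=2592000; includeSubDomains
    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload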

Of course, you absolutely should be using HTTPS on all of your internal sites as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Mistake: Forgetting Delegated Hosts

Some popular sites and services use third-party companies for advertising, analytics, and marketing purposes. To simplify deployments, they’ll delegate handling of a subdomain under their main domain to the vendor providing the service. For instance, http://www.example.com will point to the company’s own servers, while the hostname mail-tracking.example.com will point to Experian or Marketo servers. An HSTS rule with includeSubdomains applied to example.com will also apply to those delegated domains. If your service provider has not enabled HTTPS support on their servers, all requests to those domains will fail when upgraded to HTTPS. You may need to change service providers entirely in order to unbrick your marketing emails!

Of course, you absolutely should be using HTTPS on all of your third-party apps as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.
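One cheap safeguard before sending includeSubdomains at all: enumerate every hostname under the domain (internal and delegated alike) and verify that each one answers over HTTPS. A minimal sketch in Python, assuming a hand-maintained host list (in practice, pull this from your DNS zone and certificate inventory):

    import socket, ssl

    # Hypothetical host list for illustration; example.com is a placeholder.
    HOSTS = ["www.example.com", "app.corp.example.com", "mail-tracking.example.com"]

    for host in HOSTS:
        context = ssl.create_default_context()
        try:
            with socket.create_connection((host, 443), timeout=5) as raw:
                with context.wrap_socket(raw, server_hostname=host) as tls:
                    print("OK   ", host, tls.version())
        except Exception as exc:
            print("FAIL ", host, exc)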

Recovery

If you do find yourself in the unfortunate situation of having preloaded a registrable domain whose subdomains were not quite ready, you can apply for removal from the preload list, but, as noted previously, the removal can be expected to take a very long time to propagate. For cases where you only have a few hosts out of compliance, you should be able to move them to HTTPS quickly. You might also consider putting an HTTPS front-end in front of your server (e.g. Cloudflare’s free Flexible SSL option) to allow it to be accessed over HTTPS before the backend server is secured.

Deploy safely out there!

-Eric

browsers, security

TLS Fallbacks are Dead

Gravestone reading RIP Fallbacks

tl;dr: Modern browsers no longer fall back to older TLS protocol versions when a handshake fails, so version-intolerant servers are now simply broken.

Just over 5 years ago, I wrote a blog post titled “Misbehaving HTTPS Servers Impair TLS 1.1 and TLS 1.2.”

In that post, I noted that enabling versions 1.1 and 1.2 of the TLS protocol in IE would cause some sites to load more slowly, or fail to load at all. Sites that failed to load were sending TCP/IP RST packets when receiving a ClientHello message that indicated support for TLS 1.1 or 1.2; sites that loaded more slowly relied on the fact that the browser would retry with an earlier protocol version if the server had sent a TCP/IP FIN instead.

TLS version fallbacks were an ugly but practical hack: they allowed browsers to enable stronger protocol versions before some popular servers were compatible. But version fallback (sketched in code after the list below) incurs real costs:

  • security – a MITM attacker can trigger fallback to the weakest supported protocol
  • performance – retrying handshakes takes time
  • complexity – falling back only in the right circumstances, creating new connections as needed
  • compatibility – not all clients are willing or able to fall back (e.g. Fiddler would never fall back)
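Here’s a minimal sketch of that client-side fallback logic, using Python’s ssl module purely for illustration (note that modern OpenSSL builds may refuse to negotiate TLS 1.0 at all, which is rather the point):

    import socket, ssl

    def connect_with_fallback(host, port=443):
        # Try the strongest protocol first; on failure, retry with an older
        # one. This is the hack described above, and also the security hole:
        # an attacker who kills the first handshake gets a silent downgrade.
        for version in (ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_1,
                        ssl.TLSVersion.TLSv1):
            context = ssl.create_default_context()
            context.minimum_version = version
            context.maximum_version = version
            try:
                sock = socket.create_connection((host, port), timeout=5)
                return context.wrap_socket(sock, server_hostname=host), version
            except (ssl.SSLError, OSError):
                continue  # server sent FIN/RST or failed the handshake
        raise ConnectionError("no mutually supported TLS version")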

Fortunately, server compatibility with TLS 1.1 and 1.2 has improved a great deal over the last five years, and browsers have begun to remove their fallbacks: first, fallback to SSL 3 was disabled, and now Firefox 37+ and Chrome 50+ have removed fallback entirely.

In the rare event that you encounter a site that needs fallback, you’ll see a message like this, in Chrome:

Google Chrome 52 error

or in Firefox:

Firefox 45 error

Currently, both Internet Explorer and Edge fall back; first, a TLS 1.2 handshake is attempted:

TLS 1.2 ClientHello bytes

and after it fails (the server sends a TCP/IP FIN), a TLS 1.0 attempt is made:

TLS 1.0 ClientHello bytes

This attempt succeeds and the site loads in IE. If you analyze the affected site using SSLLabs, you can see that it has a lot of problems; the key one for this case is in the middle:

Grade F on SSLLabs

This is repeated later in the analysis:

TLS Version intolerant for versions 1.2 and 1.3

The analyzer finds that the server refuses not only TLS 1.2 but also the upcoming TLS 1.3.

Unfortunately, as an end-user, there’s not much you can safely do here, short of contacting the site owners and asking them to update their server software to support modern standards. Fortunately, this problem is rare: the Chrome team found that only 0.0017% of TLS connections triggered fallbacks, and even this tiny number is probably artificially high (since a spurious connection problem will also trigger a fallback).

 

-Eric

security

Non-Secure Clicktrackers: The Fastest Path from A+ to F

HTTPS only works if you use it.

Coinbase is an online bitcoin exchange backed by $106M in venture capital investment. They’ve got a strong HTTPS security posture, including the latest ciphers, a 4096-bit RSA key, and advanced features like browser-preloaded HSTS and HPKP.

SSLLabs grades Coinbase’s HTTPS deployment an A+:

A+ Grade from SSLLabs

This is a well-secured site with a professional security team.

Here’s the email they just sent me:

"Add a debit card"

Let’s run the MoarTLS Analyzer on that:

All Red

That’s right… every hyperlink in this email is non-secure, and any click can be intercepted and sent anywhere by a network-based attacker.

Sadly, Coinbase is far from alone in snatching security defeat from the jaws of victory; my #HTTPSFAIL folder includes a lot of other big names:

HTTPSFailures

 

It doesn’t matter how well you secure your castle if you won’t help your visitors get to it securely. Use HTTPS everywhere.

 

-Eric

Update: I filed a bug with Coinbase on HackerOne. Their security team says that they “agree” that these links should be HTTPS, but the problem is Mailchimp (their email vendor) and they can’t fix it. Mailchimp offers a security vulnerability reporting form, delivered exclusively over HTTP:

MailChimp

Coinbase isn’t the first service whose security is bypassed because their emails are sent with non-secure links; the Brave browser download announcements suffered the same problem.

security

Bolstering HTTPS Security

When #MovingToHTTPS, the first step is to obtain the necessary certificates for your domains and enable HTTPS on your webserver. After your site is fully HTTPS, there are some other configuration changes you should consider to further enhance the site’s security.

Validate Basic Configuration

First, use SSL Labs’ Server Test to ensure that your existing HTTPS settings (cipher suites, etc.) are configured well.

 

SECURE Cookies

After your site is fully secure, all of your cookies should be set with the SECURE attribute to prevent them from ever leaking over a non-secure channel. The (confusingly-named) HTTPOnly attribute should also be set on each cookie that doesn’t need to be accessible from JavaScript.

    Set-Cookie: SESSIONID=b12321ECLLDGH; secure; path=/
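A cookie that doesn’t need to be read from script should also carry the HTTPOnly attribute; for instance:

    Set-Cookie: SESSIONID=b12321ECLLDGH; secure; HttpOnly; path=/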

Strict Transport Security (HSTS)

Next, consider enabling HTTP Strict Transport Security. By sending an HSTS header from your HTTPS pages, you can ensure that all visitors navigate to your site via HTTPS, even if they accidentally type “http://” or follow an outdated, non-secure link.

HSTS also ensures that, if a network attacker were to try to intercept traffic to your HTTPS site using an invalid certificate, a non-savvy user can’t wreck their security by clicking through the browser’s certificate error page.

After you’ve tested out a short-lived HSTS declaration, validated that there are no problems, and ramped it up to a long-duration declaration (e.g. one year), you should consider requesting that browsers pre-load your declaration to prevent any “bootstrap” threats (whereby a user isn’t protected on their very first visit to your site).

    Strict-Transport-Security: max-age=631138519; includeSubDomains; preload

The includeSubdomains attribute indicates that all subdomains of the current page’s domain must also be secured with HTTPS. This is a powerful feature that helps protect cookies (which have weird scoping rules) but it is also probably the most common source of problems because site owners may “forget” about a legacy non-secure subdomain when they first enable this attribute.

Strict-Transport-Security is supported by most browsers.

Certificate Authority Authorization (CAA)

Certificate Authority Authorization (supported by LetsEncrypt) allows a domain owner to specify which Certificate Authorities should be allowed to issue certificates for the domain. All CAA-compliant certificate authorities should refuse to issue a certificate unless they are the CA of record for the target site. This helps reduce the threat of a bad guy tricking a Certificate Authority into issuing a phony certificate for your site.

The CAA rule is stored as a DNS resource record of type 257. You can view a domain’s CAA rule using a DNS lookup service. For instance, this record for Google.com means that only Symantec’s Certificate Authority may issue certificates for that host:

    google.com.  CAA  0 issue "symantec.com"

Configuration of CAA rules is pretty straightforward if you have control of your DNS records.
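For instance, in a BIND-style zone file, permitting only LetsEncrypt to issue for a hypothetical example.com might look like the following; the optional iodef property tells compliant CAs where to report requests that violate the policy:

    example.com.  CAA  0 issue "letsencrypt.org"
    example.com.  CAA  0 iodef "mailto:security@example.com"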

Public Key Pinning (HPKP)

Unfortunately, many CAs have made mistakes over the years (e.g. DigiNotar, India CCA, CNNIC CA, ANSSI CA) and the fact that browsers trust so many different certificate authorities presents a large attack surface.

To further reduce the attack surface from sloppy or compromised certificate authorities, you can enable Public Key Pinning. Like HSTS, HPKP rules are sent as HTTPS response headers and cached on the client. Each domain’s HPKP rule contains a list of Public Key hashes that must be found within the certificate chain presented by the server. If the server presents a certificate and none of the specified hashes are found in that certificate’s chain, the certificate is rejected and (typically) a failure report is sent to the owner for investigation.

Public Key Pinning is a powerful feature, and sites must adopt it carefully—if a site carelessly sends a long-life HPKP header, it could deny users access to the site for the duration of the HPKP rule.

    Public-Key-Pins: pin-sha256="8Rw90Ej3Ttt8RRkrg+WYDS9n7IS03bk5bjP/UXPtaY8=";
                     pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=";
                     max-age=600; includeSubdomains;
                     report-uri="https://report-uri.io/report/93cb21052c"
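Each pin-sha256 value is the base64-encoded SHA-256 hash of a certificate’s SubjectPublicKeyInfo (see RFC 7469). A minimal sketch of computing one in Python, assuming the pyca/cryptography package and a local cert.pem:

    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Hash the DER-encoded SubjectPublicKeyInfo, then base64 the digest.
    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)

    print('pin-sha256="%s"' %
          base64.b64encode(hashlib.sha256(spki).digest()).decode())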

Free services like ReportUri.io can be used to collect HPKP violation reports.

HPKP is supported by some browsers.

Certificate Transparency

Certificate Transparency is a scheme whereby each CA must record every certificate it issues in a public, auditable log. This log can be monitored by site and brand owners for any unexpected certificates, helping expose fraud and phony domains that might otherwise have gone unnoticed.

Site owners who want a lightweight way to look for all CT-logged certificates for their domains can use the Certificate Transparency Lookup Tool. Expect that other tools will become available to allow site owners to subscribe to notifications about certificates of interest.
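One such option today is crt.sh, which exposes an unofficial JSON endpoint; a quick sketch (the endpoint, its rate limits, and its field names are all assumptions that may change):

    import json, urllib.request

    # %25 is a URL-encoded '%' wildcard; example.com is a placeholder.
    url = "https://crt.sh/?q=%25.example.com&output=json"
    with urllib.request.urlopen(url) as response:
        for entry in json.load(response):
            print(entry["not_before"], entry["issuer_name"], entry["name_value"])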

Since 2015, Chrome has required that all EV certificates issued be logged in CT logs. Future browsers will almost certainly offer a way for a site to indicate to visiting browsers that it should Expect or Require that the received certificate be found in CT logs, reporting or blocking the connection if it is not.

 

Thanks for your help in securing the web!

-Eric

fiddler, security

Silliness – Fiddler Blocks Malware

Enough malware researchers now depend upon Fiddler that some bad guys won’t even try to infect your system if you have Fiddler installed.

The Malwarebytes blog post has the details, but the gist of it is that the attackers use JavaScript to probe the would-be victim’s PC for a variety of software. Beyond Kaspersky, TrendMicro, and MBAM security software, the fingerprinting script also checks for VirtualBox, Parallels, VMware, and Fiddler. If any of these programs are thought to be installed, the exploit attempt is abandoned and the would-be victim is skipped.

This isn’t the only malware we’ve seen hiding from Fiddler—earlier attacks used tricks to check whether Fiddler was actively running and intercepting traffic, abandoning the exploit only if it was.

This behavior is, of course, pretty silly. But it makes me happy anyway.

Preventing Detection of Fiddler

Malware researchers who want to help ensure Fiddler cannot be detected by bad guys should take the following steps:

  1. Do not put Fiddler directly on the “victim” machine.
    – If you must, at least install it to a non-default path.
  2. Instead, run Fiddler as a proxy server external to the victim machine (either use a different physical machine or a VM).
    – Tick the Tools > Fiddler Options > Connections > Allow remote computers to connect checkbox. Restart Fiddler and ensure the machine’s firewall allows inbound traffic to port 8888.
    – Point the victim’s proxy settings at the remote Fiddler instance.
    – Visit http://fiddlerserverIP:8888/ from the victim and install the Fiddler root certificate.
  3. Click Rules > Customize Rules and update the FiddlerScript so that the OnReturningError function wipes the response headers and body and replaces them with nondescript strings. Some fingerprinting JavaScript will generate bogus AJAX requests and then scan the response to see whether there are signs of Fiddler; a sketch of such a change appears below.
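For instance, a minimal FiddlerScript sketch (the header values and body text are arbitrary placeholders, not a vetted stealth profile):

    static function OnReturningError(oSession: Session) {
        // Replace Fiddler's distinctive error page with bland, generic
        // content so probing scripts find no trace of the proxy.
        oSession.oResponse["Server"] = "nginx";
        oSession.oResponse["Content-Type"] = "text/html; charset=UTF-8";
        oSession.utilSetResponseBody("<html><body>Service Unavailable</body></html>");
    }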

-Eric

dev, perf

Compression Context

ZIP is a great format—it’s extremely broadly deployed, relatively simple, and supports a wide variety of use-cases pretty well. ZIP is the underlying format beneath Java (.jar) archives, Office (docx/xlsx/pptx) files, Fiddler (.saz) Session Archive files, and many more.

Even though some features (Unicode filenames, AES encryption, advanced compression engines) aren’t supported by all clients (particularly Windows Explorer), basic support for ZIP is omnipresent. There are even solid implementations in JavaScript (optionally utilizing asm.js), and discussion of adding related primitives directly to the browser.

I learned a fair amount about the ZIP format when building ZIP repair features in Fiddler’s SAZ file loader. Perhaps the most interesting finding is that each individual file within a ZIP is compressed on its own, without any context from files already added to the ZIP. This means, for instance, that you can easily remove files from within a ZIP file without recompressing anything—you need only delete the removed entries and recompute the index. However, this limitation also means that if the data you’re compressing contains a lot of interfile redundancy, the compression ratio does not improve as it would if there were intrafile redundancy.

This limitation can be striking in cases like Fiddler where there may be a lot of repeated data across multiple Sessions. In the extreme case, consider a SAZ file with 1000 near-identical Sessions. When that data is compressed to a SAZ, it is 187 megabytes. If the data were instead compressed with 7-Zip, which shares a compression context across embedded files, the output is 99.85% smaller!

ZIP almost 650x larger than 7z

In most cases, of course, the Session data is not identical, but Sessions on the whole tend to contain a lot of redundancy, particularly when you consider HTTP headers and the Session metadata XML files.
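You can reproduce the effect with nothing but Python’s standard library; here lzma stands in for 7-Zip, since both maintain a single compression context across the entire input (the sample data is contrived):

    import io, lzma, zipfile

    # 1000 near-identical "Sessions": ZIP compresses each file in isolation,
    # so it cannot exploit the redundancy between files.
    blob = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n" + b"<html>x</html>" * 100

    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for i in range(1000):
            archive.writestr("raw/%d_c.txt" % i, blob)

    # A single shared context (as in 7z/xz) sees the inter-file redundancy:
    solid = lzma.compress(blob * 1000)

    print("zip:  ", len(buffer.getvalue()), "bytes")
    print("solid:", len(solid), "bytes")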

The takeaway here is that when you look at compression, the compression context is very important to the resulting compression ratio. This fact rears its head in a number of other interesting places:

  • brotli compression achieves high compression ratios in part by using a 16-megabyte sliding window, as compared to the 32kb window used by nearly all DEFLATE implementations. This means that brotli content can “point back” much further in the already-compressed data stream when repeated data is encountered.
  • brotli also benefits by pre-seeding the compression context with a 122kb static dictionary of common web content; this means that even the first bytes of a brotli-compressed response can benefit from existing context.
  • SDCH compression achieves high compression ratios by supplying a carefully-crafted dictionary of strings that results in an “ideal” compression context, so that later content can simply refer to the previously-calculated dictionary entries.
  • Adding context introduces a key tradeoff, however: the larger a compression context grows, the more memory a compressor and decompressor tend to require.
  • While HTTP/2 reduces the need to concatenate CSS and JS files by reducing the performance cost of individual web requests, HTTP compression contexts are per-resource, so larger files tend to yield higher compression ratios. See “Bundling Improves Compression.”
  • Compression contexts can introduce information-disclosure security vulnerabilities if a context is shared between “secret” and “untrusted” content; see CRIME, BREACH, and HPACK. In these attacks, the bad guy takes advantage of the fact that if his “guess” matches some secret string earlier in the compression context, it will compress better (smaller) than if his guess is wrong (a toy demonstration follows this list). This attack can be combated by isolating compression contexts by trust level, or by introducing random padding to frustrate size analysis.
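Here’s a toy Python illustration of that size side-channel (the secret and the field names are contrived):

    import zlib

    SECRET = "sessionid=7f3a9c"  # the value the attacker wants to learn

    def observed_length(guess):
        # An eavesdropper can't read the plaintext but does see the compressed
        # length; a guess that matches the secret compresses slightly better.
        page = "search=" + guess + "&" + SECRET
        return len(zlib.compress(page.encode()))

    for guess in ("sessionid=000000", "sessionid=7f3a9c"):
        print(guess, "->", observed_length(guess), "bytes")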

 

Ready to learn a lot more about compression on the web? Check out my earlier Compressing the Web blog, my Brotli blog, and the excellent Google Developers CompressorHead video series.

-Eric
