
When I launched Chrome on Thursday, I saw something unexpected:

[Screenshot: Chrome infobar warning that SSLKeyLogFile is in use]

While most users probably would have no idea what to make of this, I happened to know what it means– Chrome is warning me that the system configuration has instructed it to leak the secret keys it uses to encrypt and decrypt HTTPS traffic to a stream on the local computer.

Looking at the Chrome source code, I found that this warning was added just last week. More surprising was that I couldn’t find the SSLKeyLogFile setting anywhere on my system. Opening a new console showed that it wasn’t set:

C:\WINDOWS\system32>set sslkeylogfile
Environment variable sslkeylogfile not defined

…and opening the System Properties > Advanced > Environment Variables UI showed that it wasn’t set for either my user account or the system at large. Weird.

Fortunately, I understood from past investigations that a process can have different environment variables than the rest of the system, and Process Explorer can show the environment variables inside a running process. Opening Chrome.exe, we see that it indeed has an SSLKEYLOGFILE set:

[Screenshot: Process Explorer showing an SSLKEYLOGFILE variable in Chrome.exe’s environment]

The unusual syntax with the leading \\.\ means that this isn’t a typical local file path but a named pipe– rather than pointing to a file on disk (e.g. C:\temp\sslkeys.txt), it points to memory that another process can see.

My machine was in this state because earlier that morning, I’d installed Avast Antivirus to attempt to reproduce a bug a Chrome user had encountered. Avast is injecting the SSLKEYLOGFILE setting so that it can conduct a monster-in-the-browser (MITB) attack and inspect the otherwise-encrypted traffic flowing into Chrome.
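As an aside, here is how a per-process environment variable like this can exist without appearing anywhere in the system settings: a parent process simply supplies an extra variable in the environment block of the child it launches. The following is only a minimal sketch of that general mechanism (it is not Avast’s actual implementation, and the Chrome path and pipe name are made up):

    // launch-chrome.ts (Node.js) - launch Chrome with an extra variable in the
    // child's environment block only; nothing is written to the user or system settings.
    import { spawn } from "child_process";

    const child = spawn(
      "C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe", // adjust the path as needed
      [],
      {
        env: {
          ...process.env,                        // start from the parent's environment
          SSLKEYLOGFILE: "\\\\.\\pipe\\example", // hypothetical named pipe, visible only to this child
        },
        detached: true,
        stdio: "ignore",
      }
    );
    child.unref();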

Makers of antivirus products know that browsers are one of the primary vectors by which attackers compromise PCs, and as a consequence their security products often conduct MITB attacks in order to scan web content. Antivirus developers have two common techniques to scan content running in the browser:

  1. Code injection
  2. Network interception

Code Injection

The code injection technique relies upon injecting security code into the browser process. The problem with this approach is that native code injections are inherently fragile– any update to the browser might move its functions and data structures around such that the security code will fail and crash the process. Browsers discourage native code injection, and the bug I was looking at was related to a new feature, RendererCodeIntegrity, that directs the Windows kernel to block loading of any code not signed by Microsoft or Google into the browser’s renderer processes.

An alternative code-injection approach relies upon using a browser extension that operates within the APIs exposed by the browser– this approach is more stable, but can address fewer threats.

Even well-written code injections that don’t cause stability problems can cause significant performance regressions for browsers– when I last looked at the state of the industry, performance costs for top AV products ranged from 20% to 400% in browser scenarios.

Network Interception

The network interception technique relies upon scanning the HTTP and HTTPS traffic that goes into the browser process. Scanning HTTP traffic is straightforward (a simple proxy server can do it), but scanning HTTPS traffic is harder because the whole point of HTTPS is to make it impossible for a network intermediary to view or modify the plaintext network traffic.

Historically, the most common mechanism for security-scanning HTTPS traffic was to use a monster-in-the-middle (MITM) proxy server running on the local computer. The MITM would instruct Windows to trust a self-signed root certificate, and it would automatically generate new interception certificates for every secure site you visit. I spent over a decade working on such a MITM proxy server, the Fiddler Web Debugger.

There are many problems with using a MITM proxy, however. The primary problem is that it’s very, very hard to ensure that it behaves exactly as the browser does and that it does not introduce security vulnerabilities. For instance, if the MITM’s certificate verification logic has bugs, then it might accept a bogus certificate from a spoofed server and the user would not be warned– Avast used to use a MITM proxy and had exactly this bug; they were not alone. Similarly, the MITM might not support the most secure versions of protocols supported by the browser and server (e.g. TLS/1.3), and thus using the MITM would degrade security. Some protocol features (e.g. Client Certificates) are incompatible with MITM proxies. And lastly, some security features (specifically certificate pinning) are fundamentally incompatible with MITM certificates and are disabled when MITM certificates are used.

Given the shortcomings of using a MITM proxy, it appears that Avast has moved on to a newer technique: using the SSLKeyLogFile to leak the secret keys that HTTPS negotiates on each connection to encrypt its traffic. Firefox and Chromium support this feature, and it enables decryption of TLS traffic without resorting to the MITM certificate-generation technique. While browser vendors are wary of any sort of interception of HTTPS traffic, this approach is generally preferable to MITM proxies.

There’s some worry that Chrome’s new notification bar might drive security vendors back to using more dangerous techniques, so this notification might not make its way into the stable release of Chrome.

When it comes to browser architecture, tradeoffs abound.

-Eric

PS: I’m told that Avast may be monetizing the data they’re decrypting.

Appendix: Peeking at the Keys

If we point the SSLKeyLog setting at a regular file instead of a named pipe:

chrome --ssl-key-log-file=C:\temp\sslkeys.txt

…we can examine the file’s contents as we browse to reveal the encryption keys:

[Screenshot: the contents of the exported key log file]

This file alone isn’t very readable for a human (even if you read Mozilla’s helpful file format documentation), but you can configure tools like Wireshark to make use of it and automatically decrypt captured TLS traffic back to plaintext.
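For the curious, here is a minimal sketch of reading the file programmatically (Node.js/TypeScript, using the path from the command above). Per Mozilla’s format documentation, each non-comment line holds a label, the hex-encoded client random that identifies the connection, and the hex-encoded secret; this sketch just tallies the entries by label:

    // keylog-summary.ts - count key log entries by label (CLIENT_RANDOM for TLS 1.2;
    // CLIENT_HANDSHAKE_TRAFFIC_SECRET and friends for TLS 1.3).
    import { readFileSync } from "fs";

    const counts = new Map<string, number>();

    for (const line of readFileSync("C:\\temp\\sslkeys.txt", "utf8").split(/\r?\n/)) {
      if (!line || line.startsWith("#")) continue; // skip blank lines and comments
      const label = line.split(" ")[0];
      counts.set(label, (counts.get(label) ?? 0) + 1);
    }

    console.log(counts);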

Note: I expect to update this post over time. Last update: 8/29/2019.

Compatibility Deltas

As our new Edge Insider builds roll out to the public, we’re starting to triage reports of compatibility issues where Edge76+ (the new Chromium-based Edge, aka Anaheim) behaves differently than the old Edge (Edge18, aka Spartan) and/or Google Chrome.

In general, Edge76+ will behave very similarly to Chrome, with the caveat that, to date, only Beta, Dev and Canary channels have been released. When looking at Chrome behavior, be sure to compare against the corresponding Chrome Beta, Dev and Canary channels.

However, we expect there will be some behavioral deltas between Edge76+ and its Chrome-peer versions, so I’ll note those here too.

Note: I’ve previously blogged about interop issues between Edge18 and Chrome.

Navigation

  • For security reasons, Edge76 and Chrome block navigation to file:// URLs from non-file URLs. If a browser user clicks on a file: link on a webpage, nothing happens (except an error message in the Developer Tools console, noting “Not allowed to load local resource: file://host/whatever”). In contrast, Edge18 (like Internet Explorer before it) allowed HTTP/HTTPS-served pages in your Intranet Zone to navigate to URLs that use the file:// URL protocol; only pages in the Internet Zone were blocked from such navigations. No override for this block is available.
  • In Edge18 and Internet Explorer, attempting to navigate to an App Protocol with no handler installed shows a prompt to visit the Microsoft Store to find a handler. In Chrome/Edge76+, the navigation attempt is silently ignored.
  • Edge 18 and Internet Explorer offer a msLaunchUri API for launching and detecting App Protocols. This API is not available in Edge 76 or Chrome.
  • Edge 18 and Internet Explorer allow an App Protocol handler to opt-out of warning the user on open using the WarnOnOpen registry key. Edge 76 and Chrome do not support this registry key.

Downloads

  • Unlike IE/Edge18, Edge76/Chrome do not support DirectInvoke, a scheme whereby a download is converted into the launch of an application with a URL argument. DirectInvoke is most commonly used when launching Office documents and when running ClickOnce applications. For now, users can work around the lack of ClickOnce support by installing an extension. Update: In Edge 78, see the edge://flags/#edge-click-once setting.
  • Edge76/Chrome do not support the proprietary msSaveBlob or msSaveOrOpenBlob APIs supported in Edge18. In most cases, you should instead use an A element with a download attribute (see the sketch after this list).
  • Edge18 did not support navigation to or downloading from data URLs via the download attribute; Edge76/Chrome allow the download of data URLs up to 2MB in length. In most cases, you should prefer Blob URLs.
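As promised above, here is a minimal sketch (hypothetical filename and content) of the standards-based replacement for msSaveOrOpenBlob: create a temporary <a> element pointing at a Blob URL and click it.

    // saveBlob.ts - roughly what msSaveOrOpenBlob(blob, name) did, using standard APIs.
    function saveBlob(blob: Blob, filename: string): void {
      const url = URL.createObjectURL(blob);
      const a = document.createElement("a");
      a.href = url;
      a.download = filename;        // suggested filename for the download
      document.body.appendChild(a);
      a.click();
      a.remove();
      URL.revokeObjectURL(url);     // release the Blob URL after the click is dispatched
    }

    saveBlob(new Blob(["hello"], { type: "text/plain" }), "hello.txt");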

HTTPS – TLS Protocol

  • Edge76 and Chrome enable TLS/1.3 by default; Edge18 does not support TLS/1.3 prior to Windows 10 19H1, and even on that platform it is disabled by default (and known to be buggy).
  • Edge76 and Chrome support a different list of TLS ciphers than Edge18.
  • Edge76 and Chrome send GREASE tokens in HTTPS handshakes; Edge18 does not.
  • Edge76 and Chrome prohibit HTTP/2 connections from using banned (weak) ciphers, showing ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY if the server attempts to use such a cipher. Edge18 did not enforce this requirement. This has primarily impacted intranet websites served by IIS on Windows Server 2012 where the server is either misconfigured or missing the latest updates. Patching the server and/or adjusting its TLS configuration will resolve the problem.

HTTPS – Certificates

  • Edge76 and Chrome require that a site’s certificate contain its domain name in the SubjectAltName (SAN) field. Edge 18 permits the certificate to omit the SAN if the domain name is in the Subject Common Name (CN) field. (All public CAs use the SAN; certificates that chain to a local/enterprise trusted root may need to be updated.)
  • Edge76 and Chrome require certificates that chain to trusted root CAs to be logged in Certificate Transparency (CT). This generally isn’t a problem because public roots are supposed to log in CT as a part of their baseline requirements. However, certain organizations (including Microsoft and CAs) have hybrid roots which are both publicly trusted and issue privately within the organization. As a result, loading pages may error out with NET::ERR_CERTIFICATE_TRANSPARENCY_REQUIRED. To mitigate this, such organizations must either start logging internal certificates in CT, or set one of three policies under HKLM\SOFTWARE\Policies\Microsoft\Edge\. Edge18 does not support CT.
  • Edge76 and Chrome use a custom Win32 client certificate picker UI, while Edge18 uses the system’s default certificate picker.

Cookies

  • Edge76 and Chrome support the Leave Secure Cookies Alone spec, which blocks HTTP pages from setting cookies with the Secure attribute and restricts the ways in which HTTP pages may interfere with cookies sent to HTTPS pages. Legacy Edge does not have these restrictions.
  • Edge76 and Chrome support Cookie prefixes (restrictions on cookies whose names begin with the prefixes __Secure- and __Host-). Legacy Edge does not enforce these restrictions (examples follow this list).
  • Edge76, Chrome, and Firefox ignore Set-Cookie headers with values over 4096 characters in length (including cookie-controlling directives like SameSite). In contrast, IE and Edge18 permit cookies with name-value pairs up to 5118 characters in length.
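To make the prefix rules concrete, here are example Set-Cookie headers with hypothetical names and values. A __Host- cookie must be set from a secure origin with Secure and Path=/ and without a Domain attribute; a __Secure- cookie must at minimum be set from a secure origin with Secure:

    Set-Cookie: __Host-session=abc123; Secure; Path=/                        (accepted)
    Set-Cookie: __Host-session=abc123; Secure; Path=/; Domain=example.com    (rejected: Domain not allowed)
    Set-Cookie: __Secure-id=abc123; Secure                                   (accepted over HTTPS)
    Set-Cookie: __Secure-id=abc123                                           (rejected: missing Secure)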

Authentication and Login

  • In Edge76, Edge18, and Firefox, running the browser in InPrivate mode disables automatic Integrated Windows Authentication. Chrome and Internet Explorer do not disable automatic authentication in private mode. You can disable automatic authentication in Chrome by launching it with a command line argument: chrome.exe --auth-server-whitelist="_"
  • Edge18 and Edge76 integrate a built-in single-sign-on (SSO) provider, such that configured account credentials are automatically injected into request headers for configured domains; this feature is disabled in InPrivate mode. Chrome does not have this behavior for Microsoft accounts.
  • Edge18 supports Azure Active Directory’s Conditional Access feature. For Chrome, an extension is required. Edge76 has not yet integrated support for this feature.

WebAPIs

  • Edge18 includes an API, window.external.GetHostEnvironmentValue, that returns some interesting information about the system, including whether it is running in the “Windows 10 S” lockdown mode. Edge76 and Chrome do not support this API (a feature-detection sketch follows this list). Update: Edge 78 restored this API with a limited set of tokens:
    {"os-architecture":"AMD64","os-build":"10.0.18362","os-sku":"4","os-mode":"2"}. The os-mode of 2 indicates a Windows 10 S configuration.
  • Google Chrome ships with the Portable Native Client plugin; Edge76 does not include this plugin. The plugin is little-used and you’re unlikely to encounter problems with its absence except on the Google Earth website. PNaCl is deprecated in favor of WebAssembly and is slated to be removed from Chrome in Q2 2019.
  • The Edge Platform Status site also includes a short list of features that are supported in Edge18 but not Chromium-derived browsers.
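As referenced above, a page that wants the extra host information where it is available (without breaking in Chrome or Edge76) should feature-detect the API rather than sniff the User-Agent. A minimal sketch follows; the call’s argument and return format are not covered here:

    // hostInfo.ts - probe for the Edge-only API before relying on it.
    const ext = (window as any).external;

    if (ext && typeof ext.GetHostEnvironmentValue === "function") {
      // Edge18 (or Edge 78+) code path: the API exists and can be called here.
    } else {
      // Chrome, Edge76, and other browsers: fall back to standard APIs.
    }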

Group Policy and Command Line Arguments

By default, Edge 76 shares almost all of the same Group Policies and command line arguments as Chrome 76.

If you’re using the registry to set a policy for Edge, put it under the

HKEY_CURRENT_USER\Software\Policies\Microsoft\Edge

…node instead of under the

HKEY_CURRENT_USER\Software\Policies\Google\Chrome

node.

If you’re trying to use a Chrome command line argument when launching in the new MSEdge.exe and it’s not working, check whether it has “blacklist” or “whitelist” in the name. If so, we probably renamed it.

For instance, want to tell Edge not to accept a 3DES ciphersuite for TLS? You need to use

msedge.exe --cipher-suite-denylist=0x000a

…instead of

chrome.exe --cipher-suite-blacklist=0x000a

…as you would with Chrome.

User-Agent

Browsers identify themselves to servers using a User-Agent header. A top source of compatibility problems is caused by sites that attempt to behave differently based on the User-Agent header and make incorrect assumptions about feature support, or fail to update their checks over time. Please, for the love of the web, avoid User-Agent Detection at all costs!

Chrome User-Agent string:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36

Edge77 Beta (Desktop) User-Agent string:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.19 Safari/537.36 Edg/77.0.235.9

Edge18 User-Agent string:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362

Edge73 Stable (Android) User-Agent string:
Mozilla/5.0 (Linux; Android 10; Pixel 3 XL) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.90 Mobile Safari/537.36 EdgA/42.0.4.3892

You’ll note that each of the Edge variants uses a different token at the end of the User-Agent string, but the string otherwise matches Chrome versions of the same build. Sites should almost never do anything with the Edge token information– treat Edge like Chrome. Failing to follow this advice almost always leads to bugs.
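If you truly must look at the string, the least harmful approach is to read only the Chrome/ token (which Chromium-based Edge shares with Chrome) and ignore the trailing Edg/EdgA token entirely. A minimal sketch:

    // uaVersion.ts - treat Chromium-based Edge exactly like the Chrome build it is based on.
    function chromiumMajorVersion(ua: string): number | null {
      const match = ua.match(/Chrome\/(\d+)/);
      return match ? parseInt(match[1], 10) : null;
    }

    // Both the Chrome 76 and Edge77 Beta strings above yield their Chrome major version;
    // no Edge-specific branch is needed.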

Sites are so bad about misusing the User-Agent header that Edge76 was forced to introduce a service-driven override list, which you can find at edge://compat/useragent. Alas, even that feature can cause problems in unusual cases. For testing, you can tell Edge to ignore the list by starting it thusly:

    msedge.exe --disable-domain-action-user-agent-override

Stay compatible out there!

-Eric

A user recently noticed that when loading Paypal.com in Microsoft Edge, the UI shown was the default HTTPS UI (a gray lock):

[Screenshot: Paypal.com with the default gray lock UI]

Instead of the fancier “green” UI shown for servers that present Extended Validation (EV) certificates:

[Screenshot: Paypal.com with the green EV UI]

The user observed this on some Windows 10 machines but not others.

The variable that differed between those machines was the state of the Menu > Settings > Advanced > Windows Defender SmartScreen setting.

Edge only shows the green EV user interface when SmartScreen is enabled.

IE 11

Internet Explorer 11 on Windows 10 behaves the same way as prior versions of IE going back to IE7– the green EV UI requires either that SmartScreen be enabled or that the option Tools > Internet Options > Advanced > Security > Check for Server Certificate Revocation be enabled.

Chrome

The Chrome team recently introduced a new setting, exposed via the chrome://flags/#simplify-https-indicator page, that controls how EV certificates are displayed in their Security Chip. A user (or a field trial) can configure sites with EV certificates to display using the default HTTPS UI.

[Screenshot: Chrome’s simplify-https-indicator flag options]

 

-Eric

I recently bought a few new domain names under the brand new .app top-level-domain (TLD). The .app TLD is awesome because it’s on the HSTS Preload list, meaning that browsers will automatically use only HTTPS for every request to every domain under .app, keeping connections secure and improving performance.

I’m not doing anything terribly exciting with these domains for now, but I’d like to at least put up a simple welcome page on each one. Now, in the old days of HTTP, this was trivial, but because .app requires HTTPS, that means I must get a certificate for each of my sites for them to load at all.

Fortunately, GitHub recently started supporting HTTPS on GitHub Pages with custom domains, meaning that I can easily get an HTTPS site up and running in just a few minutes.

1. Log into GitHub, go to your Repositories page and click New:

2. Name your new repository something reasonable:

3. Click to create a simple README file:

4. Edit the file

5. Click Commit new file

6. Click Settings on the repository

7. Scroll to the GitHub Pages section and choose master branch and click Save:

8. Enter your domain name in the Custom domain box and click Save

9. Login to NameCheap (or whatever DNS registrar you used) and click Manage for the target domain name:

10. Click the Advanced DNS tab:

11. Click Add New Record:

 

12. Enter four new A Records for the host @ with the list of IP addresses GitHub Pages uses:

13. Click Save All Changes.

14. Click Add New Record and add a new CNAME Record. Enter the host www and a target value of username.github.io. Click Save All Changes: 

15. Click the trash can icons to delete the two default DNS entries that NameCheap had for your domain previously:

16. Try loading your new site.

  • If you get a connection error, wait a few minutes for DNS to propagate and re-verify the DNS records you just added (see the sketch after this list).
  • If you get a certificate error, look at the certificate. It’s probably the default GitHub certificate. If so, look in the GitHub Pages settings page and you may see a note that your certificate is awaiting issuance by LetsEncrypt.org. If so, just wait a little while.

  • After the certificate is issued, your site loads without errors.
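If you’d rather check DNS propagation from a script than by repeatedly reloading the browser, here is a minimal sketch (Node.js/TypeScript, with a hypothetical domain) that prints the A records your resolver currently returns, so you can compare them against the addresses GitHub’s documentation lists:

    // checkdns.ts - print the A records currently visible for the apex domain.
    import { promises as dns } from "dns";

    async function main(): Promise<void> {
      const records = await dns.resolve4("example.app"); // hypothetical domain
      console.log(records);
    }

    main().catch((err) => console.error("Lookup failed:", err.message));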

 

Go forth and build great (secure) things!

-Eric Lawrence

As of April 30th, Chrome now requires that all certificates issued by a public certificate authority be logged in multiple public Certificate Transparency (CT) logs, ensuring that anyone can audit all certificates that have been issued. CT logs allow site owners and security researchers to much more easily detect if a sloppy or compromised Certificate Authority has issued a certificate in error.

For instance, I own bayden.com, a site where I distribute freeware applications. I definitely want to hear about it if any CA issues a certificate for my site, because that’s a strong indication that my site’s visitors may be under attack. What’s cool is that CT also allows me to detect if someone got a certificate for a domain name that was suspiciously similar to my domain, for instance bȧyden.com.

Now, for the whole thing to work, I have to actually pay attention to the CT logs, and who’s got time for that? Someone else’s computer, that’s who.

The folks over at Facebook Security have built an easy-to-use interface that allows you to subscribe to notifications any time a domain you care about has a new certificate issued. Just enter a hostname and decide what sorts of alerts you’d like:

[Screenshot: Facebook’s Certificate Transparency monitoring subscription form]

You can even connect their system into webhooks if you’re looking for something more elaborate than email, although mail works just fine for me:

[Screenshot: an example certificate-issuance notification email]

Beyond Facebook, there will likely be many other CT Monitoring services coming online over the next few years. For instance, the good folks at Hardenize have already integrated one into their broader security monitoring platform.
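If you prefer to poll a log-search service yourself, here is a minimal sketch against crt.sh’s JSON interface. (An assumption on my part: the query parameters and output shown reflect crt.sh’s public interface as I understand it and may change. Requires Node.js 18+ for the global fetch.)

    // ctcheck.ts - list how many certificates CT logs have recorded for a domain.
    async function main(): Promise<void> {
      const domain = "bayden.com";
      const response = await fetch(`https://crt.sh/?q=${domain}&output=json`);
      const entries: Array<Record<string, unknown>> = await response.json();

      console.log(`${entries.length} logged certificates found for ${domain}`);
      console.log(entries[0]); // each entry includes the issuer, names, and validity dates
    }

    main().catch((err) => console.error("Query failed:", err.message));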

The future is awesome.

-Eric

Chrome 66, releasing to stable this week, again supports the SSLVersionMin policy that enables administrators to control the minimum version of TLS that Chrome is willing to negotiate with a server.

If this policy is in effect and configured to permit, say, only TLS/1.2+ connections, attempting to connect to a site that only supports TLS/1.0 will result in an error page with the status code ERR_SSL_VERSION_OR_CIPHER_MISMATCH.

This policy existed until Chrome 52 and was brought back for Chrome 66 and later. It is therefore possible that your administrators configured it long ago, forgot about the setting, and will be impacted again with Chrome 66’s rollout.
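For example, to require TLS/1.2 or later on a managed machine, an administrator can set the policy via the registry. This is only a sketch; check the policy documentation for the accepted values and the corresponding Group Policy template:

    reg add HKLM\Software\Policies\Google\Chrome /v SSLVersionMin /t REG_SZ /d tls1.2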

-Eric

In order to be eligible for the HSTS Preload list, your site must usually serve a Strict-Transport-Security header with an includeSubdomains directive.

Unfortunately, some sites do not follow the recommended best practices and instead just set a one-year preload header with includeSubdomains and then immediately request addition to the HSTS Preload list. The result is that any problems will likely be discovered too late to be rapidly fixed– removals from the preload list may take months.

The Mistakes

In running the HSTS preload list, we’ve seen two common mistakes for sites using includeSubdomains:

Mistake: Forgetting Intranet Hosts

Some sites are set up with a public site (example.com) and an internal site only accessible inside the firewall (app.corp.example.com). When includeSubdomains is set, all sites underneath the specified domain must be accessible over HTTPS, including in this case app.corp.example.com. Some corporations have different teams building internal and external applications, and must take care that the security directives applied to the registrable domain are compatible with all of the internal sites running beneath it in the DNS hierarchy. Following the best practices of staged rollout (with gradually escalating max-age directives) will help uncover problems before you brick your internal sites and applications.
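For example, a staged rollout might escalate along these lines (illustrative values only; lengthen max-age after each stage has baked without reports of broken subdomains, and add the preload token at the end):

    Strict-Transport-Security: max-age=300; includeSubDomains                 (five minutes)
    Strict-Transport-Security: max-age=604800; includeSubDomains              (one week)
    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload   (one year, plus the preload token)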

Of course, you absolutely should be using HTTPS on all of your internal sites as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.

Mistake: Forgetting Delegated Hosts

Some popular sites and services use third party companies for advertising, analytics, and marketing purposes. To simplify deployments, they’ll delegate handling of a subdomain under their main domain to the vendor providing the service. For instance, http://www.example.com will point to the company’s own servers, while the hostname mail-tracking.example.com will point to Experian or Marketo servers. A HSTS rule with includeSubdomains applied to example.com will also apply to those delegated domains. If your service provider has not enabled HTTPS support on their servers, all requests to those domains will fail when upgraded to HTTPS. You may need to change service providers entirely in order to unbrick your marketing emails!

Of course, you absolutely should be using HTTPS on all of your third-party apps as well, but HTTPS deployments are typically smoother when they haven’t been forced by Priority-Zero downtime.
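Before turning on includeSubdomains (and certainly before requesting preload), it is worth sweeping every hostname you know about for working HTTPS. A minimal sketch follows (Node.js 18+ for the global fetch; the host list is hypothetical, and internal hosts must of course be checked from inside the firewall):

    // https-sweep.ts - confirm each known hostname answers over HTTPS before preloading.
    const hosts = ["www.example.com", "app.corp.example.com", "mail-tracking.example.com"];

    async function main(): Promise<void> {
      for (const host of hosts) {
        try {
          const res = await fetch(`https://${host}/`, { method: "HEAD" });
          console.log(`${host}: HTTPS OK (status ${res.status})`);
        } catch (err) {
          console.log(`${host}: HTTPS FAILED - ${(err as Error).message}`);
        }
      }
    }

    main();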

Recovery

If you do find yourself in the unfortunate situation of having preloaded a domain whose subdomains were not quite ready, you can apply for removal from the preload list, but, as noted previously, the removal can be expected to take a very long time to propagate. For cases where you only have a few hosts out of compliance, you should be able to quickly move them to HTTPS. You might also consider putting an HTTPS front-end in front of your server (e.g. Cloudflare’s free Flexible SSL option) to allow it to be accessed over HTTPS before the backend server is secured.

Deploy safely out there!

-Eric