
The Web Platform offers a great deal of power, and unfortunately evil websites go to great lengths to abuse it. One of the weakest (but simplest to implement) protections against such abuse is to block actions that were not preceded by a “User Gesture.” Such gestures (sometimes more precisely called User Activations) include a variety of simple actions, from clicking the mouse to typing a key; each is interpreted as “The user tried to do something in this web content.”

A single user gesture can unlock any of a surprisingly wide array of privileged (“gated”) actions:

  • Allow a popup window to open
  • Allow an Application Protocol to be invoked
  • Allow an OnBeforeUnload dialog box to show
  • Allow the Vibration API to vibrate the device
  • Allow script to take the window fullscreen
  • Allow the password manager to fill the username/password into the page in a way that JavaScript can see them
  • Allow the page to prompt the user for a file to upload
  • Impact the behavior of file downloads (e.g. prompting)
  • and many more

So, when you see a site show a UI like this:

…chances are good that what they’re really trying to do is trick you into performing a gesture (mouse click) so they can perform a privileged action– in this case, open a popup ad in a new tab.

Some gestures are considered “consumable”, meaning that a single user action allows only one privileged action; subsequent privileged actions require another gesture.
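
Conceptually, the browser tracks an activation state for each frame, with a consumable “transient” bit. The sketch below is a simplified illustration of that model (the names are illustrative, not Chromium’s actual classes):

class UserActivationState {
 public:
  // Called when the user clicks, taps, or presses a key in the frame.
  void Activate() { sticky_ = true; transient_ = true; }

  // Called by a consumable gated action (e.g. opening a popup). Returns
  // whether the action is allowed, and spends the transient bit so that a
  // second privileged action requires another gesture.
  bool ConsumeTransientActivation() {
    bool allowed = transient_;
    transient_ = false;
    return allowed;
  }

 private:
  bool sticky_ = false;     // has the user ever interacted with this frame?
  bool transient_ = false;  // is a recent, unspent gesture available?
};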

Unfortunately, even this weak protection is subject to both false positives (an unwanted granting of privilege) and false negatives (an action is unexpectedly blocked).

You can learn more about this topic (and the complexity of dealing with nested frames, etc) in the original Chromium User Activation v2 spec, and the User-Activation section of HTML5.

-Eric

For the first few years of the web, developers pretty much coded whatever they thought was cool and shipped it. Specifications, if written at all, were an afterthought.

Then, for the next two decades, spec authors drafted increasingly elaborate specifications with optional features and extensibility points meant to enable future work.

Unfortunately, browser and server developers often only implemented enough of the specs to ensure interoperability, and rarely tested that their code worked properly in the face of features and data allowed by the specs but not implemented in the popular clients.

Over the years, the web’s builders started to notice that specs’ extensibility points were rusting shut– if a new or upgraded client tried to make use of a new feature, or otherwise change what it sent as allowed by the specs, existing servers would fail when they encountered the new values.

In light of this, spec authors came up with a clever idea: clients should send random dummy values allowed by the spec, causing spec-non-compliant servers that fail to properly ignore those values to fail immediately. This concept is called GREASE (with the backronym “Generate Random Extensions And Sustain Extensibility”), and was first implemented for the values sent by the TLS handshake. When connecting to servers, clients would claim to support new ciphersuites and handshake extensions, and intolerant servers would fail. Users would holler, and engineers could follow up with the broken site’s authors and developers about how to fix their code. To avoid “breaking the web” too broadly, GREASE is typically enabled experimentally at first, in Canary and Dev channels. Only after the scope of the breakages is better understood does the change get enabled for most users.
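
For TLS, the reserved GREASE code points defined in RFC 8701 follow a simple pattern: both bytes are equal and end in 0xA (0x0A0A, 0x1A1A, … 0xFAFA). Here’s a minimal sketch of how a client might pick one to advertise alongside its real ciphersuites:

#include <cstdint>
#include <random>

// Returns one of the sixteen reserved GREASE values (RFC 8701), e.g. 0x3A3A.
// A compliant server must ignore the unknown value; an intolerant one fails.
uint16_t PickTlsGreaseValue() {
  static std::mt19937 rng{std::random_device{}()};
  std::uniform_int_distribution<int> pick(0, 15);
  uint16_t b = static_cast<uint16_t>(pick(rng) * 0x10 + 0x0A);  // 0x0A..0xFA
  return static_cast<uint16_t>((b << 8) | b);
}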

GREASE has proven such a success for TLS handshakes that the idea has started to appear in new places. Last week, the Chromium project turned on GREASE for HTTP2 in Canary/Dev for 50% of users, causing connection failures to many popular sites, including some run by Microsoft. These sites will need to be fixed in order to properly load in the new builds of Chromium.

// Enable "greasing" HTTP/2, that is, sending SETTINGS parameters with
// reserved identifiers and sending frames of reserved types, respectively.
// If greasing Frame types, an HTTP/2 frame with a reserved frame type will
// be sent after every HEADERS and SETTINGS frame. The same frame will be
// sent out on all connections to prevent the retry logic from hiding broken
// servers.
NETWORK_SWITCH(kHttp2GreaseSettings, "http2-grease-settings")
NETWORK_SWITCH(kHttp2GreaseFrameType, "http2-grease-frame-type")

One interesting consequence of sending GREASE H2 Frames is that it requires moving the END_STREAM flag (recorded as fin=true in the netlog) from the HTTP2_SESSION_SEND_HEADERS frame into an empty (size=0) HTTP2_SESSION_SEND_DATA frame; unfortunately, the intervening GREASE Frame is not presently recorded in the netlog.

You can try H2 GREASE in Chrome Stable using command line flags that enable GREASE settings values and GREASE frames respectively:

chrome.exe --http2-grease-settings bing.com
chrome.exe --http2-grease-frame-type bing.com

Alternatively, you can disable the experiment in Dev/Canary:

chrome.exe --disable-features=Http2Grease

GREASE is baked into the new HTTP3 protocol (Cloudflare does it by default) and the new UA Client Hints specification (where it’s blowing up a fair number of sites). I expect we’ll see GREASE-like mechanisms appearing in most new web specs where there are optional or extensible features.

-Eric

Brave, Mozilla Firefox, Google Chrome and Microsoft Edge presented on our current privacy work at the Enigma 2020 conference in late January. The talks were mostly high-level, but there were a few feature-level slides for each browser.

My ~10 minute presentation on Microsoft Edge was first, followed by Firefox, Chrome, and Brave.

At the 40-minute mark, a ~35-minute Q&A session starts, first with questions from the panel moderator, followed by questions from the audience.

In recent posts, I’ve explored mechanisms to communicate from web content to local (native) apps, and I explained how web apps can use the HTML5 registerProtocolHandler API to allow launching them from either local apps or other websites.

In today’s post, we’ll explore how local apps can launch web apps in the browser.

It’s Simple…

In most cases, it’s trivial for an app to launch a web app and send data to it. The app simply invokes the operating system’s “launch” API and passes it the desired URL for the web app.

Any data to be communicated to the web app is passed in the URL query string or the fragment component of the URL.

On Windows, such an invocation might look like this:

ShellExecute(hwnd, "open", "https://bayden.com/echo.aspx?DataTo=Pass#GoesHere", 0, 0, SW_SHOW);

Calling this API results in the user’s default browser being opened and a new tab navigated to the target URL.

This same simple approach works great on most operating systems and with virtually any browser a user might have configured as their default.

…Unless It’s Not

Unfortunately, this well-lit path adjoins a complexity cliff— if your scenario has requirements beyond the basic [Launch the default browser to this URL], things get much more challenging. The problem is that there is no API contract that provides a richer feature set and works across different browsers.

For instance, consider the case where you’d like your app to direct the browser to POST a form to a target server. Today, popular operating systems have no such concept– they know how to open a browser by passing it a URL, but they expose no API that says “Open the User’s browser to the following URL, sending the navigation request via the HTTP POST method and containing the following POST body data.”

Over the years, a few workarounds have been used (e.g. see StackOverflow1 and StackOverflow2).

For instance, if the target webservice simply requires a HTTP POST and you cannot change it, your app could launch the browser to a webpage you control, passing the required data in the querystring component of a HTTP GET. Your web server could then reformat the data into the required POST body format and either proxy that request (server-side) to the target webservice, or return a web page containing an auto-submitting form element with a method of POST and an action attribute pointed at the target webservice. The user’s browser will submit the form, posting the data to the target server.

Similarly, a more common approach involves having the app write a local HTML file in a temporary folder, then direct the Operating System to open that file using the appropriate API (again ShellExecute, in the case of Windows). Presuming that the user’s default HTML handler is also their default HTTPS protocol handler, opening the file will result in the default browser opening, and the HTML/script in the file will automatically submit the included form element to the target server. This “bounce through a local temporary form” approach has the advantage of making it possible to submit sizable amounts of data to the server (e.g. the contents of a local file), unlike using a GET request’s size-limited querystring.
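
A minimal sketch of that approach follows; the target URL and form field are hypothetical, and real code should generate a unique temporary filename and delete it afterward:

#include <windows.h>
#include <filesystem>
#include <fstream>

// Write a temporary auto-submitting form, then open it with the default
// HTML handler (normally the default browser).
void LaunchBrowserWithPost() {
  std::filesystem::path page =
      std::filesystem::temp_directory_path() / "launch_post.html";
  std::ofstream f(page);
  f << "<html><body onload=\"document.forms[0].submit()\">"
       "<form method=\"POST\" action=\"https://example.com/target\">"
       "<input type=\"hidden\" name=\"data\" value=\"payload\">"
       "</form></body></html>";
  f.close();
  ShellExecuteW(nullptr, L"open", page.c_str(), nullptr, nullptr, SW_SHOW);
}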

Caveats:

  • Unfortunately it is generally not possible to construct a HTML form that will submit a data field that exactly matches what you would get when sending an <input type=file> control. If the web service demands a format that was generated by a file upload control, you may not be able to emulate that.
  • Some browsers will not run JavaScript in local files by default.
  • Don’t forget to delete the temporary file!

If your scenario requires uploading files, an alternative approach is to:

  1. Upload the files directly from your app to a web service
  2. Have that web service return a secret token associated with the upload
  3. Have your app spawn a browser with a GET request whose querystring contains that secret token
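
A sketch of that final step, assuming the upload service has already returned a token:

#include <windows.h>
#include <string>

// Hand the (hypothetical) secret token to the default browser via the
// querystring; the web app uses it to find the previously-uploaded files.
// Real code should URL-encode the token.
void OpenBrowserWithToken(const std::wstring& token) {
  std::wstring url = L"https://example.com/upload/complete?token=" + token;
  ShellExecuteW(nullptr, L"open", url.c_str(), nullptr, nullptr, SW_SHOW);
}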

Browser-Specific Approaches

Back in the Windows 7 days, the IE8 team created a very cool feature called Accelerators that would allow users to invoke web services in their browser from any other application. Interestingly, the API contract supported web services that required POST requests.

Because there was no API in Windows that supported launching the default browser with anything other than a URL, a different approach was needed. A browser that wished to participate as a handler for Accelerators could implement an IOpenServiceActivityOutputContext::Navigate function which was expected to launch the browser and pass along the data. The example implementation provided by our documentation called into Internet Explorer’s Navigate2() COM API, which accepted as a parameter the POST body to be sent in the navigation. As far as I know, no other browser ever implemented IOpenServiceActivityOutputContext.

These days, Accelerators are long dead, and no one should be using Internet Explorer anymore. In the intervening years, no browser-agnostic mechanism to transfer a POST request from an app to a browser has been created.

Perhaps the closest we’ve come is the W3C’s WebDriver Standard, designed for automated testing of websites across arbitrary browsers. Unfortunately, at present, there’s still no way for mainstream apps to take a dependency on WebDriver to deliver a reliable browser-agnostic solution enabling rich transfers from a local app to a web app. Similarly, Puppeteer can be used for some web automation scenarios in Chrome or Edge, and the new Microsoft Playwright enables automated testing in Chromium, WebKit, and Firefox.

Future Possibilities

While the current picture is bleak, the future is a bit brighter. That’s because a major goal of browsers’ investment in Progressive Web Apps is to make them rich enough to take the place of native apps. Today’s native apps have very rich mechanisms for passing data and files to one another and PWAs will need such capabilities in order to achieve their goals.

Perhaps one day, not too far in the future, your OS and your browser (regardless of vendor) will better interoperate.

-Eric

It’s an interesting time. Microsoft now maintains three different web browsers:

  • Internet Explorer 11
  • Microsoft Edge Legacy (Spartan, v18 and below)
  • Chromium-based Microsoft Edge (v79+)

If you’re using Internet Explorer 11, you should stop; sometimes, this is easier said than done.

If you’re using Legacy Microsoft Edge, you should upgrade to the new Microsoft Edge which is better in almost every way. When you install the Stable version of the new Microsoft Edge (either by downloading it or eventually by using WindowsUpdate), it will replace your existing Legacy Edge with the new version.

What if I still need to test in Legacy Edge?

If you’re a web developer and need to keep testing your sites and services in the legacy Microsoft Edge, you’ll need to set a registry key to prevent the Edge installer from removing the entry points to the old Edge.

Simply import this registry script before the new Edge is installed. When the AllowSxS key is set to 1, the new Edge installer will keep the old entry point, renaming it to “Microsoft Edge Legacy”:
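
A registry script along these lines sets the value (this assumes the documented HKLM\SOFTWARE\Policies\Microsoft\EdgeUpdate policy location; verify against Microsoft’s current guidance):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\EdgeUpdate]
"AllowSxS"=dword:00000001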

Thereafter, you can use both versions of Edge on the same PC.

If you didn’t have this registry key set and your legacy Edge entry points have disappeared when you installed the new Edge, you can use the Add or Remove Programs applet in the system control panel to uninstall the new Edge, then set the registry key, then reinstall the new Edge.

Note: If you’re a Web Developer, you should also be testing in the Edge Beta or Edge Dev builds because these will allow you to see the changes coming to Edge before your users do. These builds install side-by-side (replacing no browser) and can be installed from https://MicrosoftEdgeInsider.com.

What if my company has sites that only work in Internet Explorer?

To help speed migration to the new Microsoft Edge, the new browser offers an Internet Explorer Mode feature when running on Windows. IE Mode allows IT administrators to configure PCs running Windows 7, 8.1, and 10 such that specified sites will load inside a browser tab that uses the Internet Explorer 11 rendering engine.

  • IE Mode is not designed for or available to consumers.
  • Because IE Mode relies upon the IE11 binaries on the current machine, it is not available in Edge for MacOS, iOS, or Android.
  • IE Mode tabs run inside the legacy security sandbox (weaker than the regular Edge sandbox) and ActiveX controls like Silverlight are available to web pages.
  • IE Mode does not share a cache, cookies, or web storage with Microsoft Edge, so scenarios that depend upon using these storage mechanisms in a cross-site+cross-engine context will not work correctly. IT administrators should carefully set their policies such that user flows occur within a single engine.
  • Most Edge browser extensions will not work on IE Mode tabs–extensions which only look at the tab’s URL should work, but extensions which try to view or modify the page content will not function correctly.

In an ideal world, users will migrate to the latest version of Microsoft Edge as quickly as possible, and enjoy a faster, more compatible, more reliable browser. Nevertheless, Microsoft will continue to patch both Legacy Edge and Internet Explorer 11 according to their existing support lifecycle.

-Eric

Browsers As Decision Makers

As a part of every page load, browsers have to make dozens, hundreds, or even thousands of decisions — should a particular API be available? Should a resource load be permitted? Should script be allowed to run? Should video be allowed to start playing automatically? Should cookies or credentials be sent on network requests? The list is long.

In many cases, decisions are governed by two inputs: a user setting, and the URL of the page for which the decision is being made.

In the old Internet Explorer web platform, each of these decisions was called an URLAction, and the ProcessUrlAction(url, action,…) API allowed the browser or another web client to query its security manager for guidance on how to behave.
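
Here’s a minimal sketch of how a Windows client consults the legacy security manager via urlmon (error handling and COM initialization omitted):

#include <windows.h>
#include <urlmon.h>
#pragma comment(lib, "urlmon.lib")

// Map a URL to its Security Zone, then ask how a given URLAction should be
// handled for that URL (allow, prompt, or block).
void QueryZonePolicy(LPCWSTR url) {
  IInternetSecurityManager* mgr = nullptr;
  if (SUCCEEDED(CoInternetCreateSecurityManager(nullptr, &mgr, 0))) {
    DWORD zone = 0;
    mgr->MapUrlToZone(url, &zone, 0);  // e.g. URLZONE_INTRANET

    DWORD policy = 0;
    mgr->ProcessUrlAction(url, URLACTION_SCRIPT_RUN,
                          reinterpret_cast<BYTE*>(&policy), sizeof(policy),
                          nullptr, 0, PUAF_DEFAULT, 0);
    // policy is now URLPOLICY_ALLOW, URLPOLICY_QUERY, or URLPOLICY_DISALLOW.
    mgr->Release();
  }
}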

To simplify the configuration for the user or their administrator, the legacy platform classified sites into five1 different Security Zones:

  • Local Machine
  • Local Intranet
  • Trusted
  • Internet
  • Restricted

Users could use the Internet Control Panel to assign specific sites to Zones and to configure the permission results for each zone. When making a decision, the browser would first map the execution context (site) to a Zone, then consult the setting for that URLAction for that Zone to decide what to do.

Reasonable defaults like “Automatically satisfy authentication challenges from my Intranet” meant that most users never needed to change any settings away from their defaults.

INETCPL Configuration

In corporate or other managed environments, administrators can use Group Policy to assign specific sites to Zones (via the “Site to Zone Assignment List” policy) and specify the settings for URLActions on a per-zone basis. This allowed Microsoft IT, for instance, to configure the browser with rules like “Treat https://mail.microsoft.com as a part of my Intranet and allow popups and file downloads without warning messages.”

Beyond manual administrative or user assignment of sites to Zones, the platform used additional heuristics that could assign sites to the Local Intranet Zone. In particular, the browser would assign dotless hostnames (e.g. https://payroll) to the Intranet Zone, and if a Proxy Configuration script was used, any sites configured to bypass the proxy would be mapped to the Intranet Zone.

Applications hosting Web Browser Controls, by default, inherit the Windows Zone configuration settings, meaning that changes made for Internet Explorer are inherited by other applications. In relatively rare cases, the host application might supply its own Security Manager and override URL Policy decisions for embedded Web Browser Control instances.

The Trouble with Zones

While powerful and convenient, Zones are simultaneously problematic bug farms:

  • Users might find that their mission critical corporate sites stopped working if their computer’s Group Policy configuration was outdated.
  • Users might manually set configuration options to unsafe values without realizing it.
  • Attempts to automatically provide isolation of cookies and other data by Zone led to unexpected behavior, especially for federated authentication scenarios.

Zone-mapping heuristics are extra problematic:

  • A Web Developer working on a site locally might find that it worked fine (Intranet Zone), but failed spectacularly for their users when deployed to production (Internet Zone).
  • Users were often completely flummoxed to find that the same page on a single server behaved very differently depending on how they referred to it — e.g. http://localhost/ (Intranet Zone) vs. http://127.0.0.1/ (Internet Zone).

The fact that proxy configuration scripts can push sites into the Intranet zone proves especially challenging, because:

  • A synchronous API call might need to know what Zone a caller is in, but determining that could, in the worst case, take tens of seconds — the time needed to discover the location of the proxy configuration script, download it, and run the FindProxyForUrl() function within it. This could lead to a hang and unresponsive UI.
  • A site’s Zone can change at runtime without restarting the browser (say, when moving a laptop between home and work networks, or when connecting or disconnecting from a VPN).
  • An IT Department might not realize the implications of returning DIRECT from a proxy configuration script and accidentally map the entire untrusted web into the highly-privileged Intranet Zone. (Microsoft IT accidentally did this circa 2011).
  • Some features like AppContainer Network Isolation are based on firewall configuration and have no inherent relationship to the browser’s Zone settings.

Legacy Edge

The legacy Edge browser (aka Spartan, Edge 18 and below) inherited the Zone architecture from its Internet Explorer predecessor with a few simplifying changes:

  • Windows’ five built-in Zones were collapsed to three: Internet (Internet), the Trusted Zone (Intranet+Trusted), and the Local Computer Zone. The Restricted Zone was removed.
  • Zone to URLAction mappings were hardcoded into the browser, ignoring group policies and settings in the Internet Control Panel.

Use of Zones in Chromium

Chromium goes further and favors making decisions based on explicitly-configured site lists and/or command-line arguments.

Nevertheless, in the interest of expediency, Chromium today uses Windows’ Security Zones by default in two places:

  1. When deciding how to handle File Downloads, and
  2. When deciding whether or not to release Windows Integrated Authentication (Kerberos/NTLM) credentials automatically.

For the first one, if you’ve configured the setting Launching applications and unsafe files to Disable in your Internet Control Panel’s Security tab, Chromium will block file downloads with a note: “Couldn’t download – Blocked.”

For the second, Chromium will process URLACTION_CREDENTIALS_USE to decide whether Windows Integrated Authentication is used automatically, or whether the user should instead see a manual authentication prompt. (Aside: the manual authentication prompt is really a bit of a mistake– the browser should instead just show a “Would you like to [Send Credentials] or [Stay Anonymous]?” dialog box, rather than forcing the user to reenter credentials that Windows already has.)

Even Limited Use is Controversial

Respect for Zones2 in Chromium remains controversial—the Chrome team has launched and abandoned plans to remove them a few times, ultimately giving up under the weight of enterprise compat concerns. Their arguments for complete removal include:

  1. Zones are poorly documented, and Windows Zone behavior is poorly understood.
  2. The performance/deadlock risks mentioned earlier (Intranet Zone mappings can come from a system-discovered proxy script).
  3. Zones are Windows-only, so any behavior built atop them cannot carry over to Chromium’s other platforms (e.g. ChromeOS).

Note: By configuring an explicit site list policy for Windows Authentication, an administrator disables the browser’s URLACTION_CREDENTIALS_USE check, so Zones Policy is not consulted. A similar option is not presently available for Downloads.

Zones in the New Edge

Beyond the two usages of Zones inherited from upstream, the new Chromium-based Edge browser (v79+) adds one more:

  1. Administrators can configure Internet Explorer Mode to open all Intranet sites in IEMode. Those IEMode tabs are really running Internet Explorer, and they use Zones for everything that IE did.

Update: This is very much a corner case, but I’ll mention it anyway. On downlevel operating systems (Windows 7/8/8.1), logging into the browser for sync makes use of a Windows dialog box that contains a Web Browser Control (based on MSHTML) that loads the login page. If you adjust your Windows Security Zones settings to block JavaScript from running in the Internet Zone, you will find that you’re unable to log into the new browser. Oops.

Downsides/Limitations

While it’s somewhat liberating that we’ve moved away from the bug farm of Security Zones, it also gives us one less tool to make things convenient or compatible for our users and IT admins.

We’ve already heard from some customers that they’d like to have a different security and privacy posture for sites on their Intranet, with behavior like:

  • Disable the Tracking Prevention, “Block 3rd party cookie”, and other privacy-related controls for the Intranet (like IE/Edge did).
  • Allow navigation to file:// URIs from the Intranet (like IE/Edge did)
  • Disable “HTTP and mixed content are unsafe” and “TLS/1.0 and TLS/1.1 are deprecated” nags.
  • Skip SmartScreen checks for the Intranet.
  • Allow ClickOnce/DirectInvoke/Auto-opening Downloads from the Intranet without a prompt. Previously, Edge (Spartan)/IE respected the FTA_OpenIsSafe bit in the EditFlags for the application.manifest progid if-and-only-if the download source was in the Intranet/Trusted Sites Zone.
  • Allow launching application protocols from the Intranet without a prompt.
  • Drop all Referers when navigating from the Intranet to the Internet; leave Referers alone when browsing the Intranet.
  • Internet Explorer and legacy Edge will automatically send your client certificate to Intranet sites that ask for it. The AutoSelectCertificateForUrls policy permits Edge to send a client certificate to specified sites without a prompt, but this policy requires the administrator to manually list the sites.
  • Block all (or most) extensions from touching Intranet pages to reduce the threat of data leaks.
  • Guide all Intranet navigations into an appropriate profile or container (a la Detangle).
  • Upstream, there’s a longstanding desire to help protect intranets/local machine from cross-site-request-forgery attacks; blocking loads and navigations of private resources from the Internet Zone is somewhat simpler than blocking them from Intranet Sites.

At present, only AutoSelectCertificateForUrls, manual cookie controls, and mixed content nags support policy-pushed site lists, but their list syntax doesn’t have any concept of “Intranet” (dotless hosts, hosts that bypass proxy).

You’ll notice that each of these has potential security impact (e.g. an XSS on a privileged “Intranet” page becomes more dangerous; unqualified hostnames can result in name collisions), but having the ability to scope some features to only “Intranet” sites might also improve security by reducing attack surface.

As browser designers, we must weigh the enterprise impact of every change we make, and being able to say “This won’t apply to your intranet if you don’t want it to” would be very liberating. Unfortunately, building such an escape hatch is also the recipe for accumulating technical debt and permitting the corporate intranets to “rust” to the point that they barely resemble the modern public web.

Best Practices

Throughout Chromium, many features are designed to respect an individual policy-pushed list of sites to control their behavior. If you were forward-thinking enough to structure your intranet such that your hostnames all share a common private suffix (e.g. https://payroll.contoso-intranet.com, https://timecard.contoso-intranet.com):

Congratulations, you’ve lucked into a best practice. You can configure each desired policy with a *.contoso-intranet.com entry and your entire Intranet will be opted in.

Unfortunately, while wildcards are supported, there’s presently no way (as far as I can tell) to express the concept of “any dotless hostname.”

Why is that unfortunate? For over twenty years, Internet Explorer and legacy Edge mapped domain names like https://payroll, https://timecard, and https://sharepoint/ to the Intranet Zone by default. As a result, many smaller companies have benefitted from this simple heuristic that requires no configuration changes by the user or the IT department.

Opportunity: Maybe such a DOTLESS_HOSTS token should exist in the Chromium policy syntax. TODO: figure out if this is worth doing.

Summary

  • Internet Explorer and Legacy Edge use a system of five Zones and 88+ URLActions to make security decisions for web content, based on the host of a target site.
  • Chromium (New Edge, Chrome) uses a system of Site Lists and permission checks to make security decisions for web content, based on the host of a target site.

There does not exist an exact mapping between these two systems, which exist for similar reasons but are implemented using very different mechanisms.

In general, users should expect to be able to use the new Edge without configuring anything; many of the URLActions that were exposed by IE/Spartan have no logical equivalent in modern browsers.

If the new Edge browser does not behave in the desired way for some customer scenario, then we must examine the details of what isn’t working as desired to determine whether there exists a setting (e.g. a Group Policy-pushed SiteList) that provides the desired experience.

-Eric

1 Technically, it was possible for an administrator to create “Custom Security Zones” (with increasing ZoneIds starting at #5), but such a configuration has not been officially supported for at least fifteen years, and it’s been a periodic source of never-to-be-fixed bugs.

2 Beyond those explicit uses of Windows’ Zone Manager, various components in Chromium have special handling for localhost/loopback addresses, and some have special recognition of RFC1918 private IP Address ranges (e.g. SafeBrowsing handling) and Network Quality Estimation.

Within Edge, the EMIE List is another mechanism by which sites’ hostnames may result in different handling.

Prelude

In late 2004, I was the Program Manager for Microsoft’s clipart website, delivering a million pieces of clipart to Microsoft Office customers every day. It was great fun. But there was a problem– our “Clip of the Day” feature, meant to spotlight a new and topical piece of clipart every day, wasn’t changing as expected.

After much investigation (could the browser itself really be wrong?!?), I wrote to the IE team to complain about what looked like bugs in its caching implementation. In a terse reply, I was informed that the handful of people then left on the browser team were only working on critical security fixes, and my caching problems weren’t nearly important enough to even look at.

That night, unable to sleep, I tossed and turned and fumed at the seeming arrogance of the job link in the respondent’s email signature… “Want to change the world? Join the new IE team today!”

Gradually, though, I calmed down and reasoned it through… While the product wasn’t exactly beloved, everyone I knew with a computer used Internet Explorer. Arrogant or not, it was probably accurate that there was nothing I could do with my career at that time that would have as big an impact as joining the IE team. And, I smugly realized that if I joined the team, I’d get access to the IE source code, and could go root out those caching bugs myself.

I reached out to the IE lead for an informational interview the following day, and passed an interview loop shortly thereafter.

After joining the team, I printed out the source code for the network stack and sat down with a red pen. There were no fewer than six different bugs causing my “Clip of the Day gets stuck” issue. When my devs fixed the last of them, I mentioned this and my story to my GPM (boss’ boss).

“Does this mean you’re a retention risk?” Tony asked.

“Maybe after we fix the rest of these…” I retorted, pointing at the pile of paper with almost a hundred red circles.

No one in the world loved IE as much as I did, warts and all. Investigating, documenting, and fixing problems in Internet Explorer was a nearly all-consuming passion throughout my twenties. Internet Explorer pioneered a broad range of (mostly overlooked) innovations, and in rediscovering them, I felt like one of the characters on Lost — a castaway in a codebase whose brilliant designers were long gone. IE9 was a fantastic, best-of-its-time browser, and I’ll forever be proud of it. But as IE9 wound down and the Windows 8 adventure began, it was already clear that its lead would not last against the Chrome juggernaut.

I shipped IE7, IE8, IE9, and IE10, leaving Microsoft in late 2012, shortly after IE10 was finished, to build Fiddler for Telerik.

In 2015, I changed my default browser to Chrome. In 2016, I joined the Chrome Security team. I left Google in the summer of 2018 and rejoined the Microsoft Edge team, and that summer and fall I spent 50% of my time rediscovering bugs that I’d first found in IE and blogged about a decade before.

Fortunately, Edge’s faster development pace meant that we actually got to fix some of the bugs this time, but Chrome’s advantages in nearly every dimension left Edge very much in an underdog status. Fortunately, the other half of my time was spent working on our (then) secret project to replatform the next version of our Edge browser atop the open-source Chromium project.

We’ve now shipped our best browser ever — the Chromium-based Microsoft Edge. I hope you’ll try it out.

It’s with love that I beg you… please let Internet Explorer retire to the great bitbucket in the sky. It’s time. It’s been time for a long time.

Burndown List

Last night, as I read the details of yet another 0-day security bug in Internet Explorer, I posted the following throwaway tweet, which netted a surprising number of interactions:

I expected the usual slew of “Yeah, IE is terrible,” and “IE was always terrible,” and “Somebody tell my {boss,school,parents}” responses, but I didn’t really expect serious replies. I got some, however, and they’re interesting.

Shared Credentials

Internet Explorer shares a common networking stack (WinINET) and Cookie Jar (for Intranet/Trusted sites) with many native code applications on Windows, including Windows Explorer. Tim identifies a scenario where Windows Explorer relies on an auth cookie being found in the WinINET cookie jar, put there by Internet Explorer. We’ve seen similar scenarios in some Microsoft Office flows.

Depending on a cookie set by Internet Explorer might’ve been somewhat reasonable in 2003, but Vista/IE7’s introduction of Protected Mode (and cookie jar partitioning) in 2006 made this a fragile architecture. The fact that anything depends upon it in 2020 is appalling.

Thoughts: I need to bang on some doors. This is depressing.

Certificate Issuance

Developers who apply digital signatures to their apps and server operators who expose their sites over HTTPS do so using a digital certificate. In ideal cases, getting a certificate is automatic and doesn’t involve a browser at all, but some Certificate Authorities require browser-based flows. Those flows often demand that the user use either Internet Explorer or Firefox because the former supports ActiveX Controls for certificate issuance, while Firefox, until recently, supported the Keygen element.

WebCrypto, now supported in all modern browsers, serves as a modern replacement for these deprecated approaches, and some certificate issuers are starting to build issuance flows atop it.

Thoughts: We all need to send some angry emails. Companies in the Trust space should not be built atop insecure technologies.

Banking, especially in Asia

A fascinating set of circumstances led to Internet Explorer’s dominance in Asian markets. First, early browsers had poor support for Unicode and East Asian character sets, forcing website developers to build their own text rendering atop native code plugins (ActiveX). South Korea mandated use of a locally-developed cipher (SEED) for banking transactions[1], and this cipher was not implemented by browser developers… ActiveX again to the rescue. Finally, since all users were using IE, and were accustomed to installing ActiveX controls, malware started running rampant, so banks and other financial institutions started bundling “security solutions” (aka rootkits) into their ActiveX controls. Every user’s browser was a battlefield with warring native code trying to get the upper hand. A series of beleaguered Microsoft engineers (including Ed Praitis, who helped inspire me to make my first significant code commits to the browser) spent long weeks trying to keep all of this mess working as we rearchitected the browser, built Protected Mode and later Enhanced Protected Mode, and otherwise modernized a codebase nearing its second decade.

Thoughts: IE marketshare in Asia may be higher than other places, but it can’t be nearly as high as it once was. Haven’t these sites all pivoted to mobile apps yet?

Reader Survey: Do you have any especially interesting scenarios where you’re forced to use Internet Explorer? Sound off in the comments below!

Q&A

Q: I get that IE is terrible, but I’m an enterprise admin and I own 400 lousy websites written by a vendor in a hurry back in 2004. These sites will not be updated, and my employees need to keep using them. What can I do?

A: The new Chromium-based Edge has an IE Mode; you can configure your users so that Edge will use an Internet Explorer tab when loading those sites, directly within Edge itself.

Q: Uh, isn’t IE Mode a security risk?

A: Any use of an ancient web engine poses some risk, but IE Mode dramatically reduces the risk, by ensuring that only sites selected by the IT Administrator load in IE mode. Everything else seamlessly transitions back to the modern, performant and secure Chromium Edge engine.

Q: What about Web Browser Controls (WebOCs) inside my native code applications?

A: In many cases, WebOCs inside a native application are used to render trusted content delivered from the application itself, or from a server controlled by the application’s vendor. In such cases, and presuming that all content is loaded over HTTPS, the security risk of the use of a WebOC is significantly lower. Rendering untrusted HTML in a WebOC is strongly discouraged, as WebOCs are even less secure than Internet Explorer itself. For compatibility reasons, numerous security features are disabled-by-default in WebOCs, and the WebOC does not run content in any type of process sandbox.

Looking forward, the new Chromium-based WebView2 control should be preferred over WebOCs for scenarios that require the rendering of HTML content within an application.

Q: Does this post mean anything has changed with regard to Internet Explorer’s support lifecycle, etc?

A: No. Internet Explorer will remain a supported product until its support lifecycle runs out. I’m simply begging you to not use it except to download a better browser.

Footnotes

[1] The SEED cipher wasn’t just a case of the South Korean government suffering from not-invented-here, but instead a response to the fact that the US Government at the time forbade export of strong crypto.

UPDATE: Timelines in this post were updated on March 31, 2020 to reflect the best available information. Timelines remain somewhat in flux due to world events.

HTTPS traffic is encrypted and protected from snooping and modification by an underlying protocol called Transport Layer Security (TLS). Disabling outdated versions of the TLS security protocol will help move the web forward toward a more secure future. All major browsers (including Firefox, Chrome, Safari, Internet Explorer and Edge Legacy) have publicly committed to require TLS version 1.2 or later by default starting in 2020.

Starting in Edge 84, reaching stable in July 2020, the legacy TLS/1.0 and TLS/1.1 protocols will be disabled by default. These older protocol versions are less secure than the TLS/1.2 and TLS/1.3 protocols that are now widely supported by websites:

To help users and IT administrators discover sites that still only support legacy TLS versions, the edge://flags/#show-legacy-tls-warnings flag was introduced in Edge Canary version 81.0.392. Simply set the flag to Enabled and restart the browser for the change to take effect:

Subsequently, if you visit a site that requires TLS/1.0 or TLS/1.1, the lock icon will be replaced with a “Not Secure” warning in the address box, alongside the warning in the F12 Developer Tools Console:

As shown earlier in this post, almost all sites are already able to negotiate TLS/1.2. For those that aren’t, enabling it is typically a simple configuration option in either the server’s registry or the web server’s configuration file. (Note that you can leave TLS/1.0 and TLS/1.1 enabled on the server if you like, as browsers will negotiate the latest common protocol version).

In some cases, server software may have no support for TLS/1.2 and will need to be updated to a version with such support. However, we expect that these cases will be rare—the TLS/1.2 protocol is now over 11 years old.
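
On Windows servers that use Schannel, for example, the relevant settings live under well-known registry keys; a sketch is below (the exact values and defaults vary by OS version, so verify against current documentation):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000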

Obsolete TLS Blocks Subdownloads

Often a website pulls in some page content (like script or images) from another server, which might be running a different TLS version. In cases where that content server does not support TLS/1.2 or later, the content will simply be missing from the parent page.

You can identify cases like this by watching for the message net::ERR_SSL_OBSOLETE_VERSION in the Developer Tools console:

Group Policy Details

Organizations with internal sites that are not yet prepared for this change can configure group policies to re-enable the legacy TLS protocols.

For the new Edge, use the SSLVersionMin Group Policy. This policy will remain available until the removal of the TLS/1.0 and TLS/1.1 protocols from Chromium in January 2021. Stated another way, the new Edge will stop supporting TLS/1.0+1.1 (regardless of policy) in January 2021.
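
For example, pushing a policy value like the following (value names per the Chromium policy documentation) keeps the legacy protocols enabled until that removal date:

SSLVersionMin = "tls1"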

For IE11 and Edge Legacy, the policy in question is the (dubiously-named) “Turn off encryption support” found inside Windows Components/Internet Explorer/Internet Control Panel/Advanced Page. Edge Legacy and IE will likely continue to support enabling these protocols via GP until they are broken from a security POV; this isn’t expected to happen for a few years.

IE Mode Details

The New Edge has the ability to load administrator-configured sites in Internet Explorer Mode.

IEMode tabs depend on the IE TLS settings, so if you need an IEMode site to load a TLS/1.0 website after September 2020, you’ll need to enable TLS/1.0 using the “Turn off encryption support” group policy found inside Windows Components/Internet Explorer/Internet Control Panel/Advanced Page.

Otherwise, Edge tabs depend on the Edge Chromium TLS settings, so if you need an Edge mode tab (the default) to load a TLS/1.0 website after July 2020, you’ll need to enable TLS/1.0 using the SSLVersionMin group policy.

If you need to support a TLS/1.0 site in both modes (e.g. the site is configured as “Neutral”), then you will need to set both policies.

Thanks for your help in securing the web!

-Eric

Note: TLS/1.0 and TLS/1.1 will be disabled by default in the new Chromium-based Edge starting in Edge 84. These older protocols will not be disabled in IE and Edge Legacy at that time — these protocols will remain on by default in IE/Legacy Edge until September 2020.

Support for the venerable FTP protocol is being removed from Chromium. Standardized in 1971, FTP is not a safe protocol for the modern internet. Its primary defect is lack of support for encryption (FTPS isn’t supported by any popular browsers), although poor support for authentication and other important features (download resumption, proxying) also have hampered the protocol over the years.

With removal first proposed by the networking lead nearly six years ago, FTP support has been gradually pared back, first blocking such URLs for subresources in Chrome 59, and later forcing FTP resources to be treated as downloads in Chrome 72. Now FTP support is going away entirely, starting in version 80, although a flag (chrome://flags/#enable-ftp) will remain available to turn it back on for a limited time.

After FTP support is removed, clicking on a FTP link will either launch the operating system’s registered FTP handler (if any), or will silently fail to do anything (as Chrome fails silently when an application protocol handler is not installed).

If your scenario depends on FTP today, please switch over to HTTPS as soon as possible.

thanks!

-Eric

 

As your browser navigates from page to page, servers are informed of the URL you’ve come from using the Referer HTTP header1; the document.referrer DOM property reveals the same information to JavaScript.

Similarly, as the browser downloads the resources (images, styles, JavaScript) within webpages, the Referer header on the request allows the resource’s server to determine which page is requesting the resource.

The Referrer is omitted in some cases, including:

  • When the user navigates via some mechanism other than a link in the page (e.g. choosing a bookmark or using the address box)
  • When navigating from HTTPS pages to HTTP pages
  • When navigating from a resource served by a protocol other than HTTP(S)
  • When the page opts-out (details in a moment)

Usefulness

The Referrer mechanism can be very useful, because it helps a site owner understand from where their traffic is originating. For instance, WordPress automatically generates a dashboard that shows me where my blog gets its visitors.

I can see not only which Search Engines send me the most users, but also which specific posts on Reddit are driving traffic my way.

Privacy Implications

Unfortunately, this default behavior has a significant impact on privacy, because it can potentially leak private and important information.

Imagine, for example, that you’re reviewing a document your mergers and acquisitions department has authored, with the URL https://contoso.com/Q4/PotentialAcquisitionTargetsUpTo5M.docx. Within that document, there might be a link to https://fabrikam.com/financialdisclosures.htm. If you were to click that link, the navigation request to Fabrikam’s server will contain the full URL of the document that led you there, potentially revealing information that your firm would’ve preferred to keep quiet.

Similarly, your search queries might contain something you don’t mind Bing knowing (“Am I required to disclose a disease before signing up for HumongousInsurance.com?”) but that you didn’t want to immediately reveal to the site where you’re looking for answers.

If your web-based email reader puts your email address in the URL, or includes the subject of the current email, links you click in that email might be leaking information you wish to keep private.

The list goes on and on. This class of threat was noted almost thirty years ago:

Referrer Policy

Websites have always had ways to avoid leaking information to navigation targets, usually involving nonstandard navigation mechanisms (e.g. meta refresh) or by wrapping all links so that they go through an innocuous page (e.g. https://example.net/offsitelink.aspx).

However, these mechanisms were non-standard, cumbersome, and would not control the referrer information sent when downloading resources embedded in pages. To address these limitations, Referrer Policy was developed and implemented by most browsers2.


Referrer Policy allows a website to control what information is sent in Referer headers and exposed to the document.referrer property. As noted in the spec, the policy can be specified in several ways:

  • Via the Referrer-Policy HTTP response header.
  • Via a meta element with a name of referrer.
  • Via a referrerpolicy content attribute on an a, area, img, iframe, or link element.
  • Via the noreferrer link relation on an a, area, or link element.
  • Implicitly, via inheritance.

The policy can be any of the following:

  • no-referrer – Do not send a Referer.
  • unsafe-url – Send the full URL (lacking only auth info and fragment), even on navigations from HTTPS to HTTP.
  • no-referrer-when-downgrade – Don’t send the Referer when navigating from HTTPS to HTTP. [The longstanding default behavior of browsers.]
  • strict-origin-when-cross-origin – For a same-origin navigation, send the URL. For a cross-origin navigation, send only the Origin of the referring page. Send nothing when navigating from HTTPS to HTTP. [Spoiler alert: The new default.]
  • origin-when-cross-origin – For a same-origin navigation, send the URL. For a cross-origin navigation, send only the Origin of the referring page. Send the Referer even when navigating from HTTPS to HTTP.
  • same-origin – Send the Referer only for same-origin navigations.
  • origin – Send only the Origin of the referring page.
  • strict-origin – Send only the Origin of the referring page; send nothing when navigating from HTTPS to HTTP.
  • empty string – Inherit, or use the default

As you can see, there are quite a few policies. That’s partly due to the strict- variations which prevent leaking even the origin information on HTTPS->HTTP navigations.
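
For example, under strict-origin-when-cross-origin, a click from https://example.com/private/report.html?id=42 to a page on another HTTPS site sends only:

Referer: https://example.com/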

Improving Defaults

With this background out of the way, the Chromium team has announced that they plan to change the default Referrer Policy from no-referrer-when-downgrade to strict-origin-when-cross-origin. This means that cross-origin navigations will no longer reveal path or query string information, significantly reducing the possibility of unexpected leaks.

As with other big privacy changes, this change is slated to ship in v80, but the code has been in place for five years, and you can enable it today in Chrome 78+ and Edge 78+:

  1. Visit chrome://flags/#reduced-referrer-granularity
  2. Set the feature to Enabled
  3. Restart your browser

I’ve published a few toy test cases for playing with Referrer Policy here.

As noted in their Intent To Implement, the Chrome team are not the first to make changes here. As of Firefox 70 (Oct 2019), the default referrer policy is set to strict-origin-when-cross-origin, but only for requests to known-tracking domains, OR while in Private mode. In Safari ITP, all cross-site HTTP referrers and all cross-site document.referrers are downgraded to origin. Brave forges the Referer (sending the Origin of the target, not the source) when loading 3rd party resources.

Understand the Limits

Note that this new default is “opt-out”– a page can still choose to send unrestricted referral URLs if it chooses. As an author, I selfishly hope that sites like Reddit and Hacker News might do so.

Also note that this new default does not in any way limit JavaScript’s access to the current page’s URL. If your page at https://contoso.com/SuperSecretDoc.aspx includes a tracking script:

<script src="https://example.com/track.js"></script>

… the HTTPS request for track.js will send Referer: https://contoso.com/, but when the script runs, it will have access to the full URL of its execution context (https://contoso.com/SuperSecretDoc.aspx) via the window.location.href property.

Test Your Sites

If you’re a web developer, you should test your sites in this new configuration and update them if anything is unexpectedly broken. If you want the browser to behave as it used to, you can use any of the policy-specification mechanisms to request no-referrer-when-downgrade behavior for either an entire page or individual links.
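
For example, any one of the following restores the old default for an entire page or an individual link (placeholder URL shown):

Referrer-Policy: no-referrer-when-downgrade
<meta name="referrer" content="no-referrer-when-downgrade">
<a href="https://example.com/page" referrerpolicy="no-referrer-when-downgrade">…</a>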

Or, you might pick an even stricter policy (e.g. same-origin) if you want to prevent even the origin information from leaking out on a cross-site basis. You might consider using this on your Intranet, for instance, to help prevent the hostnames of your Intranet servers from being sent out to public Internet sites.

Stay private out there!

-Eric

1 The misspelling of the HTTP header name is a historical error which was never corrected.

2 Notably, Safari, IE11, and versions of Edge 18 and below only supported an older draft of the Referrer policy spec, with tokens never (matching no-referrer), always (matching unsafe-url), origin (unchanged) and default (matching no-referrer-when-downgrade). Edge 18 supported origin-when-cross-origin, but only for resource subdownloads.