While I do most of my work in an office, from time to time I work on code changes to Chromium at home. With the recent deprecation of Jumbo Builds, building the browser on my cheap 2016-era Dell XPS 8900 (i7-6700K) went from unpleasant to impractical. While I pondered buying a high-end Threadripper, I couldn’t justify the high cost, especially given its limited benefit for low-thread workloads (basically, everything other than compilation).
The introduction of the moderately priced (nominally $750) 16-core Ryzen 3950X hit the sweet spot, so I plunked down my credit card and got a new machine from a system builder. Disappointingly, it took almost two months to arrive in a working state, but things seem to be good now.
The AMD Ryzen 3950X has 16 cores with two threads each, and runs around 3.95 GHz when they’re all fully loaded; it’s cooled by a CyberPowerPC DeepCool Castle 360EX liquid cooler. An Intel Optane 905P 480GB system drive holds the OS, compilers, and Chromium code. The key advantage of the Optane over more affordable SSDs is its much higher random read rate, roughly four times that of the Samsung 970 Pro I originally planned to use.
Following the Chromium build instructions, I configured my environment and set up a 32-bit component build with reduced symbols.
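The exact args.gn contents didn’t survive in this copy; a comparable 32-bit component build with reduced symbols might use GN arguments along these lines (a sketch, not my actual file — argument names are from Chromium’s GN build configuration):

```
# Hypothetical args.gn for a faster developer build
is_component_build = true   # many small DLLs; faster incremental links
target_cpu = "x86"          # 32-bit build
symbol_level = 1            # reduced symbols: line info without full types
blink_symbol_level = 0      # skip Blink symbols to save more time
enable_nacl = false         # skip Native Client to reduce build targets
```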
Atop Windows 10 1909, I disabled Windows Defender entirely, and didn’t do anything too taxing with the PC while the build was underway.
Ultimately, a clean build of the “chrome” target took just under 53 minutes, achieving 33.3x parallelism.
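For the curious, that parallelism figure can be approximated from ninja’s own build log. A sketch in Python, assuming ninja’s v5 `.ninja_log` format (tab-separated start/end milliseconds per build step):

```python
def estimate_parallelism(ninja_log_lines):
    """Estimate average build parallelism from ninja's .ninja_log (v5 format).

    Each non-comment line looks like:
        start_ms<TAB>end_ms<TAB>mtime<TAB>output<TAB>command_hash
    Parallelism ~= total time spent in build steps / wall-clock duration.
    """
    spans = []
    for line in ninja_log_lines:
        if line.startswith("#") or not line.strip():
            continue  # skip the "# ninja log v5" header and blank lines
        fields = line.split("\t")
        spans.append((int(fields[0]), int(fields[1])))
    wall_clock = max(end for _, end in spans) - min(start for start, _ in spans)
    busy = sum(end - start for start, end in spans)
    return busy / wall_clock
```

A 53-minute wall-clock build whose steps sum to roughly 29 hours of compute time works out to the ~33x figure above.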
While this isn’t a fast result by any stretch of the imagination, it’s still faster than my non-jumbo local build times back when I worked at Google in 2016/2017 and used a $6000, 48-thread Xeon workstation to build Chrome, at roughly half the cost.
Cloud Compilation
When I first joined Google, I learned about the seemingly magical engineering systems available to Googlers, quickly followed by the crushing revelation that most of those magic tools were not available to those of us working on the Chromium open-source project.
The one significant exception was that Google Chrome engineers had access to a distributed build system called “Goma” which allowed compiling Chrome using servers in the Google cloud. My queries around the team suggested that only a minority of engineers took advantage of it, partly because (at the time) it didn’t generate very debuggable Windows builds. Nevertheless, I eventually gave it a shot and found that it cut perhaps five minutes off my forty-five minute jumbo build times on my Xeon workstation. I rationalized this by concluding that the build must not be very parallelizable, and by noting that I worked remotely from Austin, so build artifacts from the Goma cloud had much further to travel to reach me than to reach my colleagues in Mountain View.
Given the complexity of the configuration, I stopped using Goma, and spent perhaps half of my tenure on Chrome with forty-five minute build times[1]. Then, one day I needed to do some development on my Macbook, and I figured its puny specs would benefit from Goma in a way my Xeon workstation never would. So I went back to read the Goma documentation and found a different reference than the one I’d seen originally. This one mentioned a “-j” command-line argument, previously unknown to me, that tells the build system how many cloud cores to use.
This new, better documentation noted that by default the build system would just match your local core count, but that when using Goma you should instead demand ~20x your local core count, so -j 960 for my workstation. With one command-line argument, my typical compiles dropped from 45 minutes to around 6.
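The guidance boils down to a trivial calculation; a hypothetical helper (the 20x multiplier is the documentation’s rule of thumb, not a hard limit):

```python
import os

def remote_build_job_count(multiplier=20):
    """Suggested ninja -j value for a distributed (Goma-style) build.

    By default ninja matches the local core count; with remote compile
    servers, the docs suggest oversubscribing by roughly 20x so that
    local cores stay busy preprocessing and dispatching work.
    """
    return (os.cpu_count() or 1) * multiplier

# e.g. a 48-thread workstation: 48 * 20 == 960, hence "-j 960"
```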
::suitable_meme_of_wonder_and_fury::
Returning to Edge
I returned to Microsoft as a Program Manager on the Edge team in mid-2018, unaware that replatforming atop Chromium was even a possibility until the day before I started. Just before I began, a lead sent me a 27 page PDF file containing the Edge-on-Chromium proposal. “What do you think?” he asked. I had a lot of thoughts (most of the form “OMG, yes!“) but one thing I told everyone who would listen is that we would never be able to keep up without having a cloud-compilation system akin to Goma. The Google team had recently open-sourced the Goma client, but hadn’t yet open-sourced the cloud server component. I figured the Edge team had engineering years worth of work ahead of us to replicate that piece.
When an engineer on the team announced two weeks later that he had “MSGoma” building Chromium using an Azure cloud backend, it was the first strong sign that this crazy bet could actually pay off.
And pay off it has. While I still build locally from time to time, I typically build Chromium using MSGoma from my late 2018 Lenovo X1 Extreme laptop, with build times hovering just over ten minutes. Cloud compilation is a game changer.
The Chrome team has since released a Goma Server implementation, and several other major Chromium contributors are using distributed build systems of their own design.
I haven’t yet tried using MSGoma from my new Ryzen workstation, but I’ve been told that the Optane drive is especially helpful when performing distributed builds, due to the high incidence of small random reads.
-Eric
[1] This experience recalled a much earlier one: my family moving to Michigan shortly after I turned 11. Our new house featured a huge yard. My dad bought a self-propelled lawn mower and my brother and I took turns mowing the yard weekly. The self-propelled mower was perhaps fifteen pounds heavier than our last mower, and the self-propelling system didn’t really seem to do much of anything.
After two years of weekly mows from my brother and me, my dad took a turn mowing. He pushed the lawn mower perhaps five feet before he said “That isn’t right,” reached under the control panel and flipped a switch. My brother and I watched in amazement and dismay as the mower began pulling him across the yard.
As a part of every page load, browsers have to make dozens, hundreds, or even thousands of decisions — should a particular API be available? Should a resource load be permitted? Should script be allowed to run? Should video be allowed to start playing automatically? Should cookies or credentials be sent on network requests? The list is long.
In many cases, decisions are governed by two inputs: a user setting, and the URL of the page for which the decision is being made.
In the old Internet Explorer web platform, each of these decisions was called a URLAction, and the ProcessUrlAction(url, action,…) API allowed the browser or another web client to query its security manager for guidance on how to behave.
To simplify the configuration for the user or their administrator, the legacy platform classified sites into five1 different Security Zones:
Local Machine
Local Intranet
Trusted
Internet
Restricted
Users could use the Internet Control Panel to assign specific sites to Zones and to configure the permission results for each zone. When making a decision, the browser would first map the execution context (site) to a Zone, then consult the setting for that URLAction for that Zone to decide what to do.
Reasonable defaults like “Automatically satisfy authentication challenges from my Intranet” meant that most users never needed to change any settings away from their defaults.
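The real ProcessUrlAction is a Windows COM API, but the two-step lookup it performs can be sketched in a few lines of Python. The zone and action names below are real; the site and policy tables are hypothetical stand-ins for user/admin configuration:

```python
# Simplified model: map the site to a Zone, then look up the setting
# for that (Zone, URLAction) pair to decide what to do.
URLPOLICY_ALLOW, URLPOLICY_QUERY, URLPOLICY_DISALLOW = "allow", "prompt", "block"

SITE_TO_ZONE = {
    "payroll": "Intranet",            # e.g. assigned by heuristic or admin
    "mail.microsoft.com": "Trusted",  # e.g. via a Site-to-Zone list
}

ZONE_ACTION_SETTINGS = {
    ("Intranet", "URLACTION_CREDENTIALS_USE"): URLPOLICY_ALLOW,
    ("Trusted", "URLACTION_CREDENTIALS_USE"): URLPOLICY_QUERY,
    ("Internet", "URLACTION_CREDENTIALS_USE"): URLPOLICY_QUERY,
}

def process_url_action(host, action):
    zone = SITE_TO_ZONE.get(host, "Internet")  # unrecognized sites -> Internet
    return ZONE_ACTION_SETTINGS[(zone, action)]
```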
In corporate or other managed environments, administrators could use Group Policy to assign specific sites to Zones (via the “Site to Zone Assignment List” policy) and specify the settings for URLActions on a per-zone basis. This allowed Microsoft IT, for instance, to configure the browser with rules like “Treat https://mail.microsoft.com as a part of my Intranet and allow popups and file downloads without warning messages.”
Beyond manual administrative or user assignment of sites to Zones, the platform used additional heuristics that could assign sites to the Local Intranet Zone. In particular, the browser would assign dotless hostnames (e.g. https://payroll) to the Intranet Zone, and if a Proxy Configuration script was used, any sites configured to bypass the proxy would be mapped to the Intranet Zone.
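A loose sketch of those two heuristics (not the full set of legacy rules, which also considered UNC paths and other signals):

```python
from urllib.parse import urlsplit

def maps_to_intranet_zone(url, proxy_bypass_hosts=()):
    """Sketch of the two heuristics described above: dotless hostnames
    and proxy-bypass hosts land in the Intranet Zone."""
    host = urlsplit(url).hostname or ""
    if "." not in host:
        return True  # e.g. https://payroll
    return host in proxy_bypass_hosts  # configured to bypass the proxy
```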
Applications hosting Web Browser Controls, by default, inherit the Windows Zone configuration settings, meaning that changes made for Internet Explorer are inherited by other applications. In relatively rare cases, the host application might supply its own Security Manager and override URL Policy decisions for embedded Web Browser Control instances.
A Web Developer working on a site locally might find that it worked fine (Intranet Zone), but failed spectacularly for their users when deployed to production (Internet Zone).
Users were often completely flummoxed to find that the same page on a single server behaved very differently depending on how they referred to it — e.g. http://localhost/ (Intranet Zone) vs. http://127.0.0.1/ (Internet Zone).
A synchronous API call might need to know what Zone a caller is in, but determining that could, in the worst case, take tens of seconds — the time needed to discover the location of the proxy configuration script, download it, and run the FindProxyForUrl() function within it. This could lead to a hang and unresponsive UI.
A site’s Zone can change at runtime without restarting the browser (say, when moving a laptop between home and work networks, or when connecting or disconnecting from a VPN).
An IT Department might not realize the implications of returning DIRECT from a proxy configuration script and accidentally map the entire untrusted web into the highly-privileged Intranet Zone. (Microsoft IT accidentally did this circa 2011, and Google IT accidentally did it circa 2016).
Some features like AppContainer Network Isolation are based on firewall configuration and have no inherent relationship to the browser’s Zone settings.
Legacy Edge
The legacy Edge browser (aka Spartan, Edge 18 and below) inherited the Zone architecture from its Internet Explorer predecessor with a few simplifying changes:
Windows’ five built-in Zones were collapsed to three: Internet (Internet), the Trusted Zone (Intranet+Trusted), and the Local Computer Zone. The Restricted Zone was removed.
Zone to URLAction mappings were hardcoded into the browser, ignoring group policies and settings in the Internet Control Panel.
Use of Zones in Chromium
Chromium goes further and favors making decisions based on explicitly-configured site lists and/or command-line arguments.
Nevertheless, in the interest of expediency, Chromium today uses Windows’ Security Zones by default in two places:
When deciding how to handle file downloads: whether a high-risk download should be blocked, and how the file’s Mark-of-the-Web should be written.
When deciding whether or not to release Windows Integrated Authentication (Kerberos/NTLM) credentials automatically.
For the first one, if you’ve configured the setting Launching applications and unsafe files to Disable in your Internet Control Panel’s Security tab, Chromium will block high-risk file downloads with a note: Couldn't download - Blocked. Similarly, using Save As on a loaded document might get blocked.
Similarly, because Chrome uses the Windows Attachment Execute Services API to write a Mark-of-the-Web on downloaded files, the Launching applications and unsafe files setting (aka URLACTION_SHELL_EXECUTE_HIGHRISK) for the download’s originating Zone controls whether the MoTW is written. If this setting is set to Enable (as it is for LMZ and Intranet), no MoTW is written to the file’s Zone.Identifier alternate data stream. If the Zone’s URLAction value is set to Prompt (as it is for Trusted Sites and Internet zones), the Security Zone identifier is written to the ZoneId property in the Zone.Identifier file.
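The Zone.Identifier stream itself is just a small INI-style blob; a sketch of reading it (the stream-path syntax is Windows/NTFS-specific):

```python
import configparser

def read_motw_zone(zone_identifier_text):
    """Parse the contents of a file's Zone.Identifier alternate data stream.

    The stream is a tiny INI-style blob; ZoneId 3 means Internet.
    Returns the ZoneId as an int, or None if no zone was recorded.
    """
    parser = configparser.ConfigParser()
    parser.read_string(zone_identifier_text)
    if parser.has_option("ZoneTransfer", "ZoneId"):
        return parser.getint("ZoneTransfer", "ZoneId")
    return None

# On Windows/NTFS, the stream is addressable as path + ":Zone.Identifier",
# e.g. open(r"C:\Users\me\Downloads\setup.exe:Zone.Identifier").read()
```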
By setting a policy, Administrators can optionally configure Edge or configure Chrome to skip SmartScreen/SafeBrowsing reputation checks for File Downloads that originate from the Intranet/Trusted Zone.
For the second use of Zones, Chromium will process URLACTION_CREDENTIALS_USE to decide whether Windows Integrated Authentication is used automatically, or the user should instead see a manual authentication prompt. By setting the AuthServerAllowList policy, an admin may prevent Zone Mapping from being used to decide whether credentials should be sent.
Aside: the manual authentication prompt is really a bit of a mistake: the browser should instead just show a “Would you like to [Send Credentials] or [Stay Anonymous]?” dialog box, rather than forcing the user to retype credentials that Windows already has.
Even Limited Use is Controversial
Any respect for Zones (or network addresses2) in Chromium remains controversial: the Chrome team has repeatedly launched plans to remove all support, but has abandoned them each time under the weight of enterprise-compatibility concerns. The arguments for complete removal include:
Zones are poorly documented, and Windows Zone behavior is poorly understood.
The performance/deadlock risks mentioned earlier (Intranet Zone mappings can come from a WPAD-discovered proxy script).
Zones are Windows-only (meaning they prevent drop-in replacement of Windows by ChromeOS).
Beyond the two usages of Zones inherited from upstream (Downloads and Auth), the new Chromium-based Edge browser adds three more:
Administrators can configure Internet Explorer Mode to open all Intranet sites in IEMode. Those IEMode tabs are really running Internet Explorer, and they use Zones for everything that IE did.
Update: This is very much a corner case, but I’ll mention it anyway. On downlevel operating systems (Windows 7/8/8.1), logging into the browser for sync makes use of a Windows dialog box that contains a Web Browser Control (based on MSHTML) that loads the login page. If you adjust your Windows Security Zones settings to block JavaScript from running in the Internet Zone, you will find that you’re unable to log into the new browser.
Downsides/Limitations
While it’s somewhat liberating that we’ve moved away from the bug farm of Security Zones, it also gives us one less tool to make things convenient or compatible for our users and IT admins.
We’ve already heard from some customers that they’d like to have a different security and privacy posture for sites on their “Intranet”, with behaviors like:
Disable the Tracking Prevention, “Block 3rd party cookie”, and other privacy-related controls for the Intranet (like IE/Edge did).
Disable “HTTP and mixed content are unsafe” and “TLS/1.0 and TLS/1.1 are deprecated” nags. (Update: Now pretty obsolete as these no longer exist)
Skip SmartScreen website checks for the Trusted/Intranet zones (available for Download checks only).
Allow ClickOnce/DirectInvoke/Auto-opening Downloads from the Intranet without a prompt. Previously, Edge (Spartan)/IE respected the FTA_OpenIsSafe bit in the EditFlags for the application.manifest progid if-and-only-if the download source was in the Intranet/Trusted Sites Zone. As of Edge 94, other policies can be used.
Drop all Referrers when navigating from the Intranet to the Internet; leave Referrers alone when browsing the Intranet. (Update: less relevant now).
Internet Explorer and legacy Edge automatically send your client certificate to Intranet sites that ask for it. The AutoSelectCertificateForUrls policy permits Edge to send a client certificate to specified sites without a prompt, but this policy requires the administrator to manually specify the site list.
Block all (or most) extensions from touching Intranet pages to reduce the threat of data leaks (runtime_blocked_hosts policy).
Guide all Intranet navigations into an appropriate profile or container (a la Detangle).
Upstream, there’s a longstanding desire to help protect intranets and the local machine from cross-site-request-forgery attacks; blocking loads and navigations of private resources from the Internet Zone is somewhat simpler than blocking them from Intranet sites. The current plan is to protect RFC1918-reserved address space.
You’ll notice that each of these has potential security impact (e.g. an XSS on a privileged “Intranet” page becomes more dangerous; unqualified hostnames can result in name collisions), but having the ability to scope some powerful features to only “Intranet” sites might also improve security by reducing attack surface.
As browser designers, we must weigh the enterprise impact of every change we make, and being able to say “This won’t apply to your intranet if you don’t want it to” would be very liberating. Unfortunately, building such an escape hatch is also the recipe for accumulating technical debt and permitting the corporate intranets to “rust” to the point that they barely resemble the modern public web.
Best Practices
Throughout Chromium, many features are designed to respect an individual policy-pushed list of sites that controls their behavior. If you were forward-thinking enough to structure your intranet so that all of its hostnames share a common suffix (matching, say, *.contoso-intranet.com), congratulations, you’ve lucked into a best practice: you can configure each desired policy with a *.contoso-intranet.com entry and your entire Intranet will be opted in.
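A hypothetical sketch of how such a wildcard entry matches hostnames (real Chromium policy URL patterns are richer, supporting schemes, ports, paths, and the [*.] syntax):

```python
def matches_sitelist_entry(hostname, entry):
    """Loose sketch of matching a hostname against a policy site-list entry.

    Handles only the leading-wildcard form discussed above; a "*.suffix"
    entry matches the bare domain and any subdomain of it.
    """
    if entry.startswith("*."):
        bare = entry[2:]  # e.g. "contoso-intranet.com"
        return hostname == bare or hostname.endswith("." + bare)
    return hostname == entry
```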
Unfortunately, while wildcards are supported, there’s presently no way to express the concept of “any dotless hostname.”
Why is that unfortunate? For over twenty years, Internet Explorer and legacy Edge mapped domain names like https://payroll, https://timecard, and https://sharepoint/ to the Intranet Zone by default. As a result, many smaller companies have benefitted from this simple heuristic that requires no configuration changes by the user or the IT department.
Opportunity: Maybe such a DOTLESS_HOSTS token should exist in the Chromium policy syntax. This seems unlikely to happen. Edge has been on Chromium for over two years now, and there’s no active plan to introduce such a feature.
Summary
Internet Explorer and Legacy Edge use a system of five Zones and 88+ URLActions to make security decisions for web content, based on the host of a target site.
Chromium (New Edge, Chrome) uses a system of Site Lists and permission checks to make security decisions for web content, based on the hostname of a target site.
There does not exist an exact mapping between these two systems, which exist for similar reasons but are implemented using very different mechanisms.
In general, users should expect to be able to use the new Edge without configuring anything; many of the URLActions that were exposed by IE/Spartan have no logical equivalent in modern browsers.
If the new Edge browser does not behave in the desired way for some customer scenario, then we must examine the details of what isn’t working as desired to determine whether there exists a setting (e.g. a Group Policy-pushed SiteList) that provides the desired experience.
-Eric
1 Technically, it was possible for an administrator to create “Custom Security Zones” (with increasing ZoneIds starting at #5), but such a configuration has not been officially supported for at least fifteen years, and it’s been a periodic source of never-will-be-fixed bugs.
2 Beyond those explicit uses of Windows’ Zone Manager, various components in Chromium have special handling for localhost/loopback addresses, and some have special recognition of RFC1918 private IP Address ranges, e.g. SafeBrowsing handling, navigation restrictions, and Network Quality Estimation. As of 2022, Chrome did a big refactor to allow determination of whether or not the target site’s IP address is in the public IP Address space or the private IP address space (e.g. inherently Intranet) as a part of the Private Network Access spec. This check should now be basically free (it’s getting used on every resource load) and it may make sense to start using it in a lot of places to approximate the “This target is not on the public Internet” check.
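Python’s ipaddress module can approximate that public-vs-private determination; a sketch (the actual Private Network Access rules draw finer distinctions):

```python
import ipaddress

def is_non_public_address(ip_text):
    """Rough approximation of the "not on the public Internet" check:
    RFC1918/private, loopback, and link-local addresses all qualify."""
    addr = ipaddress.ip_address(ip_text)
    return addr.is_private or addr.is_loopback or addr.is_link_local
```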
Within Edge, the EMIE List is another mechanism by which sites’ hostnames may result in different handling.
Ancient History
Security Zones were introduced with Internet Explorer 4, released back in 1997.
The UI has only changed a little bit since that time, with most of the changes happening in IE5. There were only tiny tweaks in IE6, 7, and 8.
In late 2004, I was the Program Manager for Microsoft’s clipart website, delivering a million pieces of clipart to Microsoft Office customers every day. It was great fun. But there was a problem– our “Clip of the Day” feature, meant to spotlight a new and topical piece of clipart every day, wasn’t changing as expected.
After much investigation (could the browser itself really be wrong?!?), I wrote to the IE team to complain about what looked like bugs in its caching implementation. In a terse reply, I was informed that the handful of people then left on the browser team were only working on critical security fixes, and my caching problems weren’t nearly important enough to even look at.
That night, unable to sleep, I tossed and turned and fumed at the seeming arrogance of the job link in the respondent’s email signature… “Want to change the world? Join the new IE team today!”
Gradually, though, I calmed down and reasoned it through… While the product wasn’t exactly beloved, everyone I knew with a computer used Internet Explorer. Arrogant or not, it was probably accurate that there was nothing I could do with my career at that time that would have as big an impact as joining the IE team. And, I smugly realized that if I joined the team, I’d get access to the IE source code, and could go root out those caching bugs myself.
I reached out to the IE lead for an informational interview the following day, and passed an interview loop shortly thereafter.
After joining the team, I printed out the source code for the network stack and sat down with a red pen. There were no fewer than six different bugs causing my “Clip of the Day gets stuck” issue. When my devs fixed the last of them, I mentioned this and my story to my GPM (boss’ boss).
“Does this mean you’re a retention risk?” Tony asked.
“Maybe after we fix the rest of these…” I retorted, pointing at the pile of paper with almost a hundred red circles.
No one in the world loved IE as much as I did, warts and all. Investigating, documenting, and fixing problems in Internet Explorer was a nearly all-consuming passion throughout my twenties. Internet Explorer pioneered a broad range of (mostly overlooked) innovations, and in rediscovering them, I felt like one of the characters on Lost — a castaway in a codebase whose brilliant designers were long gone. IE9 was a fantastic, best-of-its-time browser, and I’ll forever be proud of it. But as IE9 wound down and the Windows 8 adventure began, it was already clear that its lead would not last against the Chrome juggernaut.
I shipped IE7, IE8, IE9, and IE10, leaving Microsoft in late 2012, shortly after IE10 was finished, to build Fiddler for Telerik.
In 2015, I changed my default browser to Chrome. In 2016, I joined the Chrome Security team. I left Google in the summer of 2018 and rejoined the Microsoft Edge team, and that summer and fall I spent 50% of my time rediscovering bugs that I’d first found in IE and blogged about a decade before.
Fortunately, Edge’s faster development pace meant that we actually got to fix some of the bugs this time, but Chrome’s advantages in nearly every dimension left Edge very much an underdog. Happily, the other half of my time was spent working on our (then) secret project to replatform the next version of Edge atop the open-source Chromium project.
We’ve now shipped our best browser ever — the Chromium-based Microsoft Edge. I hope you’ll try it out.
It’s with love that I beg you… please let Internet Explorer retire to the great bitbucket in the sky. It’s time. It’s been time for a long time.
Burndown List
Last night, as I read the details of yet another 0-day security bug in Internet Explorer, I posted a throwaway tweet begging folks to stop using IE, which netted a surprising number of interactions.
I expected the usual slew of “Yeah, IE is terrible,” and “IE was always terrible,” and “Somebody tell my {boss,school,parents}” responses, but I didn’t really expect serious replies. I got some, however, and they’re interesting.
Shared Credentials
In order to map a SharePoint 365 shared library to Explorer, I MUST login with my AD credentials to IE first, typically daily or every reboot. As far as I’m aware, Edge can’t replace that necessity and Explorer can’t authenticate on its own still. Today, in 2020.
Internet Explorer shares a common networking stack (WinINET) and Cookie Jar (for Intranet/Trusted sites) with many native code applications on Windows, including Windows Explorer. Tim identifies a scenario where Windows Explorer relies on an auth cookie being found in the WinINET cookie jar, put there by Internet Explorer. We’ve seen similar scenarios in some Microsoft Office flows.
Depending on a cookie set by Internet Explorer might’ve been somewhat reasonable in 2003, but Vista/IE7’s introduction of Protected Mode (and cookie jar partitioning) in 2006 made this a fragile architecture. The fact that anything depends upon it in 2020 is appalling.
Thoughts: I need to bang on some doors. This is depressing.
Certificate Issuance
From Sectigo’s website:
Note: Please use Microsoft Internet Explorer 11 or Mozilla Firefox to collect your certificate. Code Signing certificates cannot be generated using Apple Safari, Google Chrome, or Microsoft Edge.
Developers who apply digital signatures to their apps and server operators who expose their sites over HTTPS do so using a digital certificate. In ideal cases, getting a certificate is automatic and doesn’t involve a browser at all, but some Certificate Authorities require browser-based flows. Those flows often demand that the user use either Internet Explorer or Firefox because the former supports ActiveX Controls for certificate issuance, while Firefox, until recently, supported the Keygen element.
WebCrypto, now supported in all modern browsers, serves as a modern replacement for these deprecated approaches, and some certificate issuers are starting to build issuance flows atop it.
Thoughts: We all need to send some angry emails. Companies in the Trust space should not be built atop insecure technologies.
Banking, especially in Asia
IE is required for many government and banking websites in Asia.
A fascinating set of circumstances led to Internet Explorer’s dominance in Asian markets. First, early browsers had poor support for Unicode and East Asian character sets, forcing website developers to build their own text rendering atop native code plugins (ActiveX). South Korea mandated use of a locally-developed cipher (SEED) for banking transactions[1], and this cipher was not implemented by browser developers… ActiveX again to the rescue. Finally, since all users were using IE, and were accustomed to installing ActiveX controls, malware started running rampant, so banks and other financial institutions started bundling “security solutions” (aka rootkits) into their ActiveX controls. Every user’s browser was a battlefield with warring native code trying to get the upper hand. A series of beleaguered Microsoft engineers (including Ed Praitis, who helped inspire me to make my first significant code commits to the browser) spent long weeks trying to keep all of this mess working as we rearchitected the browser, built Protected Mode and later Enhanced Protected Mode, and otherwise modernized a codebase nearing its second decade.
Thoughts: IE marketshare in Asia may be higher than other places, but it can’t be nearly as high as it once was. Haven’t these sites all pivoted to mobile apps yet?
Reader Survey: Do you have any especially interesting scenarios where you’re forced to use Internet Explorer? Sound off in the comments below!
Q&A
Q: I get that IE is terrible, but I’m an enterprise admin and I own 400 lousy websites written by a vendor in a hurry back in 2004. These sites will not be updated, and my employees need to keep using them. What can I do?
A: The new Chromium-based Edge has an IE Mode; you can configure your users so that Edge will use an Internet Explorer tab when loading those sites, directly within Edge itself. Consumers with unmanaged PCs can enable IEMode while users on Enterprise-managed PCs must have an IE Mode site list configured (or set two debug policies).
Q: Isn’t it still risky to keep using the old IE engine, even inside Edge?
A: Any use of an ancient web engine poses some risk, but IE Mode dramatically reduces the risk by ensuring that only sites selected by the IT Administrator load in IE Mode. Everything else seamlessly transitions back to the modern, performant, and secure Chromium Edge engine.
Q: How do I debug? The Chromium F12 Developer Tools don’t work for IE Mode tabs?
A: Yes. You can either debug in full IE, or run C:\windows\system32\f12\IEChooser.exe, select the tab running in IEMode, and get an IE F12 Dev Tools window for that tab. This tool also works for WebOCs!
Q: What about native applications that host Web Browser Controls (WebOCs)?
A: In many cases, WebOCs inside a native application are used to render trusted content delivered from the application itself, or from a server controlled by the application’s vendor. In such cases, and presuming that all content is loaded over HTTPS, the security risk of the use of a WebOC is significantly lower. Rendering untrusted HTML in a WebOC is strongly discouraged, as WebOCs are even less secure than Internet Explorer itself. For compatibility reasons, numerous security features are disabled-by-default in WebOCs, and the WebOC does not run content in any type of process sandbox.
Looking forward, the new Chromium-based WebView2 control should be preferred over WebOCs for scenarios that require the rendering of HTML content within an application.
Q: Does this post mean anything has changed with regard to Internet Explorer’s support lifecycle, etc?
A: No. Internet Explorer will remain a supported product until its support lifecycle runs out. I’m simply begging you to not use it except to download a better browser.
Footnotes
[1] The SEED cipher wasn’t just a case of the South Korean government suffering from not-invented-here, but instead a response to the fact that the US Government at the time forbade export of strong crypto.
Problems in accessing websites can often be found and fixed if the network traffic between the browser and the website is captured as the problem occurs and the resulting log file is shared with engineers.
This short post explains how to capture such log files.
Capturing Network Traffic Logs
If someone asked you to read this post, chances are good that you were asked to capture a web traffic log to track down a bug in a website or your web browser.
Fortunately, in Google Chrome or the new Microsoft Edge (version 76+), capturing traffic is simple:
Optional but helpful: Close all browser tabs but one.
Navigate the tab to about://net-export
In the UI that appears, press the Start Logging to Disk button.
Choose a filename to save the traffic to. Tip: Pick a location you can easily find later, like your Desktop.
Reproduce the networking problem in a new tab. If you close or navigate the //net-export tab, the logging will stop automatically.
After reproducing the problem, press the Stop Logging button.
Share the Net-Export-Log.json file with whoever will be looking at it. Optional: If the resulting file is very large, you can compress it to a ZIP file.
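If you’re the engineer receiving such a capture, the file is ordinary JSON with top-level constants and events keys; a quick triage sketch:

```python
import json
from collections import Counter

def summarize_netlog(capture_text):
    """Quick triage of a net-export capture: tally events by type id.

    Assumes Stop Logging was pressed, so the JSON is complete. (Numeric
    event type ids map to names via the "constants" section.)
    """
    log = json.loads(capture_text)
    counts = Counter(event.get("type") for event in log.get("events", []))
    return counts.most_common(10)
```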
Network Capture UI
Privacy-Impacting Options
In some cases, especially when you’re dealing with a problem logging into a website, you may need to set either the Include cookies and credentials or the Include raw bytes option before you click the Start Logging button.
Note that there are important security and privacy implications to selecting these options: if you do so, your capture file will almost certainly contain private data that would allow a bad actor to steal your accounts or perform other malicious actions. Share the capture only with a person you trust, and do not post it on the Internet in a public forum.
Tutorial Video
If you’re more of a visual learner, here’s a short video demonstrating the traffic capture process.
Alternatives
If you use Edge’s “Recreate my problem” button on the Feedback Wizard, the feedback tool will capture and include a network trace (in “include cookies and credentials” mode) as a part of your feedback report.
This method is the easiest way to get a network trace to Microsoft: the JSON is transmitted and stored securely without you having to find a way to encrypt and transfer the data. However, this method is inflexible: it does not allow you to send your traffic log to a friend or attach it to a bug in the Chromium bug database, and it does not expose the option to “include raw bytes.”
If you start the browser with the --log-net-log command-line argument, you can control the sensitivity of the capture with a second argument: use --net-log-capture-mode=IncludeSensitive to capture unsanitized cookies and authentication headers (but not the response bodies), or omit the capture-mode argument entirely to get the default “Strip Private Information” mode of capture.
Appendix A.1: Capturing Electron and WebView2
Note: The command-line argument approach also works for Electron applications like Microsoft Teams.
Note: This will only capture the network traffic from the Chromium layer of Electron apps (e.g. web requests from the nodeJS side will not be captured) but it still may be very useful.
WebView2-based applications can either pass the --log-net-log command line into the WebView2 to initiate the capture, or they can add a second WebView control to their application (in the same context) and navigate it to about://net-export to allow the debugging user to manually trigger logging. It also seems that defining the WEBVIEW2_ADDITIONAL_BROWSER_ARGUMENTS environment variable ought to work (and that approach doesn't require changing the WebView2 application).
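For example, a debugging session might set the variable before launching the app. This is a sketch under the assumption stated above (that the WebView2 loader honors WEBVIEW2_ADDITIONAL_BROWSER_ARGUMENTS); the log path is an arbitrary example:

```python
import os
import tempfile

# Sketch: arrange a NetLog capture for a WebView2 app without modifying it.
# The variable must be set before the WebView2-based application starts,
# e.g. by launching the app via subprocess from this same environment.
log_path = os.path.join(tempfile.gettempdir(), "webview2-netlog.json")
os.environ["WEBVIEW2_ADDITIONAL_BROWSER_ARGUMENTS"] = f"--log-net-log={log_path}"
```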
Appendix B: Mobile Browsers
Appendix B.1: Android
The Net-Export feature works great on Android, but take care to ensure that you switch tabs to perform your repro after starting your capture: tab switching is less obvious on Android.
On mobile, when the capture completes, the browser offers to send the resulting file via email (because the mobile file system is not easily accessible).
On modern versions of Android, the “Email Log” button will also allow you to upload the file to your Google Drive. Interestingly, if you subsequently download the file from your Drive, it will get a .eml file extension, but if you look at the file, it’s still the plain .json content; you can either fix the file extension, or you can just drag/drop the file into the NetLog viewer, which doesn’t care about the file extension.
Appendix B.2: iOS
Unfortunately, on iOS, the Network Export feature is somewhat unlikely to contain the data you need, because the capture contains only the data sent by Chromium's network stack, not the web content traffic (HTML, JS, CSS, images, etc.) used inside the WKWebView control (embedded Safari). To capture data from the entire browser on iOS, you'll need to use another approach, e.g. Telerik Fiddler.
Appendix C: Limitations
No HTTP POST Data
One important shortcoming of the current NetLog file format is that it does not contain any request body data, even if you select the “Include Raw Bytes” option. If you need the request body data, you may need to collect an HTTP Archive (HAR) file instead. To capture a HAR file using the browser's Developer Tools:
Hit F12 to open the Developer Tools.
Activate the Network tab.
Ensure the recording button at the top of the tab is red.
Tick the Preserve log checkbox.
Reproduce the problem.
Right-click the entries in the grid and choose Save all as HAR with content.
Share the HAR file only with a person you trust and do not post it on the Internet in a public forum.
Alternatively, you might just capture the traffic using Fiddler.
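Because a HAR file is itself plain JSON (the HAR 1.2 layout stores bodies at log.entries[].request.postData.text), it's easy to confirm that the body data you need actually landed in the capture. A minimal sketch:

```python
import json

def post_bodies(har_text):
    """Extract request bodies from a HAR capture.

    Follows the HAR 1.2 layout: log.entries[].request.postData.text.
    Entries without a body (e.g. GETs) are skipped.
    """
    har = json.loads(har_text)
    out = []
    for entry in har["log"]["entries"]:
        req = entry["request"]
        text = req.get("postData", {}).get("text")
        if text is not None:
            out.append((req["method"], req["url"], text))
    return out
```

Remember that, as with NetLogs, these bodies may contain credentials or other private data, so the same sharing precautions apply.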
Can’t Capture Requests that Don’t Reach the Network Stack
Perhaps surprisingly, the browser’s ServiceWorker feature lives above the network stack, so if the browser issues requests that are satisfied locally by the ServiceWorker, that “traffic” is not seen in the NetLog. (fetch requests that are sent from the ServiceWorker to the Network will appear in the log, however.) To see requests that are satisfied by the ServiceWorker, use the F12 Developer Tools.
Similar to the ServiceWorker case, the Blink engine has a memory cache for content that can be reused within a page. Certain requests (e.g. if there are ten image tags all pointed at the same URL) may be satisfied by this cache without sending the request down to the network stack.
Can’t Capture IE Mode
Pages running in Edge’s IE Mode tabs are loaded using URLMon and WinINET, the Windows Network Stacks used by Internet Explorer. Because this traffic does not go through the Chromium Network Stack, it is not recorded in NetLogs.
To work around this problem, you’re probably best off just capturing the traffic using Fiddler.
UPDATE: Timelines in this post were updated in March 2020, October 2020, April 2021, and October 2021 to reflect the best available information.
HTTPS traffic is encrypted and protected from snooping and modification by an underlying protocol called Transport Layer Security (TLS). Disabling outdated versions of the TLS security protocol will help move the web forward toward a more secure future. All major browsers (including Firefox, Chrome, Safari, Internet Explorer and Edge Legacy) have publicly committed to require TLS version 1.2 or later by default starting in 2020.
Starting in Edge 84, reaching stable in July 2020, the legacy TLS/1.0 and TLS/1.1 protocols will be disabled by default. These older protocol versions are less secure than the TLS/1.2 and TLS/1.3 protocols that are now widely supported by websites:
To help users and IT administrators discover sites that still only support legacy TLS versions, the edge://flags/#show-legacy-tls-warnings flag was introduced in Edge Canary version 81.0.392. Simply set the flag to Enabled and restart the browser for the change to take effect:
Subsequently, if you visit a site that requires TLS/1.0 or TLS/1.1, the lock icon will be replaced with a “Not Secure” warning in the address box, alongside the warning in the F12 Developer Tools Console:
As shown earlier in this post, almost all sites are already able to negotiate TLS/1.2. For those that aren't, enabling it is typically a simple configuration option in either the server's registry or its web server configuration file. (Note that you can leave TLS/1.0 and TLS/1.1 enabled on the server if you like, as browsers will negotiate the latest common protocol version.)
In some cases, server software may have no support for TLS/1.2 and will need to be updated to a version with such support. However, we expect that these cases will be rare—the TLS/1.2 protocol is now over 11 years old.
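Non-browser clients can adopt the same posture as the browsers. As an illustration of the policy (using Python's ssl module purely as an example, nothing Edge-specific), a client can refuse legacy protocol versions outright, so connections to TLS/1.0- or TLS/1.1-only servers fail during the handshake instead of silently negotiating a legacy protocol:

```python
import ssl

# Sketch: require TLS 1.2 or later, mirroring the browsers' 2020 default.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any socket wrapped with this context (context.wrap_socket(...)) will now
# abort the handshake if the server offers only TLS/1.0 or TLS/1.1.
```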
Obsolete TLS Blocks Subdownloads
Often a website pulls in some page content (like script or images) from another server, which might be running a different TLS version. In cases where that content server does not support TLS/1.2 or later, the content will simply be missing from the parent page.
You can identify cases like this by watching for the message net::ERR_SSL_OBSOLETE_VERSION in the Developer Tools console:
Unfortunately, a shortcoming in this console notification means that it does not appear for blocked subframes; you’ll need to look in the Network Tab or a NetLog trace for such failures.
Group Policy Details
Organizations with internal sites that are not yet prepared for this change can configure group policies to re-enable the legacy TLS protocols.
For the new Edge, use the SSLVersionMin Group Policy. This policy will remain available until the TLS/1.0 and TLS/1.1 protocols are removed from Chromium in May 2021. Stated another way: starting with Edge 91 in May 2021, the new Edge will always show an error page for TLS/1.0 and TLS/1.1, regardless of policy.
For IE11 and Edge Legacy, the policy in question is the (dubiously-named) “Turn off encryption support” found inside Windows Components/Internet Explorer/Internet Control Panel/Advanced Page. Edge Legacy and IE will likely continue to support enabling these protocols via GP until they are broken from a security POV; this isn’t expected to happen for a few years.
At that time, TLS/1.0 and TLS/1.1 will no longer be usable at all and the code implementing those protocols can be deleted from the codebase.
IE Mode Details
These older protocols will not be disabled in Internet Explorer and Edge Legacy until Spring 2022.
The New Edge has the ability to load administrator-configured sites in Internet Explorer Mode. IEMode tabs depend on the IE TLS settings, so if you need an IEMode site to load a TLS/1.0 website after Spring of 2022, you’ll need to enable TLS/1.0 using the “Turn off encryption support” group policy found inside Windows Components/Internet Explorer/Internet Control Panel/Advanced Page.
If you need to support a TLS/1.0 site in both Edge and IE Modes (e.g. the site is configured as “Neutral”), then you will need to set both policies (SSLVersionMin and “Turn off Encryption Support”).
If you’ve built an application using the old Web Browser Control (mshtml, aka Internet Explorer), you might notice that by default it does not support HTTP/2. For instance, a trivial WebOC host loading Akamai’s HTTP2 test page:
When your program is running on any build of Windows 10, you can set a Feature Control Key with your process’ name to opt-in to using HTTP/2.
For applications running at the OS-native bitness, write the key here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_WEBOC_ENABLE_HTTP2
For 32-bit applications running on 64-bit Windows, write it here: HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_WEBOC_ENABLE_HTTP2
After you make this change and restart the application, it will use HTTP/2 if the server and network path support it.
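To make the two registry locations concrete, here's a small helper sketching the path logic. Per the usual Feature Control pattern, the value under the key is a DWORD named for your executable, set to 1; actually writing it requires the winreg module on Windows with appropriate rights, so this sketch only computes the target:

```python
def feature_control_key(process_name, is_32bit_on_64bit_windows):
    """Return (key_path, value_name, value_data) for opting a process
    into HTTP/2 via FEATURE_WEBOC_ENABLE_HTTP2.

    32-bit processes on 64-bit Windows read the WOW6432Node-redirected
    view of the registry, hence the alternate base path.
    """
    base = r"HKEY_LOCAL_MACHINE\SOFTWARE"
    if is_32bit_on_64bit_windows:
        base += r"\WOW6432Node"
    key_path = (base + r"\Microsoft\Internet Explorer\Main"
                       r"\FeatureControl\FEATURE_WEBOC_ENABLE_HTTP2")
    return key_path, process_name, 1  # DWORD named for the exe, set to 1
```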
Update: Windows' Internet Control Panel has an “HTTP2” checkbox, but it only controls web platform apps (IE and Legacy Edge), and unfortunately, the setting does not work properly for AppContainer/LowIL processes, which enable HTTP2 by default. This means that the checkbox, as of Windows 10 version 1909, is pretty much useless for its intended purpose (as only Intranet Zone sites outside of Protected Mode run at MediumIL).
A bug has been filed.
Update: Users of the new Chromium-based Edge browser can launch an instance with HTTP2 disabled using the --disable-http2 command-line argument, e.g. msedge.exe --disable-http2.
I’m not aware of a straightforward way to disable HTTP2 for the new Chromium-Edge-based WebView2 control, which has HTTP2 enabled by default.
While there are many different ways for servers to stream data to clients, the Server-sent Events / EventSource Interface is one of the simplest. Your code simply creates an EventSource and then subscribes to its onmessage callback:
Implementing the server side is almost as simple: your handler just prefaces each piece of data it wants to send to the client with the string data: and ends it with a double line-ending (\n\n). Easy peasy. You can see the API in action in this simple demo.
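For instance, a server-side helper for that framing might look like the following (a sketch in Python, independent of any particular web framework; the optional event field is part of the text/event-stream format):

```python
def sse_frame(data, event=None):
    """Format one Server-Sent Events message.

    Each line of the payload is prefixed with "data: "; a blank line
    (the double line-ending) terminates the message, which is exactly
    what the client's EventSource.onmessage callback expects.
    """
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    lines.extend(f"data: {line}" for line in data.splitlines())
    return "\n".join(lines) + "\n\n"
```

A handler would write sse_frame(...) to a response with Content-Type text/event-stream and keep the connection open.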
I’ve long been sad that we didn’t manage to get this API into Internet Explorer or the Legacy Edge browser. While many polyfills for the API exist, I was happy that we finally have EventSource in the new Edge.
Yay! \o/
Alas, I wouldn’t be writing this post if I hadn’t learned something new yesterday.
Last week, a customer reached out to complain that the new Edge and Chrome didn't work well with their webmail application. After they used the webmail site for some indeterminate amount of time, they noticed that its performance slowed to a crawl: switching between messages would take tens of seconds or longer, and the problem reproduced regardless of the speed of the network. The only way to reliably resolve the problem was to either close the tabs they'd opened from the main app (e.g. the individual email messages could be opened in their own tabs) or to restart the browser entirely.
As the networking PM, I was called in to figure out what was going wrong over a video conference. I instructed the user to open the F12 Developer Tools and we looked at the network console together. Each time the user clicked on a message, new requests were created and sat in the (pending) state for a long time, meaning that the requests were getting queued and weren't even going to the network promptly.
But why? Diagnosing this remotely wasn’t going to be trivial, so I had the user generate a Network Export log that I could examine later.
Examining the log using the online viewer made the problem immediately clear. On the Sockets tab, the webmail's server showed 19 requests in the Pending state and 6 Active connections to the server, none of which were idle. The fact that there were six connections strongly suggested that the server was using HTTP/1.1 rather than HTTP/2, and a quick look at the HTTP/2 tab confirmed it. Looking at the Events tab, we see five outstanding URLRequests to a URL that strongly suggests it's being used as an EventSource:
Each of these sockets is in the READING_RESPONSE state, and each has returned just ten bytes of body data to its EventSource. The web application uses one EventSource per instance of the app, and the user has five tabs open to the app.
And now everything falls into place. Nearly all browsers (including Chromium-derivatives) limit themselves to 6 concurrent connections per server (this will get a bit more complicated in the future). When the server supports HTTP/2, browsers typically need just one connection because HTTP/2 supports multiplexing many (Chromium default 100, server configurable to up to 256) concurrent streams onto a single connection. HTTP/1.1 doesn’t afford that luxury, so every long-lived connection used by a page decrements the available connections by one. So, for this user, all of their network traffic was going down a single HTTP/1.1 connection, and because HTTP/1.1 doesn’t allow multiplexing, it means that every action in the UI was blocked on a very narrow head-of-line-blocking pipe.
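The arithmetic here is easy to model. A toy sketch of the per-host connection budget (illustrative only, not Chromium's actual request scheduler):

```python
def pending_requests(limit, long_lived_streams, new_requests):
    """Model a per-host connection budget.

    On an HTTP/1.1 server with a browser limit of six connections per
    host, each long-lived stream (e.g. an EventSource) permanently
    occupies one connection; all remaining traffic must share the rest.
    Returns how many of the new requests are left queued.
    """
    free = max(0, limit - long_lived_streams)
    return max(0, new_requests - free)
```

With the six-connection HTTP/1.1 limit and five tabs each holding an EventSource open, nineteen of twenty new requests must queue behind a single connection, whereas an HTTP/2 connection multiplexing up to 100 streams leaves nothing pending.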
Looking in the Chrome bug tracker, we find this core problem (“SSE connections can starve other requests”) resolved “By Design” six years ago.
Now, I’m always skeptical when reading old bugs, because many issues are fixed over time, and it’s often the case that an old resolution is no longer accurate in the modern world. So I built a simple repro script for Meddler. The script returns one of four responses:
An HTML page that consumes an EventSource
An HTML page containing 15 frames pointed at the previous HTML page
An event source endpoint (text/event-stream)
A JPEG file (to test whether connection limits apply across both EventSources and other downloads)
And sure enough, when we load the page we see that only six frames are getting events from the EventSource, and the images that are supposed to load at the bottom of the frames never load at all:
Similarly, if we attempt to load the page in another tab, we find that it doesn’t even load, with a status message of “Waiting for available socket…”
The web app owners should definitely enable HTTP/2 on their server, which will make this problem disappear for almost all of their users.
However, even HTTP/2 is not a panacea, because the user might be behind a “break-and-inspect” proxy that downgrades connections to HTTP/1.1, or the browser might conceivably limit parallel requests on HTTP/2 connections for slow networks. As noted in the By Design issue, a server depending on EventSource in multiple tabs might use a BroadcastChannel or a SharedWorker to share a single EventSource connection with all of the tabs of the web application.
Alternatively, swapping an EventSource architecture with one based on WebSocket (even one that exposes itself as an EventSource polyfill) will also likely resolve the problem. That's because, even if the client or server doesn't support routing WebSockets over HTTP/2, the WebSockets-Per-Host limit is 255 in Chromium and 200 in Firefox.
I’ve previously written about Web-to-App communication via Application Protocols. App Protocols allow web content to invoke a native application outside of the browser.
WebApp advocates (like me!) want to continue to close the native/browser gaps that prevent web applications from becoming full-fledged replacements for native apps. To that end, I’ve recently spent some time looking at how the web platform allows JavaScript registration of a protocol handler, where the handling “app” is a same-origin web page.
Currently supported by Firefox and Chromium-based browsers (on platforms other than Android), the function navigator.registerProtocolHandler(scheme, url_template, description) enables a website to become a handler for a URL scheme.
Real-World Usage
The canonical use-case for this is web based email clients like Gmail. Gmail would like to be able to be the user’s handler for mailto links. When the user clicks a mailto link, the content of the link should be sent to a handler page on mail.google.com which responds accordingly (e.g. by creating a new email to the specified addressee).
The registerProtocolHandler API isn't limited to the mailto scheme, however. It presently supports a short list of allowed schemes, plus any scheme named web+{one-or-more-lowercase-ASCII-letters}.
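A sketch of those scheme rules follows; the safelist shown here is deliberately partial (the authoritative list lives in the HTML Standard's registerProtocolHandler() section):

```python
import re

# A handful of the safelisted schemes, for illustration only; consult the
# HTML Standard for the complete, current list.
SAFELISTED = {"mailto", "irc", "magnet", "sms", "webcal", "bitcoin"}

def is_registerable_scheme(scheme):
    """Approximate the scheme rules for navigator.registerProtocolHandler():
    either a scheme on the safelist, or "web+" followed by one or more
    lowercase ASCII letters."""
    if scheme in SAFELISTED:
        return True
    return re.fullmatch(r"web\+[a-z]+", scheme) is not None
```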
User Experience
I’ve built a page containing a number of test cases here. When you push the button to register a protocol handler, you receive a permission prompt from Chrome/Edge or Firefox:
To avoid annoying users, if the user declines Chrome’s prompt, the site is blocked from re-requesting permission to handle the protocol. A user must manually visit the settings page to unblock permission.
User-Activation Requirements
If a page attempts to call registerProtocolHandler() on load or before the user has interacted with the page (a so-called “gesture”), then Chromium-based browsers will not pop the permission prompt. Instead, an overlapping-diamonds icon is shown at the right-hand side of the address bar, with the text “This page wants to install a service handler.” Here’s what this looks like on Gmail:
Settings
Within Chrome, you can view your currently registered handlers (and sites blocked from asking to become the registered handler) by visiting chrome://settings/content/handlers.
Operating System Integration
One particularly interesting aspect of allowing web-based registration of protocol handlers is that it is desirable for the rest of the operating system outside of the browser to respect those protocol handler associations.
For example, clicking a mailto link in some other application should launch the browser to the web-based handler if registered. However, having the browser change system state in this manner is complicated, especially on modern versions of Windows whereby various protections have been put in place to try to prevent “hijacking” of file type and protocol handler associations.
Edge and Chrome will only attempt to become the systemwide handler for a protocol when running a channel that offers to become the default browser (e.g. Stable). On such a channel, if the browser wasn’t already the handler for the protocol, after the user clicks “Allow” on the Chrome permission bubble, a Windows UAC dialog is shown:
If the user accepts by clicking “Yes”, the RegisterChromeForProtocol function silently updates the registry:
Chrome, Edge, and Firefox disallow registration of protocol handlers in Private/Incognito/InPrivate modes; the call fails silently.
With my patch landed, Chrome, Edge, and Firefox disallow registration of protocol handlers from non-secure contexts (e.g. HTTP). Due to the same-origin requirement for the handler URL, this effectively prevents the use of a non-secure page as a handler.
Chromium-based browsers enable IT admins to set default scheme-to-web mappings using Group Policy.
Chromium-based browsers do not (as of v93) offer a Group Policy to disallow protocol handler registration; an end-user setting inside about://settings does allow an individual user to turn off the prompts.
Firefox does not support targeting a RPH registered protocol as the target of a form POST request; it silently drops the POST body.
Firefox does not implement the unregisterProtocolHandler API. Users must manually unregister protocol handlers using the browser UI.
On Windows at least, neither Firefox Stable nor Firefox Nightly seems to try to become the systemwide handler for a scheme.
If you have a custom scheme link in a subframe, you probably want to add a target=_blank attribute on it. Otherwise, the web app you’ve configured as the protocol handler might load within that subframe and get blocked due to privacy settings or X-Frame-Options directives.
Updated November 30, 2020 with new information about DoH in Edge, ECH, and HTTPSSVC records, and January 25, 2021 with a few remarks about Edge’s implementation.
Before connecting to the example.com server, your browser must convert “example.com” to the network address at which that server is located.
It does this lookup using a protocol called “DNS.” Today, most DNS transactions are conducted in plaintext (not encrypted) by sending UDP messages to the DNS resolver your computer is configured to use.
There are a number of problems with the 36-year-old DNS protocol, but a key one is that the unencrypted use of UDP traffic means that network intermediaries can see (and potentially modify) your lookups, such that attackers can know where you’re browsing, and potentially even direct your traffic to some other server.
The DNS-over-HTTPS (DoH) protocol attempts to address some of these problems by sending DNS traffic over a HTTPS connection to the DNS resolver. The encryption (TLS/QUIC) of the connection helps prevent network intermediaries from knowing what addresses your browser is looking up– your queries are private between your PC and the DNS resolver that is providing the answers. The expressiveness of HTTP (with request and response headers) provides interesting options for future extensibility, and the modern HTTP2 and HTTP3 protocols aim to provide high-performance and parallel transactions with a single connection.
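To make the mechanics concrete: a DoH client simply serializes an ordinary DNS query and sends it as the body of an HTTPS request with Content-Type application/dns-message (per RFC 8484). A sketch of the serialization step (plain RFC 1035 wire format; the HTTPS transport is left out):

```python
import struct

def build_dns_query(hostname, qtype=1, query_id=0):
    """Build a minimal DNS query in wire format (RFC 1035).

    qtype=1 asks for an A record. Over DoH, the query ID is
    conventionally 0 to improve HTTP-level cacheability.
    """
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    question = b""
    for label in hostname.rstrip(".").split("."):
        encoded = label.encode("ascii")
        question += bytes([len(encoded)]) + encoded
    question += b"\x00" + struct.pack("!HH", qtype, 1)  # root, QTYPE, QCLASS=IN
    return header + question
```

The resulting bytes would be POSTed to the resolver's /dns-query endpoint over the encrypted connection, which is what keeps the lookup private from on-path observers.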
An additional benefit of securing DNS is that it enables the safe introduction of new security-related features to the DNS. That capability can help secure all forms of traffic, not just the target IP addresses. For instance, the proposed HTTPSSVC record will allow clients to securely receive useful information which can improve the security and performance of the HTTP2/HTTP3 connections used to download web pages.
Try It
Support for DNS-over-HTTPS is coming to many browsers and operating systems (including a future version of Windows). You can even try DoH out in the newest version of Microsoft Edge (v79+) by starting the browser with a special command line flag. The following command line will start the browser and instruct it to perform DNS lookups using the Cloudflare DoH server:
You can test whether the feature is working as expected by visiting https://1.1.1.1/help. Unfortunately, this command-line flag presently only works on unmanaged PCs, meaning it doesn't do anything on PCs that are joined to a Windows domain.
Microsoft Edge Fall 2020 Update
Starting in Edge 86, DNS-over-HTTPS configuration is now available inside edge://settings/privacy. Users may either leave the default at Use current service provider (in which case DoH will only be used if the user’s OS DNS provider is known to support DoH) or they may explicitly configure a DoH provider. Explicit configuration allows selection from a small set of popular DoH providers, or a fully-qualified HTTPS URL to the DoH provider may be entered in the text box.
Notably, for Enterprise/Managed PCs, the Edge Settings UI is disabled: only administrators may configure the settings, using two related Group Policies.
A few remarks about Edge’s implementation:
If your OS-configured DNS provider is on a list known to support DoH, we try to start speaking DoH. If it fails, we fall back to non-secure DNS.
If you manually pick a DoH provider from the Settings Page’s list, we use that. There are only a few, and there’s no difference based on locale. If it fails, we do NOT fall back.
If you manually enter a DoH URL in the Settings page, you can use whatever provider you want. If it fails, we do NOT fall back.
If you're an Enterprise user, there's no end-user UI, and everything (the URL, whether fallback is allowed) is configured exclusively by two related Group Policies.
Unlike Firefox, we do not have any code to try to advertise/encourage the user to pick a DoH provider.
Long term, everyone is working on approaches whereby a DNS provider can “advertise” support for DoH rather than being added to a list in Chromium, but we’re probably many months away from anything happening there.
Long-time readers of this blog know that I want to “HTTPS ALL THE THINGS” and DNS is no exception. Unfortunately, as with most protocol transitions, this turns out to be very very complicated.
SNI
The privacy benefits of DNS-over-HTTPS are predicated on the idea that a network observer, blinded from your DNS lookups by encryption, will not be able to see where you’re browsing.
Unfortunately, network observers, by definition, can observe your traffic, even if the traffic is encrypted.
The network observer will still see the IP addresses you’re connecting to, and that’s often sufficient to know what sites you’re browsing.
If your Internet Service Provider (say, for example, Comcast) is configured to offer DNS-over-HTTPS, and your browser uses their resolver, your network lookups are protected from observers on the local network, but not from the Comcast resolver.
Because the data handling practices of resolvers are often opaque, and because there are business incentives for resolvers to make use of lookup data (for advertising targeting or analytics revenue), it could be the case that the very actor you are trying to hide your traffic from (e.g. your ISP) is exactly the one holding the encryption key you’re using to encrypt the lookup traffic.
To address this, some users choose to send their traffic not to the default resolver their device is configured to use (typically provided by the ISP) but instead send the lookups to a “Public Resolver” provided by a third-party with a stronger privacy promise.
However, this introduces its own complexities.
Public Resolvers Don’t Know Private Addresses
A key problem in the deployment of DNS-over-HTTPS is that public resolvers (Google Public DNS, Cloudflare, Open DNS, etc) cannot know the addresses of servers that are within an intranet. If your browser attempts to look up a hostname on your intranet (say MySecretServer.intranet.MyCo.com) using the public resolver, the public resolver not only gets information about your internal network (e.g. now Google knows that you have a server called MySecretServer.intranet) but it also returns “Sorry, never heard of it.” At this point, your browser has to decide what to do next. It might fail entirely (“Sorry, site not found”) or it might “Fail open” and perform a plain UDP lookup using the system-configured resolver provided by e.g. your corporate network administrator.
This fallback means that a network attacker might simply block your DoH traffic such that you perform all of your queries in unprotected fashion. Not great.
Even alerting the user to such a problem is tricky: What could the browser even say that a human might understand? “Nerdy McNerdy Nerd Nerd Nerd Nerd Nerd Address Nerd Resolution Nerd Geek. Privacy. Network. Nerdery. Geekery. Continue?”
Centralization Isn’t Great
Centralizing DNS resolutions to the (relatively small) set of public DNS providers is contentious, at best. Some European jurisdictions are uncomfortable about the idea that their citizens’ DNS lookups might be sent to an American tech giant.
Some privacy-focused users are primarily worried about the internet giants (e.g. Google, Cloudflare) and are very nervous that the rise of DoH will result in browsers sending traffic to these resolvers by default. Google has said they won’t do that in Chrome, while Firefox is experimenting with using Cloudflare by default in some locales.
Content Filtering
Historically, DNS resolutions were a convenient choke point for schools, corporations, and parents to implement content filtering policies. By interfering with DNS lookups for sites that network users are forbidden to visit (e.g. adult content, sites that put the user's security at risk, or sites that might result in legal liability for the organization), these organizations could easily prevent non-savvy users from connecting to unwanted sites. Using DoH to a public DNS provider bypasses these content filters, leaving the organization with unappealing choices: use lower-granularity network interception (e.g. blocking by IP address), install content filters on the user's devices directly, or attempt to block DoH resolvers entirely, forcing the user's devices to fall back to the filtered resolver.
Geo CDNs and Other Tricks
In the past, DNS was one mechanism that a geographically distributed CDN could use to load-balance its traffic such that users get the “best” answers for their current locale. For instance, if the resolver was answering a query from a user in Australia, it might return a different server address than when resolving a query from a user in Florida.
These schemes and others get more complicated when the user isn’t using a local DNS resolver and is instead using a central public resolver, possibly provided by a competitor to the sites that the user is trying to visit.
Don’t Despair
Despite these challenges and others, DNS-over-HTTPS represents an improvement over the status quo, and as browser and OS engineering teams and standards bodies invest in addressing these problems, we can expect that deployment and use of DoH will grow more common in the coming years.
DoH will eventually be a part of a more private and secure web.
Support for the venerable FTP protocol is being removed from Chromium. Standardized in 1971, FTP is not a safe protocol for the modern internet. Its primary defect is lack of support for encryption (FTPS isn’t supported by any popular browsers), although poor support for authentication and other important features (download resumption, proxying) also have hampered the protocol over the years.
After FTP support is removed, clicking an FTP link will either launch the operating system's registered FTP handler (if any) or will silently fail to do anything (as Chrome fails silently when an application protocol handler is not installed).
If your scenario depends on FTP today, please switch over to HTTPS as soon as possible.