Yesterday, I ran my first 10K in “the real world”, my first real world run in a long time. Almost eleven years ago I ran a 5K, and 3.5 years ago I ran a non-competitive 5 miler on Thanksgiving.
I was a bit worried about how my treadmill training would map to real world running, but signed up for the Austin Capitol 10K as a forcing function to keep working out.
tl;dr: It went pretty well: not as well as I’d hoped, nor as poorly as I feared. I beat my previous 10K time by about four minutes and finished ahead of 40% of my age/gender group. [I’m not sure what “previous 10K time” I’m referring to here. Maybe on the treadmill?]
What went well:
The weather was great: Sixties to low seventies. It was sunny, but early enough that it wasn’t too hot. I put my bib number on my shorts because I expected I’d have to take my shirt off to use as a towel, but I didn’t have sweat pouring into my eyes until the very end.
Because it wasn’t hot, my little SpeedDraw water bottle lasted me through the whole race.
While I expect my knees are likely to be my eventual downfall, I didn’t have any knee pain at all during the run.
After running on the perfectly flat and almost perfectly predictable treadmill for three months, I expected I was going to trip or slide in gravel when running on a real road. While I definitely had to pay a lot more attention to my foot placement, I didn’t slip at all.
Sprinting at the finish felt great.
A friend (a running expert) ran with me and helped keep me motivated.
The drive to and parking for the race was easy.
Could’ve been better/worse:
Two miles in, my stomach started threatening to get rid of the prior night’s dinner; the feeling eventually faded, but I worried for the rest of the race.
As a little kid, I had horrible allergies, but since moving to Texas they’re mostly non-existent. On Saturday, my nose started running a bit and didn’t stop all weekend. Could’ve been a disaster, but it was a mild annoyance at most.
I woke up at 4:30am, 90 minutes before my alarm, and couldn’t get back to sleep. Still, I got 5.5 hours of good sleep.
What didn’t go so well:
Pacing. Running on a treadmill is trivial– you go as fast as the treadmill, and you always know exactly how fast that is. I wore my FitBit Sense watch, but because I had to keep my eyes on the road, I barely looked at it, and it never seemed to be showing what I wanted to know.
I missed seeing the 1 mile marker, and by the time the 2 mile marker came along I was feeling nervous and demotivated. My watch had me believing that I was far behind my desired pace (I wasn’t, my first mile was my fastest and at my target pace, despite the obstacles). I didn’t get into my “groove” until the fourth mile of the race, but by then I wasn’t able to maintain it.
I spend most of my treadmill running in the 145bpm “Cardio” heart-rate zone (even when running intervals), but I spent most of this race in “Peak”, averaging 165bpm. More importantly, when I do peak on the treadmill, it’s usually at the very end of the workout, and I get to cool down slowly with a long walk afterward. I need more practice ramping my effort back down to a more manageable pace while continuing to run.
Crowds. There were 15000 participants, and when I signed up, I expected I’d probably walk most of the course. As a consequence, I got assigned to one of the late-starting corrals with other slow folks, and I had to spend the first mile or so dodging around them. Zigzagging added an extra tenth of a mile to my race.
Hills. Running hills in the real world is a lot harder than doing it on the treadmill. In the real world, you have to find ways to run around the walls of walkers; while I’d hoped I’d find this motivational, seeing so many folks just walking briskly enticed me to join them. It didn’t help that some jerk at the top of the hill at 1.5mi assured everyone that it was the last hill of the race– while I didn’t really believe him, there was a part of me that very much wanted to, and I was quite annoyed when we hit the bigger hill later.
Forgettability. Likely related to the fact that I spent almost all of my time watching the road underfoot, an hour after the race was over, I struggled to remember almost anything about it. From capitol views to bands, I know there were things to see, but have no real memories of them. I’m spoiled by running on the treadmill “in” Costa Rica, although I don’t remember a ton of that either. Running definitely turns off my brain.
Poor Decisions. After the race, I wanted to keep moving, so I walked to a coffee shop I’d passed on my drive into the city. While my legs and cardio had no problem with this, at the end of the four-mile post-race walk, my feet (particularly my left foot arch) were quite unhappy.
Lest you come away with the wrong conclusion, I’m already trying to find another workable 10K sometime soon. That might be hard with the Texas summer barrelling toward us.
-Eric
Notice: As an Amazon Associate, I earn from qualifying purchases from Amazon links in this post.
Back in January, I wrote about my New Years’ Resolutions. I’m now 90 days in, and things are continuing to go well.
Health and Finance: A dry January. Exceeded. I stopped drinking alcohol on any sort of regular basis; over spring break, I peaked at two drinks per day.
Health: Track my weight and other metrics. I’ve been using the FitBit Sense smartwatch to track my workouts and day-to-day activity, and I’ve been weighing in on a FitBit smart scale a few days per week. I’m down a bit over 25 pounds this quarter.
Health: Find sustainable fitness habits. Going great. I’ve been setting personal records for both speed (dropping 3 minutes from my mile time) and distance (I ran my first 10K on the treadmill this week). I have the Austin Capitol 10K coming up in two weeks, and I no longer plan to walk any of it.
Travel: I cruised to Mexico with the kids over Spring break, will be visiting Seattle for work in May, will be taking the kids to Maryland in July, and have booked an Alaska cruise for September.
Finance: The stock market has recovered quite a bit recently, but I’m still a bit worried about my current cash burn rate.
Life: Produce more. I’ve been blogging a bit more lately. I decided to keep going with Hello Fresh– it’s much more expensive than I’d like, but it’s more enjoyable/rewarding than I expected.
Work continues to have more “downs” than “ups”, but almost everything else in life seems to be considerably better than a few months ago.
Q: How long does Chromium cache hostnames? I know a user can clear the hostname cache using the Clear host cache button on about://net-internals/#dns, but how long will it take for a cached entry to be removed if no manual action is taken? After changing DNS records on my server, nslookup from a client reflects the new IP address, but Edge is still using the old address.
DNS caching is intended to be controlled via a “time-to-live” value on DNS responses—each DNS lookup response is allowed to be cached for a time period it itself defines, and after that time period expires, the entry is meant to be deemed “stale”, and a new lookup undertaken.
DNS records get cached in myriad places (inside the browser, both literally—via the Host Resolver Cache, and implicitly—in the form of already-connected keep-alive sockets), in the operating system, in your home router, in the upstream ISP, and so forth. Using nslookup to look up an address is a reasonable approach to check whether a fresh result is being returned from the OS’ DNS cache (or the upstream network), but it is worth mentioning that Chromium can be configured not to use the OS DNS resolver (e.g. instead using DNS-over-HTTPS or another DNS configuration).
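For example, from a Windows command prompt you can compare the answer from the OS-configured resolver with the answer from a specific upstream server, and inspect the OS-level cache directly (a sketch; www.example.com stands in for your hostname):

nslookup www.example.com
nslookup www.example.com 8.8.8.8
ipconfig /displaydns

The first command asks whichever resolver the OS is configured to use, the second queries a specific DNS server directly, and ipconfig /displaydns dumps the Windows resolver cache, including each entry’s remaining TTL.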
Within the browser, which resolver is used can be controlled by policy (Chrome, Edge).
Beyond treating entries older than their TTL as stale, Chromium also monitors “network change” events (e.g. connecting/disconnecting from WiFi or a VPN) and when those occur, the Host Resolver Cache will treat all previously-resolved entries as stale.
A Chromium net-export will contain details of the browser’s DNS configuration, and the contents of the browser’s DNS cache, including the TTL/expiration for each entry.
Note: For a single hostname, you may see multiple resolutions and the DNS tab may show multiple results, each with a different Network Anonymization Key. Years ago, Chromium began a project to further improve user privacy by partitioning various caches, including the DNS cache, based on the context in which a given request was made. In October 2022, the DNS cache was so partitioned. When Chromium looks up a hostname, the cache is bypassed and a new DNS lookup is issued if the Network Anonymization Key does not match that of the previously-cached result.
For example, here’s the DNS cache view when visiting pages on debugtheweb.com, enhanceie.com, and webdbg.com, each of which loads an image resource from sibling.enhanceie.com:
Beyond caching behavior, you may see other side effects when switching between the built-in DNS resolver and the system DNS resolver. The built-in resolver has more flexibility and supports requesting additional record types for HTTPS Upgrades, Encrypted Client Hello, etc.
-Eric
Tip: You can use Chromium’s --host-rules or --host-resolver-rules command line arguments to the browser to override DNS lookups:
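For example, launching the browser like this (a sketch; the mapping is illustrative) forces lookups for example.com to resolve to 192.168.1.42 for that browser session:

msedge.exe --host-resolver-rules="MAP example.com 192.168.1.42"

The same syntax works with chrome.exe; note that command line arguments only take effect for the first browser process launched, so close any existing browser instances first.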
Eric, when I am on bing.com in Edge or Chrome and I type https://portal.microsoft.com in the address bar, I go through some authentication redirections and end up on the Office website. If I then click the browser’s Back button, I go back to bing.com. But if I try the same thing in a WebView2-based application, calling webView.GoBack() sends me to one of the login.microsoftonline.com pages instead of putting me on bing.com. How does the browser know to go all the way back?
This is a neat one.
As web developers know, a site’s JavaScript can decide whether to create entries in the navigation stack (aka the travel log); the window.location.replace function navigates to a new page without creating an entry for the current page, while setting window.location directly does create a new entry. But in this repro, that’s not what’s going on.
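For reference, the difference looks like this (a minimal sketch):

// Creates a new entry in the travel log; Back will return to the current page:
window.location = "https://example.com/next";

// Replaces the current entry instead; Back will skip over the current page:
window.location.replace("https://example.com/next");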
If we right-click on the back button in Edge, we can clearly see that it has created navigation entries in both Edge and Chrome:
… but clicking back in either browser magically skips over the “Sign in” entries.
Notably, in the browser scenario, if you open the F12 DevTools Console on the Office page and run history.go(-1), or if you manually pick the prior entry from the back button’s context menu, you will go back to the login page.
Therefore, this does seem to be a “magical” behavior in the back button itself.
If we look around the source for IDC_BACK, we find GoBack(browser_, disposition) which lands us inside NavigationControllerImpl::GoBack(). That, in turn, puts us into a function which contains some magic around some entries being “skippable”.
Let’s look at the should_skip_on_back_forward_ui() function. That just returns the value of a flag, whose comment explains everything:
Set to true if this page does a navigation without ever receiving a user gesture. If true, it will be skipped on subsequent back/forward button clicks. This is to intervene against pages that manipulate the history such that the user is not able to go back to the last site they interacted with.
Navigation here implies both client side redirects and history.pushState calls.
It is always false the first time an entry's navigation is committed and is also reset to false if an entry is reused for any subsequent navigations.
Okay, interesting– it’s expected that these entries are skipped here, because the user hasn’t clicked anywhere in the login page. If they do click (or tap or type), the entries are not deemed skippable.
If you retry the scenario in the browser, but this time, click anywhere in the content area of the browser while the URL bar reads login.microsoftonline.com, you now see that the entries are not skipped when you click the back button.
To make the WebView2 app behave like Edge/Chrome, you need to configure it such that the user does not need to click/type in the login page. To do that, enable the Single Sign-On property of the WebView2, such that the user’s Microsoft Account is automatically used by the login page without the user clicking anywhere:
private async void WebView_Loaded() {
    var env = await CoreWebView2Environment.CreateAsync(
        userDataFolder: Path.Combine(Environment.CurrentDirectory, "WebView"),
        options: new CoreWebView2EnvironmentOptions(allowSingleSignOnUsingOSPrimaryAccount: true));
    // Initialize the control with the environment created above so that the
    // single sign-on option takes effect.
    await webView.EnsureCoreWebView2Async(env);
}
After you do so, clicking Back in the WebView2 will behave as it does in the browser.
One of the more common problems reported by Enterprises is that certain Edge/Chrome policies do not seem to work properly when the values are written to the registry.
For instance, when using the about:policy page to examine the browser’s view of the applied policy, the customer might complain that a policy value they’ve entered in the registry isn’t being picked up:
A quick look at the Microsoft documentation for the ExemptDomainFileTypePairsFromFileTypeDownloadWarnings policy shows that the JSON syntax looks almost right, but in one example the value is wrapped in square brackets, while in another example it is not. What’s going on here?
A curious and determined administrator might notice that by either adding the square brackets:
…or by changing the Exempt…Warnings registry entry from a REG_SZ into a key containing values:
…the policy works as expected:
What’s going on?
As the Chromium policy_templates.json file explains, each browser policy is implemented as a particular type, depending on what sort of data it needs to hold. For the purposes of our discussion, the two relevant types are list and dict. Either of these types can be used to hold a set of per-site rules:
* 'list' - a list of string values. Using this for a list of JSON strings is now discouraged, because the 'dict' is better for JSON.
* 'dict' - perhaps should be named JSON. An arbitrarily complex object or array, nested objects/arrays, etc. The user defines the value with JSON.
When serializing these policies to the registry, dict policies use a single REG_SZ registry string, while the intention is that list policies are instead stored in the values of a subkey. That isn’t technically enforced, however, and you may specify the entire list using a single string. If you do represent the entire JSON list as a single string value, you must wrap the value in [] (square brackets) to indicate that you’re supplying a whole array of values.
In contrast, if you encode the individual rules as numbered string values within a key (this is what we recommend), then you must omit the square brackets because each string value represents a single rule (not an array of rules).
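As a concrete sketch (the registry path is Edge’s policy location; the sample rule is illustrative, so check the policy documentation for the exact field names), the two encodings look like this in .reg syntax:

; Option 1: the entire list as a single REG_SZ, wrapped in square brackets
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"ExemptDomainFileTypePairsFromFileTypeDownloadWarnings"="[{\"file_extension\":\"xml\",\"domains\":[\"contoso.com\"]}]"

; Option 2 (recommended): numbered string values under a subkey, with no brackets
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\ExemptDomainFileTypePairsFromFileTypeDownloadWarnings]
"1"="{\"file_extension\":\"xml\",\"domains\":[\"contoso.com\"]}"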
Group Policy Editor
If you use the Group Policy Editor rather than editing the registry directly, each list-based policy has a Show... button that spawns a standalone list editor:
In contrast, when editing a dict, there’s only a small text field into which the entire JSON string should be pasted:
To ensure that a JSON policy string is formatted correctly, consider using a JSON validator tool.
Bonus Policy Trivia
Encoding
While JavaScript allows wrapping string values in ‘single’ quotes, JSON and thus the policy code requires that you use “double” quotes. Footgun: Make sure that you’re using plain-ASCII “straight” quotation marks (0x22) and not any “fancy/curly” Unicode quotes (like those that some editors like Microsoft Word will automatically use). If you specify a policy using curly quotes, your policy value will be treated as if it is empty.
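For instance, only the first of these two otherwise-identical values will parse; the second uses curly quotes and will be silently treated as empty:

[{"file_extension":"xml","domains":["contoso.com"]}]
[{“file_extension”:“xml”,“domains”:[“contoso.com”]}]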
Non-Enterprise Use
The vast majority of policies will work on any computer, even if it’s just your home PC and you’re poking the policy into the registry directly. However, to limit abuse by other software, there are a small set of “protected” policies whose values are only respected if Chromium detects that a machine is “managed” (via Domain membership or Intune, for example).
You can visit about:management on a device to see whether Chromium considers it managed.
Case-Sensitivity
Chromium treats policy names in a case-sensitive fashion. If you try to use a lowercase character where an uppercase character is required (or vice-versa), your policy will be ignored. Double-check the case of all of your policy names if the about:policy page complains about an Unknown Policy.
WebView2
The vast majority of Edge policies do not apply to the WebView2 control which is built atop the internal browser engine; only a tiny set of WebView2-targeted policies apply across all WebView2 instances. That’s because each application may have different needs, use-cases, and expectations.
An application developer hosting WebView2 controls must consider their customer-base and decide what restrictions, if any, to impose on the WebView2 controls within their app. They must further decide how those restrictions are implemented, e.g. on-by-default, controllable via a registry key read by the app on startup, etc.
For example, a WebView2 application developer may decide that they do not wish to ever allow DevTools to be used within their application, either because their customers demand that restriction, or because they simply don’t want anyone poking around in their app’s JavaScript code. They would then set the appropriate environment flag within their application’s code. In contrast, a different WebView2 host application might be a developer testing tool where the expectation is that DevTools are used, and in that case, the application might open the tools automatically as it starts.
Refresh
You might wonder when Edge reads the policy entries from the registry. Chromium’s policy code does not subscribe to registry change event notifications (Update: See below). That means that it will not notice that a given policy key in the registry has changed until:
You push the Reload Policies button on the about://policy page, or
A Group Policy update notice is sent by Windows, which happens when the policy was applied via the normal Group Policy deployment mechanism.
Chromium and Edge rely upon an event from the RegisterGPNotification function to determine when to re-read the registry.
Update: Edge/Chrome v103+ now watch the Windows Registry for change notifications under the HKLM/HKCU Policy keys and will reload policy if a change is observed. Note that this observation only works if the base Policies\Vendor\BrowserName registry key already existed; if it did not, there’s nothing for the observer to watch. For Dev/Canary channels, a registry key can be set to disable the observer. Update-to-the-Update: The watcher was backed out shortly after I wrote this; it turns out that it caused bugs because of the way Group Policy updates work: the old registry keys are deleted, some non-zero amount of time passes, and then the new keys are written. With the watcher in place, policies were reapplied in the middle of that window, turning them off for some time and causing side-effects like the removal and reinstallation of browser extensions.
Note that not all policies support being updated at runtime; the Edge Policy documentation notes whether each policy supports updates with the Dynamic Policy Refresh value (visualizing the dynamic_refresh flag in the underlying source code).
As a part of every page load, browsers have to make dozens, hundreds, or even thousands of decisions of varying levels of importance: should a particular API be available? Should a resource load be permitted? Should script be allowed to run? Should video be allowed to start playing automatically? Should cookies or credentials be sent on network requests? The list is long.
In Chromium, most of these decisions are controlled by per-site settings that must be manually configured by either the user or administrative policy.
However, manual configuration of settings is tedious, and, for some low-impact decisions, probably not worth the effort.
Wouldn’t it be cool if each user’s individual browser were smart enough to consider clues about what behavior that specific user is likely to want, and then use those clues in picking a default behavior?
User Activation / Gestures
The first, and simplest, mechanism used to make smarter decisions is called user gestures. Certain Web APIs and browser features (e.g. the popup blocker, file download experience, full-screen API, etc) require that the user has interacted with the page before the feature can be used.
This unblocking signal is called a User Gesture or (formally) User Activation.
Requiring a User Gesture can help prevent (or throttle) simple and unwanted “drive by” behaviors, where a website uses (abuses?) a powerful API without any indication that a user wants to allow a site to use it.
Unfortunately, User Gestures are a pretty low hurdle against abuse– sites can perform a variety of trickery to induce the user to click and unlock the protected feature.
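As a rough illustration (a sketch; requestFullscreen is just a stand-in for any gesture-gated feature), a gated call generally succeeds inside a real click handler and fails outside of one:

// Gesture-gated APIs generally succeed only while a user activation is active:
document.querySelector("#go").addEventListener("click", () => {
  document.documentElement.requestFullscreen()   // allowed: runs during a click
    .catch(err => console.log("rejected:", err));
});

// The same call made outside of any gesture is typically rejected. The current
// activation state can be inspected via the User Activation API:
console.log(navigator.userActivation && navigator.userActivation.isActive);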
Enter Site Engagement
Chromium supports a feature called Site Engagement, which works similarly to User Activation, but stretched over time. Instead of allowing a single gesture to unblock a single API call that occurs within the subsequent 5 seconds, Site Engagement calculates a score that grows with user interactions and decays over inactive time. In this way, sites that you visit often and engage with heavily offer a streamlined experience vs. a site you’ve only visited once (e.g. while you’re clicking around in search results). If you stop engaging with a site for a while, its engagement score decays and it loses any “credit” it had accrued.
You can easily see your browser’s unique Engagement scores by visiting the url: about://site-engagement/. Here’s what mine looks like:
If you like, you can use the textboxes next to each site to manually adjust its score (e.g. for debugging).
A separate page, about://media-engagement/ tracks your engagement with Media content (e.g. video players).
The Site Engagement primitive can be used in many different features; for instance, it can weigh into things like:
May Audio/video automatically start playback without a user-gesture?
Should a permission prompt be presented as a balloon, or a more subtle icon in the toolbar?
Should an “International Domain Name” be displayed in (potentially misleading) Unicode, or should it show in Punycode?
Site Engagement is a more robust mechanism than User Activation, but it’s still just a heuristic that can suffer from both false negatives (e.g. I’m using InPrivate or have just cleared my history, and am now visiting a trusted site) and false positives (a site has tricked me into engaging over time and now starts abusive behavior). As such, each feature’s respect for Site Engagement must be carefully considered, and recovery from false-negatives and false-positives must be simple.
Bespoke Mechanisms
Beyond User Activation and User Gesture requirements, various other signals have been proposed or used as clues into what behavior the user wants.
In determining whether a given file download is likely to be desired, for instance, the Downloads code uses a Familiar Initiator heuristic, which treats downloads as less suspicious if the originator of the Download request is a site that the user has visited before the current date.
Other features have considered such signals as:
Is the site one that the user visited by navigating via the address bar (as opposed to navigations triggered by script)?
Is the site’s origin amongst the user’s Bookmarks/Favorites?
Is the site an installed PWA?
Do other users of a given site often respond to a particular permission decision in a particular way (aka “Cloud Consent”)? This approach is used in Adaptive Notifications.
Impact on Debugging
One downside of all of these mechanisms is that they can make debugging harder for folks like me– what you saw on your browser might not be what I see on mine, and what you experienced yesterday might not be what you experience tomorrow.
Tools like the about:site-engagement page can allow me to mimic some of your configuration, but some settings (e.g. the Familiar Initiator heuristic, or the timing of your User Gestures) are harder to account for.
That said, while smarter browsers are somewhat harder to debug, they can be much more friendly for end-users.
-Eric
PS: Browser designers must carefully consider how Site Engagement may result in web-visible side-effects. For example, can another site infer whether a given user is a frequent visitor to a site based on how the browser behaves?
Back in January, I wrote about my New Years’ Resolutions. I’m now 45 days in, and things are going pretty well.
Health and Finance: A dry January. Dry January has turned into dry February. Beyond idle thoughts “What should I do right now? In the old days, I’d pour myself a drink” and ten tough seconds (someone opened a delicious-smelling bottle of wine), I haven’t missed alcohol at all.
Health: Track my weight and other metrics. Done. I’m weighing in on my smart scale twice a week. I have a blood pressure cuff now; I haven’t been using it very regularly though. All metrics are headed in the right direction.
Health: Find sustainable fitness habits. Going great. I’ve bought a fancy treadmill and started using it, along with the exercise bike I bought in August 2020 and hadn’t used until now. I’m working out at least 5 days a week for an hour or more. I also signed up for the Austin Capitol 10K in April, although I expect to walk a lot of it.
Travel: Haven’t yet booked an Alaska cruise, but it’s still on the back burner. I did book a near-repeat of the Christmas cruise for me and the kids over Spring break, and I’ve made some progress on bigger travel plans.
Finance: Spend more intentionally. Between the treadmill and the cruise, I’ve spent a lot of money so far in 2022, but for the most part it doesn’t feel wasted. The stock market hasn’t been doing too well, so I’m not feeling super-duper secure here.
Life: Produce more. I haven’t done a ton of this, beyond blogging and working on tools. I’m trying Hello Fresh, which has been educational and interesting– I’ve not really cooked anything remotely fancy before. I’ve also made some progress on delayed and on-going house projects though.
Sadly, work has not been going great, but everything else in life seems to be considerably better than a few months ago.
The MHTML file format (aka “Webpage, single file”) allows a single file to contain the multiple resources that are used to load a webpage (script, css, images, etc).
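Structurally, an MHTML file is just a MIME multipart/related document. A heavily truncated sketch (the boundary string and content are illustrative) looks something like this:

From: <Saved by Blink>
MIME-Version: 1.0
Content-Type: multipart/related; type="text/html"; boundary="----MultipartBoundary--abc123"

------MultipartBoundary--abc123
Content-Type: text/html
Content-Location: https://example.com/

<html><body><img src="https://example.com/pic.png"></body></html>

------MultipartBoundary--abc123
Content-Type: image/png
Content-Transfer-Encoding: base64
Content-Location: https://example.com/pic.png

iVBORw0KGgo...
------MultipartBoundary--abc123--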
Edge (Chromium) has an option to use the format when saving the current page via Ctrl+S or the Save page as... menu command:
Saving MHTML from Save Page As…
… but the browser’s code has limited support for the MHTML format, meaning that it often cannot render files that it itself did not create, and even when loading files that it did create, there are several intentional restrictions.
Restriction: No Script
Reloading a saved MHTML file in Edge/Chrome/Chromium/etc will disable script.
Interestingly, when Chromium saves an MHTML file, it omits the <script> and <noscript> blocks entirely. If you saved the MHTML file from another tool that included script, when reloaded in Chromium, its script is not executed and a notice is shown in the Developer Tools Console:
Restriction: Disabled Forms
When loading an MHTML file, form controls like text fields and buttons are disabled, preventing the user from filling out or submitting a form:
Restriction: Resources May not load
Chromium uses very restrictive rules for Same-Origin-Policy evaluation that can often prevent embedded resources (including images and stylesheets) from loading properly, leading to missing content and console warnings:
Limitation: Encodings
Internet Explorer’s MHTML component supported a variety of content-encodings that are not supported in Chromium. I fixed one bug but there are numerous other limitations in MHTML support.
Workaround: IEMode
If you need legacy MHTML content to load in Edge, your best bet is to configure the file to load in IE Mode.
Edge includes some code which attempts to automatically detect whether a given MHTML file is compatible with Edge mode, e.g. checking for a Saved by Blink marker:
UPDATE: Note that opening MHT files in Internet Explorer represents a large attack surface, because it means that a bad actor could send a victim a malicious MHT file that exploits a 0-day in Internet Explorer.
If the victim opens the downloaded MHT and it switches to load in IE Mode automatically, the attacker could possibly escape the weaker IE sandbox and cause havoc. As of Edge 118, a downloaded MHT file (identified by a Zone.Identifier alternate data stream, aka mark of the web) will not open in IE Mode automatically unless a new group policy is enabled to accept the security risk.
Previously, I’ve written a lot about Application Protocols, which are a simple and popular mechanism for browsers to send a short string of data out to an external application for handling. For instance, mailto is a common example of a scheme treated as an Application Protocol; if you invoke mailto:someone@somewhere.com, the browser will convert this to an OS execution of e.g.:
outlook.exe mailto:someone@somewhere.com
Application Protocols are popular because they are simple, work across most browsers on most operating systems, and because they can be added by 3rd parties without changes to the browser.
However, Application Protocols have one crucial shortcoming– they cannot directly return any data to the browser itself. If you wanted to do something like:
<img src='myScheme://photos/mypic.png' />
… there’s no straightforward way for your application protocol to send data back into the browser to render in that image tag.
You might be thinking: “Why can’t I, a third-party, simply provide a full implementation of a protocol scheme, such that my object gets a URL, and it returns a stream of bytes representing the data from that URL, just like HTTP and HTTPS do?“
Asynchronous Pluggable Protocols
Back in the early days of Internet Explorer (1990s), the team didn’t know what protocols would turn out to be important. So, they built a richly extensible system of “Asynchronous Pluggable Protocols” (APP) which allowed COM objects to supply a full implementation of a protocol. The browser, upon seeing a URL (“moniker”) would parse the URL Scheme out, then “bind” to the APP object and send/receive data from that object. This allowed Internet Explorer to handle URLs in an abstract way and support a broad range of protocols (e.g. ftp, file, gopher, http, https, about, mailto, etc).
In many cases, we think only about receiving data from a protocol, but it’s important to remember that you can also send data (beyond the url) to a protocol; consider a file upload form that uses the POST method to send a form over HTTPS, for example.
Writing an APP was extremely challenging, and very risky– because APPs are exposed to the web, a buggy APP could be exploited by any webpage, and thanks to the lack of sandboxing in early IE, would usually result in full Remote Code Execution and compromise of the system. Beyond the security concerns, there were reliability challenges as well– writing code that would properly handle the complex threading model of a browser downloading content for a web page was very difficult, and many APP implementations would crash or hang the browser when conditions weren’t as the developer expected.
Despite the complexity and risk, writing APPs provided Internet Explorer with unprecedented extensibility power. “Back in the day” I was able to do some fun things, like add support for data: URLs to IE7 before the browser itself got around to supporting such URLs.
Understanding Custom Schemes
Sending a URL into an APP object and getting bytes back from a stream is only half of the implementation challenge, however.
The other half is figuring out how the rest of the browser and web platform should handle these URLs. For Internet Explorer, we had a mechanism that allowed the host (browser) to query the protocol about how its URLs should be handled. The IInternetProtocolInfo interface allowed the APP’s code to handle the comparison and combination of URLs using its scheme, and allowed the code to answer questions about how web content returned from the URL should behave.
For instance, to fully support a scheme, the browser needs to be able to answer questions like:
Is this scheme “standard” (allowing default canonicalization behaviors like removing \..\ sequences), or “opaque” (in which other components cannot safely touch the URL)?
Is this scheme “Secure” (allowed in HTTPS pages without mixed-content warnings, allowed to use WebAPIs that require a secure context, etc.)?
Does this scheme participate in CORS?
Does this scheme get sent as a referrer?
Is this scheme allowed from Sandboxed frames?
Can top-level frames be navigated to this scheme?
Can such navigations only occur from trusted contexts (app/omnibox) or is JavaScript allowed to invoke such navigations?
How do navigations to these urls interact with all of the other WebNavigation/WebRequest extensibility APIs?
How does the scheme interact with the sandbox? What process isolation is used?
What origin is returned to web content running from the scheme?
How does content from the scheme interact with the cookie store?
How does it interact with CSP?
How does it interact with WebStorage?
Implementing Protocols in Chromium
Unlike Internet Explorer, Chromium does not offer a mechanism for third-party extensibility of its protocols; the browser itself must have support for a new protocol compiled in. A subclassed URLLoaderFactory (e.g. about, blob) invokes the correct url_loader implementation that returns the data for the response.
Chromium doesn’t have an analogue for the IInternetProtocolInfo interface for protocol implementors to implement; instead, the scheme must be manually added to each of the per-behavior lists of schemes hardcoded into Chromium.
Brave documented some of the work they needed to do to add a protocol to their Chromium-based browser.
By moving from our old codebase to Chromium, the Microsoft Edge team significantly modernized our codebase and improved our compatibility with websites. As we now share the vast majority of our web platform code with the market-leading browser, it’s rare to find websites that behave differently in Edge when compared to Chrome, Brave, Opera, Vivaldi, etc. Any time we find a behavioral delta, we are keen to dive in and understand why Edge behaves differently. Sometimes, our investigation will reveal a behavior gap caused by a known and deliberate difference (e.g. we add an Edg/ token to our User-Agent string), but in the most interesting cases, something unexpected is found.
Yesterday, I came across an interesting case. In this post, I’ll share both how I approached root causing the difference, and explore how it can be avoided.
The Customer’s Issue
The Microsoft Edge team works hard to ensure that Enterprises can move fearlessly to Edge, both from legacy IE/Spartan, and from other browsers.
One such Enterprise recently contacted us to report that one of their workflows did not work as expected in Edge 96, noting that the flow in question works fine in both Google Chrome and Apple Safari.
Common causes (e.g. a different User-Agent header, Tracking Prevention set to defaults) were quickly ruled out as root causes: you can easily run Edge with a different User-Agent header (via the --user-agent command line argument or via the Emulation feature in the F12 Developer Tools) and Tracking Prevention reports any blocked storage or network requests to the F12 Console.
The customer’s engineers noted that the problem seemed to be that Edge was failing to send a cross-origin fetch request– NetLogs of the flow revealed that the fetch request began, but never completed. As a result of failing to send the request, a WebAPI was not invoked, and the user was unable to change the state of the web application.
This was a super-interesting finding because Edge shares essentially all of its code in this area with upstream Chromium.
The customer shared their NetLogs with us, and I took a shot at analyzing the traffic captured within. Unfortunately, the results were inconclusive: it was easy to see that the request had indeed not been sent, but it was not clear why.
Here’s a snippet of the log for the attempt to call the API. We see that the request was a POST to an invokeAPI page on a different server, and because of the request’s Content-Type (application/json) the browser was required to perform a CORS preflight request before sending the POST to the remote server.
Crucially, we see that at 9293 milliseconds (175ms after the request process began), the request was aborted (-CORS_REQUEST). Not until 118 milliseconds later did the response to the preflight (CORS_PREFLIGHT_RESULT) come back saying, in effect, “Sure, I’m okay with accepting a request with that Content-Type and the POST method.” But it was too late– the request was already cancelled. The question is: “Why did the request get cancelled?”
A quick look at the webapp’s JavaScript didn’t show any explicit code that would cancel the fetch(). My hunch was that the request got cancelled because the fetch() call was in a frame that got navigated elsewhere before the request completed, but in this complicated application, network requests were constantly firing. That made it very difficult to prove or disprove my theory from looking at the Network Traffic alone. I tried looking at an edge://tracing log supplied by the customer, but the application was so complicated that I couldn’t make any sense of it.
And the larger question loomed: Even if teardown due to navigation is the culprit, why doesn’t this repro in Chrome?
The overall workflow involves several servers, including a SharePoint site, Microsoft Outlook Online, and custom logic running on Microsoft Azure. There didn’t seem to be any good way for us to emulate this workflow, but fortunately, the customer was willing to provide us with credentials to their test environment.
I started by confirming the repro in both Edge Stable and Canary, and confirming that the problem did not repro in either Chrome Stable or Canary. As this web application used a ServiceWorker, I also confirmed that unregistering the worker (using the Application tab of the F12 DevTools) did not impact the repro.
With this confirmation in hand, I then aimed to test my hunch that navigation was the root cause of the WebAPI fetch request not being sent.
A New Debugging Tool
One significant limitation of NetLogs is that, while the URL_REQUEST entries in the log have tons of information about the content and network behavior of the request, the NetLogs don’t have much context about why a given request was made, or where its result will be used by the web application.
Fortunately, Chromium’s rich extensibility model surfaces events for both Navigations and Network Requests, including (crucially) information about which browser tab and frame generated the event.
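A minimal sketch of such listeners (assuming a background script with the webNavigation, webRequest, and host permissions declared in the extension manifest) might look like:

// Log navigations and network requests along with the tab and frame that
// generated them, so the two can be correlated.
chrome.webNavigation.onBeforeNavigate.addListener(d => {
  console.log("NAVIGATE", d.tabId, d.frameId, d.url);
});
chrome.webRequest.onBeforeRequest.addListener(d => {
  console.log("REQUEST", d.tabId, d.frameId, d.type, d.url);
}, { urls: ["<all_urls>"] });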
As a part of my recent NativeMessaging Debugger project, I built a simple sample extension which watches for navigations and sends information out to a log window. For this exercise, I updated it to also watch for WebRequests. With this new logging in place, I reproduced the scenario, and it was plain that the fetch() call to the WebAPI was being cancelled due to navigation:
The first event shows a top-level navigation (tabId:60, frameId:0) begin to MyTasks.aspx, although the navigation’s request hasn’t hit the network yet.
Then, six milliseconds later, we see the WebAPI’s fetch() call begin.
Six milliseconds after that, we see the network request for the navigation get underway, effectively racing against the outstanding WebAPI call.
From our NetLog, we know the navigation wins– the WebAPI’s fetch() preflight alone took 293ms to return, and the overall fetch request was aborted after just 175ms, right after the navigation’s request got back a set of HTTP/200 response headers.
So, my hunch is confirmed: The API call was aborted because the script context in which it was requested got torn down.
I was feeling pretty smug about my psychic debugging success but for one niggling thought: Why was this only happening in Edge?
Maybe this is happening because Chrome has some experiment running that changes the behavior of CORS or the JavaScript scheduler? I checked chrome://version/?show-variations-cmd and searched for any likely experiment names. Nothing seemed very promising, although I tried using the --enable-features command line argument to turn on a few flags in Edge to see if they mattered. No dice — Edge still repro’d the issue. I tried performing the repro in my local version of Chromium, which doesn’t run with any server-controlled flags. No dice — Chromium didn’t repro the issue. I even tried running a bisect against Chromium to see if any older Chromium build has this problem. Nope, going all the way back to Chrome 60, the workflow ran just fine.
I wanted to try bisecting against Edge, but unfortunately this isn’t simple to do without being set up with an Edge development environment (I develop directly against upstream Chromium). However, I did have a standalone copy of Edge 91 handy, and the problem repro’d there as well, so at least this wasn’t a super-recent regression.
I don’t like unsolved mysteries, but by this point I at least knew how the customer could fix this on their side. Their code for submitting the form looked like this:
t.prototype.Submit = function() {
this.callAPI(this.email, t),
alert("Success"),
setTimeout(function() {console.log("Wait 1sec")},1000),
location.href = n + "/MyTasks.aspx"
};
t.prototype.callAPI = function(e, t) {
var n = JSON.stringify({data: e}), o = new Headers;
o.append("Content-type", "application/json");
var i = {body: n,headers: o};
this.post("https://logic.azure.com/invokeAPI",i).then(function(e)
{return console.log("API call complete"), e.json()})
};
As you can see, the Submit() function calls the callAPI() function to send the POST request, then shows an alert saying “Success”. Script execution pauses while the alert dialog is showing. When the user clicks “OK” in the alert(), a setTimeout call queues a callback with a one second delay, then navigates the top-level page to MyTasks.aspx.
This looks like a mistake– the location.href= call was probably meant to be inside the setTimeout callback; otherwise, that callback probably will never run because the page will have been torn down by the navigation before the callback’s console.log statement runs.
But more broadly, this code seems fundamentally wrong: We’re telling the user that the operation yielded “Success”, but we never actually waited for the WebAPI call to succeed— we just fired it off and then immediately showed the “Success” popup. The alert() and location.href='...' lines should probably be inside the then block on the .post call, and there should probably be error handling code for the case where that post call returned an error for any reason.
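A sketch of that restructuring (keeping the original minified style; note that callAPI would also need to return its this.post(...) promise for the chaining to work) might look like:

t.prototype.Submit = function() {
    var self = this;
    self.callAPI(self.email, t)
        .then(function() {
            alert("Success");
            location.href = n + "/MyTasks.aspx";  // n: the same closure variable as before
        })
        .catch(function(e) {
            alert("The request failed: " + e);    // report errors instead of claiming success
        });
};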
I wrote up this suggestion and sent it off to the customer– this should resolve their issue in Edge, and it’s more logical and correct in every browser.
…And yet… why is Edge different?
I went back to look at the event logs from my repro again. And I noticed something crucial: in Chrome, the wrBeforeRequest event signaling the start of the WebAPI call appeared before the alert dialog box, and the wrResponseHeaders event appeared while the alert box was up (3 seconds later). Only after I clicked OK on the dialog box (32 seconds after that) did the navigation event occur:
In contrast, in Edge, the alert dialog box appears before the wrBeforeRequest event– in fact, the wrBeforeRequest event isn’t even seen until after clicking OK to dismiss the dialog. At that point, the WebAPI and Navigation requests race, and the WebAPI request will almost always lose the race.
Ah ha! So we’re getting closer. Edge is failing because its fetch call is getting blocked until dismissal of the modal alert. That’s weird. Maybe it’s something about this site’s JavaScript? I tried creating a minimal repro where a fetch() would get kicked off asynchronously and then an alert() would immediately follow. In Chrome, the fetch ran as expected, while in Edge, the fetch blocked on the user clicking OK.
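The reduced test case boiled down to something like this (a sketch; the URL is a placeholder):

// Kick off an async cross-origin fetch() that requires a CORS preflight,
// then immediately block script on a modal alert().
fetch("https://example.com/api", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: "{}"
}).then(r => console.log("fetch completed:", r.status))
  .catch(e => console.log("fetch failed:", e));

alert("Did 'fetch completed' log before you clicked OK?");

In Chrome, the fetch proceeded while the dialog was showing; in Edge, nothing happened until OK was clicked.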
Now we’re cooking with gas. A working minimal repro dramatically increases the odds of solving any mystery, because it makes testing theories much simpler, and roping in other engineers much cheaper– instead of asking them to perform complicated repro steps that may take tens of minutes, you can say “Click this link. Push the button. See the bug. Weird, right?“
There’s been quite a lot of recent discussion about the Chrome team’s concerns about the 3 Web Platform modal dialog primitives (alert(), prompt(), and confirm()) and how they interact with the JavaScript event loop. Furious webdevs complain “If it ain’t broke, don’t fix it” and the web platform folks retort: “Oh, it’s very much broke, and we need to fix it!”.
Perhaps there’s some related change that upstream is experimenting with? I spent another fruitless hour looking into their experimental configuration and ours, and bothering Edge’s alert() owners to find out whether we might have changed anything about our alert() implementation for accessibility reasons or the like. No dice.
I wrote up my latest findings and sent them off to my engineers. With a reduced repro in hand, our Dev Manager popped up the edge://tracing tool and had a look around. He replied back: I suspect it has something to do with an Edge-specific throttle. Specifically, I see a WebURLLoader::Context::Start hitting a mojom.SafeBrowsingUrlChecker call that I don’t see in Chrome.
For context: Chromium supports the notion of throttles which allow a developer to react to events as the user browses around. For instance, throttles for resource loaders allow you to block and modify requests, which browser developers use to perform reputation scans for SafeBrowsing or SmartScreen, for example. NavigationThrottles allow developers to redirect or cancel navigations, and so on.
Way back in December 2019, one of our engineers noted that a test case behaved differently in Edge vs. Chrome. That test case was a page with three XHRs: two asynchronous and one synchronous. In Chrome, the two async calls ran in parallel with the sync call, while in Edge, the two async calls were blocked upon completion of the one sync call. The root cause was related to the Tracking Prevention throttle needing to perform a cross-process communication to check whether the XHRs’ url targets were potentially tracking servers. Even though synchronous XHRs are evil and need to die, we still fixed the testcase by updating the throttle so that it doesn’t run on same-origin requests (because they cannot be, by definition, 3rd party trackers).
In this week’s repro, the fetch call is async, but alert() is not– it’s a synchronous API that blocks JavaScript. And the WebAPI is to a 3rd-party URL, so the 2019 fix doesn’t prevent the throttle from running.
One interesting finding during the 2019 investigation was that turning off Tracking Prevention in edge://settings does not actually disable the throttle– it simply changes the throttle to not block any requests. With this recollection, I used the command line argument to disable the throttle entirely: msedge.exe --disable-features=msEnhancedTrackingPreventionEnabled
…and confirmed that the minimal test page now behaves identically in Edge vs. Chrome.
We’ve filed a bug against the throttle to figure out how to address this case, but the recommendation to the customer stands: the current logic popping the alert() box ought to change such that either there’s no alert(), or it only shows success after the fetch() call actually succeeds.