Unexpectedly HTTPS?

Last Update: March 28, 2025

While I’m a firm believer that every site should be using HTTPS, sadly, not every site is yet doing so. Looking at Chrome data, today around 92% of navigations are HTTPS:

…and the pages loaded account for around 95% of browsing time:

Browsers are working hard to get these numbers up by locking down non-secure HTTP permissions, blocking mixed-content downloads, and attempting to get the user to a secure version of a site where possible (upgrading subresource loads, a.k.a. mixed content, and upgrading navigations).

Chrome and Edge have adopted different strategies for navigation upgrades:

Chrome

In Chrome, if you don’t type a protocol in the address bar, the browser will try HTTPS first; if a response isn’t received within three seconds, it will race an HTTP request. There’s also an option to require HTTPS:

When this option is set, attempting to navigate to a site that does not support HTTPS results in a warning interstitial:

Edge

In Edge, we are experimenting with an “Automatic HTTPS” feature to upgrade navigations (even if http:// was specified) to use HTTPS.

The feature defaults to a list-based upgrade approach, whereby we deliver a component containing sites believed to be compatible with TLS. The list data is stored on disk, but is unfortunately not readily human-readable due to its encoding (for high-performance read operations):

Alternatively, if Always switch is specified, all requests are upgraded from HTTP to HTTPS unless one of the following is true:

  • The URL’s hostname is dotless (e.g. http://intranet, http://localhost)
  • The URL’s hostname is an IP literal (e.g. http://192.168.1.1)
  • The URL targets a non-default port (http://example.com:8080)
  • The hostname is included on a hardcoded exemption list containing just a handful of HTTP-only hostnames used by features or users to authenticate to captive portal interceptors: kAutomaticHttpsNeverUpgradeList = {"http://msftconnecttest.com", "http://edge.microsoft.com", "http://neverssl.com", "edge-http.microsoft.com"};
  • The user has previously opted-out of HTTPS upgrade for the host by clicking the link on the connection failure error page.

Update: Edge 119 picked up the upstream Chromium change to try HTTPS first. If you’d like to experiment, you can disable that feature via a command line flag:

msedge.exe --disable-features=HttpsUpgrades

Diagnostics

Beyond the browser-specific features, browsers might end up on an HTTPS site even when the user specified an http:// URL because:

  • The site is on the HSTS Preload list (including preloaded TLDs)
  • The site was previously visited over HTTPS and returned a Strict-Transport-Security header to opt-in to HSTS. This might be particularly problematic for developers using multiple sites on localhost. (Update: see [1] below)
  • The DNS server for the site returns an HTTPS Resource Record indicating that the browser should use HTTPS for all requests.
  • The site was previously visited over HTTP and returned a cacheable HTTP/3xx redirect to the HTTPS page
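
For illustration (the header values and hostname below are just examples), the HSTS opt-in described above is a single response header:

Strict-Transport-Security: max-age=31536000; includeSubDomains

…and a cacheable redirect that will keep sending the browser back to HTTPS on later visits might look like:

HTTP/1.1 301 Moved Permanently
Location: https://example.com/
Cache-Control: max-age=604800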

In some cases, such upgrades might be unexpected or problematic, but figuring out the root cause might not be entirely trivial, particularly if an end-user is reporting the problem and you do not have access to their computer.

Local Diagnostics

You can use the Network tab of the F12 Developer Tools to see whether a cached redirect response is responsible for an HTTPS upgrade.

You can see whether Edge’s Automatic HTTPS feature upgraded a request to HTTPS by looking at the F12 Console tab:

To see if HSTS is responsible for an upgrade, on the impacted client, visit about://net-internals/#hsts and enter the domain in the box and click Query. Look at the upgrade_mode values:

If the static_upgrade_mode value shows FORCE_HTTPS, the site is included in the HSTS preload list. If FORCE_HTTPS is specified in dynamic_upgrade_mode, the site sent a Strict-Transport-Security opt-in header.

You can clear out dynamic_upgrade_mode entries by using the Cached images and files: All time option in the Clear Browsing Data dialog box:

If someone accidentally HSTS pre-loaded your domain into browsers’ preload list (e.g. forgetting that this will apply to subdomains), you don’t have great options.

Remote Diagnostics

If you don’t have direct access to the client, you can ask the user to collect a NetLog capture to analyze. The NetLog will show HTTPS upgrades from HSTS and from previously cached responses.

You can see a HSTS Upgrade by using the search box to look for either TRANSPORT_SECURITY_STATE_SHOULD_UPGRADE_TO_SSL (which will appear for all URLRequests with a true or false value) or for reason = "HSTS" which will find the internal redirect to upgrade to HTTPS:

Unfortunately, at the moment there’s no clear signal that a request was upgraded by Edge’s Automatic HTTPS feature, because the rewrite of the URL happens above the network stack.

Please help secure the web by moving all sites to HTTPS!

-Eric

[1] Many developers work on sites served from their own machine (e.g. https://localhost:1234 and http://localhost:2345). This is a problem if the HTTPS site returns an HSTS directive, because it makes the HTTP-served URLs inaccessible.

I landed a change in Chromium 132+ to ignore HSTS for [*.]localhost requests, and wrote an extension you can use in earlier versions.

Firefox already ignores HSTS for localhost, and a bug was filed for Safari.

Chromium Internals: PAK Files

Web browsers are made up of much more than the native code (mostly compiled C++) that makes up their .exe and .dll files. A significant portion of the browser’s functionality (and bulk) is what we’d call “resources”, which include things like:

  • Images (at two resolutions, regular and “high-DPI”)
  • Localized UI Strings
  • HTML, JavaScript, and CSS used in Settings, DevTools and other features
  • UI Theme information
  • Other textual resources, like credits

In ancient times, this resource data was compiled directly into resource segments of Windows DLL files, but many years ago Chromium introduced a new format, called .pak files, to hold resource data. The browser loads resource streams out of the appropriate PAK files chosen at runtime (based on the user’s locale and screen resolution) and uses the data to populate the UI of the browser. PAK files are updated as a part of every build of the browser, because every change to any resource requires rebuilding the file.

High-DPI

Over the years, devices were released with ever-higher resolution displays, and software started needing to scale resources up so that they remain visible to human eyes and tappable by human fingers.

Scaling non-vector images up has a performance cost and can make them look fuzzy, so Chromium now includes two copies of each bitmap image, one in the 100_percent resource file, and a double-resolution version in the 200_percent resource file. The browser selects the appropriate version at runtime based on the current device’s display density.

Exploring PAK Files

You can find the browser’s resource files within Chrome/Edge’s Application folder:

Unfortunately for the curious, PAK is a binary format which is not easily human readable. Beyond munging many independent resources into a single file, the format relies upon GZIP or Brotli compression to shrink the data streams embedded inside the file. Occasionally, someone goofs and forgets to enable compression on a resource, bloating the file but leaving the plaintext easy to read in a hex-editor:

If you want a better look inside of a PAK file, you can use the unpack.bat tool, but this tool does not yet support decompressing brotli-compressed data (because Brotli support was added to PAK relatively recently). If you need to see a brotli-compressed resource, use unpack.bat to get the raw file. Then strip 8 bytes off the front of the file and use brotli.exe to decompress the data.
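
As a sketch (the filenames are illustrative), the 8-byte strip can be done with a couple of lines of Python before running brotli as shown below:

# Strip the 8-byte header from the extracted resource, leaving a plain brotli stream.
with open("extracted", "rb") as src, open("extracted.br", "wb") as dst:
    dst.write(src.read()[8:])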

brotli.exe --decompress --in extracted.br --out plain.txt --verbose

Despite the availability of more efficient image formats (e.g. WebP), many browser bitmap resources are still stored as PNG files. My PNGDistill tool offers a GROVEL mode that allows extracting all of the embedded PNGs out of any binary file, including PAK:

You can then run PNGDistill on the extracted PNGs and discover that our current PNG compression efficiency inside resources.pak is just 94%, with 146K of extra size due to suboptimal compression.

Fortunately, the PNGs are almost all properly-stripped of metadata with the exception of this cute little guy, whose birthday is recorded in a PNG comment:

Have fun looking under the hood!

-Eric

Real-World Running

Yesterday, I ran my first 10K in “the real world”, my first real world run in a long time. Almost eleven years ago I ran a 5K, and 3.5 years ago I ran a non-competitive 5 miler on Thanksgiving.

I was a bit worried about how my treadmill training would map to real world running, but signed up for the Austin Capitol 10K as a forcing function to keep working out.

tl;dr: it went pretty well: it didn’t go as well as I’d hoped, nor as poorly as I feared. I beat my previous 10K time by about four minutes and finished ahead of 40% of my age/gender group. [I’m not sure what “previous 10K time” I’m referring to here. Maybe on the treadmill?]

What went well:

  • The weather was great: Sixties to low seventies. It was sunny, but early enough that it wasn’t too hot. I put my bib number on my shorts because I expected I’d have to take my shirt off to use as a towel, but I didn’t have sweat pouring into my eyes until the very end.
  • Because it wasn’t hot, my little SpeedDraw water bottle lasted me through the whole race.
  • While I expect my knees are likely to be my eventual downfall, I didn’t have any knee pain at all during the run.
  • No blisters or chafing, with new Balega socks and Brooks Ricochet 3 on my feet and Body Glide for my chest.
  • After running on the perfectly flat and almost perfectly predictable treadmill for three months, I expected I was going to trip or slide in gravel when running on a real road. While I definitely had to pay a lot more attention to my foot placement, I didn’t slip at all.
  • Sprinting at the finish felt great.
  • A friend (a running expert) ran with me and helped keep me motivated.
  • The drive to and parking for the race was easy.

Could’ve been better/worse:

  • Two miles in, my stomach started threatening to get rid of the prior night’s dinner; the feeling eventually faded, but I worried for the rest of the race.
  • As a little kid, I had horrible allergies, but since moving to Texas they’re mostly non-existent. On Saturday, my nose started running a bit and didn’t stop all weekend. Could’ve been a disaster, but it was a mild annoyance at most.
  • I woke up at 4:30am, 90 minutes before my alarm, and couldn’t get back to sleep. Still, I got 5.5 hours of good sleep.

What didn’t go so well:

  • Pacing. Running on a treadmill is trivial– you go as fast as the treadmill, and you always know exactly how fast that is. I wore my FitBit Sense watch, but because I had to keep my eyes on the road, I barely looked at it, and it never seemed to be showing what I wanted to know.

    I missed seeing the 1 mile marker, and by the time the 2 mile marker came along I was feeling nervous and demotivated. My watch had me believing that I was far behind my desired pace (I wasn’t, my first mile was my fastest and at my target pace, despite the obstacles). I didn’t get into my “groove” until the fourth mile of the race, but by then I wasn’t able to maintain it.

    On the treadmill, I spend most of my running in the ~145bpm “Cardio” heart-rate zone (even when running intervals), but I spent most of this race in “Peak”, averaging 165bpm. More importantly, when I do peak on the treadmill, it’s usually at the very end of the workout, and I get to cool down slowly with a long walk after. I need more practice ramping down my effort while still running at a more manageable pace.
  • Crowds. There were 15000 participants, and when I signed up, I expected I’d probably walk most of the course. As a consequence, I got assigned to one of the late-starting corrals with other slow folks, and I had to spend the first mile or so dodging around them. Zigzagging added an extra tenth of a mile to my race.
  • Hills. Running hills in the real world is a lot harder than doing it on the treadmill. In the real world, you have to find ways to run around the walls of walkers; while I’d hoped I’d find this motivational, seeing so many folks just walking briskly enticed me to join them. It didn’t help that some jerk at the top of the hill at 1.5mi assured everyone that it was the last hill of the race– while I didn’t really believe him, there was a part of me that very much wanted to, and I was quite annoyed when we hit the bigger hill later.
  • Forgettability. Likely related to the fact that I spent almost all of my time watching the road underfoot, an hour after the race was over, I struggled to remember almost anything about it. From capitol views to bands, I know there were things to see, but have no real memories of them. I’m spoiled by running on the treadmill “in” Costa Rica, although I don’t remember a ton of that either. Running definitely turns off my brain.
  • Poor Decisions. After the race, I wanted to keep moving, so I walked to a coffee shop I’d passed on my drive into the city. While my legs and cardio had no problem with this, at the end of the four-mile post-race walk, my feet (particularly my left foot arch) were quite unhappy.

Lest you come away with the wrong conclusion, I’m already trying to find another workable 10K sometime soon. That might be hard with the Texas summer barrelling toward us.

-Eric

Notice: As an Amazon Associate, I earn from qualifying purchases from Amazon links in this post.

End of Q1 Check-in

tl;dr: On track.

Back in January, I wrote about my New Year’s resolutions. I’m now 90 days in, and things are continuing to go well.

  • Health and Finance: A dry January. Exceeded. I stopped drinking alcohol on any sort of regular basis; over spring break, I peaked at two drinks per day.
  • Health: Track my weight and other metrics. I’ve been using the FitBit Sense smartwatch to track my workouts and day-to-day, and I’ve been weighing in on a FitBit smart scale a few days per week. I’m down a bit over 25 pounds this quarter.
  • Health: Find sustainable fitness habits. Going great. I’ve been setting personal records for both speed (dropping 3 minutes from my mile time) and distance (I ran my first 10K on the treadmill this week). I have the Austin Capitol 10K coming up in two weeks, and I no longer plan to walk any of it.
  • Travel: I cruised to Mexico with the kids over Spring break, will be visiting Seattle for work in May, will be taking the kids to Maryland in July, and have booked an Alaska cruise for September.
  • Finance: The stock market has recovered quite a bit recently, but I’m still a bit worried about my current cash burn rate.
  • Life: Produce more. I’ve been blogging a bit more lately. I decided to keep going with Hello Fresh– it’s much more expensive than I’d like, but it’s more enjoyable/rewarding than I expected.

Work continues to have more “downs” than “ups”, but almost everything else in life seems to be considerably better than a few months ago.

Chromium’s DNS Cache

Last Update: June 24, 2024

From the mailbag:

Q: How long does Chromium cache hostnames? I know a user can clear the hostname cache using the Clear host cache button on about://net-internals/#dns, but how long will it take for a cached entry to expire if no manual action is taken? After changing DNS records on my server, nslookup from a client reflects the new IP address, but Edge is still using the old address.

A: At least one minute.

Host resolution is surprisingly complicated.

DNS caching is intended to be controlled via a “time-to-live” value on DNS responses—each DNS lookup response is allowed to be cached for a time period it itself defines, and after that time period expires, the entry is meant to be deemed “stale”, and a new lookup undertaken.

DNS records get cached in myriad places (inside the browser, both literally, via the Host Resolver Cache, and implicitly, in the form of already-connected keep-alive sockets), in the operating system, in your home router, in the upstream ISP, and so forth. Using nslookup to look up an address is a reasonable approach to check whether a fresh result is being returned from the OS’ DNS cache (or the upstream network), but it is worth mentioning that Chromium can be configured not to use the OS DNS resolver (e.g. instead using DNS-over-HTTPS or another DNS configuration).
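
For example (the hostname and server below are illustrative), you can compare what different resolvers return:

nslookup example.com
nslookup example.com 8.8.8.8

The second form queries a specific DNS server directly, which can help distinguish local caching from stale upstream data.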

Within the browser, which resolver is used can be controlled by policy (Chrome, Edge).

If Chromium is using the System DNS resolver, the cache entry should be fresh for 60 seconds— Chromium doesn’t know the DNS server’s desired TTL because the OS’ function getaddrinfo() does not return it.

// Default TTL for successful resolutions with ProcTask.
const unsigned kCacheEntryTTLSeconds = 60;

If Chromium performs the resolution itself (via DoH, or via its built-in resolver), the Host Resolver Entry should respect the DNS response’s TTL, with a minimum of 60 seconds.

Beyond treating entries older than their TTL as stale, Chromium also monitors “network change” events (e.g. connecting/disconnecting from WiFi or a VPN) and when those occur, the Host Resolver Cache will treat all previously-resolved entries as stale.

A Chromium net-export will contain details of the browser’s DNS configuration, and the contents of the browser’s DNS cache, including the TTL/expiration for each entry.

Note: For a single hostname, you may see multiple resolutions and the DNS tab may show multiple results, each with a different Network Anonymization Key. Years ago, Chromium began a project to further improve user privacy by partitioning various caches, including the DNS cache, based on the context in which a given request was made. In October 2022, the DNS cache was so partitioned. When Chromium looks up a hostname, the cache will be bypassed and a new DNS lookup issued if the Network Anonymization Key does not match that of the previously-cached result.

For example, here’s the DNS cache view when visiting pages on debugtheweb.com, enhanceie.com, and webdbg.com, each of which loads an image resource from sibling.enhanceie.com:

Beyond caching behavior, you may see other side effects when switching between the built-in DNS resolver and the system DNS resolver. The built-in resolver has more flexibility and supports requesting additional record types for HTTPS Upgrades, Encrypted Client Hello, etc.

-Eric

Tip: You can use Chromium’s --host-rules or --host-resolver-rules command line arguments to the browser to override DNS lookups:
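
For example (the mapping is illustrative), the following answers all lookups for example.com with a local address:

msedge.exe --host-resolver-rules="MAP example.com 127.0.0.1"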

… but note that these two commands are not exactly the same.

The “Magical” Back Button

From the mailbag:

Eric, when I am on bing.com in Edge or Chrome and I type https://portal.microsoft.com in the address bar, I go through some authentication redirections and end up on the Office website. If I then click the browser’s Back button, I go back to bing.com. But if I try the same thing in a WebView2-based application, calling webView.GoBack() sends me to one of the login.microsoftonline.com pages instead of putting me on Bing.com. How does the browser know to go all the way back?

This is a neat one.

As web developers know, a site’s JavaScript can decide whether to create entries in the navigation stack (aka travellog); the window.location.replace function navigates to a new page without leaving an entry for the current page in the stack, while setting window.location directly does create a new entry. But in this repro, that’s not what’s going on.

If we right-click on the back button in Edge, we can clearly see that it has created navigation entries in both Edge and Chrome:

… but clicking back in either browser magically skips over the “Sign in” entries.

Notably, in the browser scenario, if you open the F12 DevTools Console on the Office page and run history.go(-1), or if you manually pick the prior entry from the back button’s context menu, you will go back to the login page.

Therefore, this does seem to be a “magical” behavior in the back button itself.

If we look around the source for IDC_BACK, we find GoBack(browser_, disposition) which lands us inside NavigationControllerImpl::GoBack(). That, in turn, puts us into a function which contains some magic around some entries being “skippable”.

Let’s look at the should_skip_on_back_forward_ui() function. That just returns the value of a flag, whose comment explains everything:

Set to true if this page does a navigation without ever receiving a user gesture. If true, it will be skipped on subsequent back/forward button clicks. This is to intervene against pages that manipulate the history such that the user is not able to go back to the last site they interacted with.

Navigation here implies both client side redirects and history.pushState calls.

It is always false the first time an entry's navigation is committed and is also reset to false if an entry is reused for any subsequent navigations.

Okay, interesting– it’s expected that these entries are skipped here because the user hasn’t clicked anywhere in the login page. If the user does click (or tap or type), the entries are not deemed skippable.

If you retry the scenario in the browser, but this time, click anywhere in the content area of the browser while the URL bar reads login.microsoftonline.com, you now see that the entries are not skipped when you click the back button.

To make the WebView2 app behave like Edge/Chrome, you need to configure it such that the user does not need to click/type in the login page. To do that, enable the Single Sign-On property of the WebView2, such that the user’s Microsoft Account is automatically used by the login page without the user clicking anywhere:

private async void WebView_Loaded() {
  var env = await CoreWebView2Environment.CreateAsync(
      userDataFolder: Path.Combine(Environment.CurrentDirectory, "WebView"),
      options: new CoreWebView2EnvironmentOptions(allowSingleSignOnUsingOSPrimaryAccount: true));
  // Initialize the control with this environment ("webView" is the assumed control name).
  await webView.EnsureCoreWebView2Async(env);
}

After you do so, clicking Back in the WebView2 will behave as it does in the browser.

-Eric

Edge/Chrome Policy Registry Entries

One of the more common problems reported by Enterprises is that certain Edge/Chrome policies do not seem to work properly when the values are written to the registry.

For instance, when using the about:policy page to examine the browser’s view of the applied policy, the customer might complain that a policy value they’ve entered in the registry isn’t being picked up:

A quick look at the Microsoft documentation for the ExemptDomainFileTypePairsFromFileTypeDownloadWarnings policy shows JSON syntax that looks almost right, but in one example the value is wrapped in square brackets, while in another it is not. What’s going on here?

A curious and determined administrator might notice that by either adding the square brackets:

…or by changing the Exempt…Warnings registry entry from a REG_SZ into a key containing values:

…the policy works as expected:

What’s going on?

As the Chromium policy_templates.json file explains, each browser policy is implemented as a particular type, depending on what sort of data it needs to hold. For the purposes of our discussion, the two relevant types are list and dict. Either of these types can be used to hold a set of per-site rules:

* 'list' - a list of string values. Using this for a list of JSON strings is now discouraged, because the 'dict' is better for JSON.
* 'dict' - perhaps should be named JSON. An arbitrarily complex object or array, nested objects/arrays, etc. The user defines the value with JSON.

When serializing these policies to the registry, dict policies use a single REG_SZ registry string, while list policies are intended to be stored as values of a subkey. However, that is not technically enforced, and you may specify the entire list using a single string. If you do represent the entire JSON list as a single string value, you must wrap the value in [] (square brackets) to indicate that you’re supplying a whole array of values.

In contrast, if you encode the individual rules as numbered string values within a key (this is what we recommend), then you must omit the square brackets because each string value represents a single rule (not an array of rules).
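
As a sketch in .reg syntax (the registry path and rule content here are illustrative), the single-string form looks like:

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"ExemptDomainFileTypePairsFromFileTypeDownloadWarnings"="[{\"file_extension\":\"xml\",\"domains\":[\"contoso.com\"]}]"

…while the recommended key-with-numbered-values form omits the brackets, one rule per value:

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\ExemptDomainFileTypePairsFromFileTypeDownloadWarnings]
"1"="{\"file_extension\":\"xml\",\"domains\":[\"contoso.com\"]}"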

Group Policy Editor

If you use the Group Policy Editor rather than editing the registry directly, each list-based policy has a Show... button that spawns a standalone list editor:

In contrast, when editing a dict, there’s only a small text field into which the entire JSON string should be pasted:

To ensure that a JSON policy string is formatted correctly, consider using a JSON validator tool.

Bonus Policy Trivia

Encoding

While JavaScript allows wrapping string values in ‘single’ quotes, JSON and thus the policy code requires that you use “double” quotes. Footgun: Make sure that you’re using plain-ASCII “straight” quotation marks (0x22) and not any “fancy/curly” Unicode quotes (like those that some editors like Microsoft Word will automatically use). If you specify a policy using curly quotes, your policy value will be treated as if it is empty.
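
For example (the rule content is illustrative), the first value below uses straight quotes and parses; the second was pasted from a word processor, uses curly quotes, and will be treated as empty:

{"file_extension": "xml", "domains": ["contoso.com"]}
{“file_extension”: “xml”, “domains”: [“contoso.com”]}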

Non-Enterprise Use

The vast majority of policies will work on any computer, even if it’s just your home PC and you’re poking the policy into the registry directly. However, to limit abuse by other software, there are a small set of “protected” policies whose values are only respected if Chromium detects that a machine is “managed” (via Domain membership or Intune, for example).

The kSensitivePolicies list can be found in the Chromium source and encompasses most, but not all, such restrictions (e.g. putting an Application Protocol on the URLAllowlist only works for managed machines).

You can visit about:management on a device to see whether Chromium considers it managed.

Case-Sensitivity

Chromium treats policy names in a case-sensitive fashion. If you try to use a lowercase character where an uppercase character is required (or vice-versa), your policy will be ignored. Double-check the case of all of your policy names if the about:policy page complains about an Unknown Policy.
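
For example, a registry value named urlallowlist will be reported as an Unknown Policy, while the correctly-cased URLAllowlist is recognized.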

WebView2

The vast majority of Edge policies do not apply to the WebView2 control which is built atop the internal browser engine; only a tiny set of WebView2-targeted policies apply across all WebView2 instances. That’s because each application may have different needs, use-cases, and expectations.

An application developer hosting WebView2 controls must consider their customer-base and decide what restrictions, if any, to impose on the WebView2 controls within their app. They must further decide how those restrictions are implemented, e.g. on-by-default, controllable via a registry key read by the app on startup, etc.

For example, a WebView2 application developer may decide that they do not wish to ever allow DevTools to be used within their application, either because their customers demand that restriction, or because they simply don’t want anyone poking around in their app’s JavaScript code. They would then set the appropriate environment flag within their application’s code. In contrast, a different WebView2 host application might be a developer testing tool where the expectation is that DevTools are used, and in that case, the application might open the tools automatically as it starts.

Refresh

You might wonder when Edge reads the policy entries from the registry. Chromium’s policy code does not subscribe to registry change event notifications (Update: See below). That means that it will not notice that a given policy key in the registry has changed until:

  1. The browser restarts, or
  2. Fifteen minutes pass, or
  3. You push the Reload Policies button on the about://policy page, or
  4. A Group Policy update notice is sent by Windows, which happens when the policy was applied via the normal Group Policy deployment mechanism.

    Chromium and Edge rely upon an event from the RegisterGPNotification function to determine when to re-read the registry.

Update: Edge/Chrome v103+ now watch the Windows Registry for change notifications under the HKLM/HKCU Policy keys and will reload policy if a change is observed. Note that this observation only works if the base Policies\vendor\BrowserName registry key already existed; if it did not, there’s nothing for the observer to watch. For Dev/Canary channels, a registry key can be set to disable the observer. Update-to-the-Update: The watcher was backed out shortly after I wrote this; it turned out to cause bugs because of the way Group Policy updates work: the old registry keys are deleted, some non-zero amount of time passes, and then the new keys are written. With the watcher in place, policies were reapplied in the middle of that window, temporarily turning them off. This caused side-effects like the removal and reinstallation of browser extensions.

Note that not all policies support being updated at runtime; the Edge Policy documentation notes whether each policy supports updates with the Dynamic Policy Refresh value (visualizing the dynamic_refresh flag in the underlying source code).

-Eric

Smarter Defaults by Paying Attention

As a part of every page load, browsers have to make dozens, hundreds, or even thousands of decisions of varying levels of importance: should a particular API be available? Should a resource load be permitted? Should script be allowed to run? Should video be allowed to start playing automatically? Should cookies or credentials be sent on network requests? The list is long.

In Chromium, most of these decisions are controlled by per-site settings that must be manually[1] configured by either the user or administrative policy.

However, manual configuration of settings is tedious, and, for some low-impact decisions, probably not worth the effort.

Wouldn’t it be cool if each user’s individual browser were smart enough to consider clues about what behavior that specific user is likely to want, and then use those clues in picking a default behavior?

User Activation / Gestures

The first, and simplest, mechanism used to make smarter decisions is called user gestures. Certain Web APIs and browser features (e.g. the popup blocker, file download experience, full-screen API, etc.) require that the user has interacted with the page before the feature can be used.

This unblocking signal is called a User Gesture or (formally) User Activation.

Requiring a User Gesture can help prevent (or throttle) simple and unwanted “drive by” behaviors, where a website uses (abuses?) a powerful API without any indication that a user wants to allow a site to use it.

Unfortunately, User Gestures are a pretty low hurdle against abuse– sites can perform a variety of trickery to induce the user to click and unlock the protected feature.

Enter Site Engagement

Chromium supports a feature called Site Engagement, which works similarly to User Activation, but stretched over time. Instead of allowing a single gesture to unblock a single API call that occurs within the subsequent 5 seconds, Site Engagement calculates a score that grows with user interactions and decays over inactive time. In this way, sites that you visit often and engage with heavily offer a streamlined experience vs. a site you’ve only visited once (e.g. while you’re clicking around in search results). If you stop engaging with a site for a while, its engagement score decays and it loses any “credit” it had accrued.
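
As a toy illustration only (this is not Chromium’s actual scoring code; the point values and decay rate here are made up), such a score might accrue on interaction and decay with inactivity:

import datetime

class ToySiteEngagement:
    """Toy model: the score grows with interactions and decays over idle days."""

    def __init__(self):
        self.score = 0.0
        self.last_update = datetime.date.today()

    def record_interaction(self, points=0.5, cap=100.0):
        """The user clicked/typed/navigated on the site: add points, up to a cap."""
        self._decay()
        self.score = min(cap, self.score + points)

    def effective_score(self):
        """The score a feature might consult when picking a default behavior."""
        self._decay()
        return self.score

    def _decay(self, per_idle_day=1.0):
        """Subtract points for each day without engagement."""
        idle_days = (datetime.date.today() - self.last_update).days
        self.score = max(0.0, self.score - per_idle_day * idle_days)
        self.last_update = datetime.date.today()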

You can easily see your browser’s unique Engagement scores by visiting the url: about://site-engagement/. Here’s what mine looks like:

If you like, you can use the textboxes next to each site to manually adjust its score (e.g. for debugging).

A separate page, about://media-engagement/, tracks your engagement with media content (e.g. video players).

The Site Engagement primitive can be used in many different features; for instance, it can weigh into things like:

  1. May Audio/video automatically start playback without a user-gesture?
  2. Should Tracking Prevention’s “Balanced mode” block a potential tracker?
  3. Should a permission prompt be presented as a balloon, or a more subtle icon in the toolbar?
  4. Should an “International Domain Name” be displayed in (potentially misleading) Unicode, or should it show in Punycode?

Site Engagement is a more robust mechanism than User Activation, but it’s still just a heuristic that can suffer from both false negatives (e.g. I’m using InPrivate or have just cleared my history, and am now visiting a trusted site) and false positives (a site has tricked me into engaging over time and now starts abusive behavior). As such, each feature’s respect for Site Engagement must be carefully considered, and recovery from false-negatives and false-positives must be simple.

Bespoke Mechanisms

Beyond User Activation and Site Engagement, various other signals have been proposed or used as clues into what behavior the user wants.

In determining whether a given file download is likely to be desired, for instance, the Downloads code uses a Familiar Initiator heuristic, which treats downloads as less suspicious if the originator of the Download request is a site that the user has visited before the current date.

Other features have considered such signals as:

  1. Is the site one that the user visited by navigating via the address bar (as opposed to navigations triggered by script)?
  2. Is the site’s origin amongst the user’s Bookmarks/Favorites?
  3. Is the site an installed PWA?
  4. Do other users of a given site often respond to a particular permission decision in a particular way (aka “Cloud Consent”)? This approach is used in Adaptive Notifications.

Impact on Debugging

One downside of all of these mechanisms is that they can make debugging harder for folks like me– what you saw on your browser might not be what I see on mine, and what you experienced yesterday might not be what you experience tomorrow.

Tools like the about:site-engagement page can allow me to mimic some of your configuration, but some settings (e.g. the Familiar Initiator heuristic, or the timing of your User Gestures) are harder to account for.

That said, while smarter browsers are somewhat harder to debug, they can be much more friendly for end-users.

-Eric

PS: Browser designers must carefully consider how Site Engagement may result in web-visible side-effects. For example, can another site infer whether a given user is a frequent visitor to a site based on how the browser behaves?

[1] A few settings inherit from Windows Security Zones.

Mid-February Checkin

tl;dr: On track.

Back in January, I wrote about my New Year’s resolutions. I’m now 45 days in, and things are going pretty well.

  • Health and Finance: A dry January. Dry January has turned into dry February. Beyond idle thoughts (“What should I do right now? In the old days, I’d pour myself a drink”) and ten tough seconds (someone opened a delicious-smelling bottle of wine), I haven’t missed alcohol at all.
  • Health: Track my weight and other metrics. Done. I’m weighing in on my smart scale twice a week. I have a blood pressure cuff now; I haven’t been using it very regularly though. All metrics are headed in the right direction.
  • Health: Find sustainable fitness habits. Going great. I’ve bought a fancy treadmill and started using it, along with the exercise bike I bought in August 2020 and haven’t used until now. I’m working out at least 5 days a week for an hour or more. I also signed up for the Austin Capitol 10K in April, although I expect to walk a lot of it.
  • Travel: Haven’t yet booked an Alaska cruise, but it’s still on the back burner. I did book a near-repeat of the Christmas cruise for me and the kids over Spring break, and I’ve made some progress on bigger travel plans.
  • Finance: Spend more intentionally. Between the treadmill and the cruise, I’ve spent a lot of money so far in 2022, but for the most part it doesn’t feel wasted. The stock market hasn’t been doing too well, so I’m not feeling super-duper secure here.
  • Life: Produce more. I haven’t done a ton of this, beyond blogging and working on tools. I’m trying Hello Fresh, which has been educational and interesting– I’ve not really cooked anything remotely fancy before. I’ve also made some progress on delayed and on-going house projects though.

Sadly, work has not been going great, but everything else in life seems to be considerably better than a few months ago.

MHTML in Chromium

The MHTML file format (aka “Webpage, single file”) allows a single file to contain the multiple resources that are used to load a webpage (script, css, images, etc).
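
For illustration, an MHTML file is just a MIME multipart/related document; a minimal hand-written sketch (URLs, boundary, and content are illustrative) looks something like:

MIME-Version: 1.0
Content-Type: multipart/related; type="text/html"; boundary="----MultipartBoundary"

------MultipartBoundary
Content-Type: text/html
Content-Location: https://example.com/

<html><body><img src="logo.png"></body></html>

------MultipartBoundary
Content-Type: image/png
Content-Transfer-Encoding: base64
Content-Location: https://example.com/logo.png

iVBORw0KGgoAAAANSUhEUg... (base64-encoded image data)

------MultipartBoundary--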

Edge (Chromium) has an option to use the format when saving the current page via Ctrl+S or the Save page as... menu command:

Saving MHTML from Save Page As…

… but the browser’s code has limited support for the MHTML format, meaning that it often cannot render files that it itself did not create, and even when loading files that it did create, there are several intentional restrictions.

Restriction: No Script

Reloading a saved MHTML file in Edge/Chrome/Chromium/etc will disable script.

Interestingly, when Chromium saves an MHTML file, it omits the <script> and <noscript> blocks entirely. If you saved the MHTML file from another tool that included script, when reloaded in Chromium, its script is not executed and a notice is shown in the Developer Tools Console:

Restriction: Disabled Forms

When loading an MHTML file, form controls like text fields and buttons are disabled, preventing the user from filling out or submitting a form:

Restriction: Resources May Not Load

Chromium uses very restrictive rules for Same-Origin-Policy evaluation that can often prevent embedded resources (including images and stylesheets) from loading properly, leading to missing content and console warnings:

Limitation: Encodings

Internet Explorer’s MHTML component supported a variety of content-encodings that are not supported in Chromium. I fixed one bug but there are numerous other limitations in MHTML support.

Workaround: IEMode

If you need legacy MHTML content to load in Edge, your best bet is to configure the file to load in IE Mode.

Edge includes some code which attempts to automatically detect whether a given MHTML file is compatible with Edge mode, e.g. checking for a Saved by Blink marker:

UPDATE: Note that opening MHT files in Internet Explorer represents a large attack surface, because it means that a bad actor could send a victim a malicious MHT file that exploits a 0-day in Internet Explorer.

If the victim opens the downloaded MHT and it switches to load in IE Mode automatically, the attacker could possibly escape the weaker IE sandbox and cause havoc. As of Edge 118, a downloaded MHT file (identified by a Zone.Identifier alternate data stream, aka mark of the web) will not open in IE Mode automatically unless a new group policy is enabled to accept the security risk.

-Eric