Certificate Revocation in Microsoft Edge

When you visit an HTTPS site, the server must present a certificate, signed by a trusted third party (a Certificate Authority, aka CA), vouching for the identity of the bearer. The certificate contains an expiration date and is considered valid until that date arrives. But what if the CA later realizes that it issued the certificate in error? Or what if the server’s private key (corresponding to the public key in the certificate) is accidentally revealed?

Enter certificate revocation. Revocation allows the trusted third-party to indicate to the client that a particular certificate should no longer be considered valid, even if it’s unexpired.

There are several techniques to implement revocation checking, and each has privacy, reliability, and performance considerations. Back in 2011, I wrote a long post about how Internet Explorer handles certificate revocation checks.

Back in 2018, the Microsoft Edge team decided to match Chrome’s behavior by not performing online OCSP or CRL checks for most certificates by default.

Wait, What? Why?

The basic arguments are that HTTPS certificate revocation checks:

  • Impair performance (tens of milliseconds to tens of seconds in latency)
  • Impair privacy (CAs could log what you’re checking and know where you went)
  • Are too unreliable to hard-fail (too many false positives on downtime or network glitches)
  • Are useless against most threats when soft-fail (because an active MITM can block the check)

For more context about why Chrome stopped using online certificate revocation checks many years ago, see the Chromium team’s posts explaining their thinking.

Note: Revocation checks still happen

Chromium still performs online OCSP/CRL checks for Extended Validation certificates only, in soft-fail mode. Update: Chromium has announced that v106+ will no longer perform revocation checks for EV certificates. If the check fails (e.g. an offline OCSP responder), the certificate is simply treated as a regular TLS certificate, without the EV treatment. Users are very unlikely to ever notice, because the EV treatment, now buried deep in the security UX, is virtually invisible. Notably, however, there is a performance penalty– if your Enterprise blackholes or slowly blocks access to a major CA’s OCSP responder, TLS connections from Chromium will be 🐢 very slow.

Even without online revocation checks, Chromium performs offline checks in two ways.

  1. It calls the Windows Certificate API (CAPI) with an “offline only” flag, such that revocation checks consult previously-cached CRLs (e.g. if Windows had previously retrieved a CRL), and certificate distrust entries deployed by Microsoft.
  2. It plugs into CAPI an implementation of CRLSets, a Google/Microsoft deployed list of popular certificates that should be deemed revoked.

On Windows, Chromium prior to version ~110 used the CAPI stack to perform revocation checks. I would expect this check to behave identically to the Internet Explorer check (which also relies on the Windows CAPI stack); notably, I don’t see any attempt to set dwUrlRetrievalTimeout away from the default. See “How CAPI2 certificate revocation works” for the details; sometimes it’s also useful to enable CAPI2 diagnostics.

CRLSets are updated via the Component Updater; if the PC isn’t ever on the Internet (e.g. an air-gapped network), the CRLSet will only be updated when a new version of the browser is deployed. (Of course, in an environment without access to the internet at large, revocation checking is even less useful.)

After Chromium moved to use its own built-in verifier, it began performing certificate revocation checks using its own revocation checker. Today, that checker supports only HTTP-sourced CRLs (the CAPI checker also supports HTTPS, LDAP, and FILE). Learn more here.

Group Policy Options

Chromium (and thus Edge and Chrome) support two Group Policies that control the behavior of revocation checking.

The EnableOnlineRevocationChecks policy enables soft-fail revocation checking for certificates. If the certificate does not contain revocation information, the certificate is deemed valid. If the revocation check does not complete (e.g. inaccessible CA), the certificate is deemed valid. If the certificate revocation check successfully returns that the certificate was revoked, the certificate is deemed invalid.

The RequireOnlineRevocationChecksForLocalAnchors policy allows hard-fail revocation checking for certificates that chain to a private anchor. A “private anchor” is not a public Certificate Authority, but instead e.g. the Enterprise root your company deployed to its PCs for its internal sites or for its Monster-in-the-Middle (MITM) network traffic inspection proxy. If the certificate does not contain revocation information, the certificate is deemed invalid. If the revocation check does not complete (e.g. inaccessible CA), the certificate is deemed invalid. If the certificate revocation check successfully returns that the certificate was revoked, the certificate is deemed invalid.
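If you deploy these policies via the registry, a minimal sketch looks like the following .reg script (this assumes the standard Edge policy location under HKLM\SOFTWARE\Policies\Microsoft\Edge; a dword of 1 enables each behavior):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"EnableOnlineRevocationChecks"=dword:00000001
"RequireOnlineRevocationChecksForLocalAnchors"=dword:00000001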

If you do choose to enable revocation checks, ensure that your certificates’ revocation information is compatible with the new verifier (served over HTTP, DER encoded) in Edge 112+.

Other browsers

Note: This section may be outdated!

Here’s an old survey of cross-browser revocation behavior.

By default, Firefox still queries OCSP servers for certificates that have a validity lifetime over 10 days. If you wish, you can require hard-fail OCSP checking by navigating to about:config and toggling security.OCSP.require to true. See this wiki for more details. Mozilla also distributes a CRLSet-like list of intermediates that should no longer be trusted, called OneCRL.
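If you’d rather codify the hard-fail setting than toggle it by hand, the traditional user.js mechanism in your Firefox profile folder works for this preference (a one-line sketch; the preference name is the one shown above):

user_pref("security.OCSP.require", true);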

For the now-defunct Internet Explorer, you can set a Feature Control registry DWORD to convert the usual soft-fail into a slightly-less-soft fail:

HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_WARN_ON_SEC_CERT_REV_FAILED

iexplore.exe=1

Edge Legacy did not have any option for non-silent failure for revocation checks.

Under the Hood: Viewing the CRLSet’s Content

You can see the current CRLSet version number inside edge://components, but how can you see the content?

The same tool that dumps Google’s CRLSet (https://github.com/agl/crlset-tools) works with Edge, except that you need to point it to a copy in the Edge install folder:

go run crlset.go dump "%LocalAppData%\Microsoft\Edge\User Data\CertificateRevocation\6498.2023.3.1\crl-set"

2025 Update – OCSP Fading Away

The great TLS Newsletter explains that Let’s Encrypt is going to stop supporting OCSP, and explains the history and justification: https://www.feistyduck.com/newsletter/issue_121_the_slow_death_of_ocsp

New Recipes for 3rd Party Cookies

Last Updated: 11 April 2025

For privacy reasons, the web platform is moving away from supporting 3rd-party cookies, first with lockdowns, and eventually with removal of support starting at 1% in Q1 2024 (was late 2023) and slated for completion in the third quarter of 2024. UPDATE: In Summer 2024, Chrome announced a new plan:

…we are proposing an updated approach that elevates user choice. Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time. We’re discussing this new path with regulators, and will engage with the industry as we roll this out.

The Edge team will almost certainly follow the Chrome team, perhaps on a slightly delayed timeline.

Background: What Does “3rd-Party” Mean?

A 3rd-party cookie is one that is set or sent from a 3rd-party context on a web page.

A 3rd-party context is a frame or resource whose registrable domain (sometimes called eTLD+1) differs from that of the top-level page. This is sometimes called “cross-site.” In this example:

domain2.com and domain3.com are cross-site 3rd-parties to the parent page served by domain1.com. (In contrast, a resource from sub.domain1.com is cross-origin, but same-site/1st Party to domain1.com).

Importantly, frames or images[1] from domain2.com and domain3.com cannot see or modify the cookies in domain1.com’s cookie jar, and script running at domain1.com cannot see or set cookies for the embedded domain2.com or domain3.com contexts.

Background: Existing Restrictions

Q: Why do privacy advocates worry about 3rd-party cookies?
A: Because they are a simple way to track a given user’s browsing across the web.

Say a bunch of unrelated sites include ads from an advertising server. A 3rd-party cookie set on the content from the ad will allow that ad server to identify the set of sites that the user has visited. For example, consider three pages the user visits:

The advertiser, instead of simply knowing that their ad is running on Star Trek’s website, is also able to know that this specific user has previously visited sites related to running and a medication, and can thus target its advertisements in a way that the visitor may deem a violation of their privacy.

For this reason, browsers have supported controls on 3rd-party cookies for decades, but they were typically off-by-default or trivially bypassed.

More recently, browsers have started introducing on-by-default controls and restrictions, including the 2020 change that makes all cookies SameSite=Lax by default.

However, none of these restrictions will go as far as browsers will go in the future.

A Full Menu of Replacements

In order to support scenarios that have been built atop 3rd-party cookies for multiple decades, new patterns and technologies will be needed.

The Easy Recipe: CHIPS

In 2020, cookies were made SameSite=Lax by default, blocking cookies from being set and sent in 3rd-party contexts. The workaround for Web Developers who still needed cookies in 3rd-party contexts was simple: when setting a cookie, adding the attribute SameSite=None disables the new behavior and allows the cookie to be set and sent freely. Over the last two years, most sites that cared about their cookies began sending the attribute.

The CHIPS proposal (“Cookies having independent partitioned state”) offers a new but more limited escape hatch– a developer may opt-in to partitioning their cookie so that it’s no longer a “3rd party cookie”, it’s instead a partitioned cookie. A partitioned cookie set in the context of domain3.com embedded inside runnersworld.com will not be visible in the context domain3.com embedded inside startrek.com. Similarly, setting the cookie in the context domain3.com embedded inside gas-x.com will have no impact on the cookie’s value in the other two pages. If the user visits domain3.com as a top-level browser navigation, the cookies that were set on that origin’s subframes in the context of other top-level pages remain inaccessible.

Using the new Partitioned attribute is simple; just add it to your Set-Cookie header like so:

Set-Cookie: __Host-id=4d5e6; SameSite=None; Secure; Path=/; Partitioned

Support for CHIPS is expected to be broad, across all major browsers.

I was initially a bit skeptical about requiring authors to explicitly specify the new attribute– why not just treat all cookies in 3rd-party contexts as partitioned? I eventually came around to the arguments that an explicit declaration is desirable. As it stands, legacy applications already needed to be updated with a SameSite=None declaration, so we probably wouldn’t be able to keep any unmaintained legacy apps working even if we didn’t require the new attribute.

Chromium-based browsers will support CHIPS by default in v114+. Safari added support in v18.4. Firefox does not have support as of April 1, 2025.

The Explicit Recipe: The Storage Access API

The Storage Access API allows a website to request permission to use storage in a 3rd party context. Microsoft Edge joined Safari and Firefox with support for this API in 2020 as a mechanism for mitigating the impact of the browser’s Tracking Prevention feature.
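As a rough sketch, an embedded cross-site frame might use the API like this (the function name is mine, and the calls must run in response to a user gesture, like a click):

async function ensureCookieAccess(): Promise<boolean> {
  // If we already have access to our unpartitioned cookies, we're done.
  if (await document.hasStorageAccess()) {
    return true;
  }
  try {
    // May show a permission prompt; requires recent user activation.
    await document.requestStorageAccess();
    return true;
  } catch {
    return false; // The user (or the browser's heuristics) said no.
  }
}

A typical pattern wires this into a “Continue with your account” button inside the frame, so the request is clearly tied to a user action.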

The Storage Access API has a lot going for it, but lack of universal support from major browsers means that it’s not currently a slam-dunk.

A Niche Recipe: Related Website Sets (née: First Party Sets)

In some cases, the fact that cookies are treated as “3rd-party” represents a technical limitation rather than a legal or organizational one. For example, Microsoft owns xbox.com, office.com, and teams.microsoft.com, but these origins do not today share a common eTLD+1, meaning that pages from these sites are treated as cross-site 3rd-parties to one another. Related Website Sets (formerly First Party Sets) would allow sites owned and operated by a single-entity to be treated as first-party when it comes to privacy features.

Originally, a new cookie attribute, SameParty, would allow a site to request inclusion of a cookie when the cross-origin sub-resource’s context is in the same First Party Set as the top-level origin, but a recent proposal removes that attribute.

The Authentication Recipe: FedCM API

As I explained three years ago, authentication is an important use-case for 3rd-party cookies, but it’s hampered by browser restrictions on 3P cookies. The Federated Credential Management API proposes that browsers and websites work together to imbue the browser with awareness and control of the user’s login state on participating websites. As noted in Google’s explainer:

We expect FedCM to be useful to you only if all these conditions apply:

  1. You’re an identity provider (IdP).
  2. You’re affected by the third-party cookie phase out.
  3. Your Relying Parties are third-parties.

FedCM is a big, complex, and important specification that aims to solve exclusively authentication scenarios.
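As a sketch of the shape of the API, a Relying Party’s sign-in code might look something like this (the configURL and clientId values are placeholders for whatever your IdP issues; the identity member isn’t yet in TypeScript’s DOM typings, hence the cast):

// Ask the browser to mediate sign-in with a participating IdP.
const credential = await navigator.credentials.get({
  identity: {
    providers: [{
      configURL: "https://idp.example/fedcm.json", // hypothetical IdP config
      clientId: "your-client-id",                  // issued by the IdP
    }],
  },
} as any);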

Other Authentication Recipes

Update: I wrote a whole post on alternatives to 3rd Party cookies for Auth flows.

Complexity Abounds

The move away from supporting 3rd-party cookies has huge implications for how websites are built. Maintaining compatibility for desirable scenarios while meaningfully breaking support for undesirable scenarios (trackers) is inherently extremely challenging– I equate it to trying to swap out an airliner’s engines while the plane is full of passengers and in-flight.

Combinatorics

As we add multiple new approaches to address the removal of 3P cookies, we must carefully reason about how they all interact. Specifications need to define how the behavior of CHIPS, First-Party-Sets, and the Storage Access API all intersect, for example, and web developers must account for cases where a browser may support only some of the new features.

Cookies Aren’t The Only Type of Storage

Another complexity is that cookies aren’t the only form of storage– IndexedDB, localStorage, sessionStorage, and various other cookie-like storages all exist in the web platform. Limiting only cookies without accounting for other forms of storage wouldn’t get us to where we want to be on privacy.

That said, cookies are one of the more interesting forms of storage when it comes to privacy, as they

  1. are sent to the server before the page loads,
  2. operate without JavaScript enabled,
  3. operate in cases like <img> elements where no script-execution context exists
  4. etc.

Cookies Are Special

Another interesting aspect of migrating scenarios away from cookies is that we lose some of the neat features that have been added over the years.

One such feature is the HTTPOnly declaration, which prevents a cookie from being accessible to JavaScript. This feature was designed to blunt the impact of a cross-site scripting attack — if script injected into a compromised page cannot read a cookie, that cookie cannot be leaked out to a remote attacker. The attacker is instead forced to abuse the XSS’d page immediately (as “a sock-puppet browser”), limiting the sorts of attacks that can be undertaken. Some identity providers demand that their authentication tokens be carried only via HTTPOnly cookies, and if an authentication token must be available to JavaScript directly, the provider mints that token with a much shorter validity lifetime (e.g. one hour instead of one week).

Another cookie feature is TLS Token Binding, an obscure capability that attempts to prevent token theft attacks from compromised PCs. If malware or a malicious insider steals Token-bound cookie data directly from a PC, that cookie data will not work from another device because the private key material used to authenticate the cookies cannot be exported off of the compromised client device. (This non-exportability property is typically enforced by security hardware like a TPM.) While Token binding provides a powerful and unique capability for cookies, for various reasons the feature is not broadly supported.

Deprecating 3rd-Party Cookies is Not a Panacea

Unfortunately, getting rid of 3rd-party cookies doesn’t mean that we’ll be rid of tracking. There are many different ways to track a user, ranging from the obvious (they’re logged in to your site, they have a unique IP address) to the obscure (various fingerprinting mechanisms). But getting rid of 3rd-party cookies is a valuable step as browser makers work to engineer a privacy sandbox into the platform.

It’s a fascinating time in the web platform privacy space, and I can’t wait to see how this all works out.

Nov ’23 Update: The Chrome team published some additional guidance and resources in Preparing for the end of 3P Cookies. This includes details of various opt-out mechanisms for Enterprises.

-Eric

[1] Interestingly, if domain1.com includes a <script> element pointed at a resource from domain2.com or domain3.com, that script will run inside domain1.com’s context, such that calls to the document.cookie DOM property will return the cookies for domain1.com, not the domain that served the script. But that’s not important for our discussion here.

My Next Opportunity

This is the farewell email I sent to my Edge teammates yesterday.


IWebBrowser3::BeforeNavigate()

When I left the Internet Explorer team in 2012 to work on Fiddler full-time, I did so with a measure of heartbreak, absolutely certain that I would never be quite as good at anything else. When I came back to the Edge team in 2018, I looked back with amusement at the naïveté of my earlier melancholy. I had learned a huge amount during my six years away, and I brought new skills and knowledge to bear on the ambitious challenge of replatforming Edge atop Chromium. While it’s still relatively early days, our progress over these last four years has truly been amazing—we’ve adopted tens of millions of lines of code as our own, grown the team, built a batteries-included product superior to the market leader, and started winning share for the first time in years. More importantly, we’ve modernized our team culture: more inclusive, heavily invested in learning, with faster experimentation and more transparent public communication. It’s been an inspiring journey.

In the fifty months since my return, I’ve written 124 blog posts and landed 168 changelists in upstream Chromium (plus one or two downstream :), dwarfing the 94 CLs I landed back when I was a Chrome engineer. I had the honor of leading PMs in both the Pixels and Bytes subteams in Web Platform, presented the Edge Privacy Story, and travelled around the world (Lyon and Fukuoka) for W3C TPAC meetings. I’ve had the opportunity to help many other teams as a member of “Microsoft’s team in Chromium”, and to engage directly with Enterprise customers as they migrated off of IE and onto a modern standards-based web platform. I’ve helped to interview and hire a set of awesome new PMs. Throughout it all, I’ve strived to maximize my impact to benefit the billions of humans who browse the web.

This Friday (July-22-2022) will be the last day of my current tour. I leave things in good hands: Erik is an amazing engineering manager, and I’ll miss racing him to discover the root cause of gnarly networking problems. I’ve spent this second tour doing my very best to write everything down– if anything, I’m but a caching proxy server for my archive of blog posts. I didn’t write an encyclopedic guide on ramping up on browser dev, or an opinionated set of career advice just for fun— I’ve been quietly working to keep my bus factor as low as possible. I encourage everyone to take full advantage of the democratization of knowledge-sharing provided by our internal wikis and public docs site—seize every opportunity to “leave it better than you found it.”

Thank you all for the years of awesome collaborations on building a browser to delight our users.

Next Monday, I’ll be moving over to join some old friends on Microsoft’s Web Protection team, working to help protect users from all manner of internet-borne threats.

I’m not going far; please stay in touch via Twitter, LinkedIn, or good old-fashioned email.

Until next time,

-@ericlaw

Edge URL Schemes

The microsoft-edge: Application Protocol

Microsoft Edge implements an Application Protocol with the scheme microsoft-edge: that is designed to launch Microsoft Edge and pass along a web-schemed URL and/or additional arguments. A basic invocation might be as simple as:

microsoft-edge:http://example.com/
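For instance, from a Command Prompt (or the Windows Run dialog), the shell hands the URL off to the registered protocol handler:

start microsoft-edge:http://example.com/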

However, as is often the case with things I choose to write about, there’s a bit of hidden complexity that may not be immediately obvious.

Non-Public

The purpose of this URL scheme is to enable Windows and cooperating applications to invoke particular user-experiences in the Edge browser.

This scheme is not considered “public” — there’s no official documentation of the scheme, and the Edge team makes no guarantees about its behavior. We can (and do) add or modify functionality as needed to achieve desired behaviors.

Over the last few years, we’ve added a variety of functionality to the scheme, including the ability to invoke UX features, launch into a specific user profile, and implement other integration scenarios. By way of example, Windows might advertise the Edge Surf game and, if the user chooses to play, the game is launched by ShellExecuting the URL microsoft-edge:?ux=surf_game.

Because of the non-public and inherently unstable (not backward-compatible) nature of this URL scheme, it is not an extensibility point and it is not supported to configure the handler to be anything other than Microsoft Edge.

Under the hood: handling of this scheme can be found in Edge’s non-public version of the StartupBrowserCreator::ProcessCmdLineImpl function I wrote about recently as a part of my post on Chromium Startup.

Tricky Bits

One (perhaps surprising) restriction on the microsoft-edge scheme is that it cannot be launched from inside Edge itself. If a user inside Edge clicks a link to the microsoft-edge: scheme, nothing visibly happens. Only if they open the F12 Console will they see an error message:

The microsoft-edge protocol is blocked inside Edge itself to avoid “navigation laundering” problems, whereby going through the external handler path would result in loss of context. Losing the context of a navigation can introduce both security and abuse vulnerabilities. For example, a popup blocker bypass existed on Android when Android Chrome failed to block the Chrome version of this protocol.

Similarly, anti-CSRF restrictions on SameSite=Strict cookies could be circumvented because such cookies are deliberately suppressed on “cross-site” navigations but not suppressed for “external” navigations. Android had a similar issue.

The Edge WebView2 control also blocks navigation to the microsoft-edge protocol, although I expect that an application which wants to allow such navigations could probably do so with the appropriate event handlers.

Another tricky bit concerns the fact that a user may have multiple different channels of Edge (Stable, Beta, Dev, Canary) installed, but the microsoft-edge protocol can only be claimed by one of them. This can be potentially confusing if a user has different channels selected to handle HTTPS: and microsoft-edge links:

…because some links will open in Edge Canary while others will open in Edge Beta.


The edge: Built-In Scheme

Beyond the aforementioned application protocol, Microsoft Edge also supports a Built-In Scheme named edge:. In contrast to the microsoft-edge: application protocol, this scheme is only available within the browser. You cannot invoke an edge: URL elsewhere in Windows, or pass it to Edge as a command-line argument.

The edge: scheme is simply an alias for the chrome: and about: schemes used in Chromium to support internal pages like about:flags, about:settings, and similar (see edge:about for a list).

For security reasons, regular webpages cannot navigate to or load subresources from the edge/chrome schemes. Years ago, a common exploit pattern was to navigate to chrome:downloads and then abuse its privileged WebUI bindings to escape the browser sandbox. There are also special debug URLs like about:inducebrowsercrashforrealz that will do exactly as they say.

End of Q2 Check-in

Back in January, I wrote about my New Years’ Resolutions. I’m now 177 days in, and things are continuing to go well.

  • Health and Finance: A dry January. Exceeded. I went from 2 or 3 drinks a night six times a week to around 6 drinks per month, mostly while on vacations.
  • Health: Track my weight and other metrics. I’ve been using the FitBit Sense smartwatch to track my workouts and day-to-day, and I’ve been weighing in on a FitBit smart scale a few days per week. I’m down a bit over 50 pounds. This is considerably beyond where I expected to end up (-25 pounds).
  • Health: Find sustainable fitness habits. Going great. I’ve been setting personal records for both speed and distance.
  • Travel: I cruised to Mexico with the kids over Spring break, went to Seattle for work in May, will be taking the kids to Maryland in July, and have booked an Alaska cruise for September.
  • Finance: The stock market is way down, and inflation is way up. Getting divorced turns out to be really terrible for feeling financially stable. Uncertainty at work has made things significantly worse.
  • Life: Produce more. I’ve been blogging a bit more lately. I decided to keep going with HelloFresh– it’s much more expensive than I’d like, but it’s more enjoyable/rewarding than I expected.

Fitness – Mechanics

When you get right down to it, losing weight is simple (which is different than easy). Every pound of fat is 3500 calories. To lose two pounds of fat per week, burn 1000 calories more than you eat for each day of the week. There are two ways to do this: intentional eating, and increased exercise. I embarked upon both.

When I first moved into my new house, I designated the old dining room as a library and installed a sofa and three large bookshelves. But over the first year here, I found that I almost never used the room, so when I resolved to start working out, it was a natural place to put my fitness equipment. So my Library has become my Gym.

Over the last two years, I’ve accumulated a variety of equipment related to improving fitness:

…of these, I’d rate the Treadmill, Bike, Fitbit Watch, Paper Calendar, and Scale as the most important investments; everything else is a nice-to-have at best.

My Gym has the major advantage of being directly in the middle of my house, between my bedroom and my home office, so there’s simply no ignoring it on my morning “commute.”

Action shows with cliffhangers are a wonderful “nudge” to get on the exercise bike on days when I’m feeling on the fence and looking for excuses not to work out. I asked my Tweeps for suggestions on what TV shows I should watch and got a bunch of good suggestions. After I finished The Last Ship, I moved on to Umbrella Academy, then The Orville, and now I’m sweating my way through Ozark.

For running, I’m using the training programs on iFit. They’re expensive (hundreds of dollars a year) but for me, have proven entirely worth it. I’ve done training series in Costa Rica, the Azores, one-off 5K and 10K races all over, did a half-marathon (split over two days) in the shadow of Kilimanjaro, and on June 29th ran my first full half-marathon (in NYC). While it’s true that they sometimes feel a bit cheesy (having a recorded trainer who can’t see you cheering you on) it’s still quite motivational to have the ability to run in a new and exotic place at any hour of the day, in the comfort of my A/C with three different fans blowing at me. I got started slowly (various short walks with comedians, who were only slightly funny) and then ramped up into a weight-loss series with Chris and Stacie Clark. Leah Rosenfeld got me ready for my first 10K, and now I’m running with Knox Robinson.

Intentional Eating

A lot of getting in shape turns out to be mental, and on this front things have been going pretty well. While I’ve had a lot of stress in my life this year, much of it has been conducive to switching things up, and changing my diet and adding lots of exercise fits into that new approach.

  • I’ve stopped my longstanding practice of “magical eating” where I don’t look at the calorie counts for stuff I want to eat when I know it’s “bad.” Sometimes the number is not as bad as I think, sometimes I realize that there’s something else I’ll like more that’s not as bad, and sometimes I just think “I don’t really want this that bad” and go eat something healthy instead.
  • I rarely deny myself anything: I’ll just procrastinate or save something for a special occasion. “I’ll just have that later” is much easier on the willpower than “No.”
  • Eating unhealthy food less often results in greater pleasure (and quicker satisfaction) when you do indulge. One of my heroes describes this sort of change as stepping off of the hedonic treadmill.
  • Avoiding alcohol has allowed me to be a lot more intentional about what I choose to eat at night.
  • I hate wasting food, but I’ve stopped finishing whatever’s left on my kids’ plates when they’re done.
  • Cooking (HelloFresh) a few times per week makes it much simpler to keep meal calories under control– I pick lower calorie menus, and cut down on ingredients (like butter) that I don’t really care for. There’s no question that there’s an Ikea-effect at play– food that I prepare simply tastes better than it would if I didn’t make it myself.
  • Seeing progress week-to-week has been hugely motivating. Rigor in tracking has been really important in proving to myself that the choices I make day-to-day are inexorably reflected in my outcomes.

Progress

There are different ways to view progress.

Beyond letting my devices track my weight and workouts, I also have a paper calendar and a notepad; these are both a backup, and a tangible reminder of the progress I’ve made.

Bike, Treadmill and other workouts

Numbers don’t tell the whole story, of course. I can look at clothes that stopped fitting as I went from a 40″ to a 35″:

…or my profile (traced to avoid scarring you for life)

…or just poke my legs, which went from squishy to extremely firm.

What’s Next

In large part, I’ve achieved the first phase of my plan– proving to myself that eating well and exercising a lot determines the shape of my body. It seems idiotic to think otherwise, of course, but I assure you that at the start of this process I was skeptical of the strength of the relationship, and of my ability to control it.

For the next phase, I’d like to start adding more upper-body workouts (my reduced weight means I can do unassisted pull ups for the first time in decades) and continue to add more endurance workouts in preparation for an ambitious fitness and life adventure that I expect to commit to soon. [Update: ProjectK]

-Eric

Captive Portals

When you join a public WiFi network, sometimes you’ll notice that you have to accept “Terms of Use” or provide a password or payment to use the network. Your browser opens or navigates to a page that shows the network’s legal terms or web log on form, you fill it out, and you’re on your way. Ideally.

How does this all work?

Wikipedia has a nice article about Captive Portals, but let’s talk about the lower-level mechanics.

Operating Systems’ Portal Detection

When a new network connection is established, Windows will send a background HTTP request to www.msftconnecttest.com/connecttest.txt. If the result is an HTTP/200 but the response body doesn’t match the string the server is known to always send in reply (“Microsoft Connect Test”), the OS will launch a web browser to the non-secure HTTP URL www.msftconnecttest.com/redirect. The expectation is that if the user is behind a Captive Portal, the WiFi router will intercept these HTTP requests and respond with a redirect to a page that will allow the user to log on to the network. After the user completes the ritual, the WiFi router stores the MAC Address of the device’s network card to avoid repeating the dance on every subsequent connection.
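As a conceptual sketch (not the OS’s actual implementation), the detection logic amounts to something like the following, here in TypeScript assuming Node 18+ for the global fetch:

// A captive portal typically hijacks this non-secure request and returns
// something other than the canonical body; a mismatch implies a portal.
async function behindCaptivePortal(): Promise<boolean> {
  try {
    const resp = await fetch("http://www.msftconnecttest.com/connecttest.txt");
    const body = await resp.text();
    return !(resp.ok && body === "Microsoft Connect Test");
  } catch {
    return false; // No connectivity at all is a different failure mode.
  }
}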

This probing functionality is a part of the Network Connectivity Status Indicator feature of Windows, which will also ensure that the WiFi icon in your task bar indicates if the current connection does not yet have access to the Internet at large. Beyond this active probing behavior, NCSI also has a passive polling behavior that watches the behavior of other network APIs to detect the network state.

Other Windows applications can detect the Captive Portal State using the Network List Manager API, which indicates NLM_INTERNET_CONNECTIVITY_WEBHIJACK when Windows noticed that the active probe was hijacked by the network. Enterprises can reconfigure the behavior of the NCSI feature using registry keys or Group Policy.

On macOS computers, the OS offers a very similar active probe: a non-secure request to http://captive.apple.com is expected to always reply with “Success”.

Edge Portal Detection

Chromium includes its own Captive Portal detection logic whereby a probe URL is expected to return an HTTP/204 No Content response.

Edge specifies a probe URL of http://edge-http.microsoft.com/captiveportal/generate_204.

Chrome uses the probe URL http://www.gstatic.com/generate_204.

Avoiding HTTPS

Some Captive Portals perform their interception by returning a forged server address when the client attempts a DNS lookup. However, DNS hijacking is not possible if DNS-over-HTTPS (DoH) is in use. To mitigate this, the detector bypasses DoH when resolving the probe URL’s hostname.

Similarly, note that all of the probe URLs specify non-secure http://. If a probe URL started with https://, the WiFi router would not be able to successfully hijack it. HTTPS is explicitly designed to prevent a Monster-in-the-Middle (MiTM) like a WiFi router from changing any of the traffic, using cryptography and digital signatures to protect the traffic from modification. If a hijack tries to redirect a request to a different location, the browser will show a Certificate Error page indicating either that the router’s certificate is not trusted, or that the certificate the router used to encrypt its response does not match the hostname of the expected website (e.g. edge-http.microsoft.com).

This means, among other things, that new browser features that upgrade non-secure HTTP requests to HTTPS must not attempt to upgrade the probe requests, because doing so will prevent successful hijacking. To that end, Edge’s Automatic HTTPS feature includes a set of exclusions:

kAutomaticHttpsNeverUpgradeList {
    "msftconnecttest.com, edge.microsoft.com, "
    "neverssl.com, edge-http.microsoft.com" };

Unfortunately, this exclusion list alone isn’t always enough. Consider the case where a WiFi router hijacks the request for edge-http.microsoft.com and redirects it to http://captiveportal.net/accept_terms. The browser might try to upgrade that navigation request (which targets a hostname not on the exclusion list) to HTTPS. If the portal’s server doesn’t support HTTPS, the user will either encounter a Connection Refused error or an Untrusted Certificate error.

If a user does happen to try to navigate to a HTTPS address before authenticating to the portal, and the router tries to hijack the secure request, Chromium detects this condition and replaces the normal certificate error page with a page suggesting that the user must first satisfy the demands of the Captive Portal:

For years, this friendly design had a flaw– if the actual captive portal server specified a HTTPS log on URL but that log on URL sent an invalid certificate, there was no way for the user to specify “I don’t care about the untrusted certificate, continue anyway!” I fixed that shortcoming in Chromium v101, such that the special “Connect to Wi-Fi” page is not shown if the certificate error appears on the tab shown for Captive Portal login.

-Eric

Extending Fiddler’s ImageView

Fiddler’s ImageView Inspector offers a lot of powerful functionality for inspecting images and discovering ways to shrink an image’s byte-weight without impacting its quality.

Less well-known is the fact that the ImageView Inspector is very extensible, such that you can add new tools to it very simply. To do so, simply download any required executables and add registry entries pointing at them.

For instance, consider Guetzli, the JPEG-optimizer from the compression experts at Google. It aims to shrink JPEG images by 20-30% without impacting quality. The tool is delivered as a command-line executable that accepts an image’s path as input, generating a new file containing the optimized image. If you pass no arguments at all, you get the following help text:

To integrate this tool into Fiddler, simply:

  1. Download the executable (x64 or x86 as appropriate) to a suitable folder.
  2. Run regedit.exe and navigate to HKEY_CURRENT_USER\Software\Microsoft\Fiddler2\ImagesMenuExt\
  3. Create a new key with the caption of the menu item you’d like to add. (Optionally, add a & character before the accelerator key.)
  4. Create Command, Parameters and Types values of type REG_SZ. Read on for details of what to put in each.

Alternatively, you could use Notepad to edit an AddCommand.reg script and then double-click the file to update your registry:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Fiddler2\ImagesMenuExt\&JPEGGuetzli]
"Command"="C:\\YOURPATHToTools\\guetzli_windows_x86-64.exe"
"Parameters"="{in} {out:jpg}"
"Types"="image/jpeg"

When you’re done, the registry key should look something like:

After you’ve set up your new command, when you right-click on a JPEG in the ImageView, you’ll see your new menu item in the Tools submenu:

When you run the command, the tool will run and a new entry will be added to the Web Sessions list, containing the now optimized image:

Registry Values

The required Command entry points to the location of the executable on disk.

The optional Parameters entry specifies the parameters to pass to the tool. The Parameters entry supports two tokens, {in} and {out}. The {in} token is replaced with the full path to the temporary file Fiddler uses to store the raw image from the ImageView Inspector before running the tool. The {out} token is replaced with the filepath Fiddler requests the tool write its output to. If you want the output file to have a particular extension, you can specify it after a colon; for example {out:jpg} generates a filename ending in .jpg. If you omit the Parameters value, a default of {in} is used.
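For example, given the Guetzli registration shown earlier, Fiddler would expand the tokens into a command line along these lines (the temporary file paths here are purely illustrative):

"C:\YOURPATHToTools\guetzli_windows_x86-64.exe" "C:\Users\You\AppData\Local\Temp\fiddler-in-1234.jpg" "C:\Users\You\AppData\Local\Temp\fiddler-out-5678.jpg"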

The optional Types value limits the MIME types on which your tool is offered. For example, if your tool only analyzes .png files, you can specify image/png. You can specify multiple types by separating them with a comma, space, or semicolon.

The optional Options value enables you to specify either <stdout> or <stderr> (or both) and Fiddler will collect any textual output from the tool and show it in a message box. For instance, the default To WebP Lossless command sets the <stderr> option, and after the tool finishes, a dialog box is shown:

Inspect and optimize all the things!

-Eric

“Batteries-Included” vs “Bloated”

Fundamentals are invisible. Features are controversial.

One of the few common complaints against Microsoft Edge is that “It’s bloated– there’s too much stuff in it!”

A big philosophical question for designers of popular software concerns whether the product should include features that might not be useful for everyone or even a majority of users. There are strong arguments on both sides of this issue, and in this post, I’ll explore the concerns, the counterpoints, and share some thoughts on how software designers should think about this tradeoff.

But first, a few stories

  1. I started working in Microsoft Office back in 1999, on the team that was to eventually ship SharePoint. Every few months in the early 2000s, a startup would appear, promising a new office software package that’s “just the 10% of Microsoft Office that people actually use.” All of these products failed (for various reasons), but they all failed in part because their development and marketing teams failed to recognize a key fact: Yes, the vast majority of customers use less than 10% of the features of the Microsoft Office suite, but it’s a different 10% for each customer.
  2. I started building the Fiddler Web Debugger in 2003, as a way for my team (the Office Clip Art team) to debug client/server traffic between the Microsoft Clip Art client app and the Design Gallery Live webservice that hosted the bulk of the clipart Microsoft made available. I had no particular ambitions to build a general purpose debugger, but I had a problem: I needed to offer a simple way to filter the list of web requests based on their URLs or other criteria, but I really didn’t want to futz with building a complicated filter UI with dozens of comboboxes and text fields.

    I mused “If only I could let the user write their queries in code and then Fiddler would just run that!” And then I realized I could do exactly that, by embedding the JScript.NET engine into Fiddler. I did so, and folks from all over the company started using this as an extensibility mechanism that went far beyond my original plans. As I started getting more feature requests from folks interested in tailoring Fiddler to their own needs, I figured “Why not just allow developers to write their own features in .NET?” So I built in a simplistic extensibility model that allowed adding new features and tabs all over. Over a few short years, a niche tool morphed into a wildly extensible debugger used by millions of developers around the world.
  3. The original 2008 release of the Chrome browser was very limited in terms of features, as the team was heavily focused on performance, security, and simplicity. But one feature that seemed to get a lot of consideration early on was support for Mouse Gestures; some folks on the team loved gestures, but there was recognition that it was unlikely to be a broadly-used feature. Ultimately, the Chrome team decided not to implement mouse gestures, instead leaving the problem space to browser extensions.

    Years later, after Chrome became my primary browser, I lost my beloved IE Mouse Gestures extension, so I started hunting for a replacement. I found one that seemed to work okay, but because I run Fiddler constantly, I soon noticed that every time I invoked a gesture, it sent the current page’s URL and other sensitive data off to some server in China. Appalled at the hidden impact to my privacy and security, I reported the extension to the Chrome Web Store team and uninstalled it. The extension was delisted from the web store.

    Some time later, now back on the Edge team, a new lead joined and excitedly recommended we all try out a great Mouse Gestures extension for Chromium. I was disappointed to discover it was the same extension that had been removed previously, now with a slightly more complete Privacy Policy, and now using HTTPS when leaking users’ URLs to its own servers. (Update: As of 2023, Edge has built-in Mouse Gestures. Hooray!)

With these background stories in hand, let’s look at the tradeoffs.

“Bloated!”

There are three common classes of complaint from folks who point at the long list of features Edge has added over upstream Chromium and furiously charge “It’s bloated!”:

  1. User Experience complexity
  2. Security/reliability
  3. Performance

UX Complexity

Usually when you add a feature to the browser, you add new menu items, hotkeys, support articles, group policies, and other user-visible infrastructure to support that feature. If you’re not careful, it’s easy to accidentally break a user’s longstanding workflows or muscle memory.

One of the Windows 7 Design Principles was “Change is bad, unless it’s great!” and that’s a great truth to keep in mind– entropy accumulates, and if you’re not careful, you can easily make the product worse. Users don’t like it when you move their cheese.

A natural response to this concern is to design new features to be unobtrusive, by leaving them off-by-default, or hiding them away in context menus, or otherwise keeping them out of the way. But now we’ve got a problem– if users don’t even know about your feature, why bother building it? If potential customers don’t know that your unique and valuable features exist, why would they start using your product instead of sticking with the market leader, even if that leader has been stagnant for years?

Worse still, many startups and experiments are essentially “graded” based on the number of monthly or daily active users (MAU or DAU)– if a feature isn’t getting used, it gets axed or deprioritized, and the team behind it is reallocated to a more promising area. Users cannot use a feature if they haven’t discovered it. As a consequence, in an organization that lacks powerful oversight there’s a serious risk of tragedy, whereby your product becomes a sea of banners and popups each begging the user for attention. Users don’t like it when they think you’re distracting them from the cheese they’ve been enjoying.

Security/reliability risk

Engineers and enthusiasts know that software is, inescapably, never free of errors, and intuitively it seems that every additional line of code in a product is another potential source of crashes or security vulnerabilities.

If software has an average of, say, two errors per thousand lines of code, adding a million lines of new feature code mathematically suggests there are now two thousand more bugs that the user might suffer.

If users have to “pay” for features they’re not using, this feels like a bad deal.

Performance

Unlike features, Performance is one of the “Universal Goods” in software– no user anywhere has ever complained that “My app runs too fast!” (with the possible exception of today’s gamers trying to use their 4GHz CPUs to run retro games from the 1990s).

However, we users also note that, even as our hardware has gotten thousands of times faster over the decades, our software doesn’t seem to have gotten much faster at all. Much like our worry about new features introducing code defects, we also worry that introducing new features will make the product as a whole slower, with higher CPU, memory, or storage requirements.

Each of these three buckets of concerns is important; keep them in mind as we consider the other side.

Batteries Included!

We use software to accomplish tasks, and features are the mechanism that software exposes to help us accomplish our tasks.

We might imagine that ideal software would offer exactly and only the features we need, but this is impractical. Oftentimes, we may not recognize the full scope of our own needs, and even if we do, most software must appeal to broad audiences to be viable (e.g. the “10% of Microsoft Office” problem). And beyond that, our needs often change over time, such that we no longer need some features but do need other features we didn’t use previously.

One school of thought suggests that the product team should build a very lightweight app with very few features, each of which is used by almost everyone. Features that will be used by fewer users are instead relegated to implementation via an extensibility model, and users can cobble together their own perfect app atop the base.

There’s a lot of appeal in such a system– with less code running, surely the product must be more secure, more performant, more reliable, and less complex. Right?

Unfortunately, that’s not necessarily the case. Extension models are extremely hard to get right, because until you build all of the extensions, you’re not sure that the model is correct or complete. If you need to change the model, you may need to change all of the extensions (e.g. witness the painful transition from Chromium’s Manifest v2 to Manifest v3).

Building features atop an extension model sometimes entails major performance bugs, because data must flow through more layers and interfaces, and if needed events aren’t exposed, you may need to poll for updates. Individual extensions with common needs may have to do redundant work (e.g. each extension scanning the full text of each loaded page, rather than the browser itself scanning the whole thing just once).

As we saw with the Mouse Gestures story above, allowing third-party extensions carries along with it a huge amount of complexity related to security risk and misaligned incentives. In an adversarial ecosystem where good and bad actors both participate, you must invest heavily in security and anti-abuse mechanisms.

Finally, regression testing and prevention gets much more challenging when important features are relegated to extensions. Product changes that break extensions won’t block the commit queue, and the combinatorics involved in testing with arbitrary combinations of extensions quickly hockey sticks upward to infinity.

Extensions also introduce complexity in the management and update experience, and users might miss out on great functionality because they never discovered that an extension exists to address a need they have (or didn’t even realize they have). You’d probably be surprised by the low percentage of users that have any browser extensions installed at all.

With Fiddler, I originally took the “Platform” approach where each extra feature was its own extension. Users would download Fiddler, then download four or five other packages after/if they realized such valuable functionality existed. Over time, I realized that nobody was happy with my Ikea-style assemble-your-own debugger, so I started shipping a “Mondo build” of Fiddler that just included everything.

Extensions, while useful, are no panacea.

Principles

These days, I’ve come around to the idea that we should include as many awesome features as we can, but we should follow some key principles:

  • To the extent possible, features must be “pay to play.” If a user isn’t using a given feature, it should not impact performance, security, or reliability. Even small regressions quickly add up.
  • Don’t abuse integration to avoid clean layering and architecture. Just because your feature’s implementation can go poke down into the bowels of the engine doesn’t mean it should.
  • Respect users and carefully manage UX complexity. Remember, “change is bad unless it’s great.” Invest in systems that enable you to only advertise new features to the right users at the right time.
  • Remove failed experiments. If you ship a feature and find that it’s not meeting your goals, pull it out. If you must accommodate a niche audience that fell in love, consider whether an extension might meet their needs.
  • Find ways to measure, market, and prioritize investments in Fundamentals. Features usually hog all the glory, but Fundamentals ensure those features have the opportunity to shine.

-Eric

Chromium Startup

This morning, a Microsoft Edge customer contacted support to ask how they could launch a URL in a browser window at a particular size. I responded that they could simply use the --window-size="800,600" command line argument. The customer quickly complained that this only seemed to work if they also specified a non-default path in the --user-data-dir command line argument, such that the URL opened in a different profile.

As I mentioned back in my post about Edge Command Line Arguments, most command-line arguments are ignored if there’s already a running instance of Edge, and this is one of them. Even if you also pass the --new-window argument, Edge simply opens the URL you’ve supplied inside a new window with the same size and location as the original window.

Now, in many cases, this is a reasonable limitation. Many of the command line arguments you can pass into Chromium have a global impact, and having inbound arguments change the behavior of an already-running browser instance would introduce an impractical level of complexity to the code and user experience. Similarly, there’s an assumption in the Chromium code that only one browser instance will interact with a single profile folder at one time, so we could not simply allow multiple browser instances with different behaviors to use a single profile in parallel.

In this particular case, it feels reasonable that if a user passes both --new-window and either --window-size or --window-position (or all three), the resulting window will have the expected dimensions, even if there was already one or more browser windows open in the browsing session. Because the arguments do not impact more than the newly-created window, there’s none of the complexity of trying to change the behavior of any other part of the already-running browser. I filed a bug suggesting that we ought to look at enabling this.

In the course of investigating this bug, I had the opportunity to learn a bit more about how Chromium handles the invocation of a URL when there’s already a running instance. When it first starts, the new browser process searches for an existing hidden message window of class Chrome_MessageWindow whose caption matches the user-data-dir of the new process.

If it fails to find one, it creates a new hidden messaging window (using a mutex to combat race conditions) with the correct caption for any future processes to find.

However, if the code did find an existing messaging window, there’s a call to the AttemptToNotifyRunningChrome function that sends a WM_COPYDATA message to pass along the command line from the (soon-to-exit) new process.
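Conceptually, the handshake resembles this Win32 sketch (simplified; not the actual Chromium code, and the timeout constant is arbitrary):

#include <windows.h>
#include <string>

// Find the existing browser's hidden message-only window (keyed by the
// profile directory) and forward our command line to it via WM_COPYDATA.
bool NotifyRunningBrowser(const std::wstring& user_data_dir,
                          const std::wstring& command_line) {
  HWND remote = FindWindowExW(HWND_MESSAGE, nullptr, L"Chrome_MessageWindow",
                              user_data_dir.c_str());
  if (!remote)
    return false;  // No existing instance; we get to be the browser process.

  COPYDATASTRUCT cds = {};
  cds.cbData = static_cast<DWORD>((command_line.size() + 1) * sizeof(wchar_t));
  cds.lpData = const_cast<wchar_t*>(command_line.c_str());

  // SMTO_ABORTIFHUNG keeps us from blocking forever on a hung browser.
  DWORD_PTR result = 0;
  return SendMessageTimeoutW(remote, WM_COPYDATA, 0,
                             reinterpret_cast<LPARAM>(&cds),
                             SMTO_ABORTIFHUNG, 20000, &result) != 0;
}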

In the unlikely event that the existing process fails to accept the message (e.g. because it is hung), the user will be prompted to kill the existing process so that the new process can handle the navigation.

This code is surprisingly simple, and feels very familiar– the startup code inside Fiddler is almost identical except it’s implemented in C#.

In our customer scenario, we see that the existing browser instance correctly gets the command line from the new process:

[11824:4812:0615/092252.689:startup_browser_creator.cc(1391)] ProcessCommandLineAlreadyRunning "C:\src\c\src\out\default\chrome.exe" --new-window --window-size=400,510 --window-position=123,34 --flag-switches-begin --flag-switches-end example2.com

And shortly after that, there’s the expected call to the UpdateWindowBoundsAndShowStateFromCommandLine function that sets the window size and position from any arguments passed on the command line.

The stack trace of that call looks like:

-chrome::internal::UpdateWindowBoundsAndShowStateFromCommandLine
-chrome::GetSavedWindowBoundsAndShowState
-BrowserView::GetSavedWindowPlacement

Unfortunately, when we look at the GetSavedWindowBoundsAndShowState function we see the problem:

void GetSavedWindowBoundsAndShowState(const Browser* browser,
                                      gfx::Rect* bounds,
                                      ui::WindowShowState* show_state) {
  //...
  const base::CommandLine& parsed_command_line =
      *base::CommandLine::ForCurrentProcess();

  internal::UpdateWindowBoundsAndShowStateFromCommandLine(
      parsed_command_line, bounds, show_state);
}

As you can see, the call passes the command line string representing the current (preexisting) process, rather than the command line that was passed from the newly started process. So, the new window ends up with the same size and position information from the original window.

To fix this, we’ll need to restructure the calls such that when we’re handling a command line passed to us from another process through ProcessCommandLineAlreadyRunning, we use the WM_COPYDATA-passed command line when setting the window size and position.
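One hypothetical shape for that fix is to plumb the forwarded command line down as a parameter instead of re-reading the current process’s own (a sketch, not the shipped change):

void GetSavedWindowBoundsAndShowState(const Browser* browser,
                                      const base::CommandLine& command_line,
                                      gfx::Rect* bounds,
                                      ui::WindowShowState* show_state) {
  //...
  // Use the command line we were handed (possibly forwarded via WM_COPYDATA
  // from a newly-launched process), not CommandLine::ForCurrentProcess().
  internal::UpdateWindowBoundsAndShowStateFromCommandLine(command_line, bounds,
                                                          show_state);
}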

-Eric

Microsoft Edge Tips and Tricks

Last Updated: June 3, 2022. The intent of this post is to capture a list of non-obvious features of the browser that might be useful to you.


Q: How do I find the tab playing audio? It’s cool that Microsoft Edge shows the volume icon in the tab playing music and I can click to mute it:

…but what if I have a bunch of Edge windows? I have to go into each window to find the icon?

A: The Ctrl+Shift+A hotkey is your friend. It will show your open tabs to allow you to search across them, and those playing audio/video are listed in a group at the top:


Q: How can I move a few tabs out of the current window?

A: You can simply drag the tab’s button/title out of the tab strip to move it to a new window. Less obviously, you can Ctrl+Click *multiple* tabs and drag your selections out into a new window (unselected tabs temporarily dim). Use Shift+Click if you’d prefer to select a range of tabs.


Q: How can I duplicate a tab?

A: Hit Ctrl+Shift+K or use the “Duplicate Tab” command on the tabstrip’s context menu to duplicate the current tab. If you have a middle-mouse button, middle-click the Refresh button.

Less obviously, you can Ctrl+Click the back or forward arrow buttons to open the previous or next entry in the history in a new tab, or you can Shift+Click the buttons to open the page in a new window.


Q: How can I get back a tab I accidentally closed?

A: Hit Ctrl+Shift+T or use the “Reopen closed” option on the tabstrip’s context menu shown on right-click.

You also might be interested in the “Ask before closing a window with multiple tabs” option available inside the edge://settings page:


Q: On a desktop mouse, is middle-click useful for anything?

A: Middle-click a link to open it in a new tab. Middle-click a tab title button to close that tab rather than hunting for its [x] icon. Middle-click the refresh button to duplicate the tab.


Q: How can I easily open a given site in a different profile?

A: You can right-click a link in a page and choose “Open as” to open that link in a different profile:

If you already have the desired page open, you can right-click the tab title button and choose “Move Tab To” and pick the desired profile:

Move the current tab to a different profile

You can also use the options at edge://settings/profiles/multiProfileSettings to open particular sites using a particular profile, useful for splitting your “Work Sites” from your “Life Sites” and your “Ephemeral sites“.

Open AzDo in the Work Profile, and Hotmail in my Personal Profile

Q: How can I make any site act more like an “App” with its own window that isn’t cluttered with other tabs?

A: You can use the --app=url command line argument to give any site its own standalone window that does not mix with your other sites. For example, if you run msedge.exe --app=https://outlook.live.com, the result looks like this:

This works great with command launchers like SlickRun, because you can then just type e.g. Mail to launch the standalone web app.


You might also enjoy this collection of not-so-frequently-asked questions about Edge.