Adding Protocol Schemes to Chromium

Previously, I’ve written a lot about Application Protocols, a simple and popular mechanism for browsers to send a short string of data out to an external application for handling. For instance, mailto is a common example of a scheme treated as an Application Protocol; if you invoke a mailto URL, the browser converts it into an OS execution of whatever handler is registered for that scheme.
Application Protocols are popular because they are simple, work across most browsers on most operating systems, and because they can be added by 3rd parties without changes to the browser.
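For concreteness, here’s a sketch of how a third party typically registers an Application Protocol on Windows; the scheme name, application, and path below are hypothetical:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\myScheme]
@="URL:myScheme Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myScheme\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""
```

The empty "URL Protocol" value is what marks the key as a protocol handler rather than a file type; the browser substitutes the full URL for the %1 placeholder when launching the handler.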

However, Application Protocols have one crucial shortcoming– they cannot directly return any data to the browser itself. If you wanted to do something like:

<img src='myScheme://photos/mypic.png' />

… there’s no straightforward way for your application protocol to send data back into the browser to render in that image tag.

You might be thinking: “Why can’t I, a third party, simply provide a full implementation of a protocol scheme, such that my object gets a URL, and it returns a stream of bytes representing the data from that URL, just like HTTP and HTTPS do?”

Asynchronous Pluggable Protocols

Back in the early days of Internet Explorer (1990s), the team didn’t know what protocols would turn out to be important. So, they built a richly extensible system of “Asynchronous Pluggable Protocols” (APP) which allowed COM objects to supply a full implementation of a protocol. The browser, upon seeing a URL (“moniker”) would parse the URL Scheme out, then “bind” to the APP object and send/receive data from that object. This allowed Internet Explorer to handle URLs in an abstract way and support a broad range of protocols (e.g. ftp, file, gopher, http, https, about, mailto, etc).

In many cases, we think only about receiving data from a protocol, but it’s important to remember that you can also send data (beyond the url) to a protocol; consider a file upload form that uses the POST method to send a form over HTTPS, for example.

Writing an APP was extremely challenging, and very risky– because APPs are exposed to the web, a buggy APP could be exploited by any webpage, and thanks to the lack of sandboxing in early IE, would usually result in full Remote Code Execution and compromise of the system. Beyond the security concerns, there were reliability challenges as well– writing code that would properly handle the complex threading model of a browser downloading content for a web page was very difficult, and many APP implementations would crash or hang the browser when conditions weren’t as the developer expected.

Despite the complexity and risk, writing APPs provided Internet Explorer with unprecedented extensibility power. “Back in the day” I was able to do some fun things, like add support for data: URLs to IE7 before the browser itself got around to supporting such URLs.

Understanding Custom Schemes

Sending a URL into an APP object and getting bytes back from a stream is only half of the implementation challenge, however.

The other half is figuring out how the rest of the browser and web platform should handle these URLs. For Internet Explorer, we had a mechanism that allowed the host (browser) to query the protocol about how its URLs should be handled. The IInternetProtocolInfo interface allowed the APP’s code to handle the comparison and combination of URLs using its scheme, and allowed the code to answer questions about how web content returned from the URL should behave.

For instance, to fully support a scheme, the browser needs to be able to answer questions like:

  1. Is this scheme “standard” (allowing default canonicalization behaviors like removing \..\ sequences), or “opaque” (in which other components cannot safely touch the URL)?
  2. Is this scheme “secure” (allowed in HTTPS pages without mixed-content warnings, permitted to use WebAPIs that require a secure context, etc.)?
  3. Does this scheme participate in CORS?
  4. Does this scheme get sent as a referrer?
  5. Is this scheme allowed from Sandboxed frames?
  6. Can top-level frames be navigated to this scheme?
  7. Can such navigations only occur from trusted contexts (app/omnibox) or is JavaScript allowed to invoke such navigations?
  8. How do navigations to these URLs interact with all of the other WebNavigation/WebRequest extensibility APIs?
  9. How does the scheme interact with the sandbox? What process isolation is used?
  10. What origin is returned to web content running from the scheme?
  11. How does content from the scheme interact with the cookie store?
  12. How does it interact with CSP?
  13. How does it interact with WebStorage?
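The first question on that list is easy to observe from JavaScript: the WHATWG URL parser (which Chromium’s canonicalizer broadly follows) fully canonicalizes “special” schemes like http, but treats an unknown scheme without an authority as having an opaque path that other components must not touch. A quick Node.js sketch:

```javascript
// A "standard"/special scheme gets an authority and dot-segment removal,
// even when the author omitted the slashes:
const standard = new URL("http:example.com/a/../b");
console.log(standard.href); // canonicalized to "http://example.com/b"

// An unknown scheme is opaque: the text after the colon is left untouched:
const opaque = new URL("myscheme:example.com/a/../b");
console.log(opaque.href); // "myscheme:example.com/a/../b"
```

A browser adding a new scheme has to decide which of these behaviors it wants, which is exactly why the scheme must be registered on the appropriate internal lists.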

Implementing Protocols in Chromium

Unlike Internet Explorer, Chromium does not offer a mechanism for third-party extensibility of its protocols; the browser itself must have support for a new protocol compiled in. A subclassed URLLoaderFactory (e.g. about, blob) invokes the correct url_loader implementation that returns the data for the response.

Chromium doesn’t have an analogue for the IInternetProtocolInfo interface for protocol implementors to implement; instead, the scheme must be manually added to each of the per-behavior lists of schemes hardcoded into Chromium.

Debugging Compatibility in Edge


By moving from our legacy codebase to Chromium, the Microsoft Edge team significantly modernized our browser and improved our compatibility with websites. As we now share the vast majority of our web platform code with the market-leading browser, it’s rare to find websites that behave differently in Edge when compared to Chrome, Brave, Opera, Vivaldi, etc. Any time we find a behavioral delta, we are keen to dive in and understand why Edge behaves differently. Sometimes, our investigation will reveal a behavior gap caused by a known and deliberate difference (e.g. we add an Edg/ token to our User-Agent string), but in the most interesting cases, something unexpected is found.

Yesterday, I came across an interesting case. In this post, I’ll share both how I approached root causing the difference, and explore how it can be avoided.

The Customer’s Issue

The Microsoft Edge team works hard to ensure that Enterprises can move fearlessly to Edge, both from legacy IE/Spartan, and from other browsers.

One such Enterprise recently contacted us to report that one of their workflows did not work as expected in Edge 96, noting that the flow in question works fine in both Google Chrome and Apple Safari.

Common causes (e.g. a different User-Agent header, Tracking Prevention set to defaults) were quickly ruled out as root causes: you can easily run Edge with a different User-Agent header (via the --user-agent command line argument or via the Emulation feature in the F12 Developer Tools) and Tracking Prevention reports any blocked storage or network requests to the F12 Console.

The customer’s engineers noted that the problem seemed to be that Edge was failing to send a cross-origin fetch request– NetLogs of the flow revealed that the fetch request began, but never completed. As a result of failing to send the request, a WebAPI was not invoked, and the user was unable to change the state of the web application.

This was a super-interesting finding because Edge shares essentially all of its code in this area with upstream Chromium.

The customer shared their NetLogs with us, and I took a shot at analyzing the traffic captured within. Unfortunately, the results were inconclusive: it was easy to see that the request had indeed not been sent, but it was not clear why.

Here’s a snippet of the log for the attempt to call the API. We see that the request was a POST to an invokeAPI page on a different server, and because of the request’s Content-Type (application/json) the browser was required to perform a CORS preflight request before sending the POST to the remote server.

t=9118 [st=0] +CORS_REQUEST  [dt=175]
  --> cors_preflight_policy = "consider_preflight"
  --> headers = "...content-type: application/json\r\n..."
  --> is_external_request = false
  --> is_revalidating = false
  --> method = "POST"
  --> url =
             --> preflight_required = true
             --> preflight_required_reason ="disallowed_header"
             --> status = "miss"
             --> source_dependency = 146 (URL_REQUEST)
t=9293 [st=175] -CORS_REQUEST
t=9411 [st=293]  CORS_PREFLIGHT_RESULT
   --> access-control-allow-headers = "content-type"
   --> access-control-allow-methods = "GET,HEAD,OPTIONS,POST"

Crucially, we see that at 9293 milliseconds (175ms after the request process began), the request was aborted (-CORS_REQUEST). Not until 118 milliseconds later did the response to the preflight (CORS_PREFLIGHT_RESULT) come back saying, in effect, “Sure, I’m okay with accepting a request with that Content-Type and the POST method.” But it was too late– the request was already cancelled. The question is: “Why did the request get cancelled?”
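For context, here’s the shape of the request in play (the endpoint below is hypothetical). Because application/json is not one of the CORS-safelisted Content-Type values, the browser must receive a successful preflight response before it is allowed to send the POST itself:

```javascript
// A cross-origin POST with a JSON Content-Type is not a CORS "simple
// request". The URL here is a placeholder; the body is omitted for brevity.
const request = new Request("https://api.example.invalid/invokeAPI", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
});

// Only text/plain, application/x-www-form-urlencoded, and
// multipart/form-data are CORS-safelisted Content-Types; anything else
// forces an OPTIONS preflight, matching the NetLog's
// preflight_required_reason = "disallowed_header".
console.log(request.method);                       // "POST"
console.log(request.headers.get("content-type"));  // "application/json"
```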

A quick look at the webapp’s JavaScript didn’t show any explicit code that would cancel the fetch(). My hunch was that the request got cancelled because the fetch() call was in a frame that got navigated elsewhere before the request completed, but in this complicated application, network requests were constantly firing. That made it very difficult to prove or disprove my theory from looking at the Network Traffic alone. I tried looking at an edge://tracing log supplied by the customer, but the application was so complicated that I couldn’t make any sense of it.

And the larger question loomed: Even if teardown due to navigation is the culprit, why doesn’t this repro in Chrome?

The overall workflow involves several servers, including a SharePoint site, Microsoft Outlook Online, and custom logic running on Microsoft Azure. There didn’t seem to be any good way for us to emulate this workflow, but fortunately, the customer was willing to provide us with credentials to their test environment.

I started by confirming the repro in both Edge Stable and Canary, and confirming that the problem did not repro in either Chrome Stable or Canary. As this web application used a ServiceWorker, I also confirmed that unregistering the worker (using the Application tab of the F12 DevTools) did not impact the repro.

With this confirmation in hand, I then aimed to test my hunch that navigation was the root cause of the WebAPI fetch request not being sent.

A New Debugging Tool

One significant limitation of NetLogs is that, while the URL_REQUEST entries in the log have tons of information about the content and network behavior of the request, the NetLogs don’t have much context about why a given request was made, or where its result will be used by the web application.

Fortunately, Chromium’s rich extensibility model surfaces events for both Navigations and Network Requests, including (crucially) information about which browser tab and frame generated the event.

As a part of my recent NativeMessaging Debugger project, I built a simple sample extension which watches for navigations and sends information out to a log window. For this exercise, I updated it to also watch for WebRequests. With this new logging in place, I reproduced the scenario, and it was plain that the fetch() call to the WebAPI was being cancelled due to navigation:

14:26:40:7164 - {"event":"navigation","destination":https://customer/MyTasks.aspx,"tabId":60,"frameId":0,"parentFrameId":-1,"timeStamp":1642624000710.5059}

14:26:40:7194 - {"event":"webRequest","method":"POST","url":,"tabId":60,"frameId":0,"parentFrameId":-1,"timeStamp":1642624000716.684,"type":"xmlhttprequest"}

14:26:40:7254 - {"event":"webRequest","method":"GET","url":https://customer/MyTasks.aspx,"tabId":60,"frameId":0,"parentFrameId":-1,"timeStamp":1642624000722.692,"type":"main_frame"}

The first event shows a top-level navigation (tabId:60, frameId:0) begin to MyTasks.aspx, although the navigation’s request hasn’t hit the network yet.

Then, six milliseconds later, we see the WebAPI’s fetch() call begin.

Six milliseconds after that, we see the network request for the navigation get underway, effectively racing against the outstanding WebAPI call.

From our NetLog, we know the navigation wins– the WebAPI’s fetch() preflight alone took 293ms to return, and the overall fetch request was aborted after just 175ms, right after the navigation’s request got back a set of HTTP/200 response headers.

So, my hunch is confirmed: The API call was aborted because the script context in which it was requested got torn down.

I was feeling pretty smug about my psychic debugging success but for one niggling thought: Why was this only happening in Edge?

Maybe this is happening because Chrome has some experiment running that changes the behavior of CORS or the JavaScript scheduler? I checked chrome://version/?show-variations-cmd and searched for any likely experiment names. Nothing seemed very promising, although I tried using the --enable-features command line argument to turn on a few flags in Edge to see if they mattered. No dice — Edge still repro’d the issue. I tried performing the repro in my local version of Chromium, which doesn’t run with any server-controlled flags. No dice — Chromium didn’t repro the issue. I even tried running a bisect against Chromium to see if any older Chromium build has this problem. Nope, going all the way back to Chrome 60, the workflow ran just fine.

I wanted to try bisecting against Edge, but unfortunately this isn’t simple to do without being set up with an Edge development environment (I develop directly against upstream Chromium). However, I did have a standalone copy of Edge 91 handy, and the problem repro’d there as well, so at least this wasn’t a super-recent regression.

I don’t like unsolved mysteries, but by this point I at least knew how the customer could fix this on their side. Their code for submitting the form looked like this:

  t.prototype.Submit = function() {
   this.callAPI(, t),
   alert("Success"),
   setTimeout(function() {console.log("Wait 1sec")},1000),
   location.href = n + "/MyTasks.aspx"
  }

  t.prototype.callAPI = function(e, t) {
   var n = JSON.stringify({data: e}), o = new Headers;
   o.append("Content-type", "application/json");
   var i = {body: n, headers: o};"", i).then(function(e)
          {return console.log("API call complete"), e.json()})
  }
As you can see, the Submit() function calls the callAPI() function to send the POST request, then shows an alert saying “Success”. Script execution pauses while the alert dialog is showing. When the user clicks “OK” in the alert(), a setTimeout call queues a callback with a one second delay, then navigates the top-level page to MyTasks.aspx.

This looks like a mistake– the location.href= call was probably meant to be inside the setTimeout callback; otherwise, that callback probably will never run because the page will have been torn down by the navigation before the callback’s console.log statement runs.

But more broadly, this code seems fundamentally wrong: We’re telling the user that the operation yielded “Success”, but we never actually waited for the WebAPI call to succeed— we just fired it off and then immediately showed the “Success” popup. The alert() and location.href='...' lines should probably be inside the then block on the .post call, and there should probably be error handling code for the case where that post call returned an error for any reason.
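Here’s a rough sketch of that restructuring. The callAPI stand-in below simply returns a resolved promise (the real one would return its call), and the browser-only lines are shown as comments:

```javascript
// Sketch of the corrected ordering: report success and navigate only after
// the POST actually resolves, and surface failures instead of dropping them.
function callAPI(data) {
  // In the real app, this would return the promise from the call
  // rather than discarding it. A resolved value stands in here.
  return Promise.resolve({ status: 200 });
}

function submit() {
  return callAPI({ done: true })
    .then(function (response) {
      console.log("API call complete");
      // alert("Success");                    // success is only certain now
      // location.href = n + "/MyTasks.aspx"; // navigate after the API call
      return response.status;
    })
    .catch(function (err) {
      console.error("API call failed:", err); // error handling was missing
      return -1;
    });
}

submit(); // logs "API call complete"
```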

I wrote up this suggestion and sent it off to the customer– this should resolve their issue in Edge, and it’s more logical and correct in every browser.

And yet… why is Edge different?

I went back to look at the event logs from my repro again. And I noticed something crucial: in Chrome, the wrBeforeRequest event signaling the start of the WebAPI call appeared before the alert dialog box, and the wrResponseHeaders event appeared while the alert box was up (3 seconds later). Only after I clicked OK on the dialog box (32 seconds after that) did the navigation event occur:

In contrast, in Edge, the alert dialog box appears before the wrBeforeRequest event– in fact, the wrBeforeRequest event isn’t even seen until after clicking OK to dismiss the dialog. At that point, the WebAPI and Navigation requests race, and the WebAPI request will almost always lose the race.

Ah ha! So we’re getting closer. Edge is failing because its fetch call is getting blocked until dismissal of the modal alert. That’s weird. Maybe it’s something about this site’s JavaScript? I tried creating a minimal repro where a fetch() would get kicked off asynchronously and then an alert() would immediately follow. In Chrome, the fetch ran as expected, while in Edge, the fetch blocked on the user clicking OK.

Now we’re cooking with gas. A working minimal repro dramatically increases the odds of solving any mystery, because it makes testing theories much simpler, and roping in other engineers much cheaper– instead of asking them to perform complicated repro steps that may take tens of minutes, you can say “Click this link. Push the button. See the bug. Weird, right?”

There’s been quite a lot of recent discussion about the Chrome team’s concerns about the 3 Web Platform modal dialog primitives (alert(), prompt(), and confirm()) and how they interact with the JavaScript event loop. Furious webdevs complain “If it ain’t broke, don’t fix it” and the web platform folks retort: “Oh, it’s very much broke, and we need to fix it!”.

Perhaps there’s some related change that upstream is experimenting with? I spent another fruitless hour looking into their experimental configuration and ours, and bothering Edge’s alert() owners to find out if perhaps we might have changed anything about our alert() implementation for accessibility reasons or the like. No dice.

I wrote up my latest findings and sent them off to my engineers. With a reduced repro in hand, our Dev Manager popped up the edge://tracing tool and had a look around. He replied back: “I suspect it has something to do with an Edge-specific throttle. Specifically, I see a WebURLLoader::Context::Start hitting a mojom.SafeBrowsingUrlChecker call that I don’t see in Chrome.”

For context: Chromium supports the notion of throttles which allow a developer to react to events as the user browses around. For instance, throttles for resource loaders allow you to block and modify requests, which browser developers use to perform reputation scans for SafeBrowsing or SmartScreen, for example. NavigationThrottles allow developers to redirect or cancel navigations, and so on.

Microsoft Edge uses a throttle to implement our Tracking Prevention feature.

Way back in December 2019, one of our engineers noted that a test case behaved differently in Edge vs. Chrome. That test case was a page with three XHRs: two asynchronous and one synchronous. In Chrome, the two async calls ran in parallel with the sync call, while in Edge, the two async calls were blocked upon completion of the one sync call. The root cause was related to the Tracking Prevention throttle needing to perform a cross-process communication to check whether the XHRs’ url targets were potentially tracking servers. Even though synchronous XHRs are evil and need to die, we still fixed the testcase by updating the throttle so that it doesn’t run on same-origin requests (because they cannot be, by definition, 3rd party trackers).

In this week’s repro, the fetch call is async, but alert() is not– it’s a synchronous API that blocks JavaScript. And the WebAPI is to a 3rd-party URL, so the 2019 fix doesn’t prevent the throttle from running.

One interesting finding during the 2019 investigation was that turning off Tracking Prevention in edge://settings does not actually disable the throttle– it simply changes the throttle to not block any requests. With this recollection, I used the command line argument to disable the throttle entirely:
msedge.exe --disable-features=msEnhancedTrackingPreventionEnabled

…and confirmed that the minimal test page now behaves identically in Edge vs. Chrome.

We’ve filed a bug against the throttle to figure out how to address this case, but the recommendation to the customer stands: the current logic popping the alert() box ought to change such that either there’s no alert(), or it only shows success after the fetch() call actually succeeds.


Recognizing Edge Windows

Yesterday, we had a customer reach out to us for help on an issue they’d encountered while writing code to interact with Microsoft Edge windows. Their script enumerated every window in the system, looking for those with Microsoft Edge in the titlebar. They were surprised to discover that the script didn’t recognize any of their browser windows, despite the fact that they could plainly see the product’s name in several windows on the taskbar and ALT+Tab overlay.

Weird, right?

After investigating further, the customer realized that the Edge window titles contained a Zero Width Space (U+200B) Unicode character immediately after the word Microsoft and before the regular space character preceding the word Edge.

“What possible use could that have?” the customer wondered.

When I started looking into this, I assumed it was simply a mistake, whereby someone had accidentally copied the invisible space into the IDS_BROWSER_WINDOW_TITLE_FORMAT resource within Edge’s version of Chromium. After all, if regular whitespace is a menace, invisible whitespace is at least doubly-so.

However, when I saw the source code, I realized that the developer definitely put it there on purpose:

As you can see, the zero-width space is fully visible in the source, HTML-encoded as the character reference &#8203;.

Q: Why on earth would we do that?

A: For the same reason we do almost every wacky, weird, or inexplicable thing: Compatibility.

Investigation revealed that this character was added precisely to cause existing 3rd-party software not to recognize Microsoft Edge windows. It turns out that there’s a very popular touchpad driver that applies special scrolling behavior for the (now defunct) Microsoft Edge Legacy (Spartan) browser, and this code doesn’t behave properly in the new Chromium-based Microsoft Edge. The touchpad’s software wasn’t doing any additional validation of the window’s owning process executable name or similar to limit its scope. So, the only straightforward way to prevent it from breaking Edge was to apply this trick. We filed a bug to eventually remove the character after the touchpad’s code is fixed.

If you’re writing an AutoHotkey script or other code to try to interact with Edge’s windows based on their window title, you’ll need to account for this invisible space.
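A short sketch of the workaround; the regular expression strips the common zero-width characters before matching:

```javascript
// What window-enumeration code actually sees: a U+200B after "Microsoft".
const title = "Microsoft\u200B Edge";

// Naive matching fails because of the invisible character:
console.log(title.includes("Microsoft Edge")); // false

// Stripping zero-width spaces/joiners and the BOM makes matching reliable:
const normalized = title.replace(/[\u200B-\u200D\uFEFF]/g, "");
console.log(normalized === "Microsoft Edge");  // true
```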


Trim Your Whitespace

Leading and trailing whitespace are generally invisible. Humans are bad at dealing with things they can’t see.

If your system accepts textual codes, or any other human-generated or human-mediated input, you should trim whitespace, whether it’s leading, trailing, or inline (if not meaningful).

// Trim leading and trailing whitespace
$('inputCode').value = $('inputCode').value.trim();
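And where inner whitespace carries no meaning (confirmation codes, license keys pasted from an email), collapsing every whitespace run goes a step further than trim():

```javascript
// \s matches regular spaces, tabs, and oddities like the non-breaking
// space (U+00A0) that often sneaks in via copy/paste from formatted text.
const raw = "\u00A0 ABCD 1234\t";
const code = raw.replace(/\s+/g, "");
console.log(code); // "ABCD1234"
```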

It’s downright silly that web-first companies with market capitalizations in the $Billions have not yet figured out this simple trick for improving their applications. Instead, we end up with garbage error messages like this one:

Or this one, from the most valuable company in history:

Related: Browsers can do better here too. On paste into a length-limited control, we should probably trim leading whitespace first if needed to respect the limit.

Improve the world: Trim harmful whitespace!


Debug Native Messaging


Last month, an Enterprise customer reached out to report that a 3rd-party browser extension they use wasn’t working properly. Investigation of the extension revealed that the browser extension relied upon a NativeMessaging Host (NMH) companion that runs outside of the browser’s sandbox. In reviewing a Process Monitor log provided by the customer, the Support Engineer and I observed that the Native Host executable was unexpectedly exiting tens of minutes after it started. After that unexpected exit, the next time the in-browser extension tried to call it, the browser-to-native call failed, and the browser extension was unable to provide its intended functionality.

Unfortunately, I don’t have the source (or even the binary) for the NMH executable, and there are no obvious clues in the Process Monitor log (e.g. a failed registry read or write) that reveal the underlying problem. I lamented to the Support Engineer that I really wished we could see the JSON messages being exchanged between the browser extension and the NMH to see if they might reveal the root cause.

“We need, like, Fiddler, but for NMH messages instead of HTTPS messages.”

How Hard Could It Be?

Technically, I don’t really own anything related to browser extensions, so after ruling out what few possible problems I could imagine as root causes, I moved on to other tasks.

But that vision stuck with me throughout the day and the evening that followed: Fiddler, but for Native Messaging.

How hard could it be to build that? How useful would it be?

I haven’t written much C# code since leaving Fiddler and Telerik at the end of 2015, and the few exceptions (e.g. the NetLog Importer) have mostly been plugins to Fiddler rather than standalone applications. Still, Native Messaging is far less complicated than HTTPS, so it shouldn’t be too hard, right?

We want the following features in a debugger:

  1. Show messages from any Browser Extension to any Native Host
  2. Enable logging these messages to a file
  3. Allow injecting arbitrary messages in either direction
  4. (Stretch goal) Allow modification of messages
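Implementing any of these features means speaking the Native Messaging wire format: each JSON message, in either direction, is preceded by a 4-byte message length in native byte order (little-endian on Windows) with the JSON itself encoded as UTF-8. A sketch in JavaScript (the tool itself is C#, but the framing is identical):

```javascript
// Encode a message: 4-byte little-endian length prefix + UTF-8 JSON.
function encodeMessage(obj) {
  const json = Buffer.from(JSON.stringify(obj), "utf8");
  const prefix = Buffer.alloc(4);
  prefix.writeUInt32LE(json.length, 0);
  return Buffer.concat([prefix, json]);
}

// Decode a message: read the length, then parse exactly that many bytes.
function decodeMessage(buf) {
  const length = buf.readUInt32LE(0);
  return JSON.parse(buf.subarray(4, 4 + length).toString("utf8"));
}

const framed = encodeMessage({ text: "Hello world!" });
console.log(framed.readUInt32LE(0));     // byte length of the JSON payload
console.log(decodeMessage(framed).text); // "Hello world!"
```

A proxy just reads frames from one side’s stdout, logs them, and writes them to the other side’s stdin.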

Over the following few evenings, I dusted off my Visual Studio IDE and struggled to remember how C# async programming works in modern times (Fiddler’s implementation was heavily threaded and mostly predated more modern alternatives).

Introducing the NativeMessaging Meddler

The source and (soon) compiled code for the NativeMessaging Meddler can be downloaded from GitHub.

The NativeMessaging Meddler (NMM) is a Windows application that requires .NET 4.8. A row of tabs across the bottom of the window enables you to switch between its views; by default, running the .exe directly just shows help text:

The NMM tool can respond to NativeMessages from a browser extension itself, or it can proxy messages between an existing extension and an existing NMH executable.

Configure the Demo

To test the basic functionality of the tool, you can install the Demo Extension.

  1. Visit about://extensions in Chrome or Edge
  2. Enable the Developer Mode toggle
  3. Push the Load Unpacked button
  4. Select the sample-ext folder
  5. A new “N” icon appears in the toolbar

After the demo extension is installed, you must now register the demo Native Host app. To do so, update its manifest to reflect where you placed it:

  1. Open the manifest.json file using Notepad or a similar editor
  2. Set the path field to the full path to the .exe. Be sure that every backslash is doubled up.
  3. Set the allowed_origins field to contain the ID value of the extension from the about:extensions page.
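After those edits, the manifest might look something like this sketch (the host name, path, and extension ID are placeholders; the field names are the standard Native Messaging host manifest fields):

```json
{
  "name": "com.example.nmm_demo",
  "description": "NativeMessaging Meddler demo host",
  "path": "C:\\Tools\\nmm\\nmf-view.exe",
  "type": "stdio",
  "allowed_origins": [ "chrome-extension://<your-extension-id>/" ]
}
```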

Next, update the registry so that the browser can find your Host:

  1. Edit the InstallRegKeys.reg file in Notepad, updating the file path to point to the location of the manifest.json file. Be sure that each backslash is doubled up.
  2. Double-click the InstallRegKeys.reg file to import it to the registry.
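For reference, the .reg file ultimately just maps the host’s name to the manifest’s location; a sketch with placeholder paths (Edge looks under an analogous key beneath Software\Microsoft\Edge\NativeMessagingHosts):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Google\Chrome\NativeMessagingHosts\com.example.nmm_demo]
@="C:\\Tools\\nmm\\manifest.json"
```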

Run the Demo

With both the host and extension installed, you can now test out the tool. Click the “N” icon from the extension in the toolbar to navigate to its demo page. An instance of the NMM should automatically open.

Type Hello world! in the Outgoing Messages box and click Post Message to port. The message should appear on the Monitor tab inside the NMM app:

If you tick the Reflect to extension option at the top right and then send the message again, you should see the NMM tool receive the message and then send it back to the extension page, where it’s shown in the Incoming Messages section:

“Reflect to extension” copies inbound messages back to the sender

What if we want to inject a new message of our choosing from NMM?

Go to the Injector tab in NMM and type a simple JSON message in the bottom box. Then click the Send to Browser/Extension button. You’ll see the message appear inside the browser in the Incoming Messages section:

Note: Your message must be well-formed JSON, or it will never arrive.

At this point, we’ve now successfully used the NMM tool to receive and send messages from our Demo extension.

Proxying Messages

While our demo is nice for testing out Native Messaging, and it might help as a mock if we’re developing a new extension that uses Native Messaging, the point of this exercise is to spy on communications with an existing extension and host.

Let’s do that.

First, go to the Configure Hosts tab, which grovels the registry to find all of the currently-registered Native Hosts on your PC:

The plan is to eventually make intercepting any Native Host a point-and-click experience, but for now, we’re just using this tab to find the file system location of the Native Host we wish to intercept. If an entry appears multiple times, pick the instance with the lowest Priority score.

For example, say we’re interested in the BrowserCore Host which is used in some Windows-to-Web authentication scenarios in Chrome. We see the location of the manifest file, as well as the name of the EXE extracted from the manifest file:

In some cases, you might find that the Exe field shows ??? as in the vidyo entry above. This happens if the manifest file fails to parse as legal JSON. Chromium uses a bespoke JSON parser in lax mode for parsing manifests, and it permits JavaScript-style comments. The NMM tool uses a strict JSON parser and fails to parse those comments. It doesn’t really matter for our purposes.
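You can see the mismatch with a strict parser directly; the manifest text below is a contrived example:

```javascript
// Chromium's lenient manifest parser tolerates //-style comments; a strict
// JSON parser (like JSON.parse, or the one NMM uses) rejects them.
const manifestText = '{\n  // installed by the vendor\n  "name": "com.example.host"\n}';

let strictOk = true;
try { JSON.parse(manifestText); } catch (e) { strictOk = false; }
console.log(strictOk); // false: strict JSON forbids comments

// Stripping whole-line // comments is enough to rescue this manifest:
const stripped = manifestText.replace(/^\s*\/\/.*$/gm, "");
console.log(JSON.parse(stripped).name); // "com.example.host"
```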

Note the location of the manifest file and open it in your editor of choice. Note: If the file is in a privileged location, you may need to open your editor elevated (as Administrator).

Tip: You can Alt+DblClick an item or hit Alt+Enter with it selected to open Windows Explorer to the manifest’s location.

Within the manifest, change the path field by introducing the word .proxy before the .exe at the end of the filename:

Save the file.

Note: In some cases, not even an Administrator will be able to write the file by default. In such cases, you’ll need to use Administrator permissions to take ownership of the file to grant yourself permission to modify it:

There are other approaches that do not require changing filesystem permissions, but we won’t cover those here.

Next, copy the nmf-view.exe file into the folder containing the Native Host and rename it to the filename you wrote to the manifest:

At this point, you’ve successfully installed the NMM proxy. Whenever the browser extension next tries to launch the Native Host, it will instead activate our NMM debugger, which will in turn spawn the original Native Host (in this example, BrowserCore.exe) and proxy all messages between the two.

Now, visit a site where you can log in. Click the login button at the top-right and observe that our debugger spawns, collects a request from the Windows 10 Accounts extension, passes it to BrowserCore.exe, reads the Host’s reply, and passes that back to the extension. Our debugger allows us to read the full text of the JSON messages in both directions:

Note: This screenshot is redacted because it contains secret tokens.

Pretty neat, huh?

Tampering with Messages

When I got all of this working, I was excited. But I was also disappointed… plaintext rendering of JSON isn’t super readable, and building a UI to edit messages was going to be a ton of extra work. I lamented: “Sheesh… I already wrote all of the code I want fifteen years ago for Fiddler. It has both JSON rendering and message editing…” and I briefly bemoaned the fact that I no longer own Fiddler and can’t just copy the source over.

And then I had the epiphany. I don’t need to reimplement parts of Fiddler in NMM. The tools can simply work together! NMM can pass the messages it receives from the Browser Extension and Native Host up to Fiddler as they’re received, and if Fiddler modifies the message, NMM can substitute the modified message.


Configure Tampering

First, re-edit the manifest.json file to add a .fiddler component to the path, and rename the .proxy.exe file to .proxy.fiddler.exe, like so:

This new text signals that you want NMM to start with the Tamper using Fiddler option set. To debug “single-shot” Native Hosts like BrowserCore.exe, we can’t simply use the checkbox at the top-right of NMM’s Monitor tab, because the debugger and Native Host spawn and complete their transaction much faster than we puny humans can click the mouse. Note: You can also specify the string .log. to enable the option that writes the traffic log to your Desktop.
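I'd guess (this is an assumption about nmf-view's internals, not its actual source) that the tool simply scans its own filename for these marker substrings, something like:

```python
def options_from_name(exe_name):
    """Infer debugger options from marker substrings in the proxy's filename."""
    name = exe_name.lower()
    return {
        "tamper_with_fiddler": ".fiddler." in name,  # '.fiddler' component present
        "log_to_desktop": ".log." in name,           # '.log' component present
    }

opts = options_from_name("BrowserCore.proxy.fiddler.exe")
```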

Now, start Fiddler, perhaps using the -noattach command line argument so that it does not register as the system proxy. Type bpu ToApp in the QuickExec box beneath the Web Sessions list and hit Enter.

This creates a request breakpoint which will fire for all requests whose URLs contain the string ToApp, which NMM uses to record requests sent to the original Native Host:

Using Fiddler’s Inspectors, we can examine the JSON of the message using the JSON treeview, or the TextView or SyntaxView Inspectors:

If we are satisfied with the message, we can click the Run to Completion button, and our NMM app will send the original, unmodified message to the original Native Host. However, if we want to tamper with the message, we instead pick a success response like 200_SimpleHTML.dat from the dropdown:

A template response will appear in the Response TextView:

Overwrite that template text with the modified text you’d like to use instead:

… then push the green Run to Completion button. Fiddler will return the modified text to the NMM proxy, and the NMM proxy will then pass that modified message to the original Native Host:

In this case, the original Native Host doesn’t know what to do with the GetFiddledCookies request and returns an error which is passed back to the browser.

Tip: If your goal is to instead tamper with messages sent from the Native Host to the extension, enter bpu ToExt in Fiddler’s QuickExec box. Alternatively, you can also use any of Fiddler’s richer tampering features, such that it breaks only on messages containing certain text, automatically rewrites certain messages, etc.

Happy Meddling!


Lock down web browsing using Kiosk Mode

Browsers get used in many different environments. Today, I take a look at scenarios where there’s either no interactive user (digital signage) or a potentially malicious user (internet kiosks).

Digital Signage (fullscreen) Requirements

In the Digital Signage scenario, there’s a full-screen webpage rendering and there are no user-accessible input devices– the canonical example here would be an airport’s signage displaying arriving and departing flights and their associated gates.

Supporting this use-case is relatively easy– the browser must be full-screened, and it must avoid showing any sort of prompt, tip, hint, or feature that requires dismissal because there’s no guarantee that a mouse or keyboard is even plugged into the device.

In this scenario, the browser is typically used to load only a specific website, which itself must be carefully coded not to prompt the user for any input. Additionally, either the webapp must request a wakelock, or the OS must be configured not to let the computer sleep or hibernate. Similarly, the OS must be configured not to prompt the user for input or show modal dialogs (OS update prompts, etc).

Kiosk (public-browsing) Requirements

While supporting digital signage is reasonably straightforward, providing a true internet kiosk is considerably harder. The set of potential customer requirements is much broader– some kiosk owners want to allow the user to browse anywhere and download any files, etc, while other kiosk owners want to tightly lock down the experience to a small number of supported web pages. Making matters far more complicated, in some kiosk scenarios we cannot assume that the user is well-intentioned– they might want to abuse their access or even hack the kiosk itself. Computers are generally far less protected against malicious local users than against remote attackers.

Generally, an interactive kiosk aims to offer a few capabilities:

  • Allow the user to load one or more webpages, filling out forms or performing search queries
  • Offer most of the “digital signage” behaviors (e.g. avoid prompting the user with announcements, feature promotions, or requests to log into the browser itself)
  • Prevent the user from navigating to arbitrary sites
  • Prevent the user from tampering with loaded web app(s) using the Developer Tools
  • Prevent the user from exiting the browser or modifying its persistent state
  • Prevent the user from gaining access to the underlying OS to run other programs or modify persistent state

Of these, preventing access to the underlying OS is the most critical, because if a malicious local user can execute commands in the OS, they can typically defeat all of the other restrictions intended for the kiosk.

Way back in my past life, I was the Security PM for Internet Explorer. At the 2008 Hack-in-the-Box security conference, my session on IE security improvements was preceded by a packed session wherein the presenter walked through two dozen popular “Kiosk browsing” software packages, breaking out of each to get access to the underlying system in under two minutes. Applause ranging from enthusiastic (for clever hacks) to bemused (for silly hacks) followed each attack.

Edge’s Kiosk Mode

Microsoft Edge offers a kiosk mode which can be simply activated by starting msedge.exe with the --kiosk command line argument. By default, this starts Edge with a full-screen InPrivate window with no address bar, no context menus, various hotkeys (like F12) disabled and so on. It’s a fine approach for something as simple as digital signage. But if you want to build a true kiosk, you’ll want to set some more options.
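For a bare-bones signage launch, the command line might be assembled like this sketch; the msedge.exe path below is a typical install location on 64-bit Windows, not a guarantee:

```python
# Typical (but not guaranteed) install location; adjust to match your machine.
EDGE_EXE = r"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe"

def kiosk_command(url):
    """Build a command line that launches Edge full-screen in kiosk mode."""
    return [EDGE_EXE, "--kiosk", "--no-first-run", url]

# On the signage machine you would hand this list to subprocess.run():
#   subprocess.run(kiosk_command("https://example.com/departures"))
cmd = kiosk_command("https://example.com/departures")
```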

There’s a great documentation page on Configuring Edge Kiosk Mode that explains the various scenarios and configuration options. As explained on that page, one of the key things you’ll want to do is enable the Windows 10 “Assigned Access” feature so that Windows is locked down to limit the user to only the designated scenario.

You’ll likely also want to set a bunch of other Microsoft Edge policies to tighten things down.

Start with the Kiosk Mode Settings policies, then look at more general policies.

For instance, you almost certainly want to pass the --no-first-run command line argument or set the HideFirstRunExperience policy.

You probably want to use the URLBlocklist policy to block all URLs (e.g. a rule of *) and then use the URLAllowlist policy to exempt only those URL patterns that you wish to support. This helps prevent users from using the browser to browse the local file system (file:///c:/), from viewing web page source code (e.g. via CTRL+U), and from launching installed applications via App Protocols. Similarly, you may wish to restrict what a user can download, or configure downloaded files to be deleted on exit.

One very common vector for abusing kiosks is to use the File Picker dialog shown when the user hits Ctrl+O or pushes the Choose file button on a file upload control. The File Picker dialog is provided by Windows and by default exposes the ability to download URLs, navigate the local file system, and even launch files. This dialog can be blocked by disabling the AllowFileSelectionDialogs policy, with the obvious caveat that doing so will block any web app scenario that requires the user to upload a file.

In some cases, you might want to prevent the user from using a Microsoft Edge hotkey that is not otherwise restricted. To implement such a restriction, you can use a Windows Keyboard Filter, with the caveat that the restriction will block the hotkey(s) across all of Windows.

Extreme Lockdown

In extreme cases, you might decide that you don’t want a browser at all. In such cases, building a simple Win32, .NET, or UWP app atop the Microsoft Edge WebView2 control might be your best bet, because you’ll have more complete control of the behavior of the application, with the Edge engine rendering your content under the hood.



New Year’s Resolutions aren’t really my jam.

Over the years, I usually idly ponder some vague notion (usually “get in better shape“) in late December, and mostly forget about it by the second week of January or so.

This year, I’m taking things a bit more seriously. It’s time to get busy living.

Rather than get hyper-focused on just specifics, I’ve got two themes, and a few targets.


  1. Live more intentionally.
  2. Get less comfortable.

Perhaps ironically for someone in my line of work, I’m not a planner. Virtually nothing in my life has gone to plan because virtually nothing in my life was actually planned. I’ve stumbled from one milestone to the next with only the vaguest sense of where I’d like to be, and while this lack-of-strategy strategy has turned out relatively successfully for me, it’s to the point where I reasonably wonder whether setting out a goal and achieving it is really something that I can do. So I’m gonna go do that for a while.

Secondly, I’m going to do more to get out of my comfort zone. After careful reflection, it turns out that it’s not, in fact, very comfortable at all. Boredom is dangerous.

As for near-term targets:

  • Health and Finance: A dry January. I could count on two hands the number of drinks I had before 27, when I met my ex-wife. I’m curious to see what a month without alcohol is like. So far so good.
  • Health: Track my weight and other metrics. I feel like I’ve put on a ton of weight in recent years, but looking back, 2020 added about twelve pounds, and 2021 about the same. Definitely heading the wrong direction, but perhaps moving in the right direction won’t be as hard as I’ve made it out to be.
  • Health: Find sustainable fitness habits. I have not been successful with fitness routines in the past, with one exception– I love to take long walks. That’s not super-pleasant throughout the year in Austin, but in Redmond, I used to do regular hour-long sessions on the treadmill while watching trashy TV (Alias, 24, etc) at the ProClub. I’ve bought a fancy treadmill and the associated subscription content to take “virtual” walks around the world. I’m gonna make this one stick, and springboard off it into other fitness things.
  • Travel. Beyond my recent cruise, I’ve booked a family cruise with my kids and their cousins for late in this year– they’re all growing up too fast. Post-pandemic, I want to go to Alaska for the first time, and to Hawaii again. My long-term ambition is to get to every continent.
  • Finance: Spend more intentionally. I’ve lived well below my means for over ten years, and relative frugality has served me well. But it’s also led me to avoid spending money on some things that would’ve been good for me. At the same time, I’ve bled quite a lot of money on things that aren’t good for me (alcohol, food). I’m going to try to pare those back.
  • Life: Produce more. Whether it’s writing (hi!), a new side-project, or something else entirely, I’m most fulfilled when I’m making things. I want to spend more time doing that. TV is for the treadmill.


All of these roll up into prep for a larger, longer-term goal for 2023 that I’m not ready to talk about just yet. If things go well, expect to read plenty more on that topic.

Hope y’all have a great 2022!


Edge Command Line Arguments

Microsoft Edge offers a broad variety of configuration options via Group Policy (for Enterprises), the edge://settings page, the edge://flags page (mostly experimental options), and finally via command-line arguments passed to the msedge.exe executable. This list of sources is roughly in order of stability and supportability– earlier choices change less often (and with more notice) than later ones.

List of all command-line arguments for Edge?

Unfortunately, Edge has not published a list of implemented command line arguments, although in principle we could use the same tool Chromium does to parse our source and generate a listing.

As of January 2022, the list of command-line arguments generated out of the upstream source code is outdated: it was last updated at the end of 2020. The snapshot from then can be viewed here: Chromium Command line arguments. You can check the Last automated update occurred on text in that page to see whether it has been updated recently.

In general, Edge’s command-line arguments are the same as Chromium’s, with the exception of marketing names (e.g. Chrome uses --incognito while msedge.exe uses --inprivate) and restricted words (Edge replaces blacklist with denylist and whitelist with allowlist).

Enabling or disabling features?

Many Edge features are controlled by named “features” that can be enabled or disabled using the --enable-features or --disable-features command line argument. Each argument accepts a comma-delimited list of feature names, like so:

msedge.exe --enable-features=msEdgeDeleteBrowsingDataOnExit,msEdgeOptionB --disable-features=FeatureC

Unfortunately, there’s no documented list of feature names, although the features inherited from upstream can be found in the Chromium-source code, often in one of many files named
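As a rough illustration of how such arguments decompose, here's a sketch that splits the comma-delimited lists (real feature switches can also carry extra per-feature parameter syntax, which this sketch ignores):

```python
def parse_features(argv):
    """Collect enabled/disabled feature names from Chromium-style arguments."""
    enabled, disabled = set(), set()
    for arg in argv:
        if arg.startswith("--enable-features="):
            enabled.update(arg.split("=", 1)[1].split(","))
        elif arg.startswith("--disable-features="):
            disabled.update(arg.split("=", 1)[1].split(","))
    return enabled, disabled

# Feature names here come from the example command line above.
on, off = parse_features([
    "--enable-features=msEdgeDeleteBrowsingDataOnExit,msEdgeOptionB",
    "--disable-features=FeatureC",
])
```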

An argument I tried didn’t seem to do anything? Why not?

The most common reason a command-line flag seems to have no effect is that nearly all flags take effect only when they are passed while launching a brand-new browser instance– that is, when no other Edge processes are running.

Before trying to launch a new instance with a command line, close all Edge browser windows, then check the OS task manager (taskmgr.exe or Control+Shift+Esc in Windows) to kill any background Edge processes you see.

In particular, Edge’s “Startup Boost” feature means that there’s often a hidden msedge.exe instance hanging around in the background even when all browser windows are closed. If you like, you can disable Startup Boost using the option in edge://settings:

You can verify that the current Edge instance you’re using has a desired command-line argument by visiting edge://version and looking for the Command-line value in the page:

Can I set the “Default” command-line?

Sometimes, users would like to set a “default” command line for Edge to enable or disable options every time the browser is launched. There is presently no mechanism to do this– you can edit the Edge shortcuts pinned to your desktop or taskbar, and edit launch arguments for file associations inside the Windows Registry, but such changes won’t impact Edge launches for Startup Boost or via other mechanisms.

Instead, you should look for a more supported option for setting the desired behavior (e.g. a Group Policy, or edge://settings or edge://flags entry). If there’s no such item available, you should send us feedback using the Send feedback command found in the Help and Feedback menu. If persistence of a command line option is broadly desired, we may promote it to a persistently-configurable location (e.g. we made Edge’s Cipher-suite Deny list settable via a Group Policy).


Cruising Solo

For Christmas 2020, I was home alone. The highlight of my day was discovering that Jack in the Box was open. I enjoyed my Christmas cheeseburger dinner at a picnic table in a park down the street.

Unexpectedly, my Christmas plans fell through for 2021, and I faced a repeat of 2020. But making Jack in the Box a holiday tradition seemed a bit more grim than I was ready to accept, so I started hunting for other options with just five days left to go.

I’d recently finished booking a family holiday cruise for New Years 2023, and I idly wondered whether there were any cheap last-minute cruises out of Galveston this holiday that would work with my schedule. And, sure enough, Royal Caribbean had one leaving on Christmas Eve for relatively cheap (~3x their cheapest off-season rate).

So, that seemed like a definite possibility. I love cruising (no internet, under five minute walk to every meal and show), even if I lament the ecological impact and cringe at the economic inequality– rich tourists served by crews from poor countries, taking excursions in beautiful but impoverished areas. (Improving environmental impact mostly awaits better technology, but I believe the best approach for addressing the economic inequality isn’t in abstaining but instead tipping as heavily as you can afford.)

Still… could going on a cruise “alone” (with thousands of strangers) really be a good idea??

Over the last decade I’ve concluded that I am not, in fact, the introvert I’d always considered myself (thinking back to how as a grade schooler I was too shy to even ask for ketchup in McDonalds), but I am certainly not a social butterfly either. Beyond my normal anxiety about being out of my comfort zone, this trip promised to be one of BIG FEELINGS… After all, I did get engaged on a cruise1 and on this cruise I’d be bringing along the final paperwork for my imminent divorce. So, yeah. Weighty.

But after a moment pondering a Christmas whose highlight otherwise would be finishing my binge-rewatch of Friday Night Lights (apparently, I’m a cliche), I decided almost anything would be better and booked the trip. I went through CostCo Travel (same prices as direct, and they send you a $140 gift card after returning) and booking for one was remarkably straightforward. (It was hard to overlook the fact that inviting a cabinmate would cost nearly nothing extra, but doing that wasn’t in the cards.)

I wavered a bit about whether to splurge on a balcony, a luxury I’d never enjoyed before, and ultimately decided “What the heck, it’s Christmas!”

As you can imagine, the value of a balcony is related to how pretty the view outside happens to be.

On December 22nd, I realized that beyond full vaccination, the cruise line also required a negative COVID test dated within two days of the cruise. Alarmingly, I couldn’t find anywhere in Austin willing to do a test in the next five days. But fortunately the cruise terminal itself offered “testing of last resort” and there were still hundreds of testing slots open. So that was sorted.

I spent most of December 23rd shopping for clothing for the trip– I haven’t worn anything remotely formal in close to two years, and unfortunately most of my “fancy” pants no longer fit. In the course of packing, I realized I no longer owned a suit/garment bag (lost either in the move to Texas ages ago, or buried in my soon-to-be-ex’s house) so I spent a fair bit of time hunting for a new one. All of the options were expensive (~$150-$300) and most had terrible reviews on Amazon. After checking online and two malls in person, I swung by Goodwill and snagged an old red Samsonite in aged but workable condition for the princely sum of $11.

I procrastinated on finishing packing until late Thursday night, and set off for the four-hour drive to Galveston on Friday morning. It was a bit nerve-wracking to realize that if I had any sort of car trouble, I was going to miss the entire trip without a chance of a refund, and if I got to the terminal but failed the COVID test I was going to have to get back in the car and drive straight back to Austin. These thoughts occupied my mind for the first few hours of the drive.

I told myself that at least this situation couldn’t be as bad as last year. On 12/23/2020, I’d set off on a quixotic last second seven-hour drive to visit a friend. Only upon arrival at the hotel did I realize that I’d left my well-packed suitcase behind atop my library sofa at my house in Austin. It was after 9PM on Christmas Eve Eve, 40F and windy out, and I had nothing to wear but basketball shorts and a light t-shirt. Adventure, amirite?

I chuckled at how dumb last year’s situation was until I abruptly realized with bemused horror that, while I had carefully ensured that all three pieces of luggage (garment bag, suitcase, daybag) were in my car, I’d never actually gotten around to putting my dress shirts into the garment bag. Fortunately, I had an hour to spare, and a Kohl’s outside of Houston got me sorted out in half that.

Fortunately, my COVID rapid test at the terminal turned out negative.

Repeatedly throughout the check-in and boarding process folks would ask “How many in your party?” and I would sheepishly respond “one,” but I soon felt a lot better about it because the universal reaction was summed up in the response I got from the first person helping me: “Great, you’ll be easy then.”

Shortly after 1pm I was aboard. I explored the ship a bit while waiting for the staterooms to be released; I gawked at the massive three-story Christmas tree on the Deck 4 promenade.

At 2pm, we got the notification that our rooms were ready, and I headed to mine. I eagerly awaited the arrival of my luggage, as it wasn’t among the bags lining the hallway.

And I waited. And waited. With mounting dismay I pondered spending the entire cruise in just the workout clothes I wore for the drive. “I really need to stop travelling in such shabby stuff” I berated myself. I went out to continue exploring the ship, checking back at my stateroom periodically. One of my favorite discoveries was the peek-a-boo view down into the bridge:

Very cool

I hadn’t eaten all day, and at 4:30 I headed to the Windjammer restaurant to grab a bite only to realize that it had closed at 4pm; the old saw “You’ll never be hungry on a cruise ship” turns out not to be entirely accurate. Unlike Disney, the top deck doesn’t have small stands offering light fare all day; only small soft-serve ice cream cones were available2. I had two and enjoyed the beautiful weather and cool breeze on the deck.

Fortunately, at 5pm my main suitcase arrived and my stress level dropped considerably. It didn’t have my fanciest clothing, but I would be okay.

I headed to the “Welcome Aboard” show in the Lyric Theater at 7pm– the Cruise Director opened by asking the audience the usual questions: “How many of you are celebrating a birthday? A honeymoon? An anniversary?” with the expected jokes about the sparse attendance of honeymooners (“They’re busy in their rooms, I suspect”) and muted enthusiasm of the anniversary celebrators (“They’re a lot less excited than the honeymooners”). She finally joked “How many of you are celebrating a divorce?” and I couldn’t help but hoot and raise my now mostly-empty glass, the Old Fashioned it formerly contained now lightening my mood. There was a bit of laughter in the audience as she quipped “Well, y’all know where to find him this whole trip, up in the Blue Moon club” while miming a disco dance routine.

After the tame but amusing comedy show (Jackson Perdue), I went to dinner (I’d picked the “My Time” dining option, which turned out to mean “Pick from any of the few extra-late dinner slots”) and had a delicious braised beef dish at a quiet, out of the way table. As with the other staff before him, my waiter seemed happy about the relative ease of dealing with a solo traveler.

After dinner, I got back to my room and happily discovered that my garment bag had made it. Phew. I spent another hour or so exploring the ship, before heading to the “Solo Travellers” meetup at the aforementioned club. It was an odd scene, to put it mildly– there were some late 30s guys milling about, and a handful of early 20s women who were travelling with their families but had aged out of the ship’s Teens club.

I nursed a drink while people-watching; after a while, a social butterfly (an Israeli immigrant CS professor at a Texas university) started pulling people together and introducing everyone to the folks he’d chatted with thus far. At one point, he asked if I was “on the prowl” and I almost giggled at the thought, but I just gestured at the kids around us and said “No.” He replied “Well, if you change your mind and need a wingman, I’m here.” I ended up drinking and chatting with him and another Israeli transplant (a real-estate agent from Chicago) until around midnight.

The next morning, Christmas, I woke up and headed upstairs for a picture-perfect breakfast:

…and finished my coffee on my balcony, enjoying the screensaver-perfect weather:

At eleven, I headed up to the ship’s fitness center. Lamely, they’d disabled the water dispenser, suggesting that exercisers head to the nearest bar to ask for water (Two years in, people, we know that COVID is airborne and water-fountains aren’t the problem!) but other than that, the gym was nice. I spent a bit over an hour on a treadmill at the bow, sweating my way across the Gulf.

Wearing a mask while working out wasn’t very pleasant, but it wasn’t as bad as I’d feared. I have some ambitious fitness goals for the coming year, and I reminded myself that I need to start getting less comfortable if I’m to achieve them.

After the gym, I spent a half hour walking the track on deck in the strong breeze to cool down…

…then grabbed a quick shower in my stateroom and headed to lunch. One of the Israelis from the night before invited me to join him and his family and after mild protestations I gratefully did so. We had a great lunch; his six year old delighted in my Dad jokes that my own kids no longer appreciate. They’d apparently missed their original cruise (a seven-night cruise that had left a few days before) because they’d taken their pre-trip COVID tests a day too early (!!!).

I putzed around all afternoon, opened a bottle of wine, left a holiday/thank you card for the cabin steward, put on a fancy shirt and slacks, and headed to dinner.

Enjoying cheap wine from my trusty mug :)
I’m classy like that.

After dinner, I wandered around topside for hours, enjoying the perfect breeze and night sky.

A new friend greeted me back at my cabin

I finished reading my book and fell asleep around midnight. It was an almost perfect day.

The next two days were slated for adventure.

We arrived at Puerto Costa Maya around noon on December 26th under perfect skies.

I’d booked a “Chill River Rafting” excursion, which involved a longish (90 minutes) van ride out to a ranch where we put on life jackets and boarded rafts in groups of 4 to 6. The activity was described as “Activity Level: Moderate” which most of us took to mean that it wasn’t going to be white-water rafting, but it would at least involve some paddling. Alas, it did not; it was more akin to a gondola ride in Venice, with a guide pushing the raft down the smooth river with a long pole. My seatmate (a ripped senior majoring in history at Kansas State and sporting massive tattoos) and I chatted and snarked at the tameness of our river “adventure.” I was a bit disappointed that I hadn’t brought my phone because I was afraid of losing it in rough waters. Lol.

We rode the small river about half a mile through the jungle, past a public park where the locals snorted and called out to their swimming kids in Spanish “Look, the Americans are afraid of the water”. To be fair, we did look a bit ridiculous. We mused about whether the rafting company was paying for access to the river and concluded “Naw, but they sell tickets to the locals to see the Americans in the Zoo.”

Eventually, we reached a small lagoon where we were offered a glass of sparkling wine and a thirty-minute break for swimming. Most of us doffed our wildly unnecessary life jackets and got down into the water (it wasn’t too cold) and swam around a bit. One couple remained on their raft; I suspect they were newly engaged. Most of us were dressed in swimsuits and other athletic wear, but they looked like they’d just stepped out of central casting for a 40s movie– her with bright red lips and him in an Irish flat cap; both were wearing clothes and hairstyles that could’ve easily been of that era. I suddenly wondered whether they were famous– the last time I’d encountered some suspiciously attractive people, I much later learned that they were super famous. I almost swam over to talk to them, but decided I had no business bothering them. I idly spent the rest of the cruise elaborating an imagined backstory about how this anachronistic couple ended up on our ship.

With our swimming break over, we headed back to our departure point for sandwiches and Pepsi before getting back in the van for the long ride back to the ship. It was a relaxing and pleasant excursion, but definitely not what I’d expected.

Back on the boat, I showered and went topside to enjoy the weather. The Cowboys vs. Washington Football team game was on the big screen and I watched my family’s favorite team get absolutely demolished by Texas while I sipped a discounted Mai Tai (the drink of the day).

Dinner was Beef Stroganoff which was prepared much differently than I’d had as a kid– it was really good. After dinner, I went to the late (10:15) song and dance show featuring Bobby Brooks Wilson, son of Jackie Wilson, a famous performer of the early 50s to 70s.

Sadly, the show was sparsely attended, but we cheered louder than our numbers would have predicted for his flamboyant performance of music of his fathers’ era. The final song was by Bruno Mars, a bandmate of Bobby’s decades ago when Wilson was in the Navy stationed in Hawaii. After the show, I tried to get to bed before midnight because I had a 6:15am wakeup for the next day’s excursion.

The next morning we arrived on the island of Cozumel, but my excursion group wasn’t staying– We immediately boarded a ferry for a 20 minute ride to the mainland. It was a pleasant ride although much rockier than on the dramatically larger cruise ship. Seated topside, I only found out later that some folks below deck were puking their guts out.

We then had a very long ride (over two hours) out to the Chichen Itza ruins, but it was made more pleasant by an hour-long lecture by our guide about the history of the Mayan city.

When we finally arrived, the entry was extremely crowded (though apparently much less so than the prior day), but once we got out to the ruins, everyone was able to spread out.

The pyramid was really impressive, and I was astonished that the chirping bird echo really happens. Very cool. We also spent a lot of time talking about the various symbolic aspects of the pyramid (its serpents light up on the equinox, there are 91 steps on each side, and 1 on top for a total of 365, etc).

Given our time constraints, we weren’t supposed to have any time for souvenir shopping (vendors ringed the site, and some of their wares were pretty cool looking) but I quickly overpaid $20 for a cool lucite pyramid:

… on the walk from the temple over to the Great Ball Court.

One of the two hoops at the Great Ball Court.
Victors of the game won the honor of being beheaded as human sacrifices.

We finished our hour-long tour and headed back to the van for the long drive back to the ferry. On the ferry, tired travellers napped; I almost drowned in nostalgia at the sight of a 3yo boy sleeping on his dad’s chest, and the lovebird time travellers from the 1940s dozed nuzzled into each other on the next bench.

Back on the boat, I enjoyed the sunset with a Mai Tai.

Before dinner, I headed to the “Invitation to Dance” song and dance show, and was amazed at the rapid fire spectacle of back-to-back numbers (some of the costume changes must’ve taken under fifteen seconds). It was a really impressive performance. I’ve always loved live performances of almost every form (plays > music > magic > comedy > opera > ballet) and while I don’t think I’d ever just randomly go downtown to see a song and dance show, I was incredibly glad that I didn’t miss this one. As with all of the shows I saw on the ship, I easily got second-row seats (the front row was closed off in a nod to social distancing).

After dinner, I went back topside to enjoy the night air. Because there are few lights above the front of the ship, the view of the night sky from the helipad on Deck Five Forward was amazing. I spent an hour looking skyward as a cool breeze blew over the deck. It’s hard to capture the majesty of the sky, even with the Pixel 6’s cool “Night Sight” mode:

I’ve never thought of myself as a city boy, but in a world rife with light pollution, it’s notable that I can think of almost every time I could see the stars reaching all the way down to the horizon. In the summer of 1991 after Grade 6, laying out on a lakeside dock in Michigan with my “girlfriend”3; in 1996 in rural Minnesota, driving cross-country with my friend Anson; in February 2010, the night before my wedding in a beachside hot tub with our friends. And now, December 2021, sailing across the Gulf of Mexico.

After my stargazing, I went to Jackson Perdue’s “Adult” comedy show, which was much funnier than his tame opening night all-ages show; this was more like the standup I’m used to from Netflix specials and the like.

The show was followed by a disappointing announcement from the cruise director that the Ice Show I’d booked for the following day (there’s a full ice skating rink on-board) had been canceled due to technical problems. Nevertheless, I went to bed looking forward to another relaxing day at sea as we headed back to Galveston.

In the morning, I had breakfast and headed to the gym, determined to push myself. After over an hour on the treadmill, I walked another two miles or so on the deck, happily enjoying the breeze as sweat poured everywhere.

The trick to burning a thousand calories on a treadmill is a 10.0 elevation and being heavy AF.

I ended up sore for the rest of the trip. Ah well. A Pina Colada helped.

That night, I packed my bags before heading to my usual dining room for appetizers only (eating far too much) before a late dinner at the on-ship steakhouse (yummy, but probably not worth the upcharge).

The Farewell show’s comedian was Cary Long (whose act appears to be pretty standard, with a bunch of word-for-word overlap with this YouTube video from eight years ago), but he did one pretty impressive trick– he spent five minutes of the act greeting folks (and chatting briefly) in their native languages (apparently, there were speakers of over 30 languages aboard). Not comedy perhaps, but it was very cool to see.

The next morning, we docked in Galveston and I reluctantly turned cell service back on (and was promptly flooded with messages). After a quick breakfast at the Windjammer buffet, it was time to leave the ship for the long drive back to Austin.

All in all, the trip went better than I dreamed.

Happy New Year, y’all. May 2022 bring you great things!


1 A seven-night Carnival cruise from Rome around the Mediterranean, paid for by Fiddler’s Engineering Excellence Award; a $5000 expense account accompanied the glass trophy.

2 It wasn’t until the very last evening of the cruise that I realized that the coffeeshop on Deck 5 midship offers free pizza all day.

3 Amy H. asked me to be her boyfriend. I had no particular feelings for her, but I said “sure” because I was twelve and what else are you supposed to do?

Thirty years later, I still remember my feeling of wonder that night out on the dock– “I’ve never felt this way before. Is this love after all?” In the cold light of morning I realized, “Nope, that was the beginnings of hypothermia.”

Microsoft Edge’s Many Processes

Chromium-based browsers like Microsoft Edge use a multi-process architecture for reliability and security reasons.


For reliability, process isolation means that if one process crashes, the entire browser need not go down. For example, if a page has a memory leak that's so bad that its tab crashes with an out-of-memory error, your other tabs remain functional.

For security, process isolation means that each process's sandbox can be tailored to the minimal privileges needed for its task, ensuring that in the event of a compromise, the badness is limited to the privileges of that process's sandbox. A renderer sandbox cannot read or write files on your disk, for example.

Additionally, process isolation enables isolating data by site, such that if a tab manages to get arbitrary native code execution (allowing it to read all of the memory in its own process), content from another site is in a different process and thus not accessible to steal.
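As a rough illustration of the site-keyed isolation idea, here's a toy sketch (not Chromium's actual code) that assigns URLs to processes based on their "site." Real Chromium computes a site from the scheme plus the registrable domain (eTLD+1) using the Public Suffix List; this sketch crudely approximates that with the last two host labels:

```python
from urllib.parse import urlsplit

class ProcessModel:
    """Toy model: one process per site, shared by all same-site URLs."""

    def __init__(self):
        self._processes = {}   # site -> process id
        self._next_pid = 1

    def site_for(self, url: str) -> str:
        parts = urlsplit(url)
        # Crude approximation of eTLD+1: keep the last two host labels.
        # (Real browsers consult the Public Suffix List instead.)
        labels = (parts.hostname or "").split(".")
        domain = ".".join(labels[-2:])
        return f"{parts.scheme}://{domain}"

    def process_for(self, url: str) -> int:
        site = self.site_for(url)
        if site not in self._processes:
            self._processes[site] = self._next_pid
            self._next_pid += 1
        return self._processes[site]

model = ProcessModel()
# Same site (different subdomains) shares a process...
assert model.process_for("https://mail.example.com/a") == \
       model.process_for("https://www.example.com/b")
# ...but a different site lands in a different process.
assert model.process_for("https://evil.test/") != \
       model.process_for("https://example.com/")
```

The point of the model: even if `evil.test`'s renderer is fully compromised, `example.com`'s data never existed in that process's memory in the first place.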

A blog post from 2020 helps explain what each of Edge’s processes is used for.

You can view all of the active processes in the browser’s task manager, opened by hitting Shift+Esc (or from the system menu shown after hitting Alt+Spacebar):

The new Windows 11 Task Manager exposes similar process detail information from Microsoft Edge. (The API mechanisms used to expose the enhanced process purpose information to the task manager are not yet documented.)
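You can also get a rough sense of each process's purpose from its command line: Chromium-based browsers launch child processes with a `--type=` switch (`renderer`, `gpu-process`, `utility`, etc.), while the main browser process carries no `--type=` switch at all. Here's a small sketch of classifying command lines this way (the example command lines are illustrative, not copied from a real machine):

```python
import re

def chromium_process_type(cmdline: str) -> str:
    """Classify a Chromium/Edge process by its --type= command-line switch.

    Child processes (renderers, the GPU process, utility processes, etc.)
    are launched with --type=<kind>; the main browser process has no
    --type= switch, so we report it as "browser".
    """
    match = re.search(r"--type=(\S+)", cmdline)
    return match.group(1) if match else "browser"

print(chromium_process_type("msedge.exe"))                                # browser
print(chromium_process_type("msedge.exe --type=renderer --lang=en-US"))   # renderer
print(chromium_process_type("msedge.exe --type=gpu-process"))             # gpu-process
```

On a live system you could feed this function the command lines from a process listing to approximate the task manager's per-type grouping.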

Beyond the information shown in the Task Managers, you can also see information about the security restrictions used to sandbox each process by visiting edge://sandbox: