Restrictions on File URLs

Last Update: October 1, 2025

For security reasons, Microsoft Edge 76+ and Chrome impose a number of restrictions on file:// URLs, including forbidding navigation to file:// URLs from non-file:// URLs.

If a browser user clicks on a file:// link on an HTTPS-delivered webpage or PDF, nothing visibly happens. If you open the Developer Tools console on the webpage, you’ll see a note: “Not allowed to load local resource: file://host/page”.

In contrast, Edge 18 (like Internet Explorer before it) allowed pages in your Intranet Zone to navigate to URLs that use the file:// protocol; only pages in the Internet Zone were blocked from such navigations[1].

No option to disable this navigation blocking is available in Chrome or Edge 76+, but (UPDATE) an IntranetFileLinksEnabled Group Policy was added in Edge 95+.

What’s the Risk?

The most obvious problem is that the way Windows retrieves content for file:// URLs can result in privacy and security problems. Because Windows will attempt to perform Single Sign On (SSO), fetching remote file:// resources (which Windows retrieves over SMB) can leak your user account information (username) and a hash of your password to the remote site. This is a long-understood threat, with public discussions in the security community dating back to 1997.

What makes this extra horrific is that if you log into Windows using an MSA account, the bad guy gets both your global userinfo AND a hash he can try to crack[1]. Crucially, SMB SSO is not restricted by Windows Security Zone the way HTTP/HTTPS SSO is:

By default, Windows limits SSO to only the Intranet Zone for HTTP/HTTPS protocols

An organization that wishes to mitigate this attack can drop SMB traffic at the network perimeter (gateway/firewall), disable NTLM or SMB, or use a new feature in Windows 11 to prevent the leak.

Beyond the data leakage risks related to remote file retrieval, other vulnerabilities related to opening local files also exist. Navigating to a local file might result in that file opening in a handler application in a dangerous or unexpected way. The Same Origin Policy for file URLs is poorly defined and inconsistent across browsers (see below), which can result in security problems.

Workaround: IE Mode

Enterprise administrators can configure sites that must navigate to file:// URLs to open in IE mode. Like legacy IE itself, IE mode pages in the Intranet Zone can navigate to file:// URLs.

Workaround: Extensions

Unfortunately, the extension API chrome.webNavigation.onBeforeNavigate does not fire for file:// links that are blocked in HTTP/HTTPS pages, which makes working around this blocking via an extension difficult.

One could write an extension that uses a Content Script to rewrite all file:// hyperlinks to an Application Protocol handler (e.g. file-after-prompt://) that launches a confirmation dialog before opening the targeted URL via ShellExecute or explorer /select,"file://whatever", but this would entail the extension running on every page, which has non-zero performance implications. It also wouldn’t fix up any non-link file navigations (e.g. JavaScript that calls window.location.href="file://whatever").

Similarly, the Enable Local File Links extension simply adds a click event listener to every page loaded in the browser. If the listener observes the user clicking on a link to a file:// URL, it cancels the click and directs the extension’s background page to perform the navigation to the target URL, bypassing the security restriction by using the extension’s (privileged) ability to navigate to file URLs. But this extension will not help if the page attempts to use JavaScript to navigate to a file URI, and it exposes you to the security risks described above.

Workaround: Edge 95+ Group Policy

A Group Policy (IntranetFileLinksEnabled) was added to Edge 95+ that permits HTTPS-served sites on your Intranet to open Windows Explorer to file:// shares that are located in your Intranet Zone. This policy does not accommodate all scenarios, but it allows you to configure Edge to behave more like Internet Explorer did.

Necessary but not sufficient

Blocking file:// URLs in the browser is a solid security restriction, but it’s not complete. There are myriad formats that can hit the network for file:// URIs, ranging from Office documents to emails, media files, PDFs, MHT files, and SCF files, and most of them will do so without confirmation. Raymond Chen explores this in Subtle ways your program can be Internet-facing.

In an enterprise, the expectation is that the Organization will block outbound SMB at the firewall. When I was working for Chrome and reported this issue to Google’s internal security department, they assured me that this is what they did. I then proved that they were not, in fact, correctly blocking outbound SMB for all employees, and they spun up a response team to go fix their broken firewall rules. In a home environment, the user’s router may or may not block the outbound request.

Various policies are available to better protect credential hashes, but I get the sense that they’re not broadly used.

Navigation Restrictions Aren’t All…

Subresources

This post mostly covers navigating to file:// URLs, but another question which occasionally comes up is “how can I embed a subresource like an image or a script from a file:// URL into my HTTPS-served page.” This, you also cannot do, for security and privacy reasons (we don’t want a web page to be able to detect what files are on your local disk). Blocking file:// embeddings in HTTPS pages can even improve performance.

Chromium and Firefox allow HTML pages served from file:// URIs to load (execute) images, regular (non-Worker) scripts, and frames from any local file path. However, modern browsers treat file origins as unique, blocking simple reads of other files via fetch(), DOM interactions between frames containing different local files, and getImageData() calls on canvases with images loaded from other local files:

Legacy Edge (v18) and Internet Explorer are the only browsers that consider all local-PC file:// URIs to be same-origin, allowing such pages to refer to other HTML resources on the local computer. (This laxity is one reason that IE has a draconian “local machine zone lockdown” behavior that forbids running script in the Local machine zone). Internet Explorer’s “SecurityID” for file URLs is made up of three components (file:servername/sharename:ZONEID).

In contrast, Chromium’s Same-Origin-Policy treats file: URLs as unique origins, which means that if you open an XML file from a file: URL, any XSL referenced by the XML is not permitted to load and the page usually appears blank; the only hint is the Console message ('file:' URLs are treated as unique security origins.) noting the source of the problem.

This behavior impacts scenarios like trying to open Visual Studio Activity Log XML files and the like. To work around the limitation, you can embed your XSL in the XML file as a data URL:
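For instance, a minimal sketch (this stylesheet is invented for illustration; the markup inside the href is percent-encoded as %3C/%3E so the document stays well-formed, and remaining spaces are left unencoded here for readability):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="data:text/xml,%3Cxsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'%3E%3Cxsl:template match='/'%3E%3Chtml%3E%3Cbody%3E%3Cxsl:value-of select='log/entry'/%3E%3C/body%3E%3C/html%3E%3C/xsl:template%3E%3C/xsl:stylesheet%3E"?>
<log>
  <entry>It worked</entry>
</log>
```

Because the stylesheet travels inside the document itself, no cross-file load occurs and the unique-origin restriction never comes into play.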

…or you can transform the XML to HTML by applying the XSLT outside of the browser first.

…or you can launch Microsoft Edge or Chrome using a command line argument that allows such access:

msedge.exe --allow-file-access-from-files

Note: For this command-line flag (and most others) to take effect, all Edge instances must be closed; check Task Manager. Edge’s “Startup Boost” feature means that there’s often a hidden msedge.exe instance hanging around in the background. You can visit edge://version in a running Edge window to see what command-line arguments are in effect for that window.

However, please note that this flag doesn’t remove all restrictions on file:// URLs, and it only applies to pages served from file:// origins.

Test Case

Consider a page loaded from a file on disk that contains three IFRAMEs pointing to text files (one in the same folder, one in a subfolder, and one in a different folder altogether), three fetch() requests for the same URLs, and three XMLHttpRequests for the same URLs. In Chromium and Firefox, all three IFRAMEs load.

However, all of the fetch() and XHR calls are blocked in both browsers.

If we then relaunch Chromium with the --allow-file-access-from-files argument, we see that the XHRs now succeed but the fetch() calls all continue to fail*.

*UPDATE: This changed in Chromium v99.0.4798.0 as a side-effect of a change to support an extensions scenario. The fetch() call is now allowed when the command-line argument is passed.

With this flag set, you can use XHR and fetch() to open files in the same folder, parent folder, and child folders, but not from a file:// URL with a different hostname.

In Firefox, setting privacy.file_unique_origin to false allows both fetch and XHR to succeed for the text file in the same folder and a subfolder, but the text file in the unrelated folder remains forbidden.

Extensions

By default, extensions are forbidden to run on file:// URIs. To allow the user to run extensions on file:-sourced content, the user must manually enable the option on the extension’s Options page.

When this permission is not present, you’ll see a console message like:

extension error injecting script :
Cannot access contents of url "file:///C:/file.html". Extension manifest must request permission to access this host.

Historically, any extension was allowed to navigate a tab to a file:/// URL, but this was changed in late 2023 to require that the extension’s user tick the Allow access to file URLs checkbox within the extension’s settings.

Bugs

While Chrome attempts to include a file:// URL’s path in its Same Origin Policy computation, the path is currently ignored for localStorage calls, one of the oldest active security bugs in Chromium. This bug means that any HTML file loaded from a file:/// URL can read or overwrite the localStorage data of any other file:///-loaded page, even one loaded from a different host.

-Eric

[1] Interestingly, some alarmist researchers didn’t realize that file access was allowed only on a per-zone basis, and asserted that IE/Edge would directly leak your credentials from any Internet web page. This is not correct. It is further incorrect in old Edge (Spartan) because Internet-zone web pages run in Internet AppContainers, which lack the Enterprise Authentication permission, which means that those processes don’t even have your credentials.

Under the hood

Under the hood, Chromium’s file protocol implementation is pretty simple. The FileURLLoader handles the file scheme by converting the file URI into a system file path, then uses Chromium’s base::File wrapper to open and read the file using the OS APIs (in the case of Windows, CreateFileW and ReadFile).

Aw, snap! What if Every Tab Crashes?

Update: I wrote a more comprehensive post about troubleshooting browser crashes.

For a small number of users of Chromium-based browsers (including Chrome and the new Microsoft Edge) on Windows 10, after updating to 78.0.3875.0, every new tab crashes immediately when the browser starts.

Impacted users can open as many new tabs as they like, but each will instantly crash.


As of Chrome 81.0.3992, the page will show the string Error Code: STATUS_INVALID_IMAGE_HASH.

What’s going wrong?

This problem relates to a security/reliability improvement made to Chromium’s sandboxing. Chromium runs each of the tabs (and extensions) within locked-down (“sandboxed”) processes.


In Chrome 78, a change was made to prevent 3rd-party code from injecting itself into these sandboxed processes. 3rd-party code is a top source of browser reliability and performance problems, and it has been a longstanding goal for browser vendors to get this code out of the web platform engine.

This new feature relies on setting a Windows 10 Process Mitigation policy that instructs the OS loader to refuse to load binaries that aren’t signed by Microsoft. Edge 13 enabled this mitigation in 2015, and the Chromium change brings parity to the new Edge 78+. Notably, Chrome’s own DLLs aren’t signed by Microsoft so they are specially exempted by the Chromium sandboxing code.

Unfortunately, the impact of this change is that the renderer is killed (resulting in the “Aw snap” page) if any disallowed DLL attempts to load, for instance when your antivirus software attempts to inject its DLLs into the renderer processes. For example, Symantec Endpoint Protection versions before 14.2 are known to trigger this problem.

If you encounter this problem, try the following steps:

Update any security software you have to the latest version.

Other than malware, security software is the other likely cause of code being unexpectedly injected into the renderers.

Temporarily disable the new protection

You can temporarily launch the browser without this sandbox feature to verify that it’s the source of the crashes.

  1. Close all browser instances (verify that there are no hidden chrome.exe or msedge.exe processes using Task Manager)
  2. Use Windows+R to launch the browser with the command line override:
  msedge.exe --disable-features=RendererCodeIntegrity

or

  chrome.exe --disable-features=RendererCodeIntegrity

Ensure that the tab processes work properly when code integrity checks are disabled.

If so, you’ve proven that code integrity checks are causing the crashes.

Hunt down the culprit

Navigate your browser to the URL chrome://conflicts#R to show the list of modules loaded by the client. Look for any files that are not Signed By Microsoft or Google.

If you see any, they are suspects. (There will likely be a few listed as Shell Extensions, e.g. 7-Zip.dll, that do not cause this problem.) Check for an R in the Process types column to find modules loading in the Renderers.

You should install any available updates for any of your suspects to see if doing so fixes the problem.

Check the Event Log

The Windows Event Log will contain information about modules that were denied loading. Open Event Viewer, expand Applications and Services Logs > Microsoft > Windows > CodeIntegrity > Operational, and look for events with ID 3033. The detail information will indicate the name and location of the DLL that caused the crash.
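If you prefer the command line, the same events can be queried with wevtutil (a sketch; run from an elevated prompt):

wevtutil qe Microsoft-Windows-CodeIntegrity/Operational /q:"*[System[(EventID=3033)]]" /c:5 /rd:true /f:text

The /c:5 switch limits output to the five most recent events, and /rd:true returns newest-first.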

Optional: Use Enterprise Policy to disable the new protection

If needed, IT Administrators can disable the new protection using the RendererCodeIntegrity policy for Chrome and Edge. You should reach out to the software vendors responsible for the problematic applications and request that they update them.

Other possible causes

Note that it’s possible to have a PC that encounters symptoms like this (all subprocesses crash) that are not a result of the new code integrity check. In such cases, the Error Code on the crash page will be something other than STATUS_INVALID_IMAGE_HASH.

  • For instance, Chromium once had an obscure bug in its sandboxing code that caused all sandboxes to crash depending on the random memory layout chosen by Address Space Layout Randomization.
  • Similarly, Chrome and Edge still have an active bug where all renderers crash on startup if the PC has AppLocker enabled and the browser is launched elevated (as Administrator).
  • If the error code is STATUS_ACCESS_VIOLATION, consider running the Windows Memory diagnostic (which tests every bit of your RAM) to rule out faulty memory.
  • Check whether you have any Accessibility software unexpectedly running.


-Eric

Web-to-App Communication: DirectInvoke

Note: This post is part of a series about Web-to-App Communication techniques.

Background

Typically, if you want your website to send a document to a client application, you simply send the file as a download. Your server indicates that a file should be treated as a download in one of a few simple ways:

  • Specifying a nonwebby type in the Content-Type response header.
  • Sending a Content-Disposition: attachment; filename=whatever.ext response header.
  • Setting a download attribute on the hyperlink pointing to the file.

These approaches are well-supported across browsers (via headers for decades, via the download attribute anywhere but IE since 2016).

The Trouble with Plain Downloads

However, there’s a downside to traditional downloads — unless the file itself contains the URL from which the download originated, the client application will not typically know where the file originated, which can be a problem for:

  • Security – “Do I trust the source of this file?”
  • Functionality – “If the user makes a change to this file, to where should I save the changes back?”, and
  • Performance – “If the user already has a copy of this 60 MB slide deck, maybe skip downloading it again over our expensive trans-Pacific link?”

Maybe AppProtocols?

Rather than sending a file download, a solution developer might instead just invoke a target application using an App Protocol. For instance, the Microsoft Office clients might support a syntax like:

ms-word:ofe|u|https://example.com/docx.docx

…which directs Microsoft Word to download the document from example.com.

However, the AppProtocol approach has a shortcoming: if the user doesn’t happen to have Microsoft Word installed, the protocol handler will fail to launch, and either nothing will happen or the user may get a potentially confusing error message. That brokenness occurs even if the user happens to have another client (e.g. WordPad) that could handle the document.

DirectInvoke

To address these shortcomings, we need a way to instruct the browser: “Download this file, unless the client’s handler application would prefer to just get its URL.” Internet Explorer and Microsoft Edge support such a technology.

While a poorly-documented precursor technology existed as early as the late 1990s[1], Windows 8 reintroduced this feature as DirectInvoke. When a client application registers itself indicating that it supports receiving URLs rather than local filenames, and when the server indicates that it would like to DirectInvoke the application using the X-MS-InvokeApp response header:


…then the download stream is aborted and the user is instead presented with a confirmation prompt:


If the user accepts the prompt, the handler application is launched, passing the URL to the web content.
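Concretely, a response opting into DirectInvoke might look like the following sketch (the application/x-fuzzle type matches the sample registration later in this post; the 1 value simply enables the behavior):

```
HTTP/1.1 200 OK
Content-Type: application/x-fuzzle
X-MS-InvokeApp: 1
```

If the registered handler for application/x-fuzzle supports URLs, the browser hands it the document’s URL instead of completing the download.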

Mechanics of Launch

The browser launches the handler by calling ShellExecuteEx, passing in the SEE_MASK_CLASSKEY flag, with the hkeyClass set to the registry handle retrieved from IQueryAssociations::GetKey when passed ASSOCKEY_SHELLEXECCLASS for the DirectInvoke’d resource’s MIME type.

Note: This execution will fail if security software on the system breaks ShellExecuteEx‘s support for SEE_MASK_CLASSKEY. As of September 2021, “HP’s Wolf Security” software (version 4.3.0.3074) exhibits such a bug.

Application Opt-in

Apps can register to handle URLs via the SupportedProtocols declaration for their verbs. HKCR\Applications\<app.exe>\SupportedProtocols or HKCR\CLSID\<verb handler clsid>\SupportedProtocols can be populated using values that identify the Uniform Resource Identifier (URI) protocol schemes that the application supports or * to indicate all protocols. Windows Media Player’s verbs registration looks like this:

HKCR\CLSID\{45597c98-80f6-4549-84ff-752cf55e2d29}\SupportedProtocols
    rtspt    REG_SZ
    rtspu    REG_SZ
    rtsp     REG_SZ
    mms      REG_SZ
    http     REG_SZ

Apps registered to handle URLs via the old UseUrls mechanism can be easily enumerated from the command line:

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths" /f "UseUrl" /s

Now, for certain types, the server doesn’t even need to ask for DirectInvoke behavior via the X-MS-InvokeApp header. The FTA_AlwaysUseDirectInvoke bit can be set in the type’s EditFlags registry value. The bit is documented on MSDN as: 

FTA_AlwaysUseDirectInvoke 0x00400000
Introduced in Windows 8. Ensures that the verbs for the file type are invoked with a URL instead of a downloaded version of the file. Use this flag only if you’ve registered the file type’s verb to support DirectInvoke through the SupportedProtocols or UseUrl registration.

Microsoft’s ClickOnce deployment technology makes use of the FTA_AlwaysUseDirectInvoke flag.

A sample registry script for a type that should always be DirectInvoke’d might look like this:

Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\.fuzzle]
"Content Type"="application/x-fuzzle"
@="FuzzleProgID"
[HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/x-fuzzle]
"Extension"=".fuzzle"
[HKEY_CLASSES_ROOT\FuzzleProgID]
@="FakeMIME for Testing DirectInvoke"
"EditFlags"=dword:00410000
[HKEY_CLASSES_ROOT\FuzzleProgID\shell]
[HKEY_CLASSES_ROOT\FuzzleProgID\shell\open]
@=""
[HKEY_CLASSES_ROOT\FuzzleProgID\shell\open\command]
@="C:\\windows\\alert.exe \"%1\""
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\alert.exe]
"UseURL"=dword:00000001

To test it, first set up the registry, install a handler to C:\Windows, and then click this example link.


Requiring ReadOnly Behavior

To accommodate scenarios where the server wants to communicate to the client that the target URL should be considered “read only” (for whatever meaning the client and server have for that concept), an additional token, RequireReadOnly, can be added to the X-MS-InvokeApp header:

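For example, the header carrying the token might be sent as (a sketch):

```
X-MS-InvokeApp: 1; RequireReadOnly
```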

When this token is present, the Windows Shell verb used to invoke the handler for the URL is changed from Open to OpenAsReadOnly.

However, crucially, the handler’s registration for that type must advertise support for that verb; if it doesn’t, the DirectInvoke request will be ignored (even if FTA_AlwaysUseDirectInvoke was specified) and the file will be treated as a traditional download.

If you update your registry to claim support for that verb:

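Continuing the .fuzzle sample from above, a sketch of such a registration might look like this (alert.exe stands in for a real handler, as before):

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\FuzzleProgID\shell\OpenAsReadOnly]
@=""

[HKEY_CLASSES_ROOT\FuzzleProgID\shell\OpenAsReadOnly\command]
@="C:\\windows\\alert.exe \"%1\""
```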

… you’d find that the scenario starts working again. Of course, for deployability reasons, it’s probably more straightforward to remove the RequireReadOnly directive if you do not expect your client application to support that verb.

Caveats

In order for this architecture to work reliably, you need to ensure a few things.

App Should Handle Traditional Files

First, your application needs to offer some reasonable experience if the content is provided as a traditional (non-DirectInvoke) file download, as it would be when using Chrome or Firefox, or on a non-Windows operating system.

By way of example, it’s usually possible to construct a ClickOnce manifest that works correctly after download. Similarly, Office applications work fine with regular files, although the user must take care to reupload the files after making any edits.

App Should Avoid Depending On Browser State

If your download flow requires a cookie, the client application will not have access to that cookie and the download will fail. The client application probably will not be able to prompt the user to login to otherwise retrieve the file.

If your download flow requires HTTP Authentication or HTTPS Client Certificate Authentication, the client application might work (if it supports NTLM/Negotiate), or it might not (e.g. if the server requires Digest Auth and the client cannot show a credential prompt).

App Should Ensure Robust URL Support

Many client applications have limits on the sorts of URLs that they can support. For instance, the latest version of Microsoft Excel cannot handle a URL longer than 260 characters. If a .xlsx download from a SharePoint site attempts to DirectInvoke, Excel will launch and complain that it cannot retrieve the file.

App Should Ensure Network Protocol Support

Similarly, if the client app registers for DirectInvoke of HTTPS URLs, you should ensure that it supports the same protocols as the browser. If a server requires a protocol version (e.g. TLS/1.2) that the client hasn’t yet enabled (say it only enables TLS/1.0), then the download will fail.

Server Must Not Send Content-Disposition: attachment

As noted in the documentation, a Content-Disposition: attachment response header takes precedence over DirectInvoke behavior. If a server specifies attachment, DirectInvoke will not be used.

Note: If you wish to use a Content-Disposition header to set the default name for the file, you can do so using Content-Disposition: inline; filename="fuzzle.fuzzle"

Conclusion

As you can see, there’s quite a long list of caveats around using the DirectInvoke WebToApp communication scheme, but it’s still a useful option for some scenarios.

In future posts, I’ll continue to explore some other alternatives for Web-to-App communication.

-Eric

Note: Edge 79 didn’t support FTA_AlwaysUseDirectInvoke, but this was fixed in a later release.

Note: I have a few test cases.

[1] Archaeology fun fact: fossils of the Win2K era mechanism are hard to find. SEE_MASK_FILEANDURL was once used by clients including InfoPath.

Livin’ on the Edge: Root Causing Regressions

As we’ve been working to replatform the new Microsoft Edge browser atop Chromium, one interesting outcome has been early exposure to a lot more bugs in Chromium. Rapidly root-causing these regressions (bugs in scenarios that used to work correctly) has been a high-priority activity to help ensure Edge users have a good experience with our browser.

Stabilization via Channels

Edge’s code stabilizes as it flows through release channels: from the developer-only ToT/HEAD (the main branch’s tip-of-tree, the latest commit in the source repository), to the Canary Channel build (updated daily), to the Dev Channel (updated weekly), to the Beta Channel (updated a few times over its six-week lifetime), to our Stable Channel.

Until recently, Microsoft Edge was only available in the Canary and Dev channels, which meant that any bugs landed in Chromium would almost immediately impact almost all users of Edge. Even after we added a Beta channel, we still found users reporting issues that “reproduce in all Edge builds, but not in Chrome.”

As it turns out, most of the “not in Chrome” comparisons turn out to mean “not repro in Chrome Stable.” And that’s because either the regressions simply haven’t made it to Stable yet, or the regressions are hidden behind Feature Flags that are not enabled for Chrome’s Stable channel.

A common example of this is LayoutNG, a major update to the layout engine used in Chromium. This redesigned engine is more flexible than its predecessor and allows the layout engineers to more easily add the latest layout features to the browser as new standards are finalized. Unfortunately, changing any major component of the browser is almost certain to lead to regressions, especially in corner cases. Google had enabled LayoutNG by default in the code for Chrome 76 (and Edge picked up the change), but then subsequently Google used the Feature Flag to disable LayoutNG for the stable channel three days before Chrome 76 shipped. As a result, the new LayoutNG engine is on-by-default for Chrome Beta, Dev, and Canary and Edge Beta, Dev, and Canary.

The key difference was that until January 2020, Edge didn’t yet have a public Stable channel to which bug-impacted users could retreat. Therefore, reproducing, isolating, and fixing regressions as quickly as possible is important for Edge engineers.

Isolating Regressions

When we receive a report of a bug that reproduces in Microsoft Edge, we follow a set of steps for figuring out what’s going on.

Check Latest Canary

The first step is checking whether it reproduces in our Edge Canary builds.

If not, then it’s likely the bug was already found and fixed. We can either tell our users to sit tight and wait for the next update, or we can search on CRBug.com to figure out when exactly the fix went in and inform the user specifically when a fix will reach them.

Check Upstream

If the problem still reproduces in the latest Edge Canary build, we next try to reproduce the problem in the equivalent build of Chrome or plain-vanilla Chromium.

If Not Repro in Chromium?

If the problem does not reproduce in Chromium, this implies that the problem was likely caused by Microsoft’s modifications to the code in our internal branches. However, it might also be the case that the problem is hidden in upstream Chrome behind an experimental flag, so sometimes we must go spelunking into the browser’s Feature Flag configuration by visiting:

chrome://version/?show-variations-cmd 

The Command-Line Variations section of that page reveals the names of the experiments that are enabled/disabled. Launching the browser with a modified version of the command line enables launching Chrome in a different configuration2.

If the issue really is unique to Edge, we can use git blame and similar techniques on our code to see where we might have caused the problem.

If Repro in Chromium?

If the problem does reproduce in Chrome or Chromium, that’s strong evidence that we’ve inherited the problem from upstream.

Sanity Check: Does it Repro in Firefox?

If the problem isn’t a crashing bug or some other obviously incorrect behavior, I will first check the site’s behavior in the latest Firefox Nightly build, on the off chance that the browsers are behaving correctly and the site’s markup or JavaScript is actually incorrect.

Try tweaking common Flags

Depending on the area where the problem occurs, a quick next step is to try toggling Feature Flags that seem relevant on the chrome://flags page. For instance, if the problem is in layout, try setting chrome://flags/#enable-layout-ng to Disabled. If the problem seems to be related to the network, try toggling chrome://flags/#network-service-in-process, and so on.

Understanding whether the problem can be impacted by flags enables us to more quickly find its root cause, and provides us an option to quickly mitigate the problem for our users (by adjusting the flag remotely from our experimental configuration server).

Bisecting Regressions

The gold standard for root-causing regressions is finding the specific commit (aka “change list,” aka “pull request,” aka “patch”) that introduced the problem. When we determine which commit caused a problem, we not only know exactly which builds the problem affects, we also know which engineer committed the breaking change, and we know exactly what code was changed; often we can quickly spot the bug within the changed files.

Fortunately, Chromium engineers trying to find regressing commits have a super power, known as bisect-builds.py. Using this script is simple: You specify the known-bad Chromium version and a guesstimate of the last known good version. You specify the version using either a milestone (if it has reached stable) e.g. M80, a full version string (1.2.3.4), or the commit position (a six digit integer you can find using OmahaProxy or on the chrome://version page in Chromium builds):

python tools/bisect-builds.py -a win -g M80 -b 92.0.4505.0 --verify-range --use-local-cache -- --no-first-run https://example.com

The script then automates the binary search of the builds between the two positions, requiring the user to grade each trial as “Good” or “Bad”, until the introduction of the regression is isolated.

The simplicity of this tool belies its magic, or at least the power of the log2 math that underlies it. OmahaProxy informs me that the current Chrome tree contains 692,278 commits. log2(692278) is 19.4, which means that we should be able to isolate any regressing change in history in just 20 trials, taking a few minutes at most1. And it’s rare that you’d want to even try to bisect all of history: between Stable and ToT we see ~27,250 commits, so we should be able to find any regression within such a range in just 15 trials.
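Those trial counts fall straight out of the binary-search math; a quick check in Python (commit counts as quoted above):

```python
import math

total_commits = 692_278  # commits in the Chromium tree, per OmahaProxy
stable_to_tot = 27_250   # commits between Stable and ToT

# A binary search isolates one commit in ceil(log2(n)) trials.
print(round(math.log2(total_commits), 1))   # 19.4
print(math.ceil(math.log2(total_commits)))  # 20 trials for all of history
print(math.ceil(math.log2(stable_to_tot)))  # 15 trials for Stable..ToT
```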

On CrBug.com, regressions awaiting a bisect are tagged with the label Needs-Bisect, and a small set of engineers try to burn down the backlog every day. But running bisects is so easy that it’s usually best to just do it yourself, and Edge engineers are doing so constantly.

One advantage available to Googlers but not the broader Chromium community is the ability to do what’s called a “Per-Revision Bisect.” Inside Google, a new Chrome build is produced for every single change (so that any change that impacts performance can be unambiguously flagged), but not all of these builds are public. Publicly, Google provides the Chromium Build Archive4, an archive of builds produced approximately every one to fifteen commits. That means that when we non-Googlers perform bisects, we often don’t get back a single commit, but instead a regression range. We then must look at all of the commits within that range to figure out which was the culprit. In my experience, this is rarely a problem– most commits in the range obviously have nothing to do with the problem (“Change the icon for some far-off feature”), so there are rarely more than one or two obvious suspects.

The Edge team does not currently expose our Edge build archives for bisects, but fortunately almost all of our web platform work is contributed upstream, so bisecting against Chromium is almost always effective.

A Recent Bisect Case Study

Last Thursday, we started to get reports of a “missing certificate” problem in Microsoft Edge, whereby the browser wasn’t showing the expected Lock icon for HTTPS pages that didn’t contain any mixed content:

The certificate is missing

While the lock was missing for some users, it was present for others. After also reproducing the issue in Chrome itself, I filed a bug upstream and began investigating.

Back when I worked on the Chrome Security team, we saw a bunch of bugs that manifested like this one that were caused by refactoring in Chrome’s navigation code. Users could hit these bugs in cases where they navigated back/forward rapidly or performed an undo tab close operation, or sites used the HTML5 History API. In all of these cases, the high-level issue is that the page’s certificate is missing in the security_state: either the ssl_status on the NavigationHandle is missing or it contains the wrong information.

This issue, however, didn’t seem to involve navigations, but instead was hit as the site was loaded and thus it called to mind a more recent regression from back in March, where sites that used AppCache were missing their lock icon. That issue involved a major refactoring to use the new Network Service.

One fact that immediately jumped out at me about the sites first reported to hit this new problem is that they all use ServiceWorker (e.g. Hotmail, Gmail, MS Teams, Twitter). Like AppCache, ServiceWorker allows the browser to avoid hitting the network in response to a fetch. As with AppCache, that characteristic means that the browser must somehow have the “original certificate” for that response from the network so it can set that certificate in the security_state when it’s needed. In the case of navigations handled by ServiceWorkers, the navigated page inherits the certificate of the controlling ServiceWorker’s script.

But where does that script’s certificate get stored?

Chromium stores the certificate for a given HTTPS response after the end of the cache entry, so it should be available whenever the cached resource is used3. A quick disk search revealed that Edge’s ServiceWorker code stores its scripts inside the folder:

%localappdata%\Microsoft\Edge SxS\User Data\Default\Service Worker\ScriptCache

Comparing the contents of the cache file on a “good” and a “bad” PC, we see that the certificate information is missing in the cache file for a machine that reproduces the problem:

The serialized certificate chain is present in the “Good” case

So, why is that certificate missing? I didn’t know.

I performed a bisect three times, and each time I ended up with the same range of a dozen commits, only one of which had anything to do with caching, and that commit was for AppCache, not ServiceWorker.

More damning for my bisect suspect was the fact that this suspect commit landed (in 78.0.3890) the day after the build (3889) upon which the reproducing Edge build was based. I spent a bunch of time figuring out whether this could be the off-by-one issue in Edge build numbering before convincing myself that no, it couldn’t be: build number misalignment just means that Edge 3889 might not contain everything that’s in Chrome 3889.

Unless an Edge Engineer had cherry-picked the regressing commit into our 3889 (unlikely), the suspect couldn’t be the culprit.

Edge 3889 doesn’t include all of the commits in Chromium 3889.

I posted my research into the bug at 10:39 PM on Friday and forty minutes later a Chrome engineer casually revealed something I didn’t realize: Chrome uses two different codepaths for fetching a ServiceWorker script– a new_script_loader and an updated_script_loader.

And instantly, everything fell into place. The reason the repro wasn’t perfectly reliable (for both users and my bisect attempts) was that it only happened after a ServiceWorker script updated.

  • If the ServiceWorker in the user’s cache is missing, it is downloaded by the new_script_loader and the certificate is stored.
  • If the ServiceWorker script is present and is unchanged on the server, the lock shows just fine.
  • But if the ServiceWorker in the user’s cache is present but outdated, the updated_script_loader downloads the new script… and omits the certificate chain. The lock icon disappears until the user clears their cache or performs a hard (CTRL+F5) refresh, at which point the lock remains until the next script update.

With this new information in hand, building a reliable reduced repro case was easy– I just ripped out the guts of one of my existing PWAs and configured it so that it updated itself every five seconds. That way, on nearly every load, the cached ServiceWorker would be deemed outdated and the script redownloaded.
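The essence of that reduced repro is simply making the served ServiceWorker script byte-different on a short interval, so that every update check deems the cached worker outdated. A sketch of the idea (my illustration, not the actual PWA code):

```javascript
// Sketch: generate a ServiceWorker script whose bytes change every five
// seconds. A browser's update check against this endpoint will always
// find the cached worker "outdated" and redownload it -- the exact
// condition that triggered the missing-certificate bug.
function serviceWorkerSource(now = Date.now()) {
  const version = Math.floor(now / 5000); // bumps every 5 seconds
  return `// build ${version}\n` +
         `self.addEventListener('fetch', () => { /* no-op */ });\n`;
}

// Two requests five seconds apart yield byte-different scripts:
console.log(serviceWorkerSource(0) !== serviceWorkerSource(5000)); // → true
```

Serve that string as `/sw.js` (with `Cache-Control: no-cache` so the update check reaches the server) and register it from the page as usual.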

With this repro, we can kick off our bisect thusly:

python tools/bisect-builds.py -a win -g 681094 -b 690908 --verify-range --use-local-cache -- --no-first-run --user-data-dir=/temp https://webdbg.com/apps/alwaysoutdated/

… and grade each build based on whether the lock disappears on refresh:

Grading each build based on whether the lock disappears

Within a few minutes, we’ve identified the regression range:

A culprit is found

In this case, the regression range contains just one commit— one that turns on the new ServiceWorker update check code. This confirms the Chromium engineer’s theory and that this problem is almost identical to the prior AppCache bug. In both cases, the problem is that the download request passed kURLLoadOptionNone and that prevented the certificate from being stored in the HttpResponseInfo serialized to the cache file. Changing the flag to kURLLoadOptionSendSSLInfoWithResponse results in the retrieval and storage of the ssl_info, including the certificate.

The fix was quick and straightforward; it will be available in Chrome 78.0.3902 and the following build of Edge based on that Chromium version. Notably, because the bug is caused by failure to put data in a cache file, the lock will remain missing even in later builds until either the ServiceWorker script updates again, or you hard refresh the page once.

-Eric

1 By way of comparison, when I last bisected an issue in Internet Explorer, circa 2012, it was an extraordinarily painful two-day affair.

2 You can use the command line arguments in the variations page (starting at --force-fieldtrials=) to force the bisect builds to use the same variations.

Chromium also has a bisect-variations script which you can use to help narrow down which of the dozens of active experiments is causing a problem.

If all else fails, you can also reset Chrome’s field trial configuration using chrome.exe --reset-variation-state to see if the repro disappears.

3 Aside: Back in the days of Internet Explorer, WinINET had no way to preserve the certificate, so it always bypassed the cache for the first request to a HTTPS server so that the browser process would have a certificate available for its subsequent display needs.

4 I’ve copied a dozen of these builds to a folder on my Desktop to do quick manual checks when I don’t want to get all the way into a full bisect.


Web-to-App Communication: App Protocols

Note: This post is part of a series about Web-to-App Communication techniques.
Last updated: June 4, 2025

Just over eight years ago, I wrote my last blog post about App Protocols, a class of URI schemes that typically1 open another program on your computer instead of returning data to the web browser. A valid scheme name is an ASCII letter followed by zero or more ASCII letters, digits, hyphens, dots, or plus characters. (RFC7595 Guidelines).

App Protocols2 are both simple and powerful, allowing client app developers to easily enable the invocation of their apps from a website. For instance, ms-screenclip is a simple app protocol built into Windows 10 that kicks off the process of taking a screenshot:

    ms-screenclip:?delayInSeconds=2

When the user invokes this URL, the handler waits two seconds, then launches its UI to collect a screenshot. Notably, App Protocols are fire-and-forget, meaning that the handler has no direct way to return data back to the browser that invoked the protocol. App Protocols can be invoked from almost every browser, and many other surfaces (e.g. email clients, Start > Run, etc.)

The power and simplicity of App Protocols comes at a cost. They are the easiest route out of browser sandboxes and are thus terrifying, especially because this exploit vector is stable and available in every browser from legacy IE to the very latest versions of Chrome/Firefox/Edge/Safari.

What’s the Security Risk?

A number of issues make App Protocols especially risky from a security point-of-view.

Careless App Implementation

The primary security problem is that most App Protocols were designed to address a particular scenario (e.g. a “Meet Now” page on a videoconferencing vendor’s website should launch the videoconferencing client) and they were not designed with the expectation that the app could be exposed to potentially dangerous data from the web at large.

We’ve seen apps where the app will silently reconfigure itself (e.g. sending your outbound mail to a different server) based on parameters in the URL it receives. We’ve seen apps where the app will immediately create or delete files without first confirming the irreversible operation with the user. We’ve seen apps that assumed they’d never get more than 255 characters in their URLs and had buffer-overflows leading to Remote Code Execution when that limit was exceeded. We’ve seen apps that reboot the computer without prompt. The list goes on and on.

Poor API Contract

In most cases3, App Protocols are implemented as a simple mapping between the protocol scheme (e.g. alert) and a shell command in the registry, e.g. 

AlertProtocol

Formal details can be found on MSDN, including registration options beyond the typical shell\open\command\ key used for launch.

You can quickly find the handlers from the command prompt:

    reg query HKCR /f "URL Protocol" /s

…or using the URLProtocolView tool.

When the protocol is invoked by the browser, it simply bundles up the URL and passes it on the command line to the target application. The app doesn’t get any information about the caller (e.g. “What browser or app invoked this?”, “What origin invoked this?”, etc.) and thus it must make any decisions solely on the basis of the received URL.
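Given that contract, a well-written handler has to treat its entire command line as hostile input. A rough sketch of that defensive posture (the `myapp:` scheme and the function here are illustrative, not any real app’s code):

```javascript
// Sketch: defensive handling of an App Protocol URL received on the
// command line (e.g., by an Electron-style app). Treat every argument
// as untrusted; never let URL content be interpreted as extra switches.
function parseProtocolArg(argv, scheme = 'myapp:') {
  // Refuse any invocation carrying switch-like arguments, and accept
  // exactly one argument bearing our scheme.
  const urls = argv.filter((a) => a.startsWith(scheme));
  if (urls.length !== 1 || argv.some((a) => a.startsWith('--'))) {
    return null; // ambiguous or switch-bearing invocation: bail out
  }
  // Parse strictly; reject anything that isn't a well-formed URL.
  try {
    return new URL(urls[0]);
  } catch {
    return null;
  }
}
```

A handler structured this way refuses the quote-breakout attack described below, because the injected `--DoSomethingDangerous` token causes the whole invocation to be rejected.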

Until recently, there was an even bigger problem here, related to the encoding of the URL. Browsers, including Chrome, Edge, and IE, were willing to include bare spaces and quotation marks in the URL argument, meaning that an app could launch with a command line like:

alert.exe "alert:This is an Evil URL" --DoSomethingDangerous --Ignore=This"

The app’s code interpreted the --DoSomethingDangerous text as an argument, failed to recognize it as part of the URL, and invoked dangerous functionality. This attack led to remote code execution bugs in App Protocol handlers many times over the years (one of the biggest examples being Electron in 2018).

Chrome began %-escaping spaces and quotation marks8 back in Chrome 64, and Edge 18 followed suit in Windows 10 RS5.

Chromium limits URLs to 2048 characters, but still shows the confirmation prompt for longer URLs.

You can see how your browser behaves using the links on this test page.

Future Opportunity: A richer API contract that allows an App Protocol handler to determine how specifically it was invoked would allow it to better protect itself from unexpected callers.

Moving the App Protocol URL data from the command line to somewhere else could help reduce the possibility of parsing errors. For example, Windows 10’s UWP architecture has an explicit URI Activation contract whereby an OnActivated function is called with the IActivatedEventArgs Kind property set to ActivationKind.Protocol and the URI passed in the args’ Uri.AbsoluteUri property. This makes incorrect parsing much less likely, but UWP apps often forget that their protocols are reachable from untrusted HTML and thus they may expose dangerous functionality to attackers.

Sandbox Escape

The application that handles the protocol typically runs outside of the browser’s sandbox. This means that a security vulnerability in the app can be exploited to steal or corrupt any data the user can access, install malware, etc. If the browser is running Elevated (at Administrator), any App Protocol handlers it invokes are launched Elevated; this is part of UAC’s design.

Because most apps are written in native code, the result is that most protocol handlers end up in the DOOM! portion of this diagram:

RuleOfTwo

Prompting

In most cases, the only4 thing standing between the user and disaster is a permission prompt.

In Internet Explorer (or a Web Browser Control host application with FEATURE_SHOW_APP_PROTOCOL_WARN_DIALOG enabled), the prompt looked like this:

IEPermission


As you can see, the dialog provides a bunch of context about what’s happening, including the raw URL, the name of the handler, and a remark that allowing the launch may harm the computer.

Such information is lacking in more modern browsers, including Firefox:

FirefoxPermission

…and Edge/Chrome:

ChromePermission

Browser UI designers (reasonably) argue that the vast majority of users are poorly equipped to make trust decisions, even with the information in the IE prompt, and so modern UI has been greatly simplified5.

Unfortunately, lost to evolution is the crucial indication to the user that they’re even making a trust decision at all [Issue].

Update: I wrote a whole post about prompting for AppProtocol launches.

In contrast to browsers, Microsoft Office offers a prompt which makes it clear that a security decision is being made, although it’s still unclear whether a user is equipped to make the correct decision.

Eliminating Prompts

Making matters more dangerous, everyone hates security prompts that interrupt non-malicious scenarios. A common user request can be summarized as “Prompt me only when something bad is going to happen. In fact, in those cases, don’t even prompt me, just protect me.”

Unfortunately, the browser cannot know whether a given App Protocol invocation is good or evil, so it delegates control of prompting in two ways:

In Internet Explorer and Edge Legacy (version <= 18), the browser respects a per-protocol WarnOnOpen boolean in the registry, such that the App itself may tell the browser: “No worries, let anyone launch me, no prompt needed.”

In Firefox, Chrome, and Edge (version >= 76), the browser instead allows the user to suppress further prompts with an “Always open links of this type in the associated app” checkbox.

If the user selects this option, the protocol will silently launch in the future without the browser first asking permission.

However, Edge/Chrome version 77.0.3864 removed the “Always open these types of links in the associated app” checkbox.

NoCheckbox


The stated reason for the removal is found in Chrome issue #982341:

No obvious way to undo “Always open these types of links” decision for External Protocols.

We realized in a conversation around issue 951540 that we don’t have settings UI
that allows users to reconsider decisions they’ve made around external protocol
support. Until we work that out, and make longer-term decisions about the
permission model around the feature generally, we should stop making the problem
worse by removing that checkbox from the UI.

A user who had ticked the “Always open” box has no way to later review that decision6, and no obvious way to reverse it. Almost no one figured out that using the “Clear Browsing Data > Cookies and other site data” dialog box option directs Chrome to delete all previous “Always open” decisions for the user’s profile. 

Particularly confusing is that the “Always open” decision wasn’t made on a per-site basis– it applies to every site visited by the user in that browser profile.

Update 1 of 2: An Enterprise policy for v79+ allows administrators to restore the checkbox. End users can import this registry script.

Future Opportunity: Much of the risk inherent in open-without-prompting behavior comes from the fact that any random site (http://evil.example.com) can abuse ambient permission to launch the protocol handler. If browsers changed the option to “Always allow this site to open this protocol”, the risk would be significantly reduced, and a user could reasonably safely allow, e.g., https://teams.microsoft.com to open the msteams protocol without a prompt.

Update 2 of 2: Microsoft Edge 82 introduced an ‘Allow Site/Scheme’ checkbox, and Edge 85 added a policy giving IT Admins the same level of control.

Alternatively, perhaps the Registry-based provisioning of a protocol handler should explicitly list the sites allowed to launch the protocol, akin to the SiteLock protection for legacy ActiveX controls.

For some schemes7, Chrome will not even show a prompt because the protocol is included on a built-in allow or deny list.

Some security folks have argued that browsers should not provide any mechanism for skipping the permission prompt. Unfortunately, there’s evidence to suggest that such a firm stance might result in vendors avoiding the prompt by choosing even riskier architectures for Web-to-App communication. More on this in a future post.

Admin Blocking

I wrote a whole post about how system admins can attempt to block protocol handlers.

Flood Prevention – User Gesture Requirement

Most browsers contain one additional protection against abuse– the requirement that the user interact with a page before an App Protocol may be invoked. If no user gesture is present (and neither of two browser policies has disabled the requirement), the invocation is silently blocked, with only a DevTools console message (“Not allowed to launch because a user gesture is required”) revealing what happened.

Currently, flood prevention in Chromium does not work very well, suffering both false positives and false negatives. The problem is that the “allow” state is implemented as a simple boolean per browser (not per-markup).

Some flows you might reasonably expect to be allowed (e.g. launching https://webdbg.com/test/protocol/ twice from your OS shell) are only allowed once per browser instance until the user interacts with the browser.

The AutoLaunchProtocolsFromOrigins policy and the user-visible “Always allow example.com to open links of this type” checkbox bypass only the pre-launch prompt and do not bypass the preceding flood prevention check. To bypass the gesture requirement, an Edge Policy is available, but you must carefully consider the security implications before using that policy: An allow-listed site could open an unlimited number of instances of the handler app on the client without user recourse.

Best Practices for Web Developers

A common complaint from Web Developers is that their Application Protocols are not launching as expected from browsers. Often, the problem is intermittent, owing to differences in users’ browser configurations (e.g. with extensions or without) or systems (e.g. the user never installed the native app that registers the URL protocol).

To that end, it is important to follow best-practices when building a web page to launch application protocols (for example, the Microsoft Teams Join Meeting page).

Specifically, any page that attempts to launch an external handler should:

  1. Do so as a result of an explicit user action (e.g. a click on a link or button in the page)
  2. Offer the user an explanation of the expected behavior (“We’re trying to launch the app now.”)
  3. Offer an option to retry (e.g. “Your application should launch now. If it does not, click here.”)
  4. Perform the navigation from the top-level page (and definitely NOT from a sandboxed iframe)

By following these rules, you can build an understandable launch experience that will work properly for your users.
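A minimal sketch of a launch page following the four rules above (the `myapp:` scheme, element IDs, and copy are all placeholders for illustration):

```javascript
// Minimal launch-page sketch following the four rules above.
// "myapp:" is a placeholder scheme; swap in your real protocol.
function makeLauncher(win, statusEl) {
  return function launchApp() {
    // Rule 2: explain the expected behavior up front.
    statusEl.textContent = 'Your application should launch now. ' +
                           'If it does not, click "Launch again".';
    // Rule 4: navigate the top-level page (never a sandboxed iframe).
    win.location.href = 'myapp:join?meeting=1234';
  };
}

// Rules 1 & 3: wire the same handler to the initial button and to a
// retry link, so the launch happens only on an explicit click and the
// user can repeat it if nothing visibly happened:
//   document.getElementById('launch').onclick = makeLauncher(window, statusEl);
//   document.getElementById('retry').onclick  = makeLauncher(window, statusEl);
```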

Zero-Day Defense

Even when a zero day vulnerability in an App Protocol handler is getting exploited in the wild (e.g. this one), browsers have few defenses available to protect users. 

Unlike, say, file downloads, where the browser has multiple mechanisms to protect users against newly-discovered threats (e.g. file type policies and SmartScreen/SafeBrowsing), browsers do not presently have rapid update mechanisms that can block access to known vulnerable App Protocol handlers.

Future Opportunity: Use SafeBrowsing/SmartScreen or a file-type-policies style Component Update to supply the client with a list of known-vulnerable protocol handlers. If a page attempts to invoke such a protocol, either block it entirely or strongly caution the user.

Update: Guess what landed for Edge 96? ^ That.

To improve the experience even further, the blocklist could contain version information such that blocking/additional warnings would only be shown if the version of the handler app is earlier than the version number of the app containing the fix. 

Antivirus programs typically do monitor all calls to CreateProcess and could conceivably protect against malicious invocation of app protocol handlers, but I am not aware of any having ever done so.

IT Administrators can block users from launching protocols by listing them as rules in URLBlocklist policy:

REG ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\URLBlockList" /v "1" /t REG_SZ /d "exampleBlocked:*" /f

Today, Edge does not offer a trivial mechanism to prevent the launch of all Application Protocols; if you don’t want to block them individually, you could set a URLBlocklist rule of * and then use the URLAllowlist to allow http:* https:* blob:* about:* and possibly edge:* extension:* mailto:* depending on your needs.

Protocol Reachability

As mentioned in passing above7, most browsers include a built-in allow or deny list of schemes that cannot be launched from the browser. Maintaining a manual list isn’t ideal, so various other approaches have been proposed (e.g. a local+ prefix could preclude web launches), although presently no standard exists.

Outside of browser scenarios, Windows’ “Modern” sandboxed apps can only launch a TrustedInstaller-owned URI scheme (e.g. schemes included in Windows) if the scheme’s registration EditFlags include the FTA_SafeForElevation bit. Browsers presently have no awareness or enforcement of this flag.

Privacy Concerns (Try To) Prevent Protocol Detection

One of the most common challenges for developers who want to use App Protocols for Web-to-App communication is that the web platform does not expose the list of available protocol handlers to JavaScript. This is primarily a privacy consideration: exposing the list of protocol handlers to the web would expose a significant amount of fingerprintable entropy and might even reveal things about the user’s interests and beliefs (e.g. a ConservativeNews App or an LGBTQ App might expose a protocol handler for app-to-app communication). In May 2021, the FingerprintJS folks put together a practical exploit. It’s also something of a security measure– knowing what software is installed on a client provides information about what an attacker might target on that system.

Internet Explorer and Edge <= 18 supply a non-standard JavaScript function msLaunchUri that allows a web page to detect that the user doesn’t have a handler installed for a to-be-invoked protocol, but this function is not available in other browsers; sometimes Web Developers try unreliable hacks.

Back in the old old days (pre-2010), a common workaround was to have installers that installed a protocol handler also update the Internet Explorer User-Agent string or Version Vector information; a page could check for the flag before attempting to launch the protocol. These hacks are not available in modern browsers (although you could mimic them with a modern browser extension if you really wanted. But if you’re willing to do that, you could also forgo the protocol handler approach in favor of Native Messaging).

UX When a Protocol Isn’t Installed

Browser behavior varies if the user attempts to invoke a link with a scheme for which no protocol handler is registered.

Firefox shows an error page:

FirefoxNotInstalled

On Windows 8 and later, IE and Edge Legacy show a prompt that offers to take the user to the Microsoft Store to search for a protocol handler for the target scheme:

Win10NotInstalled

Unfortunately, this search is rarely fruitful because most apps are not available in the Microsoft Store.

Interestingly, Chrome and Edge 76+ show nothing to the user at all when attempting to invoke a link for which no protocol handler is installed, in part to avoid leaking the non-existence of the protocol (as noted, a privacy concern).

Debugging Launches

In Chrome 85, I added logging to the Chromium Developer Tools to record when an Application Protocol launches or is blocked:

ConsoleLogging

Upcoming change – Require HTTPS to Invoke

Chrome is considering requiring that a page be served over HTTPS (“a Secure Context”) in order for it to invoke an application protocol.

In future posts, I’ll explore some other alternatives for Web-to-App communication.

-Eric


Notes

1 In some browsers, it’s possible to register web-based handlers for “AppProtocols” (e.g. maps: and mailto: might go to Google Maps and GMail respectively). This mechanism is currently little-used.

2 App Schemes is probably a more accurate name, but “Protocols” is commonly used. Within Chromium, App Protocols are called “External Protocols” or “External Handlers.” Igalia’s engineer did an awesome writeup on the new-for-2022 architecture.

3 There are other ways to launch protocol schemes, including COM and the Windows 10 App Model’s URI Activation mechanism, but these are uncommon.

4 As an anti-abuse mechanism, the browser may require a user-gesture (e.g. a mouse click) before attempting to launch an App Protocol, and may throttle invocations to avoid spamming the user with an infinite stream of prompts. This is discussed in the Flood Prevention section of this post.

5 Chrome’s prompt used to look much like IE’s.

6 Short of opening the Preferences for the profile in Notepad or another text editor. E.g. after choosing “Always open” for Microsoft Teams and Skype for Business, the JSON file %localappdata%\Microsoft\Edge SxS\User Data\Default\Preferences contains:

"protocol_handler":{"excluded_schemes":{"msteams":false, "ms-sfb": false}}

To see the list in IE/Edge<=18, you can run a registry query to find protocols with WarnOnOpen set to 0:

reg query "HKCU\SOFTWARE\Microsoft\Internet Explorer\ProtocolExecute" /s
reg query "HKLM\SOFTWARE\Microsoft\Internet Explorer\ProtocolExecute" /s

7 Hardcoded schemes:

kDeniedSchemes[] = {"afp","data","disk","disks","file","hcp","ie.http","javascript","ms-help","nntp","res","shell","vbscript","view-source","vnd.ms.radio"}
kAllowedSchemes[] = {"mailto", "news", "snews"};

Firefox maintains a different block list:

hcp,vbscript,javascript,data,ie.http,iehistory,ierss,mk,ms-cxh,ms-cxh-full,ms-help,ms-msdt,res,search,search-ms,shell,vnd.ms.radio,help,disk,disks,afp,moz-icon,firefox,firefox-private,ttp,htp,ttps,tps,ps,htps,ile,le

8 The EscapeExternalHandlerValue function’s header comment describes Chromium’s escaping policy:

// Escapes characters in text suitable for use as an external protocol
// handler command.
// We %XX everything except alphanumerics and -_.!~*'() and the restricted
// characters (;/?:@&=+$,#[]) and a valid percent escape sequence (%XX).
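A rough JavaScript sketch of the policy that comment describes (my illustration, not Chromium’s actual code):

```javascript
// Sketch of the escaping policy: %XX-escape everything except
// alphanumerics, -_.!~*'(), the restricted characters ;/?:@&=+$,#[],
// and already-valid %XX escape sequences (ASCII input assumed).
function escapeExternalHandlerValue(text) {
  const safe = /[A-Za-z0-9\-_.!~*'();/?:@&=+$,#[\]]/;
  let out = '';
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (c === '%' && /^[0-9A-Fa-f]{2}$/.test(text.slice(i + 1, i + 3))) {
      out += text.slice(i, i + 3); // keep a valid %XX sequence as-is
      i += 2;
    } else if (safe.test(c)) {
      out += c;
    } else {
      out += '%' + c.charCodeAt(0).toString(16).toUpperCase().padStart(2, '0');
    }
  }
  return out;
}

console.log(escapeExternalHandlerValue('alert:evil "url" %20'));
// → alert:evil%20%22url%22%20%20  (spaces and quotes escaped; %20 kept)
```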